
A novel state estimation framework for a four-wheeled lunar rover with active articulated suspensions / Franchini, Giacomo; Roncagliolo, Patrick; Graziato, Davide; Chiminelli, Alessandro Ruggiero; Merlo, Andrea; Chiaberge, Marcello. In: ACTA ASTRONAUTICA, ISSN 0094-5765, vol. 244 (2026), pp. 54-70. DOI: 10.1016/j.actaastro.2026.02.010

A novel state estimation framework for a four-wheeled lunar rover with active articulated suspensions

Franchini, Giacomo; Graziato, Davide; Chiaberge, Marcello
2026

Abstract

In recent years, space agencies and private companies have shown a renewed interest in Moon exploration, with the ultimate goal of establishing a permanent human presence on the surface by the 2030s. A crucial step in this direction involves deploying robotic missions. To this end, at Thales Alenia Space Italia S.p.A., a versatile, multipurpose four-wheeled robotic platform with active suspensions has been designed as a common reference mobility system able to perform a multitude of tasks on the Moon, spanning from South Pole exploration to In Situ Resource Utilization. In this context, we propose a state estimation framework based on factor graph optimization to perform sensor fusion and estimation of rover odometry. The implementation relies on the ROS 2 package Fuse, which has been optimized and extended with 3D sensors and motion models. The paper's contributions are twofold: firstly, we developed a method for estimating the rover's linear and angular body velocities based on data from the wheel-steer-suspension assembly encoders. Three-dimensional components of the body velocity and the associated covariance matrix are computed by accurately determining the plane of instantaneous motion of the rover. Secondly, dedicated sensor models are used to fuse the estimated body velocities with readings from the on-board IMU and with odometry input computed from a visual pipeline. The latter can be obtained either from a stereo camera, by matching visual features, or by registering point clouds gathered by time-of-flight sensors, allowing autonomous navigation in any lighting condition. Constraints derived from different sensors are joined by leveraging a motion model that encapsulates the entire span of locomotion modalities allowed by the rover geometry. The method has been validated in simulations built on Project Chrono and with the rover prototype navigating in a representative facility. Results demonstrate that the framework is highly optimized, efficiently facilitating the integration of multiple sensor readings from various sources, delivering fused odometry outputs at a high frequency, and ensuring accurate and real-time state updates.
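The core operation of the fusion described above can be illustrated with a minimal sketch: combining two Gaussian estimates of the same body velocity (e.g. one from wheel-encoder odometry, one from a visual pipeline) by weighting each with its inverse covariance. This is the basic update a factor graph optimizer performs at each time step; all names and numbers below are illustrative assumptions, not taken from the paper's implementation (which builds on the ROS 2 Fuse package).

```python
import numpy as np

def fuse(z1, P1, z2, P2):
    """Fuse two Gaussian estimates of the same state.

    Each measurement contributes information (inverse covariance);
    the resulting MAP estimate is the information-weighted mean.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)        # fused covariance
    z = P @ (I1 @ z1 + I2 @ z2)       # fused mean
    return z, P

# Hypothetical wheel-odometry linear velocity [vx, vy, vz] (m/s)
z_wheel = np.array([0.10, 0.00, 0.00])
P_wheel = np.diag([1e-3, 1e-3, 5e-3])

# Hypothetical visual-odometry velocity: noisier in-plane, tighter in z
z_vis = np.array([0.12, 0.01, 0.00])
P_vis = np.diag([4e-3, 4e-3, 1e-3])

v, P = fuse(z_wheel, P_wheel, z_vis, P_vis)
print(v)           # fused estimate lies between the inputs, weighted by confidence
print(np.diag(P))  # each fused variance is smaller than either input's
```

Because the information matrices add, the fused covariance is always at least as tight as the tighter of the two inputs along every axis, which is why fusing complementary sensors (encoders, IMU, visual pipeline) improves the odometry estimate.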
Files in this record:
1-s2.0-S0094576526000962-main.pdf
Access: open access
Type: 2a Post-print editorial version / Version of Record
License: Creative Commons
Size: 4.77 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3007528