LU Chunxiao, ZHONG Huan, LIU Wei, ZHOU Yong, CUI Zhiquan, LI Weihua. Multi-sensor Fusion SLAM in Complex Terrain Environments[J]. ROBOT, 2024, 46(4): 425-435. DOI: 10.13973/j.cnki.robot.230288

Multi-sensor Fusion SLAM in Complex Terrain Environments

  • To address the precision degradation, localization drift, and even outright failure of simultaneous localization and mapping (SLAM) algorithms in complex environments such as fields, forests, mountains, and construction sites, a multi-sensor fusion SLAM algorithm for complex terrains is proposed. Firstly, an adaptive sub-frame segmentation method for LiDAR frames is introduced to handle the severe point cloud distortion caused by intense motion. This method uses IMU (inertial measurement unit) pre-integration to compensate for point cloud distortion, reducing intra-frame distortion and improving the robustness of the SLAM algorithm under intense motion. Secondly, an iterated error-state Kalman filter (IESKF) is employed in the front-end to fuse LiDAR and IMU data for state estimation, providing accurate initial poses for the back-end. In the back-end, the front-end LiDAR-inertial odometry factors, loop closure detection factors, and global positioning system (GPS) factors are integrated in a factor graph to improve the accuracy and global consistency of the SLAM algorithm. Finally, the proposed method is tested in intense-motion scenes, comprehensive campus scenes, and outdoor forest scenes. Experimental results show that, compared with the FAST-LIO2 and LIO-SAM algorithms, the proposed method achieves higher localization accuracy, clearer maps, and greater robustness in intense-motion scenes.
