Abstract:
Robot navigation based on 2D laser SLAM (simultaneous localization and mapping) suffers from several problems: the heavy computation of laser point cloud matching, poor trajectory closure, large cumulative pose error, and insufficient use of the sensor observations available at each stage. To address these problems, a multi-level sensor data fusion method for real-time localization and mapping, named Multilevel-SLAM, is proposed. First, in data preprocessing, coordinates are transformed using the preintegration of IMU (inertial measurement unit) data to register the laser point cloud, and point cloud features are sampled to reduce the cost of point cloud matching. Second, the robot pose is estimated by fusing IMU and LiDAR observations with an unscented Kalman filter, which improves loop closure detection. Finally, laser point cloud registration constraints, loop closure constraints, and IMU preintegration constraints are added to the SLAM back-end optimization, providing constraints for pose node estimation in the global map and realizing multi-level data fusion. In the experiments, Karto-SLAM, Cartographer, and Multilevel-SLAM are evaluated and compared on public LiDAR-IMU datasets. Multilevel-SLAM consistently keeps the positioning error within 5 cm, whereas the baseline methods show positioning deviations of varying degrees. The experimental results show that Multilevel-SLAM effectively improves trajectory closure at the loop closing point and achieves a lower positioning error without any significant increase in computation.
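The IMU preintegration used in the preprocessing step can be sketched as follows. This is an illustrative 2D simplification (yaw rate plus planar acceleration), not the paper's actual implementation; the function name and interface are assumptions. It accumulates a relative rotation and translation between two laser keyframes, which can then serve as the initial transform for point cloud registration.

```python
import numpy as np

def preintegrate_imu(gyro_z, accel_xy, dt):
    """Illustrative 2D IMU preintegration between two laser keyframes.

    gyro_z   : (N,) yaw rates [rad/s]
    accel_xy : (N, 2) body-frame accelerations [m/s^2]
    dt       : sample period [s]
    Returns (delta_yaw, delta_p): the relative rotation and translation,
    expressed in the frame of the first keyframe, usable as an initial
    guess for laser point cloud registration.
    """
    delta_yaw = 0.0
    v = np.zeros(2)          # velocity in the start-keyframe frame
    delta_p = np.zeros(2)    # accumulated translation
    for w, a_body in zip(gyro_z, accel_xy):
        c, s = np.cos(delta_yaw), np.sin(delta_yaw)
        R = np.array([[c, -s], [s, c]])       # body -> start-keyframe frame
        a = R @ a_body                        # rotate accel into start frame
        delta_p += v * dt + 0.5 * a * dt**2  # integrate position
        v += a * dt                           # integrate velocity
        delta_yaw += w * dt                   # integrate yaw
    return delta_yaw, delta_p
```

Because the result depends only on the raw IMU samples between the two keyframes, it can be computed once and reused both as a registration prior and as a preintegration constraint in the back-end optimization.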