Abstract:
Most existing SLAM (simultaneous localization and mapping) methods assume a static environment and treat moving objects in the scene as interference, which reduces the accuracy of localization and mapping or even causes failure. Although the detection and tracking of moving objects are necessary in many applications, they are usually ignored when solving the SLAM problem. To address this problem, a method combining LiDAR and an IMU (inertial measurement unit) is proposed to perform SLAM and the detection and tracking of moving objects simultaneously. First, the motion distortion caused by LiDAR motion during scanning is compensated using the inertial sensor. All candidate moving targets are then detected by an FCNN (fully convolutional neural network) on the motion-compensated point cloud. The moving targets are tracked with a UKF (unscented Kalman filter), which distinguishes dynamic targets from static ones. The remaining static background point cloud is then used for data association and motion estimation to realize localization and mapping. To further improve the accuracy and consistency of the mapping results, loop-closure detection is integrated, and the trajectory and map are globally optimized with a graph-optimization method. Extensive experiments are carried out on the public KITTI dataset and on a dataset collected with a self-developed experimental platform. The experimental results show that, compared with traditional SLAM methods, the proposed method not only effectively detects and tracks moving objects, but also estimates the vehicle pose and builds the map in a real-time, robust, low-drift manner, with mapping accuracy significantly better than that of other existing methods.
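For illustration only, the sketch below shows one common way to implement the first step the abstract refers to, i.e., IMU-aided compensation of motion distortion within a single LiDAR sweep. It is a minimal sketch, assuming per-point timestamps and start/end sensor poses obtained from IMU integration; the function and variable names are hypothetical and not taken from the paper.

```python
# Minimal sketch of LiDAR scan de-skewing: every point is re-expressed in the
# sensor frame at the end of the sweep, using a pose interpolated between the
# sweep-start and sweep-end poses (assumed to come from IMU integration).
# All names here are illustrative, not from the paper.
import numpy as np
from scipy.spatial.transform import Rotation as R, Slerp


def deskew_scan(points, point_times, rot_start, rot_end, t_start, t_end):
    """Compensate motion distortion in one LiDAR sweep.

    points      : (N, 3) raw points in the sensor frame at their capture times
    point_times : (N,) per-point timestamps normalized to [0, 1] over the sweep
    rot_start, rot_end : scipy Rotation, sensor orientation at sweep start/end
    t_start, t_end     : (3,) sensor position at sweep start/end (world frame)
    Returns the points expressed in the sensor frame at the end of the sweep.
    """
    # Interpolate orientation with SLERP and position linearly, per point.
    key_rots = R.from_quat(np.vstack([rot_start.as_quat(), rot_end.as_quat()]))
    slerp = Slerp([0.0, 1.0], key_rots)
    rots = slerp(point_times)                                   # N rotations
    trans = t_start + np.outer(point_times, t_end - t_start)    # (N, 3)

    # World coordinates of each point at its own capture time.
    p_world = rots.apply(points) + trans

    # Re-express all points in the common scan-end sensor frame.
    return rot_end.inv().apply(p_world - t_end)
```

After this de-skewing, the whole sweep behaves as if it were captured instantaneously at the scan-end pose, which is the usual precondition for the segmentation, tracking, and scan-matching steps described in the abstract.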