1. College of Artificial Intelligence, Nankai University, Tianjin 300071, China; 2. Tianjin Key Laboratory of Intelligent Robotics, Tianjin 300071, China
Abstract: Owing to the difficulty of descriptor computation, loop closure detection with 3D lidar point clouds is a challenging problem in simultaneous localization and mapping (SLAM) systems. Therefore, a novel 3D lidar point cloud descriptor based on structural-unit soft encoding is proposed in this paper, which can be applied to loop closure detection in structured environments. To address the difficulty of extracting 3D line segments caused by the sparsity and independence of 3D lidar point clouds, line segments perpendicular to the ground are first extracted by geometric filtering to preserve the structural information of the 3D space. Then, a set of structural units is constructed from the spatial geometric relationships among the extracted line segments, and feature vectors are computed by soft encoding and used as the descriptors of the 3D lidar point cloud. Finally, loop closure detection is achieved by matching the descriptors of two point cloud frames. Comparative experiments on the KITTI dataset and a self-collected dataset show that the proposed approach outperforms state-of-the-art 3D lidar loop closure detection methods in terms of effectiveness and robustness.
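The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the tilt threshold, the choice of pairwise horizontal distance as the "structural unit" measurement, the Gaussian bin centers, and the cosine-similarity matching are all assumptions made here for concreteness.

```python
import math

def vertical_segments(segments, max_tilt_deg=10.0):
    """Geometric filter: keep segments nearly perpendicular to the ground.
    Each segment is a pair of 3D endpoints ((x, y, z), (x, y, z))."""
    kept = []
    for p, q in segments:
        dx, dy, dz = (q[i] - p[i] for i in range(3))
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        if norm == 0.0:
            continue
        # angle between the segment direction and the vertical (z) axis
        tilt = math.degrees(math.acos(abs(dz) / norm))
        if tilt <= max_tilt_deg:
            kept.append((p, q))
    return kept

def soft_encode(value, centers, sigma):
    """Soft encoding: spread one measurement over neighbouring histogram
    bins with Gaussian weights, then normalise the weights."""
    w = [math.exp(-((value - c) ** 2) / (2.0 * sigma ** 2)) for c in centers]
    s = sum(w)
    return [x / s for x in w]

def descriptor(segments, centers=(1.0, 3.0, 5.0, 7.0, 9.0), sigma=1.0):
    """Frame descriptor built from pairwise horizontal distances between
    vertical segments (a hypothetical choice of structural unit)."""
    hist = [0.0] * len(centers)
    vs = vertical_segments(segments)
    for i in range(len(vs)):
        for j in range(i + 1, len(vs)):
            (p1, _), (p2, _) = vs[i], vs[j]
            d = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
            for k, w in enumerate(soft_encode(d, centers, sigma)):
                hist[k] += w
    s = sum(hist) or 1.0          # avoid division by zero for <2 segments
    return [h / s for h in hist]

def similarity(d1, d2):
    """Cosine similarity of two frame descriptors; a loop closure would be
    declared when it exceeds a chosen threshold."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Because the descriptor is a normalised histogram, two frames observing the same set of vertical structures yield nearly identical vectors regardless of viewpoint, which is what makes the matching step viewpoint-robust.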