Abstract: In visual SLAM (simultaneous localization and mapping) algorithms that fuse point and line features, new errors are introduced by the poor accuracy of line feature matching, and the accumulation of point and line feature errors can cause data association to fail. To solve this problem, a line feature matching method based on line-point invariants is proposed, which encodes the local geometric relationship between a line segment and two adjacent feature points. Line matching is then completed directly from the existing feature points, which effectively improves both the accuracy and the efficiency of line matching. In addition, weights are applied when fusing point and line features: in constructing the error function, the weights of point and line features are distributed according to the richness of features in the scene. Experiments on the TUM indoor dataset and the KITTI road dataset show that, compared with existing point-line SLAM systems, the proposed point-line SLAM system effectively improves the accuracy of line matching and the efficiency of feature matching, so that line features play an active role in the SLAM process and the stability of data association is improved.
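The core idea above, matching line segments through an invariant built from two neighboring feature points, can be illustrated with a minimal sketch. The snippet below uses one well-known line-point invariant: the ratio of the perpendicular distances of two points to a line, which is preserved under affine transformations. This is an illustrative assumption about the encoding; the paper's actual descriptor and matching score may differ. All function names here are hypothetical.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b (2D)."""
    d = b - a
    # Magnitude of the 2D cross product gives twice the triangle area;
    # dividing by the base length |d| yields the point-to-line distance.
    cross = d[0] * (p - a)[1] - d[1] * (p - a)[0]
    return abs(cross) / np.linalg.norm(d)

def line_point_invariant(p1, p2, a, b):
    """Ratio of the distances of two adjacent feature points to the
    line segment (a, b). Both distances are ratios of triangle areas
    over the same base, so this quantity is affine-invariant and can
    be compared across images to score candidate line matches using
    feature points that are already matched."""
    return point_line_distance(p1, a, b) / point_line_distance(p2, a, b)
```

Because the invariant is computed from already-matched feature points, no separate line descriptor extraction is needed at matching time, which is consistent with the efficiency gain the abstract claims.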