PLP-SLAM: A Visual SLAM Method Based on Point-Line-Plane Feature Fusion
LI Haifeng1,2, HU Zunhe1, CHEN Xinwei2
1. College of Computer Science and Technology, Civil Aviation University of China, Tianjin 300300, China;
2. Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University), Fuzhou 350121, China
Abstract: To reduce the computation and memory cost and to improve the localization accuracy of point-based visual simultaneous localization and mapping (SLAM) methods, a visual SLAM algorithm named PLP-SLAM is proposed, which fuses point, line segment, and plane features. Under the extended Kalman filter (EKF) framework, the current robot pose is first estimated from the point features. Then, observation models for the point, line segment, and plane features are established. Finally, the data association of line segments and the state-update model with plane constraints are formulated, and the unknown environment is described by a line segment and plane feature map. Experiments on a public dataset show that the proposed PLP-SLAM accomplishes the SLAM task with a mean localization error of 2.3 m, outperforming the point-based SLAM method. Furthermore, SLAM results obtained with different feature combinations illustrate the advantage of the proposed point-line-plane feature fusion.
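The abstract outlines an EKF pipeline in which the robot pose is first predicted and then corrected by point, line segment, and plane observations. The following minimal sketch illustrates only the generic EKF prediction and update steps underlying such a pipeline, assuming a planar pose state [x, y, theta], an odometry motion model, and point landmarks appended to the state; the helper names (predict, update), noise matrices, and measurement function h are illustrative assumptions, not the paper's implementation.

    # Minimal EKF-SLAM sketch (illustrative assumptions, not the paper's code).
    import numpy as np

    def predict(x, P, u, Q):
        """EKF prediction with an odometry motion model u = (dx, dy, dtheta),
        given in the robot frame; only the pose block [x, y, theta] moves."""
        theta = x[2]
        dx, dy, dtheta = u
        x_new = x.copy()
        x_new[0] += dx * np.cos(theta) - dy * np.sin(theta)
        x_new[1] += dx * np.sin(theta) + dy * np.cos(theta)
        x_new[2] += dtheta
        # Jacobian of the motion model w.r.t. the full state
        # (identity except for the pose block).
        F = np.eye(len(x))
        F[0, 2] = -dx * np.sin(theta) - dy * np.cos(theta)
        F[1, 2] = dx * np.cos(theta) - dy * np.sin(theta)
        P_new = F @ P @ F.T
        P_new[:3, :3] += Q          # process noise acts on the pose block only
        return x_new, P_new

    def update(x, P, z, h, H, R):
        """Generic EKF update: z is the measurement, h(x) the predicted
        measurement, H its Jacobian, and R the measurement noise. The same
        routine serves point, line segment, or plane observations once their
        observation models h and Jacobians H are defined."""
        y = z - h(x)                          # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    # Usage example: one 2-D point landmark stored at state indices 3:5 and,
    # for simplicity, observed directly in the world frame.
    x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])          # [pose | point landmark]
    P = np.eye(5) * 0.1
    x, P = predict(x, P, u=(0.5, 0.0, 0.05), Q=np.eye(3) * 0.01)
    H = np.zeros((2, 5)); H[0, 3] = 1.0; H[1, 4] = 1.0
    z = np.array([2.05, 0.98])
    x, P = update(x, P, z, h=lambda s: s[3:5], H=H, R=np.eye(2) * 0.02)

In the paper's setting, line segment and plane features would contribute additional update steps of the same form, with their own observation models and data association, while the plane constraints enter through the state-update model described in the abstract.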