Abstract:
To reduce the computation and memory cost and to improve the localization accuracy of point-based visual simultaneous localization and mapping (SLAM) methods, a novel visual SLAM algorithm, named PLP-SLAM, is proposed that fuses point, line segment, and plane features. Under the extended Kalman filter (EKF) framework, the current robot pose is first estimated using the point features. Then, the observation models of the point, line segment, and plane features are established. Finally, the data association of line segments and the state-update models with plane constraints are formulated, and the unknown environment is described by a line segment and plane feature map. Experiments are carried out on a public dataset, and the results show that the proposed PLP-SLAM accomplishes the SLAM task with a mean localization error of 2.3 m, which is better than that of the point-based SLAM method. Furthermore, SLAM experiments with different feature combinations illustrate the superiority of the proposed point-line-plane feature fusion.
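The EKF predict/update cycle underlying the pose-estimation step can be sketched as follows. This is a minimal illustrative example, not the paper's formulation: it assumes a planar robot pose x = [px, py, theta], a simple odometry motion model, and a linear observation of a known point landmark's world-frame offset; all names and models here are assumptions for illustration.

```python
import numpy as np

# Minimal EKF sketch (illustrative only; state, motion model, and
# measurement model are assumed, not taken from PLP-SLAM).

def ekf_predict(x, P, u, Q):
    """Propagate pose and covariance with an odometry input u = [d, dtheta]:
    forward distance d and heading change dtheta."""
    px, py, th = x
    d, dth = u
    x_pred = np.array([px + d * np.cos(th),
                       py + d * np.sin(th),
                       th + dth])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -d * np.sin(th)],
                  [0.0, 1.0,  d * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, landmark, R):
    """Correct the pose with a toy linear observation: the landmark's
    world-frame position offset relative to the robot."""
    h = landmark - x[:2]               # predicted measurement
    H = np.array([[-1.0, 0.0, 0.0],
                  [0.0, -1.0, 0.0]])   # Jacobian of h w.r.t. the state
    y = z - h                          # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# One predict/update cycle: drive 1 m forward, observe a known landmark.
x = np.zeros(3)
P = np.eye(3) * 0.1
x, P = ekf_predict(x, P, u=np.array([1.0, 0.0]), Q=np.eye(3) * 0.01)
landmark = np.array([2.0, 0.0])
z = landmark - np.array([1.0, 0.0])    # noise-free observation from true pose
x, P = ekf_update(x, P, z, landmark, R=np.eye(2) * 0.05)
print(np.round(x, 3))
```

In the paper's setting, the same predict/update structure would be applied with the observation models of the point, line segment, and plane features in place of the toy landmark measurement used here.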