Abstract:
When images are blurred by fast camera motion, or in low-texture scenes, a SLAM (simultaneous localization and mapping) algorithm based on point features struggles to track enough reliable point features, which degrades accuracy and robustness and can even cause the system to fail entirely. To address this problem, a monocular visual SLAM algorithm is designed that combines point and line features with wheel odometer data. First, data-association accuracy is improved by exploiting the complementarity of point and line features. On this basis, an environmental feature map with geometric information is constructed, and the wheel odometer data are incorporated to provide prior and scale information for the visual localization algorithm. A more accurate visual pose is then estimated by minimizing the reprojection errors of points and line segments in the local map. When visual localization fails, the localization system continues to operate on the wheel odometer data alone. Simulation results on several public datasets show that the proposed algorithm outperforms the multi-state constraint Kalman filter (MSCKF) algorithm and the large-scale direct monocular SLAM (LSD-SLAM) algorithm, demonstrating its correctness and effectiveness. Finally, the algorithm is deployed on a self-developed physical robot system. The root mean square error (RMSE) of the monocular visual localization algorithm is about 7 cm, and the processing time is about 90 ms per frame (640×480) on an embedded platform with a quad-core 1.2 GHz processor.
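The pose-estimation step mentioned above, minimizing the reprojection errors of points and line segments, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the intrinsics tuple `(fx, fy, cx, cy)`, and the line parameterization (an observed 2-D line `(a, b, c)` with image points satisfying `a*u + b*v + c = 0`) are all assumptions made for the example, and the 3-D landmarks are taken to be already expressed in the camera frame.

```python
import math

def project(fx, fy, cx, cy, X):
    """Pinhole projection of a 3-D point X = (x, y, z) given in the camera frame."""
    x, y, z = X
    return (fx * x / z + cx, fy * y / z + cy)

def point_residual(obs_uv, X_cam, K):
    """Point reprojection error: observed pixel minus projected landmark."""
    u, v = project(*K, X_cam)
    return (obs_uv[0] - u, obs_uv[1] - v)

def line_residual(obs_line, P_cam, Q_cam, K):
    """Line reprojection error: signed distances of the projected 3-D segment
    endpoints P, Q to the observed 2-D line (a, b, c), normalized so that
    sqrt(a^2 + b^2) = 1 and the residual is a distance in pixels."""
    a, b, c = obs_line
    n = math.hypot(a, b)
    a, b, c = a / n, b / n, c / n
    res = []
    for X in (P_cam, Q_cam):
        u, v = project(*K, X)
        res.append(a * u + b * v + c)
    return res

def total_cost(point_terms, line_terms):
    """Sum of squared point and line residuals; a real system would feed these
    terms to a nonlinear least-squares solver to refine the camera pose."""
    cost = sum(e ** 2 for r in point_terms for e in r)
    cost += sum(e ** 2 for r in line_terms for e in r)
    return cost
```

In a full pipeline this cost would be minimized over the camera pose (and typically weighted by robust kernels) rather than merely evaluated; the sketch only shows how point and line observations contribute comparable pixel-level residual terms to one joint objective.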