Abstract: In scenes with motion blur caused by fast camera movement, or in low-textured scenes, SLAM (simultaneous localization and mapping) algorithms based on point features struggle to track enough valid point features, which degrades accuracy and robustness and may even cause the system to fail entirely. To address this problem, a monocular visual SLAM algorithm based on point and line features and wheel odometer data is designed. Firstly, data association accuracy is improved by exploiting the complementarity of point features and line features. On this basis, an environmental feature map with geometric information is constructed, and wheel odometer data is incorporated to provide prior and scale information for the visual localization algorithm. Then, a more accurate visual pose is estimated by minimizing the reprojection errors of points and line segments in the local map. When visual localization fails, the localization system continues to operate on the wheel odometer data alone. Simulation results on several public datasets show that the proposed algorithm outperforms the multi-state constraint Kalman filter (MSCKF) and large-scale direct monocular SLAM (LSD-SLAM) algorithms, demonstrating its correctness and effectiveness. Finally, the algorithm is applied to a self-developed physical robot system: the root mean square error (RMSE) of the monocular visual localization is about 7 cm, and the processing time is about 90 ms per frame (640×480) on an embedded platform with a 4-core processor at 1.2 GHz main frequency.
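The core quantity the abstract refers to, the joint reprojection error of points and line segments, can be illustrated with a minimal sketch. This is not the paper's implementation; the parameterization (3-D segment endpoints, point-to-infinite-line pixel distance for the line term) follows the common point-line formulation in the monocular SLAM literature, and all function names are illustrative.

```python
import numpy as np

def project(K, T, Pw):
    """Project a 3-D world point into pixels, given intrinsics K (3x3)
    and a camera pose T = [R|t] (3x4) mapping world to camera frame."""
    Pc = T[:, :3] @ Pw + T[:, 3]
    p = K @ Pc
    return p[:2] / p[2]

def point_residual(K, T, Pw, obs):
    """Point reprojection error: projected landmark minus its
    observed pixel position (2-vector)."""
    return project(K, T, Pw) - obs

def line_residual(K, T, Pw_a, Pw_b, seg_a, seg_b):
    """Line reprojection error: signed pixel distances of the two
    projected endpoints of a 3-D segment to the infinite 2-D line
    through the detected segment (seg_a, seg_b)."""
    # Homogeneous line through the detected endpoints, normalized so
    # that l . x is a signed distance in pixels.
    l = np.cross(np.append(seg_a, 1.0), np.append(seg_b, 1.0))
    l /= np.linalg.norm(l[:2])
    pa = np.append(project(K, T, Pw_a), 1.0)
    pb = np.append(project(K, T, Pw_b), 1.0)
    return np.array([l @ pa, l @ pb])
```

Stacking both residual types over all landmarks visible in the local map and minimizing their (robustly weighted) squared sum over the camera pose is the optimization the abstract summarizes; the wheel odometer pose would serve as the initial guess and scale prior.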
[1] Cadena C, Carlone L, Carrillo H, et al. Past, present, and future of simultaneous localization and mapping:Toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6):1309-1332.
[2] Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//IEEE and ACM International Symposium on Mixed and Augmented Reality. Piscataway, USA:IEEE, 2007:225-234.
[3] Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM:A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5):1147-1163.
[4] Engel J, Schöps T, Cremers D. LSD-SLAM:Large-scale direct monocular SLAM[C]//European Conference on Computer Vision. Berlin, Germany:Springer-Verlag, 2014:834-849.
[5] Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3):611-625.
[6] Forster C, Pizzoli M, Scaramuzza D. SVO:Fast semi-direct monocular visual odometry[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2014:15-22.
[7] von Gioi R G, Jakubowicz J, Morel J M, et al. LSD:A fast line segment detector with a false detection control[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(4):722-732.
[8] Li H F, Hu Z H, Chen X W. PLP-SLAM:A visual SLAM method based on point-line-plane feature fusion[J]. Robot, 2017, 39(2):214-220. (in Chinese)
[9] Pumarola A, Vakhitov A. PL-SLAM:Real-time monocular visual SLAM with points and lines[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2017:4503-4508.
[10] Scaramuzza D, Achtelik M C, Doitsidis L, et al. Vision-controlled micro flying robots:From system design to autonomous navigation and mapping in GPS-denied environments[J]. IEEE Robotics & Automation Magazine, 2014, 21(3):26-40.
[11] Mourikis A I, Roumeliotis S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2007:3565-3572.
[12] Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. International Journal of Robotics Research, 2015, 34(3):314-334.
[13] Endres F, Hess J, Sturm J, et al. 3-D mapping with an RGB-D camera[J]. IEEE Transactions on Robotics, 2014, 30(1):177-187.
[14] Sturm J, Engelhard N, Endres F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2012:573-580.
[15] Krombach N, Droeschel D, Behnke S. Combining feature-based and direct methods for semi-dense real-time stereo visual odometry[C]//International Conference on Intelligent Autonomous Systems. Berlin, Germany:Springer, 2016:855-868.
[16] Hartley R, Zisserman A. Multiple view geometry in computer vision[M]. Cambridge, UK:Cambridge University Press, 2003.
[17] Sola J, Vidal-Calleja T, Civera J, et al. Impact of landmark parametrization on monocular EKF-SLAM with points and lines[J]. International Journal of Computer Vision, 2012, 97(3):339-368.
[18] Bartoli A, Sturm P. The 3D line motion matrix and alignment of line reconstructions[J]. International Journal of Computer Vision, 2004, 57(3):159-178.
[19] Forster C, Zhang Z, Gassner M, et al. SVO:Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2017, 33(2):249-265.
[20] Rosten E, Drummond T. Machine learning for high-speed corner detection[C]//European Conference on Computer Vision. Berlin, Germany:Springer, 2006:430-443.
[21] Baker S, Matthews I. Lucas-Kanade 20 years on:A unifying framework[J]. International Journal of Computer Vision, 2004, 56(3):221-255.
[22] Gao X, Zhang T, Yan Q R, et al. 14 lectures on visual SLAM:From theory to practice[M]. Beijing:Publishing House of Electronics Industry, 2017. (in Chinese)
[23] Burri M, Nikolic J, Gohl P, et al. The EuRoC micro aerial vehicle datasets[J]. International Journal of Robotics Research, 2016, 35(10):1157-1163.
[24] Handa A, Whelan T, McDonald J, et al. A benchmark for RGBD visual odometry, 3D reconstruction and SLAM[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2014:1524-1531.
[25] Xie X J. Stereo visual SLAM using point and line features[D]. Hangzhou:Zhejiang University, 2017. (in Chinese)
[26] Delmerico J, Scaramuzza D. A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2018:2502-2509.