LIU Yanjiao, ZHANG Yunzhou, RONG Lei, JIANG Hao, DENG Yi. Visual Odometry Based on the Direct Method and the Inertial Measurement Unit[J]. ROBOT, 2019, 41(5): 683-689. DOI: 10.13973/j.cnki.robot.180601.
Abstract: The direct method is prone to falling into local optima because it estimates the camera pose entirely by gradient search. To address this problem, IMU (inertial measurement unit) data are tightly coupled with camera tracking to provide accurate short-term motion constraints and an initial value for the gradient search, and the visual pose tracking result is corrected accordingly, improving the tracking accuracy of the monocular visual odometry. A sensor fusion model is then established from the camera and IMU measurements and solved by sliding-window optimization. During marginalization, the state variables to be marginalized or added to the sliding window are selected according to the camera motion between the current frame and the previous keyframes, which ensures sufficiently accurate prior states and thus better optimization and fusion performance. Experimental results show that, compared with existing visual odometry algorithms, the total orientation error of the proposed algorithm is about 3° and the total translation error is less than 0.4 m.
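The motion-dependent marginalization strategy summarized in the abstract can be sketched as follows. This is a minimal illustration only: the thresholds, function names, and the rule of "promote the current frame and marginalize the oldest keyframe on large motion, otherwise drop the redundant newest frame" are assumptions in the spirit of common sliding-window VIO practice, not values or code from the paper.

```python
import numpy as np

# Hypothetical thresholds -- the paper does not publish exact values.
TRANSLATION_THRESH = 0.10            # meters
ROTATION_THRESH = np.deg2rad(10.0)   # radians

def rotation_angle(R):
    """Geodesic angle of a 3x3 rotation matrix (radians)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

def marginalization_target(T_curr, T_last_kf):
    """Decide which state leaves the sliding window.

    T_curr, T_last_kf: 4x4 homogeneous poses of the current frame and
    the most recent keyframe. If the camera has moved far enough, the
    current frame becomes a keyframe and the oldest keyframe is
    marginalized (its information is kept as a prior); otherwise the
    newest, redundant frame is discarded instead.
    """
    R_rel = T_last_kf[:3, :3].T @ T_curr[:3, :3]
    t_rel = np.linalg.norm(T_curr[:3, 3] - T_last_kf[:3, 3])
    if t_rel > TRANSLATION_THRESH or rotation_angle(R_rel) > ROTATION_THRESH:
        return "oldest"   # marginalize oldest keyframe into a prior
    return "newest"       # drop newest non-keyframe, keep IMU terms
```

Keeping marginalized states as a prior rather than simply deleting them is what preserves "accurate enough prior states for optimization" inside the bounded-size window.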
[1] Zhang G L, Lin Z L, Yao E L, et al. Stereo visual odometry with multi-pose estimation constraints[J]. Control and Decision, 2018, 33(6): 1008-1016. (in Chinese)
[2] Lin H C, Lü Q, Zhang Y, et al. The sparse and dense VSLAM: A survey[J]. Robot, 2016, 38(5): 621-631. (in Chinese)
[3] Bloesch M, Omari S, Hutter M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2015:298-304.
[4] Mourikis A I, Roumeliotis S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2007:3565-3572.
[5] Jones E S, Soatto S. Visual-inertial navigation, mapping and localization: A scalable real-time causal approach[J]. International Journal of Robotics Research, 2011, 30(4):407-430.
[6] Usenko V, Engel J, Stuckler J, et al. Direct visual-inertial odometry with stereo cameras[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2016:1885-1892.
[7] Lin H C, Lü Q, Wang G S, et al. Robust stereo visual-inertial SLAM using nonlinear optimization[J]. Robot, 2018, 40(6): 911-920. (in Chinese)
[8] Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. International Journal of Robotics Research, 2015, 34(3):314-334.
[9] Konolige K, Agrawal M, Solá J. Large-scale visual odometry for rough terrain[C]//13th International Symposium on Robotics Research. Berlin, Germany:Springer, 2010:201-212.
[10] Kitt B, Rehder J, Chambers A, et al. Monocular visual odometry using a planar road model to solve scale ambiguity[C]//European Conference on Mobile Robots. 2011:43-48.
[11] Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//IEEE and ACM International Symposium on Mixed and Augmented Reality. Los Alamitos, USA:IEEE Computer Society, 2007. DOI:10.1109/ISMAR.2007.4538852.
[12] Stühmer J, Gumhold S, Cremers D. Real-time dense geometry from a handheld camera[M]//Lecture Notes in Computer Science, Vol.6376. Berlin, Germany:Springer-Verlag, 2010:11-20.
[13] Graber G, Pock T, Bischof H. Online 3D reconstruction using convex optimization[C]//IEEE International Conference on Computer Vision. Piscataway, USA:IEEE, 2011:708-711.
[14] Newcombe R A, Lovegrove S J, Davison A J. DTAM:Dense tracking and mapping in real-time[C]//IEEE International Conference on Computer Vision. Piscataway, USA:IEEE, 2011:2320-2327.
[15] Forster C, Pizzoli M, Scaramuzza D. SVO:Fast semi-direct monocular visual odometry[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2014:15-22.
[16] Forster C, Zhang Z, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2017, 33(2):249-265.
[17] Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[M]//Lecture Notes in Computer Science, Vol.8690. Berlin, Germany:Springer, 2014:834-849.
[18] Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3):611-625.
[19] Concha A, Loianno G, Kumar V, et al. Visual-inertial direct SLAM[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2016:1331-1338.
[20] von Stumberg L, Usenko V, Cremers D. Direct sparse visual-inertial odometry using dynamic marginalization[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2018:2510-2517.
[21] Forster C, Carlone L, Dellaert F, et al. On-manifold preintegration for real-time visual-inertial odometry[J]. IEEE Transactions on Robotics, 2017, 33(1):1-21.
[22] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5):1147-1163.
[23] Mur-Artal R, Tardos J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2):796-803.
[24] Qin T, Li P L, Shen S J. VINS-Mono:A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4):1004-1020.
[25] Zhu C Z, He M, Yang S, et al. Survey of monocular visual odometry[J]. Computer Engineering and Applications, 2018, 54(7): 20-28, 55. (in Chinese)