LI Yongfeng, ZHANG Guoliang, WANG Feng, TANG Wenjun, YAO Erliang. Improved VSLAM Algorithm Based on Fast Visual Odometry and Large Loop Local Optimization Model[J]. ROBOT, 2015, 37(5): 557-565. DOI: 10.13973/j.cnki.robot.2015.0557.
Abstract
An improved visual simultaneous localization and mapping (VSLAM) algorithm based on fast visual odometry and a large loop local optimization model is proposed to improve the accuracy and real-time performance of autonomous localization in mobile robot VSLAM. First, the error function of color GICP (color supported generalized iterative closest point) is improved based on an uncertainty analysis of the feature points; a frame-to-model approach is used to achieve fast registration between the data set and the model set, and the model set is updated by Kalman filtering and a weighting method, which improves the accuracy of pose estimation. Second, a large loop local optimization model based on model-to-model registration is proposed and combined with g2o graph optimization to quickly and locally optimize the accumulated pose estimation error, which further improves the accuracy and efficiency of autonomous localization. Both offline comparative experiments on benchmark datasets and online experiments in real scenes show that the proposed method not only effectively improves the accuracy of autonomous localization and mapping in mobile robot VSLAM, but also achieves good real-time performance.
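As a rough illustration of the uncertainty-weighted registration step described above: the sketch below is not the paper's improved color GICP error function (GICP proper minimizes distribution-to-distribution Mahalanobis distances, and the color term enters through correspondence selection and weighting); it only shows the simpler, related idea of a closed-form rigid alignment in which each matched point pair is down-weighted by its depth uncertainty. The noise coefficient k and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def depth_weight(z, k=1.4e-3):
    """Down-weight far points: RGB-D depth noise grows roughly quadratically
    with range, so use w = 1 / sigma_z^2 with sigma_z ~ k * z^2.
    The coefficient k is illustrative, not a calibrated sensor value."""
    return 1.0 / (k * z ** 2) ** 2

def weighted_rigid_alignment(P, Q, w):
    """Closed-form R, t minimizing sum_i w_i ||R P_i + t - Q_i||^2
    for matched point sets P, Q of shape (N, 3)."""
    w = w / w.sum()
    mu_p = (w[:, None] * P).sum(axis=0)
    mu_q = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - mu_p)).T @ (Q - mu_q)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Example: align a cloud to a translated copy of itself,
# weighting each correspondence by its depth uncertainty.
P = np.random.rand(100, 3) * 3.0 + 0.5
R, t = weighted_rigid_alignment(P, P + np.array([0.1, 0.0, 0.0]),
                                depth_weight(P[:, 2]))
```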
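The model-set update by "Kalman filtering and a weighting method" can be pictured as a per-point filter that fuses each newly observed position into the stored model point, with the gain set by the relative reliability of model and measurement. A minimal sketch under that interpretation (the scalar per-point variance is an assumption, not the paper's exact formulation):

```python
import numpy as np

def fuse_model_point(model_pt, model_var, meas_pt, meas_var):
    """Per-point Kalman update: fuse a new observed position into the
    stored model point. model_pt/meas_pt are (3,) arrays; the variances
    are scalars expressing how trustworthy each position is."""
    gain = model_var / (model_var + meas_var)          # Kalman gain
    fused_pt = model_pt + gain * (meas_pt - model_pt)  # weighted correction
    fused_var = (1.0 - gain) * model_var               # confidence grows
    return fused_pt, fused_var

# A reliable (low-variance) model point barely moves toward a noisy measurement.
pt, var = fuse_model_point(np.array([1.0, 2.0, 3.0]), 0.01,
                           np.array([1.2, 2.0, 3.0]), 0.09)
print(pt, var)   # -> [1.02 2.  3. ] 0.009
```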
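Finally, the large loop optimization redistributes the drift accumulated by the visual odometry over the poses inside the loop once a model-to-model match closes it. The paper does this with g2o over full 6-DOF poses; the sketch below is a deliberately reduced, linear variant over 2-D translations only, so the entire least-squares step fits in a few lines. Every name in it is illustrative.

```python
import numpy as np

def optimize_translation_graph(n, edges):
    """Linear pose-graph optimization over 2-D translations.

    n     : number of poses; pose 0 is fixed at the origin.
    edges : list of (i, j, z) with z the measured displacement x_j - x_i.
    Returns the (n, 2) array of optimized positions.
    """
    # Least squares: minimize sum ||(x_j - x_i) - z||^2 over x_1 .. x_{n-1}.
    A = np.zeros((2 * len(edges), 2 * (n - 1)))
    b = np.zeros(2 * len(edges))
    for k, (i, j, z) in enumerate(edges):
        r = 2 * k
        if i > 0:
            A[r:r + 2, 2 * (i - 1):2 * i] = -np.eye(2)
        if j > 0:
            A[r:r + 2, 2 * (j - 1):2 * j] = np.eye(2)
        b[r:r + 2] = z
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([np.zeros(2), x.reshape(-1, 2)])

# Odometry edges drift; the loop-closure edge (3 -> 0) pulls the chain
# back into consistency, spreading the error over all poses in the loop.
edges = [(0, 1, np.array([1.0, 0.0])), (1, 2, np.array([1.0, 0.1])),
         (2, 3, np.array([1.0, -0.1])), (3, 0, np.array([-3.1, 0.0]))]
print(optimize_translation_graph(4, edges))
```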