1. College of Computer and Control Engineering, Nankai University, Tianjin 300071, China;
2. College of Software, Nankai University, Tianjin 300071, China
DONG Xingliang, YUAN Jing, ZHANG Xuebo, HUANG Yalou. Mobile Robot Global Localization Based on Topological Relationship between Image Sequences in Indoor Environments[J]. ROBOT, 2019, 41(1): 83-94, 103. DOI: 10.13973/j.cnki.robot.180100.
Abstract: Considering the structural similarity of indoor environments, a mobile robot global localization algorithm based on the topological relationship between image sequences is proposed. Firstly, Gist descriptors are extracted from images, and a local extremum algorithm is proposed to divide the environment into several groups of distinct image sequences. Then, each image sequence is modeled by an echo state network (ESN) trained bidirectionally in time to extract robust sequence features, and a bidirectional matching strategy in the spatial domain is applied to match these features. Finally, the topological relationship between image sequences is modeled by a hidden Markov model (HMM), so that the global localization problem is transformed into the problem of finding the longest path in a directed acyclic graph; the proposed sequence partitioning and sequence modeling methods are validated experimentally. Compared with single-frame matching algorithms, SeqSLAM (sequence simultaneous localization and mapping) and Fast-SeqSLAM, the proposed algorithm achieves a 100% localization rate in both indoor corridor and office environments; in the office environment in particular, the robot can localize itself accurately after moving only 0.80 m. The experimental results show that the proposed algorithm offers strong robustness as well as high localization accuracy and efficiency.
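The abstract reduces global localization to finding the longest path in a directed acyclic graph over sequence matches. The paper's exact graph construction is not given in this excerpt, so the sketch below shows only the generic underlying step, maximum-weight path in a DAG via dynamic programming over a topological order; the node indices and edge weights are hypothetical placeholders, not the authors' formulation.

```python
from collections import defaultdict, deque

def longest_path_dag(n, edges):
    """Max-weight path in a DAG by DP over a topological order.

    n: number of nodes; edges: iterable of (u, v, w), an edge u -> v
    with weight w (e.g. a hypothetical sequence-match score).
    Returns (best total weight, node list of one best path).
    """
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1

    # Kahn's algorithm: visit a node only after all its predecessors.
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    score = [0.0] * n   # best accumulated weight ending at each node
    parent = [-1] * n   # backpointers for path recovery
    for u in order:
        for v, w in adj[u]:
            if score[u] + w > score[v]:
                score[v] = score[u] + w
                parent[v] = u

    # Backtrack from the best-scoring node.
    end = score.index(max(score))
    path = []
    while end != -1:
        path.append(end)
        end = parent[end]
    return max(score), path[::-1]
```

For example, with edges `[(0, 1, 2.0), (1, 2, 3.0), (0, 2, 4.0)]` the best path is `0 -> 1 -> 2` with total weight 5.0; in the paper's setting the returned path would correspond to the most consistent chain of sequence matches.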
[1] Song J, Liu M. A hidden Markov model approach for Voronoi localization[C]//IEEE International Conference on Robotics and Biomimetics. Piscataway, USA:IEEE, 2013:462-467.
[2] Bando S, Hara Y, Tsubouchi T. Global localization of a mobile robot in indoor environment using special frequency analysis of 2D range data[C]//IEEE International Conference on Mechatronics and Automation. Piscataway, USA:IEEE, 2013:488-493.
[3] Torii A, Sivic J, Pajdla T. Visual localization by linear combination of image descriptors[C]//IEEE International Conference on Computer Vision Workshops. Piscataway, USA:IEEE, 2011:102-109.
[4] Floros G, Zander B V D, Leibe B. OpenStreetSLAM:Global vehicle localization using OpenStreetMaps[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2013:1054-1059.
[5] Yang S, Jiang R, Wang H, et al. Road constrained monocular visual localization using Gaussian-Gaussian cloud model[J]. IEEE Transactions on Intelligent Transportation Systems, 2017, 18(12):3449-3456.
[6] Alonso I P, Llorca D F, Gavilán M, et al. Accurate global localization using visual odometry and digital maps on urban environments[J]. IEEE Transactions on Intelligent Transportation Systems, 2012, 13(4):1535-1545.
[7] Fu M Y, Lü X W, Liu T, et al. Real-time SLAM algorithm based on RGB-D data[J]. Robot, 2015, 37(6):683-692. (in Chinese)
[8] Li H F, Hu Z H, Chen X W. PLP-SLAM:A visual SLAM method based on point-line-plane feature fusion[J]. Robot, 2017, 39(2):214-220. (in Chinese)
[9] Li W P, Zhang G L, Yao E L, et al. An improved loop closure detection algorithm based on scene salient regions[J]. Robot, 2017, 39(1):23-35. (in Chinese)
[10] Liu G Z, Hu Z Z. Fast loop closure detection based on holistic features from SURF and ORB[J]. Robot, 2017, 39(1):36-45. (in Chinese)
[11] Dong X L, Yuan J, Huang S Z, et al. RGB-D visual odometry based on planes and line segments in indoor environments[J]. Robot, 2018. DOI:10.13973/j.cnki.robot.170621. (in Chinese)
[12] Tamimi H, Zell A. Global visual localization of mobile robots using kernel principal component analysis[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2004:1896-1901.
[13] Ayala V, Hayet J B, Lerasle F, et al. Visual localization of a mobile robot in indoor environments using planar landmarks[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2000:275-280.
[14] Filliat D. A visual bag of words method for interactive qualitative localization and mapping[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2007:3921-3926.
[15] Angeli A, Doncieux S, Meyer J A, et al. Visual topological SLAM and global localization[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2009:4300-4305.
[16] Cummins M J, Newman P M. FAB-MAP:Probabilistic localization and mapping in the space of appearance[J]. International Journal of Robotics Research, 2008, 27(6):647-665.
[17] Cummins M, Newman P M. Appearance-only SLAM at large scale with FAB-MAP 2.0[J]. International Journal of Robotics Research, 2011, 30(9):1100-1123.
[18] Schlegel D, Grisetti G. Visual localization and loop closing using decision trees and binary features[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2016:4616-4623.
[19] Lachéze L, Benosman R. Visual localization using an optimal sampling of bags-of-features with entropy[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2007:1332-1338.
[20] Micusik B, Wildenauer H. Descriptor free visual indoor localization with line segments[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA:IEEE, 2015:3165-3173.
[21] Bürki M, Gilitschenski I, Stumm E, et al. Appearance-based landmark selection for efficient long-term visual localization[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2016:4137-4143.
[22] Johns E, Yang G Z. Global localization in a dense continuous topological map[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2011:1032-1037.
[23] Campos F M, Correia L, Calado J M F. Mobile robot global localization with non-quantized SIFT features[C]//15th International Conference on Advanced Robotics. Piscataway, USA:IEEE, 2011:582-587.
[24] Kim P, Coltin B, Alexandrov O, et al. Robust visual localization in changing lighting conditions[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2017:5447-5452.
[25] Milford M J, Wyeth G F. SeqSLAM:Visual route-based navigation for sunny summer days and stormy winter nights[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2012:1643-1649.
[26] Hansen P, Browning B. Visual place recognition using HMM sequence matching[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2014:4549-4555.
[27] Siam S M, Zhang H. Fast-SeqSLAM:A fast appearance based place recognition algorithm[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2017:5702-5708.
[28] Oliva A, Torralba A. Modeling the shape of the scene:A holistic representation of the spatial envelope[J]. International Journal of Computer Vision, 2001, 42(3):145-175.
[29] Jaeger H. The "echo state" approach to analysing and training recurrent neural networks[R]. Sankt Augustin, Germany:German National Research Institute for Computer Science, 2001.
[30] Konolige K, Grisetti G, Kümmerle R, et al. Efficient sparse pose adjustment for 2D mapping[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA:IEEE, 2010:22-29.
[31] Sünderhauf N, Neubert P, Protzel P. Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA:IEEE, 2013:3pp.