RGB-D Visual Odometry in Dynamic Environments Using Line Features
ZHANG Huijuan (1,2,3), FANG Zaojun (2,3), YANG Guilin (2,3)
1. University of Chinese Academy of Sciences, Beijing 100049, China;
2. Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China;
3. Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
ZHANG Huijuan, FANG Zaojun, YANG Guilin. RGB-D Visual Odometry in Dynamic Environments Using Line Features. ROBOT, 2019, 41(1): 75-82. DOI: 10.13973/j.cnki.robot.180020.
Abstract: Most RGB-D SLAM (simultaneous localization and mapping) methods assume that the environment is static. However, real-world environments often contain dynamic objects, which degrade SLAM performance. To address this problem, a line-feature-based RGB-D visual odometry method is proposed. It computes a static weight for each line feature, filters out dynamic line features accordingly, and estimates the camera pose from the remaining line features. The proposed method not only reduces the influence of dynamic objects, but also avoids the tracking failures caused by a scarcity of point features. Experiments on a public dataset show that, compared with an existing method based on ORB (oriented FAST and rotated BRIEF) point features, the proposed method reduces the tracking error in dynamic environments by about 30%, improving the accuracy and robustness of visual odometry in dynamic environments.
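To make the static-weighting idea concrete, the following is a minimal Python sketch, not the authors' implementation: the point-to-line residual, the Gaussian weighting kernel, and the parameters sigma and thresh are illustrative assumptions. Given matched 3D line segments from two consecutive RGB-D frames and an initial estimate of the inter-frame motion, each line receives a weight that decays with its alignment residual, and low-weight lines are treated as dynamic and discarded.

import numpy as np

def point_to_line_distance(p, a, b):
    """Distance from 3D point p to the infinite line through a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    return np.linalg.norm(np.cross(p - a, d))

def static_weights(lines_prev, lines_curr, R, t, sigma=0.05):
    """Assign each matched 3D line a static weight in (0, 1].

    lines_prev, lines_curr: (N, 2, 3) arrays of segment endpoints
    reconstructed from the depth images of two consecutive frames.
    R (3x3), t (3,): initial estimate of the inter-frame camera motion.
    sigma: assumed residual scale (meters) for static lines.
    """
    weights = np.empty(len(lines_prev))
    for i, (prev, curr) in enumerate(zip(lines_prev, lines_curr)):
        warped = prev @ R.T + t  # move endpoints into the current frame
        # Residual: mean point-to-line distance, so that endpoint drift
        # along the segment does not penalize a static line.
        r = np.mean([point_to_line_distance(p, curr[0], curr[1])
                     for p in warped])
        weights[i] = np.exp(-(r / sigma) ** 2)  # small residual -> weight ~ 1
    return weights

def filter_dynamic_lines(lines_prev, lines_curr, R, t, thresh=0.5):
    """Drop matches whose static weight falls below the threshold."""
    w = static_weights(lines_prev, lines_curr, R, t)
    keep = w > thresh
    return lines_prev[keep], lines_curr[keep], w[keep]

In a full pipeline of this kind, the surviving lines and their weights would feed a weighted least-squares pose estimation, and alternating the weighting and the pose update would give an iteratively reweighted scheme; the paper's actual residual and weighting functions may differ from this sketch.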