FU Mengyin1,2, LÜ Xianwei1,2, LIU Tong1,2, YANG Yi1,2, LI Xinghe1,2, LI Yu1,2
1. Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing 100081, China;
2. School of Automation, Beijing Institute of Technology, Beijing 100081, China
A real-time SLAM (simultaneous localization and mapping) algorithm based on RGB-D (RGB-depth) data is proposed, which estimates the 6D pose of the robot and builds a 3D map of the environment. First, key features are extracted from the RGB images, and their 3D coordinates and colors, together with the corresponding covariance matrices, are computed using a Gaussian mixture model. Then the transformation matrix Tt between the features in the current frame and the feature set of the environment model is obtained with the ICP (iterative closest point) algorithm, and the sensor pose matrix Pt yielding the optimal observation is obtained by sampling around Tt. Based on Pt, the dense point cloud of the current frame is transformed into the global coordinate frame to build the 3D map. Finally, the feature set of the environment model is updated with a Kalman filter. To reduce the ICP error, especially when the features are poor, particle sampling is conducted to improve the localization accuracy. Moreover, color information is used to increase the success rate of data association between the features in the current frame and those in the environment model. The algorithm takes about 31 ms per frame; on the FR1 benchmark, the minimum localization error is 1.7 cm and the average error is 11.9 cm. Therefore, the algorithm is accurate and fast enough for real-time SLAM.
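The Kalman-filter update of the environment feature set described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each map feature is stored as a 3D position mean with a covariance matrix, that an associated observation comes with its own covariance (e.g. from the Gaussian mixture model), and that the feature position is observed directly (observation matrix H = I), so the standard Kalman update reduces to a covariance-weighted fusion. The function name `kalman_fuse_feature` is hypothetical.

```python
import numpy as np

def kalman_fuse_feature(mu_prior, P_prior, z_obs, R_obs):
    """Fuse a stored map feature (3D mean + covariance) with a new,
    successfully associated observation of the same feature.

    Direct-observation Kalman update with H = I: the gain K weighs the
    prior covariance against the observation covariance, so uncertain
    features are pulled strongly toward fresh measurements while
    well-estimated features change little.
    """
    S = P_prior + R_obs                      # innovation covariance
    K = P_prior @ np.linalg.inv(S)           # Kalman gain
    mu_post = mu_prior + K @ (z_obs - mu_prior)
    P_post = (np.eye(3) - K) @ P_prior       # posterior covariance shrinks
    return mu_post, P_post

# Example: a feature at the origin with 0.04 m^2 isotropic variance,
# observed at (0.1, 0, 0) with 0.01 m^2 variance.
mu, P = kalman_fuse_feature(np.zeros(3), 0.04 * np.eye(3),
                            np.array([0.1, 0.0, 0.0]), 0.01 * np.eye(3))
# K = 0.04 / (0.04 + 0.01) = 0.8, so mu -> (0.08, 0, 0), P -> 0.008 * I
```

Because the observation is the more certain of the two here, the fused estimate lies closer to the measurement, and the posterior covariance is smaller than either input, which is what makes repeated observations progressively sharpen the feature set.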