Robust Stereo Visual-Inertial SLAM Using Nonlinear Optimization
LIN Huican1, LÜ Qiang1, WANG Guosheng1, WEI Heng1, LIANG Bing2
1. Department of Weapon and Control, Academy of Army Armored Forces, Beijing 100072, China;
2. School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
Abstract: Under image blurring, excessive motion, or a lack of features, the robustness and accuracy of feature-based simultaneous localization and mapping (SLAM) systems decline dramatically, and the systems may even fail. To address this problem, a tightly coupled stereo visual-inertial SLAM system using nonlinear optimization is proposed. Firstly, taking the keyframe poses as constraints, the biases of the inertial measurement unit (IMU) are estimated with a divide-and-conquer strategy. In the front end, to solve the problem that ORB-SLAM2's constant-velocity motion model fails under excessive motion during tracking, the initial pose of the current frame is predicted by pre-integrating the IMU data from the previous frame to the current frame, and IMU pre-integration constraints are added to the pose optimization. Then, in the back end, keyframe poses, map points, and IMU pre-integration terms are optimized within a sliding window, and the IMU biases are updated. Finally, the performance of the system is verified on the EuRoC dataset. Compared with the ORB-SLAM2, VINS-Mono, and OKVIS systems, the accuracy of the proposed system is improved by factors of 1.14, 1.48, and 4.59, respectively. Compared with state-of-the-art SLAM systems, the robustness of the proposed system is improved under excessive motion, image blurring, and feature-poor conditions.
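The key front-end step described above, predicting the current frame's initial pose by pre-integrating IMU measurements between the previous and current frames, can be illustrated with a short sketch. The Python/NumPy snippet below is a minimal illustration under simple Euler integration; it ignores noise propagation and bias Jacobians, and the function names (preintegrate, predict_pose) and interfaces are assumptions for exposition, not the authors' implementation.

# Minimal sketch: IMU pre-integration between two frames and pose prediction.
# Illustrative only; not the paper's actual code.
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(imu, bg, ba):
    """Accumulate rotation, velocity, and position increments between frames.

    imu: list of (dt, gyro, accel) samples from the previous frame to the
    current frame; bg, ba: current gyroscope and accelerometer biases.
    Returns (dR, dv, dp) expressed in the previous body frame.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for dt, gyro, accel in imu:
        a = accel - ba
        dp += dv * dt + 0.5 * (dR @ a) * dt * dt
        dv += (dR @ a) * dt
        dR = dR @ exp_so3((gyro - bg) * dt)
    return dR, dv, dp

def predict_pose(R_wb, p_wb, v_w, dR, dv, dp, dt_total,
                 g=np.array([0.0, 0.0, -9.81])):
    """Predict the current frame's rotation, position, and velocity from the
    previous state and the pre-integrated increments (gravity re-added in
    the world frame)."""
    R_new = R_wb @ dR
    p_new = p_wb + v_w * dt_total + 0.5 * g * dt_total ** 2 + R_wb @ dp
    v_new = v_w + g * dt_total + R_wb @ dv
    return R_new, p_new, v_new

In the pipeline described by the abstract, the same pre-integrated increments would also serve as constraints in the pose optimization and be re-linearized in the sliding-window back end when the IMU biases are updated.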