Citation: BAI Shiyu, LAI Jizhou, LÜ Pin, JI Bowen, ZHENG Xinyue, FANG Wei, CEN Yiting. Mobile Robot Localization and Perception Method for Subterranean Space Exploration[J]. ROBOT, 2022, 44(4): 463-470. DOI: 10.13973/j.cnki.robot.210141

Mobile Robot Localization and Perception Method for Subterranean Space Exploration

More Information
  • Received Date: April 22, 2021
  • Revised Date: September 06, 2021
  • Accepted Date: June 15, 2021
  • Available Online: October 24, 2022

Abstract

A mobile robot localization and perception method for subterranean space exploration is proposed. First, a tightly coupled LiDAR-odometer-IMU (inertial measurement unit) fusion framework based on a factor graph is designed to address the problem of structural degeneracy in subterranean space. A high-precision IMU/odometer pre-integration model is derived, and the factor graph is used to estimate the motion states and the sensor parameters of the mobile robot simultaneously. Meanwhile, an object detection method based on LiDAR and an infrared camera is proposed to recognize and relatively localize multiple objects in low-light environments. Experimental results show that the proposed method improves the positioning accuracy of mobile robots by 50% in structurally degenerated environments and achieves centimeter-level relative localization accuracy for objects in low-light environments.
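The two components summarized above, pre-integration-based fusion and LiDAR-infrared relative localization, can be illustrated with short sketches. These are minimal illustrations of the general techniques, not the authors' implementation: every function name, variable, and modeling choice below (e.g., treating the odometer as a body-frame velocity measurement, omitting gravity and noise propagation) is an assumption made for brevity.

```python
import numpy as np

def skew(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)          # first-order approximation
    axis = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * axis + (1.0 - np.cos(theta)) * (axis @ axis)

class ImuOdomPreintegration:
    """Accumulates the relative rotation/velocity/position between two
    keyframes from raw IMU samples and body-frame odometer velocities.
    Because the deltas are expressed relative to the first keyframe, a
    factor graph can re-linearize keyframe states without re-integrating
    the raw measurements (gravity and noise propagation omitted here)."""

    def __init__(self, gyro_bias, accel_bias):
        self.bg = np.asarray(gyro_bias, dtype=float)
        self.ba = np.asarray(accel_bias, dtype=float)
        self.dR = np.eye(3)            # pre-integrated rotation
        self.dv = np.zeros(3)          # pre-integrated velocity (IMU)
        self.dp = np.zeros(3)          # pre-integrated position (IMU)
        self.dp_odom = np.zeros(3)     # pre-integrated position (odometer)
        self.dt_sum = 0.0

    def integrate(self, gyro, accel, v_odom, dt):
        """Add one synchronized IMU/odometer sample of duration dt."""
        a = self.dR @ (np.asarray(accel) - self.ba)           # bias-corrected, rotated
        self.dp += self.dv * dt + 0.5 * a * dt**2
        self.dv += a * dt
        self.dp_odom += (self.dR @ np.asarray(v_odom)) * dt   # odometer velocity in body frame
        self.dR = self.dR @ so3_exp((np.asarray(gyro) - self.bg) * dt)
        self.dt_sum += dt
```

In a full system the accumulated deltas (dR, dv, dp, dp_odom) would define the residual of a binary factor linking consecutive keyframe states in the graph, with gravity handled inside that residual rather than during integration.

For the perception component, one common way to obtain a relative position for a detected object is to project LiDAR points into the infrared image and aggregate those falling inside the detection box. Again a hedged sketch; the box format, calibration matrices, and centroid aggregation are illustrative assumptions, not the paper's exact procedure.

```python
def object_relative_position(points_lidar, box, K, T_cam_lidar):
    """Crude relative localization: transform LiDAR points into the infrared
    camera frame, project them with the pinhole model, and return the
    centroid of the points falling inside the detector's bounding box.

    points_lidar : (N, 3) array of points in the LiDAR frame
    box          : (u_min, v_min, u_max, v_max) pixel bounding box
    K            : (3, 3) infrared camera intrinsic matrix
    T_cam_lidar  : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    """
    R, t = T_cam_lidar[:3, :3], T_cam_lidar[:3, 3]
    pts_cam = points_lidar @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]        # keep points in front of the camera
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective division
    u_min, v_min, u_max, v_max = box
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    if not inside.any():
        return None                               # no LiDAR support for this box
    return pts_cam[inside].mean(axis=0)           # object position in camera frame
```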
