WANG Hao, LU Dejiu, FANG Baofu. RGB-D SLAM Method Based on Enhanced Segmentation in Dynamic Environment[J]. ROBOT, 2022, 44(4): 418-430. DOI: 10.13973/j.cnki.robot.210256

RGB-D SLAM Method Based on Enhanced Segmentation in Dynamic Environment

  • Current visual SLAM (simultaneous localization and mapping) methods are prone to missing the removal of dynamic objects in dynamic environments, which degrades the accuracy of camera pose estimation and the usability of the map. Therefore, an RGB-D SLAM method based on enhanced segmentation is proposed. Firstly, the results of an instance segmentation network and of depth image clustering are combined to check whether segmentation is missed in the current frame; if the segmentation is incomplete, it is repaired using multi-frame information. At the same time, the Shi-Tomasi corners extracted from the current frame are filtered by the symmetric transfer error to select the set of dynamic corners. Then the motion state of each instance object in the scene is determined based on the repaired instance segmentation results. Finally, static feature points are used to track the camera pose and to construct an instance-level semantic octree map. The proposed method is evaluated on the TUM public dataset and in real scenes. Compared with current state-of-the-art visual SLAM methods, it achieves better camera pose estimation accuracy and map construction quality, and shows stronger robustness. A minimal illustrative sketch of the dynamic-corner screening step appears below.
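The following is a minimal sketch of how the dynamic-corner selection step described in the abstract could look in practice: Shi-Tomasi corners are extracted, tracked into the current frame, and screened by their symmetric transfer error with respect to a robustly estimated inter-frame model. This is not the paper's implementation; the use of optical-flow tracking, a RANSAC homography as the static-background model, and the threshold values are assumptions made for illustration.

```python
import cv2
import numpy as np

def detect_dynamic_corners(prev_gray, cur_gray, err_thresh=4.0):
    """Flag corners whose symmetric transfer error w.r.t. a robustly
    estimated homography is large, i.e. likely dynamic points.
    (Illustrative sketch; model choice and thresholds are assumptions.)"""
    # Shi-Tomasi corners in the previous frame
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    # Track them into the current frame with sparse optical flow
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    good0 = p0[st.flatten() == 1].reshape(-1, 2)
    good1 = p1[st.flatten() == 1].reshape(-1, 2)

    # Robust homography: dominated by the static background as long as
    # dynamic objects cover only part of the image
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
    if H is None:
        return np.empty((0, 2)), good1
    H_inv = np.linalg.inv(H)

    def transfer(Hm, pts):
        # Apply a 3x3 homography to Nx2 points and dehomogenize
        pts_h = cv2.convertPointsToHomogeneous(pts).reshape(-1, 3)
        proj = (Hm @ pts_h.T).T
        return proj[:, :2] / proj[:, 2:3]

    # Symmetric transfer error: forward plus backward reprojection error
    e_fwd = np.linalg.norm(transfer(H, good0) - good1, axis=1)
    e_bwd = np.linalg.norm(transfer(H_inv, good1) - good0, axis=1)
    sym_err = e_fwd ** 2 + e_bwd ** 2

    dynamic = sym_err > err_thresh
    # Return (likely dynamic corners, likely static corners) in the current frame
    return good1[dynamic], good1[~dynamic]
```

In the method described above, the corners flagged as dynamic would then be cross-checked against the repaired instance segmentation to decide the motion state of each instance object, and only the static features would feed the camera pose tracking.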