
RGB-D SLAM Method Based on Enhanced Segmentation in Dynamic Environment


     

    Abstract: Current visual SLAM (simultaneous localization and mapping) methods in dynamic environments are prone to missing dynamic objects during removal, which degrades camera pose estimation accuracy and map usability. To address this, an RGB-D SLAM method based on enhanced segmentation is proposed. First, the outputs of an instance segmentation network and of depth-image clustering are combined to detect whether segmentation has been missed in the current frame; if so, the segmentation result is repaired using multi-frame information. Meanwhile, Shi-Tomasi corners are extracted from the current frame and filtered by the symmetric transfer error to obtain the set of dynamic corners. The motion state of each instance object in the scene is then determined from the repaired instance segmentation result. Finally, static feature points are used to track the camera pose and to build an instance-level semantic octree map. The proposed method is evaluated on the TUM public dataset and in real scenes; compared with state-of-the-art visual SLAM methods, it achieves better camera pose estimation accuracy and map construction quality, and shows stronger robustness.
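The abstract does not give implementation details for the symmetric-transfer-error test used to separate dynamic corners from static ones. A minimal sketch is shown below, under the assumption that the inter-frame motion of the static background is modeled by a homography `H` (e.g. estimated by RANSAC from corner matches): a matched corner pair is scored by the symmetric transfer error d(q, Hp)² + d(p, H⁻¹q)², and pairs whose error exceeds a threshold are treated as dynamic. The names `apply_h`, `symmetric_transfer_error`, `split_dynamic`, and the threshold `tau` are illustrative, not from the paper.

```python
def apply_h(H, p):
    """Apply a 3x3 homography (nested lists) to a 2D point, with perspective divide."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def symmetric_transfer_error(H, H_inv, p, q):
    """Symmetric transfer error: d(q, H p)^2 + d(p, H^-1 q)^2."""
    fx, fy = apply_h(H, p)        # p mapped forward into the current frame
    bx, by = apply_h(H_inv, q)    # q mapped back into the previous frame
    return (q[0] - fx) ** 2 + (q[1] - fy) ** 2 + (p[0] - bx) ** 2 + (p[1] - by) ** 2

def split_dynamic(H, H_inv, matches, tau=4.0):
    """Partition corner matches (p_prev, q_curr) into static/dynamic sets.

    tau is a hypothetical pixel^2 threshold; points consistent with the
    background homography are static, outliers are labeled dynamic.
    """
    static, dynamic = [], []
    for p, q in matches:
        if symmetric_transfer_error(H, H_inv, p, q) <= tau:
            static.append((p, q))
        else:
            dynamic.append((p, q))
    return static, dynamic
```

For example, with a background homography that is a pure 5-pixel horizontal shift, a corner that moved by exactly (5, 0) between frames is classified as static, while a corner with an independent motion of its own accrues a large symmetric error and lands in the dynamic set.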

     
