Dynamic V-SLAM Algorithm Combining Motion Blur Detection and Adaptive Epipolar Constraint

Abstract: To mitigate the adverse effects of motion blur in dynamic scenes, and the limitations of traditional fixed-threshold geometric constraints on the system's ability to distinguish dynamic objects in prior and non-prior regions, a real-time V-SLAM (visual simultaneous localization and mapping) algorithm for dynamic environments is proposed. The algorithm integrates semantic segmentation, motion blur detection, and adaptive-threshold epipolar constraints to effectively eliminate dynamic feature points. To address local image blur caused by rapid motion of the camera or objects in real-world scenarios, the method uses the blur degree of local image regions together with geometric constraints under adaptive thresholds to identify dynamic objects in prior regions. In addition, a combined constraint based on K-means clustering is introduced to judge the motion state of objects in non-prior regions, preventing them from compromising system stability. Experimental results on the TUM RGB-D dataset show that, compared with the ORB-SLAM2 algorithm, the proposed algorithm reduces the root mean square error (RMSE) of the absolute trajectory error (ATE) by at least 94.26%, and the RMSE of the translational and rotational relative pose error (RPE) by at least 87.95% and 92.07%, respectively. These results indicate that the proposed algorithm achieves high localization accuracy and stability in indoor dynamic environments.
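The abstract does not spell out how the local blur degree and the adaptive-threshold epipolar constraint are combined. The following is a minimal NumPy sketch of one plausible reading: a Laplacian-variance blur score loosens the epipolar-distance threshold on blurry patches, so that blur-induced matching noise is not mistaken for object motion. The function names, the blending rule, and the constants `base_thresh` and `blur_ref` are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def laplacian_variance(patch):
    """Blur score for a local grayscale patch: variance of the 4-neighbor
    Laplacian response (lower variance suggests stronger motion blur)."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def epipolar_distance(F, p1, p2):
    """Distance of p2 (homogeneous pixel in frame 2) to the epipolar
    line F @ p1, where F is the 3x3 fundamental matrix."""
    line = F @ p1
    return float(abs(line @ p2) / np.hypot(line[0], line[1]))

def is_dynamic(F, p1, p2, blur_score, base_thresh=1.0, blur_ref=100.0):
    """Adaptive epipolar check: a blurrier patch (low blur_score) gets a
    looser threshold. base_thresh and blur_ref are illustrative values."""
    thresh = base_thresh * (1.0 + blur_ref / (blur_score + blur_ref))
    return epipolar_distance(F, p1, p2) > thresh
```

In the full system the fundamental matrix would be estimated per frame pair (e.g., with RANSAC on static matches), and the K-means-based combined constraint would additionally vote on feature clusters in non-prior regions; those steps are omitted here.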

     
