Abstract:
To mitigate the adverse effects of motion blur in dynamic scenes, and the limitations that traditional fixed-threshold geometric constraints impose on a system's ability to distinguish prior dynamic objects from non-prior dynamic objects, a real-time V-SLAM (visual simultaneous localization and mapping) algorithm for dynamic environments is proposed. The algorithm integrates semantic segmentation, motion blur detection, and adaptive-threshold epipolar constraints to effectively eliminate dynamic feature points. To address local image blur caused by rapid camera or object motion in real-world scenarios, the method combines the local image blur degree with adaptively thresholded geometric constraints to identify dynamic objects in prior regions. In addition, a combined constraint based on K-means clustering is introduced to assess the motion state of objects in non-prior regions and to prevent non-prior objects from compromising system stability. Experimental results on the TUM RGB-D dataset demonstrate that, compared with the ORB-SLAM2 algorithm, the proposed algorithm reduces the root mean square error (RMSE) of the absolute trajectory error (ATE) by at least 94.26%, and the RMSE of the relative pose error (RPE) in translation and rotation by at least 87.95% and 92.07%, respectively. These results indicate that the proposed algorithm achieves high accuracy and stability in indoor dynamic environments.
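The adaptive-threshold epipolar check mentioned above can be illustrated with a minimal sketch: a feature match is flagged as dynamic when its distance to the epipolar line exceeds a threshold that grows with the local blur degree (blurred features localize less precisely, so a larger tolerance is warranted). The names `base_thresh`, `k`, and the linear blur scaling are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (pixels) from p2 to the epipolar line F @ p1.

    F  : 3x3 fundamental matrix between the two frames
    p1 : homogeneous pixel (x, y, 1) in frame 1
    p2 : homogeneous pixel (x, y, 1) in frame 2
    """
    line = F @ p1                                # epipolar line (a, b, c) in frame 2
    return abs(p2 @ line) / np.hypot(line[0], line[1])

def classify_dynamic(F, pts1, pts2, blur_degree, base_thresh=1.0, k=2.0):
    """Hypothetical adaptive rule: scale the epipolar-distance threshold
    linearly with the local blur degree (0 = sharp, 1 = heavily blurred).
    Returns True for matches judged dynamic."""
    thresh = base_thresh * (1.0 + k * blur_degree)
    return [epipolar_distance(F, p1, p2) > thresh
            for p1, p2 in zip(pts1, pts2)]
```

For a static scene point the match lies on (or very near) its epipolar line, so its distance is small; a point on a moving object generally violates the constraint and exceeds the (blur-inflated) threshold.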