A Visual SLAM Method Based on Improved Point-line Fusion and Keyframe Selection

Abstract: An improved visual SLAM (simultaneous localization and mapping) method, PLKF-SLAM (point-line feature fusion and keyframe selection based SLAM), is proposed to address insufficient feature extraction and redundant keyframe selection in stereo visual SLAM systems operating in weak-texture environments. First, to fuse point and line features, the error functions of the distance and angle measurements are weighted and combined, and an adaptive factor is introduced to balance the contribution of line features in bundle adjustment, enhancing the system's adaptability to complex environments. Second, a dynamic threshold strategy refines the keyframe selection mechanism, improving the positioning accuracy of the SLAM system. The proposed algorithm is validated on the open-source EuRoC and UMA-VI datasets; the results show that PLKF-SLAM significantly improves visual SLAM performance on most test sequences. Compared with ORB-SLAM3, the average positioning accuracy is improved by 52.64% on the EuRoC dataset and by 63.20% on the UMA-VI dataset. Finally, an environmental adaptability test of PLKF-SLAM in a real-world scenario yields a positioning error of 0.05 m, demonstrating the effectiveness of the improved point-line fusion and keyframe selection strategy.
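To make the first idea in the abstract concrete, the sketch below shows one way a weighted distance-plus-angle residual for a line observation could be formed and scaled by an adaptive factor before entering bundle adjustment. This is a minimal illustration, not the authors' implementation: the function names, the endpoint-to-infinite-line distance metric, the angle definition, and the particular form of the adaptive factor are all assumptions, since the abstract does not specify them.

```python
"""Minimal sketch (not the PLKF-SLAM code) of a weighted point-line residual."""
import numpy as np


def adaptive_factor(n_points, n_lines, n_ref=100):
    """Hypothetical adaptive factor: rely more on lines when point features are
    scarce (weak texture), and switch lines off when none are available."""
    if n_lines == 0:
        return 0.0
    return float(np.clip(1.0 - n_points / n_ref, 0.1, 1.0))


def line_residual(obs_p, obs_q, proj_p, proj_q, w_dist=1.0, w_ang=1.0, adaptive=1.0):
    """Weighted distance + angle residual for one line correspondence.

    obs_p, obs_q   : 2D endpoints of the detected line segment (observation)
    proj_p, proj_q : 2D endpoints of the map line reprojected into the image
    w_dist, w_ang  : fusion weights for the distance and angle error terms
    adaptive       : adaptive factor controlling how much the line term
                     participates in bundle adjustment
    """
    # Implicit line l = (a, b, c) through the observed endpoints, normalized so
    # that l . (x, y, 1) gives a signed point-to-line distance in pixels.
    l = np.cross(np.append(obs_p, 1.0), np.append(obs_q, 1.0))
    l = l / np.linalg.norm(l[:2])

    # Distance term: reprojected endpoints against the observed infinite line.
    d1 = l @ np.append(proj_p, 1.0)
    d2 = l @ np.append(proj_q, 1.0)
    e_dist = np.hypot(d1, d2)

    # Angle term: angle between the observed and reprojected segment directions.
    v_obs = obs_q - obs_p
    v_proj = proj_q - proj_p
    cos_a = abs(v_obs @ v_proj) / (np.linalg.norm(v_obs) * np.linalg.norm(v_proj))
    e_ang = np.arccos(np.clip(cos_a, -1.0, 1.0))

    # Weighted fusion of the two error terms, scaled by the adaptive factor.
    return adaptive * (w_dist * e_dist + w_ang * e_ang)


if __name__ == "__main__":
    p, q = np.array([100.0, 120.0]), np.array([180.0, 125.0])    # observed segment
    pp, pq = np.array([101.0, 123.0]), np.array([181.0, 127.0])  # reprojected line
    a = adaptive_factor(n_points=40, n_lines=25)                 # weak-texture frame
    print(line_residual(p, q, pp, pq, adaptive=a))
```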

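The second idea, keyframe selection with a dynamic rather than fixed threshold, can be sketched in the same hedged way: the tracking-ratio threshold that triggers keyframe insertion is made a function of how feature-rich the reference keyframe is, so weak-texture scenes tolerate a larger tracking drop before a new keyframe is inserted. The mapping and the numeric parameters below are illustrative assumptions, not the criterion actually used in PLKF-SLAM.

```python
"""Minimal sketch (not the PLKF-SLAM implementation) of dynamic-threshold
keyframe selection driven by the tracked-feature ratio."""


def dynamic_threshold(n_ref_features, t_min=0.5, t_max=0.9, n_rich=300):
    """Map the reference keyframe's feature count to a tracking-ratio threshold:
    low thresholds for weak-texture scenes, high thresholds for rich ones."""
    richness = min(n_ref_features / n_rich, 1.0)
    return t_min + (t_max - t_min) * richness


def need_new_keyframe(n_tracked, n_ref_features, frames_since_kf, max_gap=30):
    """Insert a keyframe if the tracking ratio falls below the dynamic threshold
    or too many frames have passed since the last keyframe."""
    if n_ref_features == 0:
        return True
    ratio = n_tracked / n_ref_features
    return ratio < dynamic_threshold(n_ref_features) or frames_since_kf >= max_gap


# Example: a weak-texture reference keyframe (120 features) tolerates a larger
# tracking drop before forcing a new keyframe, reducing redundant insertions.
print(need_new_keyframe(n_tracked=80, n_ref_features=120, frames_since_kf=5))
```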
     
