Abstract:
An improved visual SLAM (simultaneous localization and mapping) method, named PLKF-SLAM (point-line feature fusion and keyframe selection based SLAM), is proposed to address insufficient feature extraction and redundant keyframe selection in binocular visual SLAM systems under weak-texture conditions. First, the distance and angle error functions are weighted and combined to fuse point and line features, and an adaptive factor is introduced to balance the contribution of line features in the bundle adjustment process, thereby enhancing the system's adaptability to complex environments. Second, a dynamic threshold strategy is employed to refine the keyframe selection mechanism, improving the positioning accuracy of the SLAM system. The proposed algorithm is validated experimentally on the open-source EuRoC and UMA-VI datasets, and the results show that PLKF-SLAM significantly improves visual SLAM performance on most test sequences. Compared with the ORB-SLAM3 algorithm, the average positioning accuracy is improved by 52.64% on the EuRoC dataset and by 63.20% on the UMA-VI dataset. Finally, an environmental adaptability test of PLKF-SLAM is conducted in real-world scenarios, achieving a positioning error of 0.05 m and demonstrating the effectiveness of the improved point-line fusion and keyframe selection strategy.