PLP-SLAM: A Visual SLAM Method Based on Point-Line-Plane Feature Fusion

    Abstract: To reduce the computation and memory cost, and to improve the localization accuracy, of point-based visual simultaneous localization and mapping (SLAM) methods, a novel visual SLAM algorithm, named PLP-SLAM, is proposed that fuses point, line-segment, and plane features. Under the extended Kalman filter (EKF) framework, the current robot pose is first estimated from the point features. Then, observation models for points, line segments, and planes are set up. Finally, a plane-constrained data-association method for line-segment features and the corresponding system state-update model are formulated, and the unknown environment is described by a line-segment and plane feature map. Experiments on a public dataset show that the proposed PLP-SLAM accomplishes the SLAM task with a mean localization error of 2.3 m, outperforming the point-based SLAM method. Furthermore, SLAM experiments with different feature combinations illustrate the superiority of the proposed point-line-plane feature fusion.
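Since the abstract only names the EKF framework without detailing it, the following is a minimal, generic sketch of a single EKF measurement update of the kind such a pipeline performs for each observed feature. It is not the paper's implementation: the identity measurement model and all numeric values are illustrative assumptions, and a real point/line/plane SLAM system would use feature-specific measurement functions and Jacobians.

```python
import numpy as np

def ekf_update(mu, Sigma, z, h, H, R):
    """Generic EKF measurement update.

    mu, Sigma : state mean and covariance
    z         : observation vector
    h         : measurement function, h(mu) -> predicted observation
    H         : Jacobian of h at mu
    R         : measurement noise covariance
    """
    S = H @ Sigma @ H.T + R              # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu = mu + K @ (z - h(mu))            # corrected state mean
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma  # corrected covariance
    return mu, Sigma

# Toy example (hypothetical numbers): a 2-D robot position observed
# directly through a point feature, i.e. an identity measurement model.
mu = np.array([0.0, 0.0])
Sigma = np.eye(2) * 1.0
z = np.array([1.0, 0.5])        # noisy point observation
h = lambda m: m                 # identity measurement function
H = np.eye(2)
R = np.eye(2) * 0.1
mu, Sigma = ekf_update(mu, Sigma, z, h, H, R)
```

After the update the mean moves most of the way toward the observation and the covariance shrinks, reflecting the information gained; a full SLAM filter repeats this correction for every associated point, line-segment, or plane observation in each frame.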

     
