Citation: FANG Zheng, ZHAO Shibo, LI Haolai. A Depth Vision SLAM Algorithm Combining Sparse Geometric Features with Range Flow[J]. ROBOT, 2019, 41(2): 185-196, 241. DOI: 10.13973/j.cnki.robot.180222


A Depth Vision SLAM Algorithm Combining Sparse Geometric Features with Range Flow


Abstract: To solve the pose estimation problem of mobile robots in visually degraded environments, a real-time, robust and low-drift depth vision SLAM (simultaneous localization and mapping) method is proposed that combines dense range flow with sparse geometric features. The method consists of three optimization layers: a range flow based visual odometry layer, an ICP (iterative closest point) based pose optimization layer, and a pose graph based optimization layer. In the range flow based visual odometry layer, a constraint equation on range variation is established to obtain fast 6-DOF (degree of freedom) frame-to-frame pose estimates of the camera. In the ICP based pose optimization layer, local drift is reduced by registering frames against a local map. In the pose graph based optimization layer, loop closure constraints are built by extracting and matching sparse geometric features from the depth information, and a pose graph is constructed for global pose optimization. The performance of the proposed method is evaluated on the TUM datasets and in real-world scenes. Experimental results show that the front-end algorithm outperforms state-of-the-art depth vision methods, and the back-end algorithm can robustly construct loop closure constraints and reduce the global drift caused by front-end pose estimation.
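For reference, the first optimization layer rests on the range flow (depth change) constraint that is standard in depth-only visual odometry. The LaTeX sketch below writes out one common form of that constraint and the least-squares pose estimation it leads to; the symbols Z(x, y, t) (depth image), (u, v) (pixel motion), w (depth rate), the twist xi, and the robust penalty rho are illustrative assumptions, since the abstract only states that a constraint equation on range variation is used for frame-to-frame 6-DOF estimation, and the exact formulation is given in the paper itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Range flow constraint: the observed depth rate w at a pixel is explained by
% the spatial depth gradient carried along by the pixel motion (u, v) plus the
% temporal depth derivative of the depth image Z(x, y, t).
\begin{equation}
  Z_x\,u + Z_y\,v + Z_t = w
\end{equation}
% For a small camera motion, (u_i, v_i, w_i) at each valid pixel i can be written
% as linear functions of the camera twist \xi = (v_x, v_y, v_z, \omega_x, \omega_y, \omega_z).
% Stacking one residual per pixel then gives a robust least-squares problem for the
% frame-to-frame motion, where \rho is a robust penalty (a plain squared error also works):
\begin{equation}
  \hat{\xi} = \arg\min_{\xi}\; \sum_{i}
    \rho\bigl( Z_{x,i}\,u_i(\xi) + Z_{y,i}\,v_i(\xi) + Z_{t,i} - w_i(\xi) \bigr)
\end{equation}
\end{document}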

     
