DONG Xingliang, YUAN Jing, HUANG Shuzi, YANG Shaokun, ZHANG Xuebo, SUN Fengchi, HUANG Yalou. RGB-D Visual Odometry Based on Features of Planes and Line Segments in Indoor Environments[J]. ROBOT, 2018, 40(6): 921-932. DOI: 10.13973/j.cnki.robot.170621


RGB-D Visual Odometry Based on Features of Planes and Line Segments in Indoor Environments


    Abstract: Considering the structural characteristics of indoor environments, an RGB-D visual odometry algorithm based on plane and line segment features is proposed. First, the 3D point cloud is clustered according to the normal vectors of the RGB-D scan points, and the RANSAC (random sample consensus) algorithm fits a plane to each cluster to extract the plane features of the environment. Next, an edge point detection algorithm segments the edge point sets, from which line segment features are extracted. A feature matching algorithm based on geometric constraints between planes and line segments is then proposed to establish feature correspondences. When the matched plane and line segment features provide sufficient pose constraints, the RGB-D camera pose is computed directly from the feature correspondences; otherwise, the pose is estimated from the endpoints and point sets of the matched line segments. Experiments on the TUM (Technical University of Munich) datasets demonstrate that choosing planes and line segments as environmental features improves the accuracy of both visual odometry estimation and environmental mapping. In particular, on the fr3/cabinet dataset, the root-mean-square errors of rotation and translation of the proposed algorithm are 2.046°/s and 0.034 m/s, respectively, significantly outperforming other classical visual odometry algorithms. Finally, the system is applied to mobile robot mapping in real indoor environments: it builds accurate environmental maps and runs at 3 frames/s, meeting the requirements of real-time processing.
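The plane-extraction step described in the abstract (RANSAC plane fitting applied to each normal-based cluster) can be sketched as follows. This is a minimal illustration under assumed parameter values (iteration count, inlier threshold), not the authors' implementation:

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """Fit a plane n·p + d = 0 to a 3D point set via RANSAC.

    points: (N, 3) array. Returns (unit normal, d, inlier mask).
    """
    rng = np.random.default_rng(rng)
    best_mask, best_count = None, -1
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:           # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        # Point-to-plane distances; count inliers.
        mask = np.abs(points @ n + d) < inlier_thresh
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    # Refine by total least squares (SVD) on the inlier set.
    inl = points[best_mask]
    centroid = inl.mean(axis=0)
    _, _, vt = np.linalg.svd(inl - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid, best_mask
```

In the paper's pipeline this would run once per normal-vector cluster, so each RANSAC call only has to separate one dominant plane from its residual points.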

     

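The line-segment features mentioned in the abstract are extracted from segmented edge point sets. One common way to do this, shown here purely as an illustrative sketch rather than the authors' method, is a total-least-squares line fit with the segment clipped to the extreme projections of the edge points:

```python
import numpy as np

def fit_line_segment(edge_points):
    """Fit a 3D line to an edge point set by total least squares (SVD),
    then clip it to a segment at the extreme point projections.

    Returns (endpoint_a, endpoint_b, unit direction).
    """
    P = np.asarray(edge_points, dtype=float)
    centroid = P.mean(axis=0)
    # Principal direction of the centered points is the line direction.
    _, _, vt = np.linalg.svd(P - centroid)
    direction = vt[0]
    # Project points onto the line; the extremes give the endpoints.
    t = (P - centroid) @ direction
    return (centroid + t.min() * direction,
            centroid + t.max() * direction,
            direction)
```

The recovered endpoints are exactly the quantities the abstract's fallback pose estimator consumes when plane/line matches alone do not constrain the pose.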

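When matched planes and line segments provide sufficient constraints, the camera pose can be solved directly from the correspondences. A generic building block for the rotation part, offered as an assumption-laden sketch rather than the paper's exact formulation, is the Kabsch/SVD alignment of matched unit directions (plane normals and line directions):

```python
import numpy as np

def rotation_from_directions(dirs_a, dirs_b):
    """Estimate the rotation R with R @ a_i ≈ b_i for matched unit
    direction vectors (e.g. plane normals, line directions), using the
    Kabsch/SVD method. dirs_a, dirs_b: (N, 3) arrays of matched rows.
    """
    A = np.asarray(dirs_a, dtype=float)
    B = np.asarray(dirs_b, dtype=float)
    H = A.T @ B                       # 3x3 cross-covariance of directions
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps det(R) = +1 (a proper rotation, no reflection).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ S @ U.T
```

With the rotation fixed, the translation would then follow from the matched plane offsets or line-segment endpoints, which is consistent with the fallback strategy the abstract describes.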