Abstract:
Considering the structural characteristics of indoor environments, an RGB-D visual odometry algorithm based on plane and line segment features is proposed. The normal vectors of the RGB-D scanning points are used to cluster the 3D point cloud, and the RANSAC (random sample consensus) algorithm is applied to fit a plane to each cluster, extracting the planar features of the environment. An edge point detection algorithm then segments the edge point set from the environment and extracts line segment features. Next, a feature matching algorithm based on the geometric constraints of planes and line segments is proposed to match features between frames. If the matched plane and line segment features provide sufficient pose constraints, the RGB-D camera pose is computed directly from the matching relationships; otherwise, the pose is estimated from the endpoints and point sets of the matched line segments. Experimental results on the TUM (Technical University of Munich) datasets demonstrate that using planes and line segments as environmental features improves the accuracy of visual odometry estimation and environmental mapping. In particular, on the fr3/cabinet dataset, the root-mean-square errors of rotation and translation of the proposed algorithm are 2.046°/s and 0.034 m/s, respectively, which is significantly better than other classical visual odometry algorithms. Finally, the system is deployed on a mobile robot to build maps of real indoor environments. It produces accurate environmental maps and runs at 3 frames/s, which meets the requirements of real-time processing.
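To make the plane extraction step concrete, the following is a minimal sketch of RANSAC plane fitting as it might be applied to one cluster of the 3D point set. It is not the authors' implementation; the function name, parameters, and thresholds (`iters`, `dist_thresh`) are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, rng=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point set with RANSAC.

    Returns (normal, d, inlier_mask). `dist_thresh` is the inlier
    distance in the point cloud's units (e.g. metres); both it and
    `iters` are illustrative values, not the paper's settings.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        # Sample 3 distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample, skip it
            continue
        n = n / norm
        d = -n.dot(p0)
        # Count points within the distance threshold of the plane.
        dist = np.abs(points @ n + d)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers
```

In the pipeline described above, this fit would be run once per normal-vector cluster, so each call sees a point set that is already dominated by a single planar surface; a least-squares refit on the returned inliers is a common refinement step.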