XIN Jing, GOU Jiaolong, MA Xiaomin, HUANG Kai, LIU Ding, ZHANG Youmin. A Large Viewing Angle 3-Dimensional V-SLAM Algorithm with a Kinect-based Mobile Robot System[J]. ROBOT, 2014, 36(5): 560-568. DOI: 10.13973/j.cnki.robot.2014.0560

A Large Viewing Angle 3-Dimensional V-SLAM Algorithm with a Kinect-based Mobile Robot System

Abstract: To address the performance degradation of mobile robot 3D visual simultaneous localization and mapping (V-SLAM) under large viewing angle changes, an affine-invariant feature matching algorithm, AORB (affine oriented FAST and rotated BRIEF), is proposed, and a Kinect-based large viewing angle 3D V-SLAM system for a mobile robot is built on top of it. First, the AORB algorithm is applied to the RGB images captured by the Kinect camera to achieve fast and effective matching between adjacent frames with large viewing angle changes, establishing the correspondences between adjacent frames. Second, 2D image points are converted into 3D colored point cloud data using the calibrated intrinsic and extrinsic parameters of the Kinect and the pixel depth values after alignment and correction. Third, outliers in the 3D point clouds are removed with RANSAC (RANdom SAmple Consensus), and the relative pose between adjacent frames is estimated from the RANSAC inliers with a least-squares algorithm. Finally, the robot poses are optimized with g2o (general graph optimization) to build the 3D V-SLAM model, thereby realizing large viewing angle 3D V-SLAM for the mobile robot. Off-line experiments on standard benchmark datasets and online experiments with a real mobile robot in a real environment show that the proposed matching algorithm and the developed 3D V-SLAM system accurately update the local model, successfully reconstruct the environment model, and effectively estimate the robot's motion trajectory under large viewing angle changes.
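
The paper's AORB combines affine invariance with ORB's oriented FAST keypoints and rotated BRIEF descriptors; its exact construction is not reproduced on this page. As a rough illustration of the idea only, the following Python/OpenCV sketch matches two views by running ORB on ASIFT-style affine-simulated copies of the first image and matching descriptors with Hamming distance plus a ratio test. The tilt/rotation sampling grid, feature count, and ratio threshold are illustrative assumptions, not the settings from the paper.

```python
import cv2
import numpy as np

def simulate_affine_views(img, tilts=(1.0, 1.4, 2.0), rotations=(0, 45, 90, 135)):
    """Generate affine-warped copies of an image (ASIFT-style tilt/rotation
    sampling). The sampling grid is illustrative, not the AORB setting from
    the paper. Returns (warped_image, 2x3 affine matrix) pairs."""
    h, w = img.shape[:2]
    views = []
    for t in tilts:
        for rot in (rotations if t > 1.0 else (0,)):
            A = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), rot, 1.0)
            A[0, :] /= t  # squeeze along x to simulate a camera tilt
            views.append((cv2.warpAffine(img, A, (w, h)), A))
    return views

def aorb_like_match(img1, img2, ratio=0.75):
    """Match two images under a large viewing angle change: ORB features on
    affine-simulated views of img1 are matched against ORB features of img2,
    and matched keypoints are mapped back into img1's original coordinates."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pts1, pts2 = [], []
    for warped, A in simulate_affine_views(img1):
        kp1, des1 = orb.detectAndCompute(warped, None)
        if des1 is None or des2 is None:
            continue
        Ainv = cv2.invertAffineTransform(A)
        for pair in matcher.knnMatch(des1, des2, k=2):
            if len(pair) < 2:
                continue
            m, n = pair
            if m.distance < ratio * n.distance:            # Lowe-style ratio test
                x, y = kp1[m.queryIdx].pt
                pts1.append(Ainv @ np.array([x, y, 1.0]))  # back to img1 frame
                pts2.append(kp2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)
```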
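
The conversion of matched 2D pixels into 3D colored point cloud data follows pinhole back-projection, X = (u - cx)Z/fx, Y = (v - cy)Z/fy, with Z taken from the depth map registered to the RGB image. A minimal sketch, assuming a depth map in millimeters already aligned to the RGB frame and nominal Kinect-like intrinsics rather than the calibrated parameters used in the paper:

```python
import numpy as np

def backproject(pixels, depth_mm, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project 2D pixels with registered depth into 3D camera-frame
    points (in meters) using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    The default intrinsics are nominal Kinect-like values, not the calibrated
    parameters reported in the paper. Pixels with invalid (zero) depth are
    dropped, and the indices of the kept pixels are returned as well."""
    pts3d, kept = [], []
    for i, (u, v) in enumerate(pixels):
        z = depth_mm[int(round(v)), int(round(u))] / 1000.0  # mm -> m
        if z <= 0:
            continue
        pts3d.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
        kept.append(i)
    return np.float32(pts3d), kept
```

Returning the kept indices lets the caller keep the 2D feature matches and the resulting 3D-3D correspondences consistent when some matched pixels have no valid depth.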
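
For the relative pose step, the abstract states that RANSAC removes outliers and the pose is then fitted to the inliers by least squares. One common way to realize that combination, sketched below under those assumptions, is a RANSAC loop around the SVD-based (Kabsch) closed-form least-squares rigid transform; the 3 cm inlier threshold and 500 iterations are illustrative values, not the paper's settings.

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) minimizing ||R*p + t - q||^2 over
    matched N x 3 point sets, via the SVD-based (Kabsch) closed-form solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid_transform(P, Q, iters=500, thresh=0.03, seed=0):
    """Relative pose between two matched 3D point clouds: RANSAC over minimal
    3-point samples to find inliers, then a least-squares refit on all inliers.
    The 3 cm threshold and 500 iterations are illustrative values."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = fit_rigid_transform(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    R, t = fit_rigid_transform(P[best], Q[best])  # least-squares fit on inliers
    return R, t, best
```

The per-frame relative poses estimated this way would then feed the g2o pose-graph optimization mentioned in the abstract; that back end is not sketched here.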

     
