LIU Yanjiao, ZHANG Yunzhou, RONG Lei, JIANG Hao, DENG Yi. Visual Odometry Based on the Direct Method and the Inertial Measurement Unit[J]. ROBOT, 2019, 41(5): 683-689. DOI: 10.13973/j.cnki.robot.180601

Visual Odometry Based on the Direct Method and the Inertial Measurement Unit


Abstract: The direct method is prone to falling into local optima because it estimates the camera pose entirely by gradient search. To address this problem, IMU (inertial measurement unit) data are tightly coupled with the image tracking process to provide accurate short-term motion constraints and a good initial gradient direction, and the visual pose tracking result is corrected accordingly, which improves the tracking accuracy of the monocular visual odometry. On this basis, a sensor data fusion model is established from the camera and IMU measurements and solved by sliding-window optimization. During marginalization, the state variables to be marginalized and those to be added to the sliding window are selected according to the camera motion between the current frame and the last keyframe, so that sufficiently accurate prior information is preserved in the optimization and the quality of the fused estimate is ensured. Experimental results show that, compared with existing visual odometry algorithms, the proposed algorithm achieves a cumulative orientation error of about 3° and a cumulative translation error of less than 0.4 m on the datasets.
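Concretely, the tight coupling in the tracking stage amounts to seeding the direct method's photometric alignment with an IMU-predicted pose rather than a pure gradient search from the previous pose. The following is a minimal sketch of that prediction step; the state convention, the gravity vector, and the function names are illustrative assumptions and omit the paper's bias and noise handling.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def imu_predict(R, v, p, imu_samples, g=np.array([0.0, 0.0, -9.81])):
    """Propagate attitude R (3x3), velocity v and position p (3-vectors)
    through the IMU samples (gyro [rad/s], accel [m/s^2], dt [s])
    recorded between two camera frames (a simplified sketch, not the
    paper's preintegration; biases and noise are ignored)."""
    for gyro, accel, dt in imu_samples:
        a_world = R @ accel + g                  # specific force -> world accel
        p = p + v * dt + 0.5 * a_world * dt**2   # constant-accel integration
        v = v + a_world * dt
        R = R @ so3_exp(gyro * dt)               # attitude update
    return R, v, p
```

The predicted pose would then initialize the photometric (intensity-residual) alignment, so the optimizer starts near the correct basin of attraction instead of depending on image gradients alone.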
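The keyframe-dependent marginalization can be sketched in the same spirit. Everything below (the Frame container, the thresholds, and the two-branch policy of discarding either the oldest state or the newest non-key frame) is an assumption about what selecting the marginalized states from the inter-frame motion could look like, not the paper's implementation.

```python
import numpy as np
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    pose: tuple  # (R: 3x3 rotation matrix, p: 3-vector position)

def relative_motion(pose_a, pose_b):
    """Translation distance [m] and rotation angle [rad] between poses."""
    (R_a, p_a), (R_b, p_b) = pose_a, pose_b
    dR = R_a.T @ R_b
    angle = np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0))
    return np.linalg.norm(p_b - p_a), angle

def marginalize(state):
    """Placeholder: fold the removed state's information into the prior
    (the Schur-complement step is omitted in this sketch)."""
    pass

def update_window(window, frame, last_keyframe, trans_thresh=0.1,
                  rot_thresh=np.deg2rad(5.0), max_size=10):
    """Select the state to marginalize from the camera motion between the
    current frame and the last keyframe; returns the new last keyframe.
    Assumes the newest entry of `window` (a deque) is a non-key frame."""
    d_trans, d_rot = relative_motion(last_keyframe.pose, frame.pose)
    is_keyframe = d_trans > trans_thresh or d_rot > rot_thresh
    if is_keyframe:
        # Large motion: keep the current frame as a keyframe and marginalize
        # the oldest state, retaining its information as a prior.
        if len(window) >= max_size:
            marginalize(window.popleft())
    else:
        # Small motion: discard the newest non-key frame instead, so the
        # window keeps well-separated, informative states as priors.
        marginalize(window.pop())
    window.append(frame)
    return frame if is_keyframe else last_keyframe
```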
