LIU Yanjiao, ZHANG Yunzhou, RONG Lei, JIANG Hao, DENG Yi. Visual Odometry Based on the Direct Method and the Inertial Measurement Unit[J]. ROBOT, 2019, 41(5): 683-689. DOI: 10.13973/j.cnki.robot.180601

Visual Odometry Based on the Direct Method and the Inertial Measurement Unit

  • The direct method is prone to falling into local optima because it estimates the camera pose entirely by gradient search. To address this problem, IMU (inertial measurement unit) data are tightly coupled with camera tracking to provide accurate short-term motion constraints and an initial value for the gradient search, and the visual pose estimate is corrected accordingly, improving the tracking accuracy of the monocular visual odometry. A sensor-fusion model is then built from the camera and IMU measurements and solved with sliding-window optimization. During marginalization, the state variables to be marginalized or added to the sliding window are selected according to the camera motion between the current frame and the previous keyframes, which keeps sufficiently accurate prior states in the window and yields better optimization and fusion performance. Experimental results show that, compared with existing visual odometry algorithms, the total orientation error of the proposed algorithm is about 3° and the total translation error is less than 0.4 m. A minimal illustrative sketch of the IMU-initialized tracking and the motion-based marginalization decision is given below.
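The following Python sketch roughly illustrates two of the ideas summarized in the abstract: using the IMU-preintegrated relative motion as the initial value for the direct method's pose refinement, and choosing which state to marginalize from the sliding window based on the camera motion relative to the previous keyframe. It is not the authors' implementation; the 4x4 SE(3) pose representation, the thresholds, and all function names are assumptions made for illustration only.

```python
# Illustrative sketch only -- not taken from the paper. Poses are assumed to be
# 4x4 homogeneous SE(3) matrices; thresholds are arbitrary example values.

import numpy as np

def initial_pose_from_imu(prev_pose, imu_delta_pose):
    """Compose the previous camera pose with the IMU-preintegrated relative
    motion to obtain the starting point for the direct (photometric) alignment."""
    return prev_pose @ imu_delta_pose

def relative_motion(pose_a, pose_b):
    """Return (translation norm in meters, rotation angle in degrees) of the
    relative transform taking pose_a to pose_b."""
    rel = np.linalg.inv(pose_a) @ pose_b
    trans = np.linalg.norm(rel[:3, 3])
    cos_angle = np.clip((np.trace(rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_deg = np.degrees(np.arccos(cos_angle))
    return trans, rot_deg

def marginalization_decision(cur_pose, last_keyframe_pose,
                             trans_thresh=0.1, rot_thresh_deg=5.0):
    """Example policy: if the motion relative to the last keyframe is large,
    insert the current frame as a keyframe and marginalize the oldest state;
    otherwise drop (marginalize) the newest non-keyframe state."""
    trans, rot_deg = relative_motion(last_keyframe_pose, cur_pose)
    if trans > trans_thresh or rot_deg > rot_thresh_deg:
        return "insert_keyframe_and_marginalize_oldest"
    return "marginalize_newest_non_keyframe"

if __name__ == "__main__":
    prev_pose = np.eye(4)
    imu_delta = np.eye(4)
    imu_delta[:3, 3] = [0.05, 0.0, 0.02]        # small IMU-predicted translation
    init_pose = initial_pose_from_imu(prev_pose, imu_delta)
    print(marginalization_decision(init_pose, prev_pose))   # small-motion case
```

In this sketch, `initial_pose_from_imu` stands in for the short-term IMU constraint that seeds the gradient search, and `marginalization_decision` stands in for the motion-dependent selection of states described in the abstract; the actual criteria used in the paper may differ.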
