GONG Zhaohui, ZHANG Xiaoli, PENG Xiafu, LI Xin. Semi-Direct Monocular Visual Odometry Based on Visual-Inertial Fusion[J]. ROBOT, 2020, 42(5): 595-605. DOI: 10.13973/j.cnki.robot.190628

Semi-Direct Monocular Visual Odometry Based on Visual-Inertial Fusion

Semi-direct monocular visual odometry lacks an absolute scale factor and shows poor robustness under fast motion. To address these shortcomings, a semi-direct monocular visual odometry based on visual-inertial fusion is designed. IMU (inertial measurement unit) information compensates for the deficiencies of the visual odometry, effectively improving tracking accuracy and system robustness. Through joint initialization of the visual information and the inertial measurements, the environmental scale is accurately recovered. To improve the robustness of motion tracking, an IMU-weighted prior model is proposed: the IMU state estimate is obtained by preintegration, the weight coefficient is adjusted according to the IMU prior error, and the weighted IMU state then provides an accurate initial estimate for the front end. In the back end, a tightly coupled graph-optimization model is constructed that jointly optimizes inertial, visual, and 3D map-point states. Using co-visibility relationships as constraints within a sliding window improves optimization efficiency and accuracy while eliminating local accumulated error. Experimental results show that the proposed prior model outperforms both the uniform-motion model and the plain IMU prior model, with a single-frame prior error of less than 1 cm. The improved back-end method increases computational efficiency by a factor of 0.52 while improving both trajectory accuracy and optimization stability. On the public EuRoC dataset, the proposed algorithm outperforms the open keyframe-based visual-inertial SLAM (OKVIS) algorithm, and the root-mean-square trajectory error is reduced to one third of that of the original visual odometry.
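The IMU-weighted prior idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `weighted_imu_prior`, the Gaussian weighting function, and the blending of the IMU prediction with a constant-velocity prediction are all assumptions made for clarity.

```python
import numpy as np

def weighted_imu_prior(t_imu, t_const_vel, prior_error, sigma=0.01):
    """Blend an IMU-preintegrated translation prediction with a
    constant-velocity prediction, down-weighting the IMU as its
    prior error grows. The Gaussian weight and the linear blend
    are illustrative choices, not the paper's method.

    t_imu       : IMU-predicted translation increment (3-vector)
    t_const_vel : constant-velocity-model prediction (3-vector)
    prior_error : IMU prior error in meters (e.g. vs. last optimized pose)
    sigma       : error scale at which IMU trust decays (assumed value)
    """
    # Weight is 1 when the prior error is zero and decays smoothly
    # toward 0 as the error exceeds sigma.
    w = float(np.exp(-(prior_error / sigma) ** 2))
    # Weighted combination gives the front end its initial estimate.
    return w * np.asarray(t_imu) + (1.0 - w) * np.asarray(t_const_vel)

# Example: with a tiny prior error the IMU prediction dominates.
t_imu = np.array([0.10, 0.02, 0.00])
t_cv = np.array([0.08, 0.00, 0.00])
t_init = weighted_imu_prior(t_imu, t_cv, prior_error=0.0)
```

In a full system the same weighting would apply to the rotational part of the preintegrated state as well; only the translational component is shown here to keep the sketch short.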
