YE Xingyu, HE Yuanlie, RU Shaonan. Unsupervised Monocular Depth Estimation and Visual Odometry Based on Generative Adversarial Network and Self-attention Mechanism[J]. ROBOT, 2021, 43(2): 203-213. DOI: 10.13973/j.cnki.robot.200084

Unsupervised Monocular Depth Estimation and Visual Odometry Based on Generative Adversarial Network and Self-attention Mechanism


Abstract: A monocular visual odometry method based on a generative adversarial network (GAN) and a self-attention mechanism is proposed, named SAGANVO (SAGAN visual odometry). It applies the GAN learning framework to the depth estimation and visual odometry tasks: the GAN generates realistic target frames so that the depth map of a scene and the 6-DoF (degree of freedom) pose can be solved accurately. Meanwhile, a self-attention mechanism is incorporated into the network model to improve the deep network's ability to learn scene details and edge contours. Finally, high-quality results of the proposed model and method are demonstrated on the public KITTI dataset and compared with existing methods, showing that SAGANVO outperforms state-of-the-art methods in both depth estimation and pose estimation.
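The self-attention layer the abstract refers to (of the kind popularized by SAGAN) re-weights every spatial position of a feature map by its similarity to all other positions, which is what helps a depth network attend to thin structures and edge contours. The following is a minimal NumPy sketch of that computation only; the function name, the 1x1-convolution weight matrices, and the residual coefficient `gamma` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv, gamma=0.1):
    """SAGAN-style self-attention over a feature map (illustrative sketch).

    x: (C, H, W) feature map.
    wq, wk, wv: (C, C) weight matrices standing in for 1x1 convolutions
                that produce queries, keys, and values.
    gamma: learnable residual scale (a scalar here for simplicity).
    Returns a (C, H, W) map: gamma * attended_features + x.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)           # (C, N), N = H*W spatial positions
    q = wq @ flat                        # queries (C, N)
    k = wk @ flat                        # keys    (C, N)
    v = wv @ flat                        # values  (C, N)
    attn = softmax(q.T @ k, axis=-1)     # (N, N): each row sums to 1,
                                         # i.e. how position j attends to all i
    o = v @ attn.T                       # (C, N) attended features
    return (gamma * o + flat).reshape(C, H, W)  # residual connection
```

With `gamma = 0` the layer is an identity, which is how such layers are typically initialized so the attention signal is blended in gradually during training.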

     
