CHEN Zonghai, HONG Yang, WANG Jikai, GE Zhenhua. Monocular Visual Odometry Based on Recurrent Convolutional Neural Networks[J]. ROBOT, 2019, 41(2): 147-155. DOI: 10.13973/j.cnki.robot.180314


Monocular Visual Odometry Based on Recurrent Convolutional Neural Networks


    Abstract: A monocular visual odometry method based on a convolutional long short-term memory (LSTM) network and a convolutional neural network (CNN) is proposed, named LSTMVO (LSTM visual odometry). LSTMVO adopts an unsupervised end-to-end deep learning framework to simultaneously estimate the 6-DoF (degree of freedom) pose of a monocular camera and the scene depth. The overall framework consists of a pose estimation network and a depth estimation network. The pose estimation network is a deep recurrent convolutional neural network (RCNN) that performs monocular pose estimation in an end-to-end manner, composed of CNN-based feature extraction and temporal modeling based on a recurrent neural network (RNN); the depth estimation network generates dense depth maps based mainly on an encoder-decoder architecture. In addition, a new loss function is proposed for network training, consisting of a temporal loss over the image sequence, a depth smoothness loss, and a forward-backward consistency loss. Experimental results on the KITTI dataset show that, trained only on raw monocular RGB images, LSTMVO outperforms existing mainstream monocular visual odometry methods in both pose estimation accuracy and depth estimation accuracy, verifying the effectiveness of the proposed deep learning framework.
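The composite training loss described in the abstract (temporal loss, depth smoothness loss, and forward-backward consistency loss) can be sketched as below. This is an illustrative sketch only: the simple L1 forms, the `forward_backward_consistency` formulation, and the weights `w_smooth` and `w_consist` are assumptions, not the exact terms used in the paper.

```python
import numpy as np

def temporal_loss(frame_t, frame_t_warped):
    """L1 photometric difference between a frame and its reconstruction
    warped from a neighboring frame using the predicted depth and pose."""
    return np.abs(frame_t - frame_t_warped).mean()

def smoothness_loss(depth):
    """Mean absolute spatial gradient of the depth map, penalizing
    non-smooth depth predictions."""
    dx = np.abs(np.diff(depth, axis=1))
    dy = np.abs(np.diff(depth, axis=0))
    return dx.mean() + dy.mean()

def forward_backward_consistency(est_fw, est_bw):
    """Penalize disagreement between forward and backward estimates;
    ideally the backward estimate is the negation of the forward one."""
    return np.abs(est_fw + est_bw).mean()

def total_loss(frame_t, frame_t_warped, depth, est_fw, est_bw,
               w_smooth=0.1, w_consist=0.05):
    # Weighted sum of the three terms; the weights are illustrative.
    return (temporal_loss(frame_t, frame_t_warped)
            + w_smooth * smoothness_loss(depth)
            + w_consist * forward_backward_consistency(est_fw, est_bw))
```

With a perfect reconstruction, a constant depth map, and exactly opposed forward/backward estimates, each term (and hence the total) is zero, which is the behavior a loss of this shape should have at its optimum.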
