Binocular Vision SLAM Based on Event Camera

Abstract: Most existing simultaneous localization and mapping (SLAM) methods based on event cameras rely on accumulating event streams over long periods and cannot effectively exploit the correlation between event streams, resulting in poor mapping and positioning accuracy. To address this problem, a binocular vision SLAM system based on an event camera is explored in this paper. By matching the nearest-timestamp events on the left and right time surfaces, the feature depth information is calculated; the depth information is then matched against the time surface with the Lucas-Kanade algorithm to compute small pose increments, achieving tracking. Next, based on a Student's t probability model, the IRLS (iteratively reweighted least squares) algorithm is used to accelerate the fusion of depth information and obtain a semi-dense map. Open-source datasets are used to simulate extreme environments for performance testing, and error analyses of the mapping and tracking results are carried out. The experimental results show that, compared with state-of-the-art methods, the proposed method achieves a better mapping effect and reduces the pose error on different datasets by 48.3%, demonstrating higher positioning accuracy. Meanwhile, it works robustly in low-light and highly dynamic scenes.
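The stereo step described above (matching nearest-timestamp events between the left and right time surfaces and triangulating depth) can be sketched roughly as follows. This is an illustrative sketch only, assuming rectified cameras and a standard exponential-decay time-surface formulation; the function names, the decay constant `tau`, and the row-wise epipolar search are assumptions, not the paper's exact matcher.

```python
import numpy as np

def time_surface(last_ts, t_now, tau=0.03):
    """Exponential-decay time surface built from the per-pixel
    timestamp of the most recent event (a common formulation;
    tau is an assumed decay constant)."""
    return np.exp(-(t_now - last_ts) / tau)

def stereo_depth(ts_left, ts_right, row, x_left, f, baseline, max_disp=30):
    """Match a left-camera event to the right camera along the same
    row by nearest timestamp, then triangulate depth. Rectified
    cameras are assumed, so the epipolar line is the image row."""
    cand = np.arange(1, max_disp + 1)
    xs = x_left - cand                 # candidate right-image columns
    valid = xs >= 0
    # Timestamp difference to each candidate; the closest-in-time
    # event is taken as the stereo correspondence.
    diffs = np.abs(ts_right[row, xs[valid]] - ts_left[row, x_left])
    d = cand[valid][np.argmin(diffs)]  # disparity of best match
    return f * baseline / d            # pinhole triangulation
```

For example, an event at column 40 in the left image whose nearest-timestamp counterpart sits at column 32 in the right image yields a disparity of 8 pixels, and the depth follows from `f * baseline / 8`.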
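The depth-fusion step (IRLS under a Student's t noise model) can likewise be sketched. The heavy-tailed Student's t distribution downweights outlier depth measurements instead of letting them dominate a least-squares mean. This is a minimal scalar-fusion sketch under assumed parameters (`nu`, `sigma`, iteration count); the paper's actual fusion over a semi-dense map is more involved.

```python
import numpy as np

def fuse_depths_irls(depths, nu=5.0, sigma=0.1, n_iters=10):
    """Fuse noisy depth measurements of one map point with IRLS
    under a Student's t noise model (illustrative sketch; the
    scalar setup and parameter values are assumptions)."""
    mu = np.median(depths)  # robust initial estimate
    for _ in range(n_iters):
        r = depths - mu
        # Student's t IRLS weight: (nu + 1) / (nu + (r / sigma)^2).
        # Large residuals get small weights, so outliers barely
        # influence the reweighted mean.
        w = (nu + 1.0) / (nu + (r / sigma) ** 2)
        mu = np.sum(w * depths) / np.sum(w)
    return mu
```

For example, fusing the measurements `[1.0, 1.02, 0.98, 5.0]` yields an estimate close to 1.0, since the outlier at 5.0 receives a near-zero weight; this robustness is what makes the Student's t model attractive for event-based depth, where mismatches produce gross outliers.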

     
