Heterogeneous SLAM Fusion Method for Complex Illumination Scenes

  • Abstract: In complex illumination environments such as low-light and weakly textured scenes, simultaneous localization and mapping (SLAM) suffers from loop-closure detection failures and low robot trajectory accuracy. To address these problems, a heterogeneous SLAM fusion method based on fuzzy neural networks (FNNs) is proposed, which combines the high-precision mapping and accurate localization of traditional visual SLAM with the strong scene-recognition ability of bionic SLAM under complex illumination. The method consists of a loop-closure decision method based on a standard FNN, which raises the success rate of loop-closure detection in complex illumination scenes, and a trajectory optimization method based on a Takagi-Sugeno (T-S) FNN, which improves the accuracy of robot trajectory estimation, thereby achieving more accurate localization and more reliable environment modeling. Experimental results show that, compared with ORB-SLAM2 and RatSLAM, the proposed heterogeneous SLAM fusion method achieves a higher loop-closure detection recall and a lower absolute trajectory error (ATE) on both a self-collected dataset and public datasets, and exhibits strong robustness in complex scenes, which is of practical value for improving the precision and the stable navigation and localization of autonomous robots operating under complex illumination.
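
The abstract does not give implementation details of the T-S FNN trajectory fusion, so the following is only a minimal, hypothetical Python sketch of the general idea: a first-order Takagi-Sugeno inference step that weights the pose estimate of a feature-based visual SLAM pipeline against that of a bionic (RatSLAM-style) pipeline according to image brightness and feature count. The membership functions, rule set, and the ts_fusion_weight/fuse_positions interfaces are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of T-S fuzzy weighting between two SLAM pose estimates.
# All parameters and interfaces are illustrative assumptions, not the paper's method.
import numpy as np

def gauss(x, c, sigma):
    """Gaussian membership degree of x for a fuzzy set centered at c."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def ts_fusion_weight(brightness, feature_count):
    """First-order T-S inference: returns the weight given to the visual-SLAM pose.

    Antecedents (illustrative): image brightness in [0, 1] and ORB feature count.
    Each rule's consequent is a linear function of the inputs; the output is the
    firing-strength-weighted average of the rule consequents.
    """
    rules = [
        # (brightness set, feature set, consequent coefficients a0, a1, a2)
        ((0.8, 0.2), (800.0, 300.0), (0.6, 0.3, 0.1)),  # good light, rich texture -> trust visual SLAM
        ((0.2, 0.2), (100.0, 300.0), (0.1, 0.2, 0.1)),  # low light, weak texture  -> trust bionic SLAM
    ]
    num, den = 0.0, 0.0
    for (bc, bs), (fc, fs), (a0, a1, a2) in rules:
        w = gauss(brightness, bc, bs) * gauss(feature_count, fc, fs)  # rule firing strength
        y = a0 + a1 * brightness + a2 * feature_count / 1000.0        # linear rule consequent
        num += w * y
        den += w
    return float(np.clip(num / den, 0.0, 1.0))

def fuse_positions(p_visual, p_bionic, weight):
    """Convex blend of the two translation estimates (orientation fusion omitted)."""
    return weight * np.asarray(p_visual) + (1.0 - weight) * np.asarray(p_bionic)

if __name__ == "__main__":
    w = ts_fusion_weight(brightness=0.15, feature_count=120)  # dim, weakly textured frame
    print("visual-SLAM weight:", round(w, 3))
    print("fused position:", fuse_positions([1.02, 0.30, 0.0], [0.95, 0.28, 0.0], w))
```

In this toy setting a dark, weakly textured frame yields a small weight, so the fused position leans toward the bionic pipeline, mirroring the division of labor the abstract describes.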

     
