Abstract:
Traditional SLAM (simultaneous localization and mapping) methods rely on the assumption of photometric consistency and therefore often fail in scenes with complex lighting variations. To address this, a reflective-scene mapping method based on neural radiance fields, termed RefN-SLAM, is proposed. Specifically, two neural radiance fields model the high-light and low-light components of the scene separately, and the final scene color is obtained as their weighted sum using fixed tone-mapping coefficients. Meanwhile, depth perception of the scene is further enhanced by combining surface-aware and perspective-aware sampling, and reconstruction accuracy and computational efficiency are improved through a coarse-to-fine optimization process. Finally, the scene representation and camera poses are jointly optimized over a global keyframe pixel database. Experimental results demonstrate that RefN-SLAM achieves satisfactory reconstruction performance in a chemistry laboratory setting and exhibits excellent tracking performance on both synthetic datasets and real-world robotic experiments.
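As a rough illustration of the color composition step described above, the sketch below blends the RGB outputs of two radiance fields with fixed tone-mapping weights. The function name, weight values, and array shapes are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def compose_color(c_high: np.ndarray, c_low: np.ndarray,
                  w_high: float = 0.6, w_low: float = 0.4) -> np.ndarray:
    """Blend per-ray RGB predictions from the high-light and low-light
    fields using fixed tone-mapping coefficients (values are illustrative)."""
    return np.clip(w_high * c_high + w_low * c_low, 0.0, 1.0)

# One ray's predicted colors from each hypothetical field (shape: [N, 3]).
c_high = np.array([[0.9, 0.8, 0.7]])  # bright, reflective component
c_low  = np.array([[0.2, 0.1, 0.1]])  # dim, diffuse component
print(compose_color(c_high, c_low))
```

In practice such weights would multiply the volume-rendered outputs of the two fields per pixel; fixing them keeps the composition stable during joint optimization of the scene representation and camera poses.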