Abstract:
A two-stage extrinsic calibration method and a hybrid-residual-based odometry approach for monocular camera and 3D LiDAR (light detection and ranging) fusion are presented. The proposed two-stage camera-LiDAR extrinsic calibration method combines a motion-based approach with a mutual-information-based approach. In the first stage, a motion-based extrinsic calibration provides a coarse result without requiring an initial guess. In the second stage, taking the first-stage result as the initial value, the LiDAR reflectivity and the grey values of the camera image are registered under the mutual information principle to refine the calibration. To further improve the calibration accuracy, an occlusion detection algorithm for sparse LiDAR point clouds is employed in the second stage. The proposed two-stage method therefore achieves high calibration accuracy without requiring an initial guess. Building on this calibration, a hybrid-residual-based LiDAR-camera fused odometry is proposed. The approach exploits indirect and direct image features to calculate reprojection residuals and photometric residuals, respectively, and unifies the two residual types in a non-linear optimization framework for odometry estimation. To address the missing depth information caused by sparse LiDAR data, a colour-based depth interpolation method is proposed, which effectively increases the number of usable feature points. Finally, experiments are conducted on both public and self-collected real-world datasets to evaluate the robustness and accuracy of the proposed extrinsic calibration and fused odometry algorithms. The results suggest that the proposed calibration method provides accurate extrinsic parameter estimates without initial values, and that the fused odometry achieves competitive estimation accuracy and robustness on both types of dataset.
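
Note: The mutual-information refinement summarized above can be illustrated with a minimal sketch. Assuming LiDAR points have already been projected into the image with a candidate extrinsic, the objective is the mutual information between the points' reflectivity values and the grey values at the projected pixels; the function name and the 64-bin histogram below are illustrative assumptions, not the paper's implementation.

import numpy as np

def mutual_information(reflectivity, grey_values, bins=64):
    # Joint histogram of LiDAR reflectivity vs. image grey values
    # sampled at the projected pixel locations (illustrative sketch).
    joint, _, _ = np.histogram2d(reflectivity, grey_values, bins=bins)
    pxy = joint / joint.sum()            # joint distribution p(r, g)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(r)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(g)
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

Maximizing this value over the six extrinsic parameters, starting from the first-stage motion-based result, yields the refined calibration.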
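
Similarly, the hybrid residual can be sketched as one stacked non-linear least-squares problem over the camera pose. The axis-angle-plus-translation parameterization, the fixed photometric weight, and the nearest-pixel intensity lookup are simplifying assumptions rather than the paper's exact formulation.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, rvec, t, pts3d):
    # Pinhole projection of 3D points under the pose (rvec, t).
    pc = Rotation.from_rotvec(rvec).apply(pts3d) + t
    uv = (K @ pc.T).T
    return uv[:, :2] / uv[:, 2:3]

def hybrid_residuals(x, K, pts_feat, uv_obs, pts_dir, intens_ref, img, w_photo=0.1):
    rvec, t = x[:3], x[3:]
    # Reprojection residuals from indirect (matched) features.
    r_rep = (project(K, rvec, t, pts_feat) - uv_obs).ravel()
    # Photometric residuals from direct samples: image intensity at the
    # projected pixel vs. the reference intensity (nearest-pixel lookup).
    uv = np.round(project(K, rvec, t, pts_dir)).astype(int)
    u = np.clip(uv[:, 0], 0, img.shape[1] - 1)
    v = np.clip(uv[:, 1], 0, img.shape[0] - 1)
    r_photo = img[v, u].astype(float) - intens_ref
    return np.concatenate([r_rep, w_photo * r_photo])

# x0 = np.zeros(6)
# sol = least_squares(hybrid_residuals, x0,
#                     args=(K, pts_feat, uv_obs, pts_dir, intens_ref, img))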
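
Finally, one plausible reading of the colour-based depth interpolation is a local weighted average of sparse projected LiDAR depths, where each neighbour's weight combines spatial proximity with colour similarity to the feature pixel; the radius and bandwidth values below are hypothetical.

import numpy as np

def interpolate_depth(uv_feat, uv_lidar, depths, img, radius=8.0, sigma_d=4.0, sigma_c=12.0):
    # uv_lidar: (N, 2) projected LiDAR pixels; depths: (N,) their depths;
    # img: RGB image supplying the colour-similarity weighting.
    d2 = np.sum((uv_lidar - uv_feat) ** 2, axis=1)
    near = d2 < radius ** 2
    if not np.any(near):
        return None  # no sparse depth support near this feature
    c_feat = img[int(uv_feat[1]), int(uv_feat[0])].astype(float)
    c_near = img[uv_lidar[near, 1].astype(int), uv_lidar[near, 0].astype(int)].astype(float)
    dc2 = np.sum((c_near - c_feat) ** 2, axis=-1)
    w = np.exp(-d2[near] / (2 * sigma_d ** 2)) * np.exp(-dc2 / (2 * sigma_c ** 2))
    return float(np.sum(w * depths[near]) / np.sum(w))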