Abstract: To meet the demands of fully autonomous operation of unmanned rotorcraft, an autonomous safe landing system that allows unmanned rotorcraft to land on rugged terrain is developed. The proposed system automatically analyzes the terrain of the landing area through onboard real-time computation, searches for feasible landing points, and then performs the landing autonomously. Equipped with a low-cost stereo RGB-D camera for depth sensing and streaming, the system reconstructs the 3D terrain of the landing area in real time and obtains a low-noise depth image of the terrain using a truncated signed distance function (TSDF) based method. A real-time fine-search policy is also specifically designed to find a landing location that matches the shape of the landing gear. Finally, the safe landing of the unmanned rotorcraft is completed by a cascaded PID (proportional-integral-derivative) controller. Built on the DJI Matrice 100 quadcopter, autonomous safe landing is achieved both in a customized simulator for algorithm evaluation and on realistic rugged terrain. With further optimization, this work can provide safe and effective solutions for unmanned rotorcraft in emergency landing, delivery, and post-disaster rescue.
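To make the TSDF-based denoising step concrete, the following is a minimal fusion sketch, not the paper's implementation: each incoming depth frame updates a voxel grid with a truncated signed distance and a running weighted average, so per-frame sensor noise averages out across frames. The function name, the array layout, and the assumption that the volume origin coincides with the world origin are all hypothetical.

```python
# Minimal TSDF fusion sketch (hypothetical, not the paper's implementation).
import numpy as np

def integrate_frame(tsdf, weight, depth, K, T_cw, voxel_size, trunc):
    """Fuse one metric depth image into the TSDF volume (in place).

    tsdf, weight : (X, Y, Z) float arrays holding the volume state
    depth        : (H, W) depth image in meters
    K            : 3x3 pinhole camera intrinsics
    T_cw         : 4x4 world-to-camera transform
    voxel_size   : voxel edge length in meters
    trunc        : truncation distance in meters
    """
    H, W = depth.shape
    # World coordinates of every voxel center (volume origin assumed at 0).
    ii, jj, kk = np.meshgrid(*[np.arange(n) for n in tsdf.shape], indexing="ij")
    pts_w = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centers into the camera frame.
    pts_c = (T_cw[:3, :3] @ pts_w.T + T_cw[:3, 3:4]).T
    z = pts_c[:, 2]
    front = z > 1e-6                      # only voxels in front of the camera
    u = np.full(z.shape, -1, dtype=int)
    v = np.full(z.shape, -1, dtype=int)
    uv = (K @ pts_c[front].T).T           # pinhole projection
    u[front] = np.round(uv[:, 0] / z[front]).astype(int)
    v[front] = np.round(uv[:, 1] / z[front]).astype(int)
    valid = front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0                        # drop pixels without a depth reading
    # Signed distance along the viewing ray, truncated to [-trunc, trunc].
    sdf = np.clip(d - z, -trunc, trunc)
    upd = (valid & (sdf > -trunc)).reshape(tsdf.shape)
    sdf3 = sdf.reshape(tsdf.shape)
    # Running weighted average: repeated observations suppress sensor noise.
    w_new = weight[upd] + 1.0
    tsdf[upd] = (tsdf[upd] * weight[upd] + sdf3[upd]) / w_new
    weight[upd] = w_new
```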
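The fine-search policy can be illustrated, under stated assumptions, as a sliding-window scan over the elevation map extracted from the fused terrain: a window matching the landing-gear footprint is scored for flatness, and the flattest feasible patch is chosen. The square-footprint simplification, the thresholds, and the use of NaN for unobserved cells are illustrative choices, not the paper's actual policy.

```python
# Hypothetical landing-site search sketch over a fused elevation map.
import numpy as np

def find_landing_cell(elev, gear_px, max_std=0.03, max_step=0.05):
    """Return (row, col) of the best footprint-sized patch, or None.

    elev     : (H, W) elevation map in meters (NaN = unobserved terrain)
    gear_px  : side length of the landing-gear footprint, in map cells
    max_std  : flatness threshold on the height standard deviation (m)
    max_step : largest tolerated height discontinuity inside the patch (m)
    """
    H, W = elev.shape
    best, best_score = None, np.inf
    for r in range(H - gear_px + 1):
        for c in range(W - gear_px + 1):
            patch = elev[r:r + gear_px, c:c + gear_px]
            if np.isnan(patch).any():
                continue                      # unobserved terrain: skip
            if patch.max() - patch.min() > max_step:
                continue                      # step too large for the gear
            std = patch.std()
            if std > max_std:
                continue                      # patch too rough
            if std < best_score:              # flattest feasible patch wins
                best, best_score = (r, c), std
    return best
```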
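A cascaded PID controller nests two loops: an outer position loop produces the velocity setpoint that an inner velocity loop tracks. The sketch below shows this structure for the vertical axis only; the gains and the single-axis simplification are placeholders rather than the tuned controller flown on the Matrice 100.

```python
# Hypothetical cascaded PID sketch for the descent (vertical axis only).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i = 0.0
        self.prev_e = None

    def step(self, e, dt):
        self.i += e * dt                                   # integral term
        d = 0.0 if self.prev_e is None else (e - self.prev_e) / dt
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

# Placeholder gains: outer loop maps position error to a velocity setpoint,
# inner loop maps velocity error to a commanded acceleration / thrust input.
pos_pid = PID(kp=1.2, ki=0.0, kd=0.1)
vel_pid = PID(kp=2.0, ki=0.4, kd=0.05)

def control(z_target, z, vz, dt):
    vz_sp = pos_pid.step(z_target - z, dt)   # outer (position) loop
    return vel_pid.step(vz_sp - vz, dt)      # inner (velocity) loop
```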