Dynamic Obstacle Detection and Representation Approach for Unmanned Vehicles Based on Laser Sensor
XIN Yu1,2, LIANG Huawei2, MEI Tao2, HUANG Rulin1,2, DU Mingbo1,2, WANG Zhiling2, CHEN Jiajia2, ZHAO Pan2
1. Department of Automation, University of Science and Technology of China, Hefei 230027, China;
2. Institute of Advanced Manufacturing Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230027, China
To address the data-processing delay and inaccurate detection that affect laser-based dynamic obstacle detection in outdoor environments, a dynamic obstacle detection and representation approach is proposed that combines the 3-dimensional laser sensor Velodyne with the four-layer laser sensor Ibeo. By analyzing and processing the Velodyne data, the approach detects and tracks dynamic obstacles around the unmanned vehicle. For the sector region in front of the vehicle, where high accuracy is required, confidence distance theory is used to fuse the motion-state information derived from the Velodyne data with the motion-state information output by the Ibeo, which significantly improves the detection accuracy of obstacle motion states; the fused result is then used to correct the positions of dynamic obstacles for the processing delay. Finally, the cells occupied by dynamic obstacles and by static obstacles are distinguished and marked in an occupancy grid map. The approach accurately detects obstacle motion information in outdoor environments, eliminates the positional deviation caused by sensor data-processing delay, and accurately represents both dynamic and static obstacles in the occupancy grid map. It has been applied to our self-developed unmanned vehicle; extensive experiments and the vehicle's strong performance in the "Intelligent Vehicle Future Challenge of China" demonstrate its reliability and accuracy.
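To make the fusion step concrete, the following is a minimal sketch (not the paper's implementation) of confidence-distance-based fusion of a single motion-state component, such as an obstacle's velocity along one axis, as reported by the Velodyne-derived tracker and by the Ibeo. The sensor noise values, the distance threshold, and the fallback rule are illustrative assumptions.

```python
# Illustrative sketch: fusing one motion-state component (e.g., velocity
# along x) from two laser sensors with a confidence-distance test.
# Noise values and the threshold below are assumptions for the example.
from dataclasses import dataclass
from math import erf, sqrt


def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))


@dataclass
class Measurement:
    value: float   # measured quantity, e.g. velocity in m/s
    sigma: float   # assumed 1-sigma measurement noise


def confidence_distance(a: Measurement, b: Measurement) -> float:
    """Confidence distance of b as seen from a's error distribution:
    d = 2 * |Phi((b - a) / sigma_a) - 0.5|, from 0 (identical) to 1 (far apart)."""
    return 2.0 * abs(normal_cdf((b.value - a.value) / a.sigma) - 0.5)


def fuse(velodyne: Measurement, ibeo: Measurement, threshold: float = 0.6) -> float:
    """If the two estimates support each other (both confidence distances
    below the threshold), combine them with inverse-variance weighting;
    otherwise fall back to the lower-noise sensor."""
    d_vi = confidence_distance(velodyne, ibeo)
    d_iv = confidence_distance(ibeo, velodyne)
    if d_vi < threshold and d_iv < threshold:
        w_v = 1.0 / velodyne.sigma ** 2
        w_i = 1.0 / ibeo.sigma ** 2
        return (w_v * velodyne.value + w_i * ibeo.value) / (w_v + w_i)
    return velodyne.value if velodyne.sigma <= ibeo.sigma else ibeo.value


if __name__ == "__main__":
    v_velodyne = Measurement(value=4.8, sigma=0.5)  # m/s, assumed noise
    v_ibeo = Measurement(value=5.1, sigma=0.3)
    print(f"fused velocity: {fuse(v_velodyne, v_ibeo):.2f} m/s")
```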
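The delay revision and grid marking described in the abstract can likewise be sketched as below, again under assumed parameters: a dynamic obstacle's detected position is shifted forward by its fused velocity multiplied by the processing delay, and the corresponding cell, together with the cells of static obstacles, is labeled in the occupancy grid. Grid size, resolution, cell codes, and the delay value are illustrative.

```python
# Minimal sketch of delay compensation and dynamic/static marking in an
# occupancy grid map. All numeric parameters are assumptions.
import numpy as np

FREE, STATIC, DYNAMIC = 0, 1, 2


def compensate_delay(position_xy, velocity_xy, delay_s):
    """Shift the detected position forward by velocity * processing delay."""
    return (position_xy[0] + velocity_xy[0] * delay_s,
            position_xy[1] + velocity_xy[1] * delay_s)


def mark_obstacle(grid, origin_xy, resolution_m, position_xy, label):
    """Mark the grid cell containing position_xy as STATIC or DYNAMIC."""
    col = int((position_xy[0] - origin_xy[0]) / resolution_m)
    row = int((position_xy[1] - origin_xy[1]) / resolution_m)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        grid[row, col] = label


if __name__ == "__main__":
    grid = np.full((200, 200), FREE, dtype=np.uint8)  # 40 m x 40 m at 0.2 m
    origin, res = (-20.0, -20.0), 0.2
    # A moving obstacle detected 0.1 s ago with fused velocity (4.9, 0) m/s.
    corrected = compensate_delay((6.0, 2.0), (4.9, 0.0), delay_s=0.1)
    mark_obstacle(grid, origin, res, corrected, DYNAMIC)
    mark_obstacle(grid, origin, res, (10.0, -3.0), STATIC)  # a static obstacle
    print("corrected position:", corrected)
```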