Closed Quadrilateral Feature Tracking Algorithm Based on Binocular Vision
FAN Junjie1,2, LIANG Huawei2, ZHU Hui2, YU Biao2
1. Department of Automation, University of Science and Technology of China, Hefei 230027, China;
2. Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230028, China
A visual tracking algorithm based on binocular matching is proposed to address the problems of high error rate, low robustness, and poor information fusion that affect current visual tracking methods under dynamic backgrounds. Exploiting the physical structure of the binocular camera, the algorithm matches feature points with a closed-quadrilateral method, thereby implementing binocular matching and 3D tracking while optimizing the search structure. The binocular images are first filtered with a Gauss-Laplace template, and the feature points extracted by the Harris corner detector are packed in order into feature descriptors. The sum of absolute differences (SAD) is then used as the matching criterion, a four-side search criterion is set up for neighbor searching, and the RANSAC (random sample consensus) algorithm is introduced for reliability screening. Finally, tracking accuracy is improved through closed-loop detection over the quadrilateral formed by the four matched points. To evaluate the proposed method, images of different resolutions were collected under different road conditions and test experiments were conducted. The experimental results show that the average tracking precision of the proposed method reaches 99.80%, and that its robustness and tracking accuracy are superior to those of the optical flow method.
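The core of the method described above is the closed-quadrilateral consistency check: a feature is chased from the previous left image to the previous right image, on to the current right image, then to the current left image, and the track is accepted only if the chain returns to the starting feature. A minimal plain-NumPy sketch of this idea follows, using SAD as the matching cost; the function and parameter names (`circle_match`, `best_match`, the `d_*` descriptor arrays) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences: the matching criterion named in the abstract.
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def best_match(i, desc_src, desc_dst):
    # Index of the descriptor in desc_dst with minimal SAD cost to desc_src[i].
    costs = [sad(desc_src[i], d) for d in desc_dst]
    return int(np.argmin(costs))

def circle_match(i, d_left_prev, d_right_prev, d_right_cur, d_left_cur):
    # Chase feature i around the quadrilateral:
    # previous left -> previous right -> current right -> current left -> back.
    # Accept the track only if the loop closes on the starting feature.
    j = best_match(i, d_left_prev, d_right_prev)
    k = best_match(j, d_right_prev, d_right_cur)
    m = best_match(k, d_right_cur, d_left_cur)
    n = best_match(m, d_left_cur, d_left_prev)
    return n == i
```

In a full pipeline, each `best_match` would search only candidates inside the four-side search window rather than all descriptors, and the surviving quadrilaterals would then be screened with RANSAC as the abstract describes.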