To achieve precise positioning and control of a rotor flying robot in indoor environments, an onboard visual positioning (OVP) method based on monocular vision is designed. The method recognizes landmark images through moment feature extraction and image classification, and computes the camera coordinates with the EPnP algorithm. To verify the effectiveness of the proposed OVP method, an external visual positioning system (EVPS) based on Kinect is built, and a visual tracking algorithm for the rotorcraft is designed. The experimental results show that the positioning deviation of the rotorcraft is within 2 mm, indicating that the OVP method is suitable for rotorcraft positioning.
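As an illustration of the two onboard steps, landmark recognition by moment features followed by EPnP pose recovery, the sketch below uses OpenCV's Hu moment invariants and its SOLVEPNP_EPNP solver as stand-ins for the implementation described here; the landmark geometry, camera intrinsics, and all names are hypothetical placeholders, not values from this work.

```python
import cv2
import numpy as np

# Hypothetical landmark geometry: 3-D coordinates (in mm) of four
# corner points of a planar landmark in the world frame. The actual
# landmark layout is not specified here; these are placeholders.
OBJECT_POINTS = np.array([[0,   0,   0],
                          [100, 0,   0],
                          [100, 100, 0],
                          [0,   100, 0]], dtype=np.float64)

def hu_features(gray_patch):
    """Extract the seven Hu moment invariants from a grayscale
    landmark patch; these serve as rotation- and scale-invariant
    features for a landmark classifier."""
    m = cv2.moments(gray_patch)
    return cv2.HuMoments(m).flatten()

def camera_position(image_points, K, dist_coeffs):
    """Recover the camera pose from 2-D/3-D correspondences using
    OpenCV's EPnP solver, then return the camera center expressed
    in the world (landmark) frame."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, image_points,
                                  K, dist_coeffs,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return -R.T @ tvec           # camera center: C = -R^T * t
```

In a full pipeline, the Hu features would feed a trained classifier to identify which landmark is in view, and the matched landmark's known 3-D corner coordinates would then be passed to the EPnP solver together with their detected image positions.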