Scene Modeling and Autonomous Navigation for Robots Based on Kinect System
YANG Dongfang1,2, WANG Shicheng1, LIU Huaping2, LIU Zhiguo1, SUN Fuchun2
1. The Second Artillery Engineering University, Xi'an 710025, China;
2. State Key Laboratory of Intelligent Technology and Systems, Tsinghua University, Beijing 100084, China
YANG Dongfang, WANG Shicheng, LIU Huaping, LIU Zhiguo, SUN Fuchun. Scene Modeling and Autonomous Navigation for Robots Based on Kinect System. ROBOT, 2012, 34(5): 581-589. DOI: 10.3724/SP.J.1218.2012.00581.
The monocular RGB camera and the range-limited RGB-D camera of the Microsoft Kinect system are used, respectively, to solve the 6-DoF navigation problem for indoor robots. First, building on traditional partial-DoF pose estimation algorithms, an incremental parameterized model of the feature-point parameters is proposed, which resolves the scale ambiguity problem. Compared with the Euclidean and inverse-depth parameterized models, the proposed model sharply reduces the state dimension and keeps the observability of the system state consistent. Then, based on this incremental parameterized model, the 6-DoF motion is estimated from the image sequence observed by the RGB camera or the infrared camera. Finally, RGB and depth image sequences sampled from the Kinect system are processed with both the Euclidean parameterized model and the incremental parameterized model, and the corresponding results validate the effectiveness of the proposed autonomous navigation approach.
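The abstract does not spell out the estimation step, so the following is only a minimal sketch of the standard closed-form building block that RGB-D ego-motion pipelines of this kind commonly rely on: back-projecting matched pixels through the depth image and recovering the frame-to-frame rigid 6-DoF motion with the Horn/Kabsch SVD solution. The intrinsics values and the helper names (back_project, estimate_rigid_motion) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative Kinect-style intrinsics (assumed values, not taken from the paper).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def back_project(u, v, depth):
    """Back-project pixel (u, v) with metric depth into a 3-D point in the camera frame."""
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def estimate_rigid_motion(P, Q):
    """Closed-form 6-DoF motion (R, t) minimizing sum ||R @ P_i + t - Q_i||^2
    for matched 3-D point sets P, Q of shape (N, 3) (Horn/Kabsch via SVD)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Feeding feature matches back-projected from consecutive depth frames into estimate_rigid_motion yields the frame-to-frame motion directly; in the monocular RGB case the per-point depth, and hence the metric scale, is not observable from a single view, which is precisely the scale-ambiguity problem the incremental parameterized model is introduced to address.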