1. School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China; 2. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Abstract:A new docking method for mobile robots is proposed. The desired docking state (desired position and orientation) of the robot is defined by a pre-captured reference image. The scale invariant feature transform (SIFT) algorithm and a double-direction best bin first (BBF) feature matching algorithm are used to extract visual feedback information by matching the current view of the docking station against the reference image. The robot is aligned with the reference image using an epipole servoing strategy. A centroid tracking method is used to prevent the target from escaping the field of view. A random sample consensus (RANSAC) algorithm is used to solve the affine transform between the current and reference images, and a control rule for the final stage is presented to realize precise docking. No additional knowledge of the scene geometry or artificial marks is needed. Experimental results in a realistic indoor environment demonstrate the effectiveness of the proposed method.
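The matching and model-fitting steps summarized above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: a brute-force mutual nearest-neighbour search stands in for the double-direction BBF search, and the function names are hypothetical.

```python
import numpy as np

def mutual_matches(desc1, desc2):
    """Double-direction matching: keep only pairs of descriptors that are
    each other's nearest neighbour in both directions (brute-force stand-in
    for the approximate kd-tree BBF search)."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    fwd = d.argmin(axis=1)  # best match in image 2 for each feature in image 1
    bwd = d.argmin(axis=0)  # best match in image 1 for each feature in image 2
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

def estimate_affine_ransac(src, dst, n_iter=200, thresh=2.0, seed=0):
    """RANSAC estimate of a 2D affine transform mapping src points to dst.

    src, dst: (N, 2) arrays of matched feature coordinates.
    Returns (M, inlier_mask) where M is the 2x3 affine matrix.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])  # homogeneous rows [x, y, 1]
    best_mask, best_M = None, None
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)  # minimal sample
        M, *_ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        residual = np.linalg.norm(src_h @ M - dst, axis=1)
        mask = residual < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_M = mask, M
    # Refit on all inliers of the best hypothesis for the final estimate.
    M, *_ = np.linalg.lstsq(src_h[best_mask], dst[best_mask], rcond=None)
    return M.T, best_mask
```

The mutual check rejects ambiguous one-way matches before RANSAC, so fewer iterations are needed to find an outlier-free minimal sample.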
[1] Luo R C,Liao C T,Su K L,et al.Automatic docking and recharging system for autonomous security robot[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems.Piscataway,NJ,USA:IEEE,2005:2953-2958.
[2] Yu J X,Cai Z X,Duan Z H.Dead reckoning of mobile robot in complex terrain based on proprioceptive sensors[C]//International Conference on Machine Learning and Cybernetics.Piscataway,NJ,USA:IEEE,2008:1930-1935.
[3] Hashimoto M,Kawashima H,Nakagami T,et al.Sensor fault detection and identification in dead-reckoning system of mobile robot:Interacting multiple model approach[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems.Piscataway,NJ,USA:IEEE,2001:1321-1325.
[4] Sagués C,Guerrero J J.Visual correction for mobile robot homing[J].Robotics and Autonomous Systems,2005,50(1):41-49.
[5] Comport A,Pressigout M,Marchand E,et al.A visual servoing control law that is robust to image outliers[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems.Piscataway,NJ,USA:IEEE,2003:492-497.
[6] López-Nicolás G,Sagués C,Guerrero J J,et al.Nonholonomic epipolar visual servoing[C]//IEEE International Conference on Robotics and Automation.Piscataway,NJ,USA:IEEE,2006:2378-2384.
[7] Matsumoto Y,Inaba M,Inoue H.Visual navigation using view-sequenced route representation[C]//IEEE International Conference on Robotics and Automation.Piscataway,NJ,USA:IEEE,1996:83-88.
[8] Blanc G,Mezouar Y,Martinet P.Indoor navigation of a wheeled mobile robot along visual routes[C]//IEEE International Conference on Robotics and Automation.Piscataway,NJ,USA:IEEE,2005:3354-3359.
[9] Kim S,Oh S Y.Hybrid position and image based visual servoing for mobile robots[J].Journal of Intelligent and Fuzzy Systems,2007,18(1):73-82.
[10] Mariottini G L,Prattichizzo D,Oriolo G.Image-based visual servoing for nonholonomic mobile robots with central catadioptric camera[C]//IEEE International Conference on Robotics and Automation.Piscataway,NJ,USA:IEEE,2006:538-544.
[11] Lowe D G.Distinctive image features from scale-invariant keypoints[J].International Journal of Computer Vision,2004,60(2):91-110.
[12] Welke K,Azad P,Dillmann R.Fast and robust feature-based recognition of multiple objects[C]//IEEE-RAS International Conference on Humanoid Robots.Piscataway,NJ,USA:IEEE,2006:264-269.