Abstract: When disposing of hazardous goods, a mobile robot may be unable to feed back the joint states of its manipulator because the measuring devices are damaged. To avoid the delay that recalling the robot would cause, a human-robot-robot cooperation (HRRC) based uncalibrated visual servoing control system is presented for a mobile robotic manipulator without joint-state feedback. First, a virtual exoskeleton (virtual model) reflecting the joint states of the manipulator is set up by manually selecting the joint regions, using human-computer interaction (HCI) input devices such as a mouse, on the monitoring image captured by the camera of a second mobile robot. The virtual exoskeleton is then combined with a multi-joint tracking algorithm to steer the joints of the manipulator and maintain the posture of the end-effector. To guide the virtual exoskeleton with artificial guidance points, the relationship between the terminal of the virtual exoskeleton and the joint angles is mapped by a general regression neural network (GRNN). In a peg-in-hole experiment, the end-effector completes the task under artificial guidance, and its posture is maintained within an error of ±1° by the proposed method; with the conventional single-joint control method, the end-effector can neither complete the task nor maintain its posture. The results of the contrast experiment verify that the proposed control system enables operators to use the manipulator intuitively to dispose of targets without any joint-state feedback, and to keep the end-effector in the desired posture throughout the disposal.