Abstract: To realize universal, fast, and accurate grasping of 6-DOF (degree of freedom) parts by an industrial robot, a 3D grasping method guided by monocular vision is proposed. Firstly, a similarity evaluation function between an image and a matching model is established with the Chamfer distance matching algorithm, in which the image is partitioned according to edge direction angles. A genetic algorithm, locally refined by a hill-climbing algorithm, is applied to search for the best match. Then, an offline 3D model library is built from CAD (computer aided design) models, and the matching algorithm is extended to spatial 6-DOF pose measurement of parts with complex structures. Finally, the grasping information is obtained through matrix transformations among all coordinate frames together with system calibration, thereby realizing 3D grasping of the parts. Experimental results show that the optimized pose measurement algorithm improves both the speed and the accuracy of the matching process. With the proposed measurement algorithm, a position error within 2 mm and a rotation error within 2° are achieved in robotic 3D grasping experiments, so the measurement algorithm is applicable to part grasping by industrial intelligent robots.
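The core of the method is Chamfer distance matching: the edge image is converted to a distance transform, and a candidate template pose is scored by the mean distance-transform value sampled at the template's edge points. The sketch below is a minimal, brute-force illustration of that idea only; it omits the paper's direction-angle partitioning and the genetic/hill-climbing search, and all names (`distance_transform`, `chamfer_score`) are illustrative, not from the paper.

```python
import numpy as np

def distance_transform(edge_map):
    """Brute-force Euclidean distance transform: for every pixel,
    the distance to the nearest edge pixel (O(N*M), for clarity only)."""
    edge_pts = np.argwhere(edge_map)                     # (M, 2) edge coords
    h, w = edge_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1)    # (h*w, 2) all pixels
    d = np.linalg.norm(grid[:, None, :] - edge_pts[None, :, :], axis=2)
    return d.min(axis=1).reshape(h, w)

def chamfer_score(dist_map, template_pts, offset):
    """Mean distance-transform value at the shifted template edge points;
    lower means a better match (0 = perfect overlap)."""
    pts = template_pts + np.asarray(offset)
    return dist_map[pts[:, 0], pts[:, 1]].mean()

# Toy example: an L-shaped template hidden at offset (3, 4) in a 10x10 image.
template = np.array([[0, 0], [0, 1], [0, 2], [1, 0], [2, 0]])
image = np.zeros((10, 10), dtype=bool)
image[tuple((template + [3, 4]).T)] = True

dist_map = distance_transform(image)
# Exhaustive translation search (the paper uses a GA with hill climbing instead).
best = min(((dy, dx) for dy in range(8) for dx in range(8)),
           key=lambda o: chamfer_score(dist_map, template, o))
print(best)  # the true offset (3, 4) scores 0 and wins
```

A production version would use a linear-time distance transform (e.g. the contour-tracking variant the paper cites) and restrict comparisons to edge points with similar gradient direction, which is what the direction-angle partitioning achieves.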