Recognition and Grabbing System for Workpieces Exceeding the Visual Field Based on Machine Vision
LIU Zhengqiong1, WAN Peng1, LING Lin2, CHEN Li2, LI Xuefei2, ZHOU Wenxia1
1. School of Computer and Information, Hefei University of Technology, Hefei 230009, China;
2. School of Mechanical Engineering, Hefei University of Technology, Hefei 230009, China
LIU Zhengqiong, WAN Peng, LING Lin, CHEN Li, LI Xuefei, ZHOU Wenxia. Recognition and Grabbing System for Workpieces Exceeding the Visual Field Based on Machine Vision. ROBOT, 2018, 40(3): 294-300, 308. DOI: 10.13973/j.cnki.robot.170365.
Abstract: Currently, during workpiece inspection a machine vision system cannot capture a complete image of a workpiece that is larger than the visual field of a single camera, which leads to problems such as difficult workpiece recognition and inaccurate positioning. To solve these problems, this paper proposes a machine-vision-based recognition and grabbing system (WRGS) for workpieces exceeding the visual field. Without capturing a complete image of the workpiece, the system recognizes an exceeding-field workpiece effectively by extracting feature points on the workpiece and performing shape matching. The system defines a standard location for each type of workpiece for pose calculation, and a coarse-to-fine grabbing network is designed to overcome the difficulty of positioning exceeding-field workpieces and to improve grabbing precision. In addition, the system adapts to a variety of workpieces through self-learning, which enhances its flexibility. Experimental results show that the total grabbing time is less than 6 s, and that the errors between the claw's landing position and the target are at most 2 mm in the horizontal direction, 0.2 mm in height, and 0.3° in angle.
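The abstract does not spell out the feature extractor or matcher used in WRGS, but the recognition step — identifying a workpiece from feature points on a partial view, without a complete image — can be illustrated with a minimal sketch. The sketch below assumes OpenCV's ORB keypoints, brute-force Hamming matching, and a RANSAC homography as stand-ins for the paper's actual shape-matching pipeline; the function name `recognize_partial_view`, the `min_matches` threshold, and both image arguments are hypothetical names introduced only for illustration.

```python
# Minimal sketch (not the paper's implementation): recognize a workpiece
# that exceeds the camera's visual field by matching feature points on a
# partial view against a stored template, instead of a full image.
import cv2
import numpy as np

def recognize_partial_view(partial_img, template_img, min_matches=15):
    """Return (matched, H): whether the partial view matches the template
    workpiece, and the estimated partial-to-template homography (or None)."""
    # ORB is an assumed stand-in for the paper's feature extractor.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(partial_img, None)
    kp2, des2 = orb.detectAndCompute(template_img, None)
    if des1 is None or des2 is None:
        return False, None
    # Brute-force Hamming matching with cross-checking for binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return False, None
    # Estimate the partial-to-template transform from the matched points;
    # RANSAC rejects spurious matches so that only the visible portion of
    # the workpiece needs to agree with the template shape.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H is not None and int(mask.sum()) >= min_matches, H
```

Under these assumptions, the returned homography would give only a coarse estimate of the workpiece pose; in the spirit of the coarse-to-fine grabbing network described above, it would then be refined from subsequent, closer images before the final grab.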