A Grasping Pose Detection Algorithm for Industrial Workpieces Based on Grasping Clusters and Collision Voxels
XU Jin1,2, LIU Ning2,3, LI Deping2,3, LIN Longxin1, WANG Gao1,2
1. College of Information Science and Technology, Jinan University, Guangzhou 510632, China; 2. Robotics Research Institute, Jinan University, Guangzhou 510632, China; 3. College of Intelligent Systems Science and Engineering, Jinan University, Zhuhai 519070, China
Abstract: To address the grasping of scattered, stacked industrial workpieces, a grasping pose detection algorithm based on grasping clusters and collision voxels is proposed. The proposed grasping cluster is a set of continuous grasping poses on the workpiece; it resolves the loss of grasping points and the drop in screening efficiency caused by the discrete, fixed grasping points used in traditional methods. First, the scene point cloud and the bin are voxelized. Then, voxels containing the bin or the point cloud are marked as collision voxels, and voxels adjacent to collision voxels are marked as risk voxels, establishing a voxelized collision model. Next, candidate grasping poses and the corresponding grasping paths are computed from the geometric properties of the grasping cluster. Finally, collision detection is completed quickly by checking the types of voxels that each grasping path passes through, and the best grasping pose is selected. A complete bin-picking system is built on the proposed algorithm, and simulation and real grasping experiments are performed on various common workpieces from actual industrial scenarios. The results show that the algorithm detects safe grasping poses quickly and accurately: the average grasping success rate reaches 92.2% and the average bin-emptying rate reaches 87.2%, a significant improvement over traditional methods, with no collisions occurring during grasping. This meets the requirements of practical industrial applications.
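The pipeline sketched in the abstract — voxelize the scene, mark collision voxels, dilate them into risk voxels, then validate a grasp path by the voxel types it crosses — can be illustrated with a minimal sketch. All function names, the 26-neighbourhood dilation, and the straight-line path sampling here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Voxel labels assumed for this sketch (the paper distinguishes
# collision voxels and risk voxels adjacent to them).
FREE, RISK, COLLISION = 0, 1, 2

def build_voxel_collision_model(points, origin, voxel_size, dims):
    """Mark every voxel containing a scene/bin point as COLLISION,
    then mark its 26-neighbourhood as RISK (one-voxel dilation)."""
    grid = np.full(dims, FREE, dtype=np.uint8)
    dims = np.array(dims)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    idx = idx[np.all((idx >= 0) & (idx < dims), axis=1)]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = COLLISION
    coll = np.argwhere(grid == COLLISION)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                nb = coll + (dx, dy, dz)
                nb = nb[np.all((nb >= 0) & (nb < dims), axis=1)]
                free = grid[nb[:, 0], nb[:, 1], nb[:, 2]] == FREE
                nb = nb[free]
                grid[nb[:, 0], nb[:, 1], nb[:, 2]] = RISK
    return grid

def path_is_safe(grid, origin, voxel_size, start, end, samples=100):
    """Sample a straight grasp path and reject it if any sample
    lands in a COLLISION or RISK voxel."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    for t in np.linspace(0.0, 1.0, samples):
        p = start + t * (end - start)
        i = np.floor((p - origin) / voxel_size).astype(int)
        if np.any(i < 0) or np.any(i >= np.array(grid.shape)):
            continue  # outside the modelled workspace: treated as free
        if grid[i[0], i[1], i[2]] != FREE:
            return False
    return True
```

The grid lookup makes each collision query O(1) per sampled point, which is what allows collision detection for many candidate grasp paths to run quickly compared with mesh-based checks.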