Estimation of the Grasping Pose of Unknown Objects Based on Multiple Geometric Constraints
SU Jie1, ZHANG Yunzhou1,2, FANG Lijin1, LI Qi1, WANG Shuai1
1. Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110169, China; 2. College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
Abstract: To address the problem of robots grasping unknown objects quickly and stably in unstructured environments, a method based on multiple geometric constraints is proposed to estimate the grasping pose of unknown objects. First, the geometric point cloud of the scene is acquired by a depth camera. The target object is obtained after point cloud preprocessing, and grasping pose samples are generated under a simplified geometric constraint of the gripper. A simplified force-closure constraint is then applied for a quick, coarse screening of the samples. Finally, after the force-balance constraint on the geometric profile of each candidate pose is analyzed, a stable pose is transmitted to the robot to perform the grasp. Experiments grasping objects of different postures and shapes are conducted with a 6-degree-of-freedom (6-DoF) robotic manipulator equipped with a depth camera. The results show that the proposed method effectively handles scenes containing multiple objects whose three-dimensional models are unknown, and that it performs well in both single-target and multi-target scenarios.
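The coarse screening step described in the abstract can be illustrated with the classic two-finger antipodal force-closure test (the line joining the two contact points must lie inside both friction cones). This is a minimal sketch of that standard condition, not the authors' exact formulation; the friction coefficient `mu` and the gripper opening limit `max_width` are illustrative assumptions.

```python
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu=0.5, max_width=0.08):
    """Coarse two-finger force-closure screen (standard antipodal test,
    not the paper's exact method).

    p1, p2 : contact points on the object surface (3-vectors)
    n1, n2 : inward-pointing unit surface normals at the contacts
    mu     : assumed Coulomb friction coefficient
    max_width : assumed maximum gripper opening in meters
    """
    d = p2 - p1
    width = np.linalg.norm(d)
    if width < 1e-9 or width > max_width:   # gripper geometric constraint
        return False
    d = d / width
    half_angle = np.arctan(mu)              # friction cone half-angle
    # Angle between the contact line and each inward normal must stay
    # within the friction cone for the grasp to resist slipping.
    a1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0))
    return bool(a1 <= half_angle and a2 <= half_angle)
```

For example, two opposing contacts on the parallel faces of a box pass the test, while a contact whose normal is perpendicular to the grasp axis fails it, which is what allows a quick rejection of most sampled poses before the finer force-balance analysis.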