Abstract: To grasp stacked objects with unknown and irregular shapes, an autonomous grasping method based on a two-stage progressive network (TSPN) is proposed. First, the global grasp distribution is obtained with an end-to-end strategy; the optimal grasp configuration is then determined with a sampling-evaluation strategy. Combining these two strategies simplifies the structure of TSPN and sharply reduces the number of samples to be evaluated, preserving generalization ability while improving grasping efficiency. To speed up learning of the grasping model, a self-supervised learning strategy guided by prior knowledge is introduced, and 220 irregular objects are used for grasp learning. Experiments are carried out in both simulated and real environments. The results show that the proposed model handles scenes with multiple objects, stacked objects, unknown irregular objects, and objects in random positions and poses, and that its grasping accuracy and detection speed improve significantly over the reference methods. The whole learning process lasted 10 days, and the results show that learning guided by prior knowledge significantly accelerates training.
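The two-stage idea in the abstract (an end-to-end network proposes a global grasp-quality distribution, and a sampling-evaluation stage then scores only a handful of candidates drawn from the high-quality regions) can be sketched as follows. This is a minimal illustrative sketch, not the authors' TSPN: the heuristic quality map stands in for the learned stage-1 network, and the quality map itself stands in for the stage-2 evaluator; all function names, thresholds, and array shapes are assumptions.

```python
import numpy as np

def stage1_grasp_distribution(depth_image):
    """Stage 1 (end-to-end): predict a per-pixel grasp-quality map.
    A learned network does this in the paper; here a normalized-depth
    heuristic is used purely as a placeholder."""
    q = depth_image - depth_image.min()
    return q / (q.max() + 1e-8)  # quality in [0, 1]

def stage2_sample_and_evaluate(quality_map, n_samples=16, seed=0):
    """Stage 2 (sampling-evaluation): draw candidates only from
    high-quality regions of the stage-1 map, evaluate each, and
    return the best one. Restricting sampling to these regions is
    what keeps the number of evaluated samples small."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(quality_map > 0.8)  # candidate regions only
    idx = rng.choice(len(xs), size=min(n_samples, len(xs)), replace=False)
    candidates = [(int(y), int(x)) for y, x in zip(ys[idx], xs[idx])]
    # Score each candidate (a learned evaluator in the paper; the
    # quality map itself serves as a stand-in score here).
    return max(candidates, key=lambda p: quality_map[p])

depth = np.random.default_rng(42).random((48, 48))
best = stage2_sample_and_evaluate(stage1_grasp_distribution(depth))
```

The design point this sketch illustrates is the efficiency claim in the abstract: instead of exhaustively evaluating grasp candidates over the whole image, only the few samples drawn from the stage-1 distribution reach the (comparatively expensive) evaluation stage.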