Task Generalization of Robots Based on Parameterized Learning of Multi-demonstration Action Primitives
LIU Huan1,2, QIAN Kun1,2, GUI Boxing1,2, MA Xudong1,2
1. School of Automation, Southeast University, Nanjing 210096, China;
2. Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Nanjing 210096, China
Abstract: To address the problem of task and action trajectory generalization in robot learning by demonstration, a method is proposed that combines task-parameterized learning of multi-demonstration action trajectories with action sequence reasoning. For the multi-demonstration trajectory samples of general action primitives, dynamic movement primitives (DMPs) are used to encode the trajectories and build a task-parameterized model, and Gaussian process regression (GPR) is used to learn the mapping from external task parameters to model parameters. The planning domain definition language (PDDL) is used to derive the missing action sequence for a new task instance. The task-parameterized model then generalizes the target trajectories of the actions according to the new external parameters and corrects trajectory errors. Experiments on a UR5 robot show that the proposed method can generate action sequences and flexibly adjust generalization targets for different task instances and changing environments. Compared with a single demonstration trajectory, the multi-demonstration task-parameterized model generalizes smoother target trajectories for given external parameters, which improves the robot's task generalization ability.
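To make the pipeline summarized above concrete, the following is a minimal, illustrative sketch, not the authors' implementation: a one-dimensional DMP encodes each demonstrated trajectory as forcing-term weights, and scikit-learn's GaussianProcessRegressor learns the mapping from an external task parameter to those weights so that a trajectory can be generalized for an unseen parameter value. All function names, constants, and the synthetic demonstrations (learn_dmp_weights, rollout_dmp, the minimum-jerk demos, the choice of goal position as the external parameter) are assumptions made for the sketch.

```python
# Hypothetical sketch of the abstract's pipeline: DMP encoding of multiple
# demonstrations + GPR mapping from an external task parameter to DMP weights.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

ALPHA, BETA, ALPHA_X, N_BASIS = 25.0, 6.25, 3.0, 20  # common DMP gains (assumed)

def basis_activations(x):
    """Gaussian basis functions over the canonical phase variable x."""
    centers = np.exp(-ALPHA_X * np.linspace(0.0, 1.0, N_BASIS))
    widths = N_BASIS ** 1.5 / centers
    return np.exp(-widths * (x[:, None] - centers[None, :]) ** 2)

def learn_dmp_weights(y, dt):
    """Encode one demonstrated 1-D trajectory as DMP forcing-term weights."""
    tau = len(y) * dt
    yd, ydd = np.gradient(y, dt), np.gradient(np.gradient(y, dt), dt)
    y0, g = y[0], y[-1]
    t = np.arange(len(y)) * dt
    x = np.exp(-ALPHA_X * t / tau)                      # canonical system
    f_target = tau ** 2 * ydd - ALPHA * (BETA * (g - y) - tau * yd)
    psi = basis_activations(x)
    xi = x * (g - y0 + 1e-8)
    # locally weighted regression: one weight per basis function
    w = (psi * (xi * f_target)[:, None]).sum(0) / (psi * (xi ** 2)[:, None]).sum(0)
    return w

def rollout_dmp(w, y0, g, tau, dt, n_steps):
    """Integrate the DMP to reproduce or generalize a trajectory."""
    y, v, x, traj = y0, 0.0, 1.0, []
    for _ in range(n_steps):
        psi = basis_activations(np.array([x]))[0]
        f = psi @ w / (psi.sum() + 1e-8) * x * (g - y0)
        v += (ALPHA * (BETA * (g - y) - v) + f) / tau * dt
        y += v / tau * dt
        x += -ALPHA_X * x / tau * dt
        traj.append(y)
    return np.array(traj)

# Synthetic multi-demonstration data: minimum-jerk reaches whose goal position
# plays the role of the external task parameter (an assumption of this sketch).
dt, s = 0.01, np.linspace(0.0, 1.0, 200)
demos = [(p, p * (10 * s**3 - 15 * s**4 + 6 * s**5)) for p in (0.5, 1.0, 1.5, 2.0)]
weights = np.array([learn_dmp_weights(y, dt) for _, y in demos])
task_params = np.array([[p] for p, _ in demos])

# GPR learns the mapping: external task parameter -> DMP model parameters.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gpr.fit(task_params, weights)

# Generalize to a new, unseen task parameter.
p_new = 1.25
w_new = gpr.predict(np.array([[p_new]]))[0]
traj_new = rollout_dmp(w_new, y0=0.0, g=p_new, tau=2.0, dt=dt, n_steps=200)
print("generalized trajectory, first 5 samples:", traj_new[:5])
```

In this sketch the GPR interpolates the DMP weights between demonstrations, which is one simple way to realize the "mapping between external parameters and model parameters" mentioned in the abstract; the paper's actual model structure and parameterization may differ.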