Task Generalization of Robots Based on Parameterized Learning of Multi-demonstration Action Primitives

Abstract: To address task generalization and action-trajectory generalization in robot learning from demonstration, this paper proposes a method that combines task-parameterized learning of multi-demonstration action trajectories with action-sequence reasoning. For the multi-demonstration trajectory samples of general action primitives, dynamic movement primitives (DMPs) are used to encode the trajectories and a task-parameterized model is built; Gaussian process regression (GPR) then learns the mapping from external parameters to model parameters. For a new task instance, the planning domain definition language (PDDL) is applied to infer the missing action sequence, and the task-parameterized model generalizes the target trajectory of each action from the new external parameters while correcting trajectory errors. Experiments on a UR5 robot show that, under different task instances and environmental changes, the method flexibly generates action sequences and adjusts generalization targets; the multi-demonstration task-parameterized model generalizes smooth target trajectories for given external parameters, outperforms generalization from a single demonstration trajectory, and improves the robot's ability to generalize tasks.
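
As a concrete illustration of the trajectory-generalization pipeline summarized above, the sketch below encodes several demonstrations with a one-dimensional discrete DMP, fits a Gaussian process regression from the task's external parameter to the DMP forcing-term weights, and rolls out a generalized trajectory for an unseen external parameter. The synthetic demonstrations, the choice of a single scalar goal as the external parameter, the gains, and the basis-function heuristics are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# --- Minimal 1-D discrete DMP (gains and basis count are illustrative) ---
ALPHA_Z, BETA_Z, ALPHA_X, N_BASIS = 25.0, 6.25, 3.0, 20

def dmp_encode(y, dt):
    """Fit forcing-term weights w so the DMP reproduces demonstration y."""
    T = len(y)
    tau = (T - 1) * dt
    yd = np.gradient(y, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y[0], y[-1]
    t = np.arange(T) * dt
    x = np.exp(-ALPHA_X * t / tau)                      # canonical-system phase
    f_target = tau**2 * ydd - ALPHA_Z * (BETA_Z * (g - y) - tau * yd)
    c = np.exp(-ALPHA_X * np.linspace(0, 1, N_BASIS))   # basis centres in phase space
    h = N_BASIS / c                                     # basis widths (heuristic)
    psi = np.exp(-h * (x[:, None] - c[None, :])**2)     # (T, N_BASIS) activations
    xi = x * (g - y0)                                   # forcing-term scaling
    # Locally weighted regression for each basis function
    w = (psi * (xi * f_target)[:, None]).sum(0) / ((psi * (xi**2)[:, None]).sum(0) + 1e-10)
    return w

def dmp_rollout(w, y0, g, tau, dt, T):
    """Integrate the DMP forward to generate a trajectory toward goal g."""
    c = np.exp(-ALPHA_X * np.linspace(0, 1, N_BASIS))
    h = N_BASIS / c
    y, v, x = y0, 0.0, 1.0
    traj = []
    for _ in range(T):
        psi = np.exp(-h * (x - c)**2)
        f = psi @ w / (psi.sum() + 1e-10) * x * (g - y0)
        vd = (ALPHA_Z * (BETA_Z * (g - y) - v) + f) / tau
        y, v = y + v / tau * dt, v + vd * dt
        x += -ALPHA_X * x / tau * dt
        traj.append(y)
    return np.array(traj)

# --- Task-parameterized learning over multiple demonstrations ---
dt, T = 0.01, 200
goals = np.array([0.4, 0.6, 0.8, 1.0])                  # external parameter of each demo
demos = [g * np.sin(np.linspace(0, np.pi / 2, T)) for g in goals]   # synthetic demos
W = np.vstack([dmp_encode(y, dt) for y in demos])       # DMP weights per demonstration

# GPR maps the external parameter (goal) to the DMP model parameters (weights)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gpr.fit(goals.reshape(-1, 1), W)

# Generalize to a new task instance with an unseen external parameter
g_new = 0.7
w_new = gpr.predict(np.array([[g_new]]))[0]
y_new = dmp_rollout(w_new, y0=0.0, g=g_new, tau=(T - 1) * dt, dt=dt, T=T)
print("generalized end point:", y_new[-1])              # should approach g_new
```

Per-dimension DMPs and richer external parameters (start pose, goal pose, via points) follow the same pattern: one weight vector per demonstration becomes one training target for the regression.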
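
The abstract's second ingredient, deriving the missing action sequence for a new task instance, would normally be delegated to an off-the-shelf PDDL planner. The toy sketch below substitutes a breadth-first STRIPS-style search in Python so the idea is runnable without an external planner; the predicates, objects, and actions are entirely hypothetical and far simpler than a real PDDL domain.

```python
from collections import deque

# Ground STRIPS-style actions: (name, preconditions, add effects, delete effects).
# Predicates and object names are hypothetical; a real system would ground a PDDL
# domain/problem pair and call a dedicated planner instead.
ACTIONS = [
    ("move(home,table)", {"robot_at(home)"}, {"robot_at(table)"}, {"robot_at(home)"}),
    ("move(table,box)",  {"robot_at(table)"}, {"robot_at(box)"}, {"robot_at(table)"}),
    ("pick(cup,table)",  {"robot_at(table)", "on(cup,table)", "hand_empty"},
                         {"holding(cup)"}, {"on(cup,table)", "hand_empty"}),
    ("place(cup,box)",   {"robot_at(box)", "holding(cup)"},
                         {"on(cup,box)", "hand_empty"}, {"holding(cup)"}),
]

def plan(init, goal):
    """Breadth-first search over ground actions; returns a shortest action sequence."""
    init, goal = frozenset(init), frozenset(goal)
    queue, seen = deque([(init, [])]), {init}
    while queue:
        state, seq = queue.popleft()
        if goal <= state:                       # all goal atoms satisfied
            return seq
        for name, pre, add, dele in ACTIONS:
            if pre <= state:                    # preconditions hold
                nxt = frozenset((state - set(dele)) | set(add))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, seq + [name]))
    return None

# New task instance: the cup must end up in the box.
sequence = plan(init={"robot_at(home)", "on(cup,table)", "hand_empty"},
                goal={"on(cup,box)"})
print(sequence)
# Each planned action name would then select a learned primitive, whose target
# trajectory is generalized by the task-parameterized DMP model sketched above.
```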

     
