Citation: WANG Fei, QI Huan, ZHOU Xingqun, WANG Jianhui. Demonstration Programming and Optimization Method of Cooperative Robot Based on Multi-Source Information Fusion[J]. ROBOT, 2018, 40(4): 551-559. DOI: 10.13973/j.cnki.robot.180164


Demonstration Programming and Optimization Method of Cooperative Robot Based on Multi-Source Information Fusion


     

Abstract: To solve the problems that the existing robot assembly learning process is complex and demands a high level of programming skill, an implicit interaction method based on the fusion of forearm sEMG (surface electromyography) signals and inertial multi-source information is proposed to realize robot programming by demonstration. On the basis of the assembly experience obtained from the demonstrator through demonstration learning, a multiple deep deterministic policy gradient (M-DDPG) algorithm is proposed to modify the assembly parameters, in order to improve adaptability to changes in the assembly objects and the environment. Building on demonstration programming, reinforcement learning is then carried out to ensure that the robot performs its tasks stably. In the demonstration programming experiment, an improved PCNN (parallel convolutional neural network), called 1D-PCNN, is proposed, which automatically extracts inertial and sEMG features through 1-dimensional convolution and pooling and enhances the generalization performance and accuracy of gesture recognition. In the demonstration reproduction experiment, a Gaussian mixture model (GMM) is used to statistically encode the demonstration data, and Gaussian mixture regression (GMR) is used to reproduce the robot's trajectory and eliminate noise points. Finally, environmental changes in the X-axis and Y-axis directions are acquired with a Primesense Carmine camera by a fusion tracking algorithm combining the frame-difference method and the multi-feature-map kernelized correlation filter (MKCF) algorithm, and two identical network structures are run in parallel to carry out deep reinforcement learning of the continuous process. When the relative position of the peg and hole changes, the robotic arm automatically adjusts the end-effector position according to the generalized policy model obtained by reinforcement learning, thus realizing demonstration learning of peg-in-hole assembly.
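The two sketches below illustrate techniques named in the abstract; they are not the authors' code. The first is a minimal two-branch 1-D convolutional network in the spirit of the 1D-PCNN: one stream for sEMG windows, one for inertial windows, fused before gesture classification. All layer sizes, channel counts, and the gesture-class count are assumptions made for illustration.

```python
# Minimal sketch of a parallel 1-D CNN: two sensor branches fused for
# gesture classification. Layer sizes and channel counts are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

class Branch1D(nn.Module):
    """1-D convolution + pooling feature extractor for one sensor modality."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # -> (batch, 64, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)            # -> (batch, 64)

class ParallelCNN(nn.Module):
    """Parallel sEMG / inertial branches, concatenated for classification."""
    def __init__(self, emg_channels=8, imu_channels=6, n_gestures=10):
        super().__init__()
        self.emg_branch = Branch1D(emg_channels)
        self.imu_branch = Branch1D(imu_channels)
        self.classifier = nn.Linear(64 * 2, n_gestures)

    def forward(self, emg, imu):                  # inputs: (batch, channels, samples)
        feat = torch.cat([self.emg_branch(emg), self.imu_branch(imu)], dim=1)
        return self.classifier(feat)

# Usage with a batch of 200-sample windows (shapes are assumptions):
# model = ParallelCNN()
# logits = model(torch.randn(4, 8, 200), torch.randn(4, 6, 200))
```

The second sketch shows GMM encoding of time-indexed demonstrations followed by Gaussian mixture regression, the trajectory-reproduction scheme described in the abstract. The data layout (3-D end-effector positions resampled over a normalized time axis) and the component count are assumptions.

```python
# Sketch of GMM encoding + GMR reproduction of demonstrated trajectories.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gmm(demos, n_components=5, seed=0):
    """demos: list of (T, 3) end-effector paths, each resampled over [0, 1]."""
    data = []
    for path in demos:
        t = np.linspace(0.0, 1.0, len(path))[:, None]
        data.append(np.hstack([t, path]))          # rows are [t, x, y, z]
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(np.vstack(data))
    return gmm

def gmr(gmm, t_query):
    """Gaussian mixture regression: E[position | time] at each query time."""
    mu_t = gmm.means_[:, 0]                        # (K,)
    mu_x = gmm.means_[:, 1:]                       # (K, 3)
    var_tt = gmm.covariances_[:, 0, 0]             # (K,)
    cov_xt = gmm.covariances_[:, 1:, 0]            # (K, 3)
    out = np.zeros((len(t_query), mu_x.shape[1]))
    for i, t in enumerate(t_query):
        h = gmm.weights_ * norm.pdf(t, loc=mu_t, scale=np.sqrt(var_tt))
        h /= h.sum()                               # component responsibilities
        cond = mu_x + cov_xt * ((t - mu_t) / var_tt)[:, None]
        out[i] = h @ cond                          # weighted conditional means
    return out

# Usage: a smooth, noise-suppressed reproduction from several demonstrations.
# gmm = fit_gmm(demos)                             # demos recorded beforehand
# trajectory = gmr(gmm, np.linspace(0.0, 1.0, 200))
```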

     
