Visual Recognition of Human Pose for the Transfer-care Assistant Robot

Abstract: A two-level convolutional neural network algorithm based on RGB-D (RGB-depth) information is proposed to meet the requirements of high accuracy and close-range adaptability in human pose detection for a transfer-care assistant robot system. The first-level network estimates the pixel coordinates of the human joints in the color image; these coordinates are then mapped to the depth image to compute joint heatmaps. A convolutional neural network structure is proposed as the second-level network, which takes the depth image and the joint heatmaps as input and estimates the 3D positions of the human joints. Based on the estimated joint positions, the subaxillary (underarm) points are further computed with an image segmentation method. Experimental results show that a single computation with the proposed method takes 210 ms, and that the human pose recognition accuracy reaches 91.5% in the application environment of the transfer-care assistant robot and 90.3% in the close-range environment. The proposed method can therefore estimate the global coordinates of the human pose accurately and in real time.
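
The abstract describes an intermediate step in which joint pixel coordinates detected in the color image are mapped onto the depth image and rendered as joint heatmaps for the second-level network. The paper's own calibration and network details are not given here, so the following Python sketch only illustrates that step under stated assumptions: a planar homography H stands in for the full RGB-to-depth registration, and map_color_to_depth, joint_heatmaps, the Gaussian sigma, and the 424x512 depth resolution are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the color-to-depth joint mapping and
# Gaussian heatmap rendering described in the abstract. The homography H and the
# Gaussian sigma are illustrative assumptions.
import numpy as np


def map_color_to_depth(joints_color: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map (N, 2) pixel coordinates from the color image to the depth image with an
    assumed planar homography H (3x3); a real RGB-D setup would use the camera's
    intrinsic and extrinsic calibration instead."""
    ones = np.ones((joints_color.shape[0], 1))
    pts = np.hstack([joints_color, ones]) @ H.T        # homogeneous transform
    return pts[:, :2] / pts[:, 2:3]                    # back to pixel coordinates


def joint_heatmaps(joints_depth: np.ndarray, shape, sigma: float = 3.0) -> np.ndarray:
    """Render one (H, W) Gaussian heatmap per joint, peaked at its depth-image location."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    maps = np.empty((joints_depth.shape[0], h, w), dtype=np.float32)
    for i, (u, v) in enumerate(joints_depth):
        maps[i] = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))
    return maps


if __name__ == "__main__":
    # Illustrative values only: identity mapping and a 424x512 depth image.
    joints_2d = np.array([[260.0, 140.0], [300.0, 210.0]])   # e.g. shoulder, elbow
    depth_joints = map_color_to_depth(joints_2d, np.eye(3))
    heatmaps = joint_heatmaps(depth_joints, shape=(424, 512))
    print(heatmaps.shape)                                     # (2, 424, 512)
```

One common way to feed such heatmaps to a second-stage network is to stack them with the depth image along the channel dimension; whether the paper does exactly this is not stated in the abstract.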
