Human Grasp Feature Learning and Object Recognition Based on Multi-sensor Information Fusion

Abstract: Human grasp feature learning and grasped-object recognition are studied on the basis of flexible wearable sensors and multi-modal information fusion, exploring how perceptual information is used in human grasping behavior. A data glove is built from 10 stretchable strain sensors, 14 temperature sensors and 78 pressure sensors and worn on the human hand to measure, respectively, the bending angles of the finger joints and the temperature and pressure distribution of the grasped object during grasping. A cross-modal information representation is established over temporal and spatial sequences, and the multi-modal information is fused by a deep convolutional neural network to build a human grasp feature learning model and achieve accurate recognition of the grasped object. Fusion experiments and validity analyses are carried out separately for the joint-angle, temperature and pressure features. The results show that multi-sensor, multi-modal information fusion enables accurate recognition of 18 kinds of objects.
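The fusion pipeline described in the abstract can be sketched roughly as follows. This is an illustrative toy only: it uses the paper's sensor counts (10 strain, 14 temperature, 78 pressure channels) and an early-fusion concatenation of per-modality features, but stands in a simple linear classifier where the paper uses a deep convolutional neural network trained on temporal and spatial sequences. All function names and the normalization scheme are assumptions, not the authors' implementation.

```python
import random

# Sensor counts from the paper's data glove.
N_STRAIN, N_TEMP, N_PRESSURE = 10, 14, 78
N_CLASSES = 18  # number of object categories recognized in the paper

def normalize(x):
    """Shift a feature vector to zero mean and scale by its range (simple per-modality normalization)."""
    lo, hi = min(x), max(x)
    span = (hi - lo) or 1.0  # avoid division by zero for constant channels
    mean = sum(x) / len(x)
    return [(v - mean) / span for v in x]

def fuse(strain, temp, pressure):
    """Early fusion: normalize each modality separately, then concatenate into one feature vector."""
    assert len(strain) == N_STRAIN and len(temp) == N_TEMP and len(pressure) == N_PRESSURE
    return normalize(strain) + normalize(temp) + normalize(pressure)

def classify(fused, weights, biases):
    """Toy linear classifier standing in for the paper's CNN: argmax over 18 class scores."""
    scores = [sum(w * f for w, f in zip(weights[c], fused)) + biases[c]
              for c in range(N_CLASSES)]
    return max(range(N_CLASSES), key=scores.__getitem__)

# Demo with random (untrained) weights on one synthetic glove reading.
random.seed(0)
dim = N_STRAIN + N_TEMP + N_PRESSURE  # 102 fused features
weights = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(N_CLASSES)]
biases = [0.0] * N_CLASSES
sample = fuse([random.random() for _ in range(N_STRAIN)],
              [random.random() for _ in range(N_TEMP)],
              [random.random() for _ in range(N_PRESSURE)])
print(len(sample), classify(sample, weights, biases))
```

In the paper itself the concatenated representation is arranged over time and space and fed to a deep convolutional network; the early-fusion step above only illustrates how the three sensor modalities combine into a single input.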
