Human Grasp Feature Learning and Object Recognition Based on Multi-sensor Information Fusion

ZHANG Yangyang1, HUANG Ying1,2, LIU Yue1, LIU Caixia1, LIU Ping1, ZHANG Yugang1

1. School of Electronic Science & Applied Physics, Hefei University of Technology, Hefei 230009, China; 2. The State Key Laboratory of Bioelectronics, Southeast University, Nanjing 210096, China
Abstract: Human grasp feature learning and object recognition are studied using flexible wearable sensors and multi-modal information fusion, and the application of the resulting perceptual information to the human grasping process is explored. A data glove integrating 10 strain sensors, 14 temperature sensors, and 78 pressure sensors is built and worn on the human hand to measure the bending angles of the finger joints, as well as the temperature and pressure distribution of the grasped object during grasping. A cross-modal information representation is established over temporal and spatial sequences, and the multi-modal information is fused by a deep convolutional neural network to construct a human grasp feature learning model and achieve accurate recognition of the grasped object. Experiments and validity analyses are carried out for the joint-angle, temperature, and pressure features respectively. The results show that accurate recognition of 18 kinds of objects can be achieved through multi-sensor, multi-modal information fusion.
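The cross-modal representation described above can be illustrated with a minimal sketch: per time step, the readings of the three modalities (10 strain, 14 temperature, 78 pressure channels, as given in the abstract) are normalized and stacked into a single time-by-channel array suitable as CNN input. The function name, the normalization scheme, and the sample sequence length are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Channel counts from the glove described in the abstract:
# 10 strain (joint angle), 14 temperature, 78 pressure sensors.
N_STRAIN, N_TEMP, N_PRESS = 10, 14, 78


def fuse_frames(strain, temp, press):
    """Stack the three modalities per time step into one (T, 102) array.

    Each modality is min-max normalized to [0, 1] so that no single
    sensor type dominates the fused representation (an assumed choice).
    """
    def norm(x):
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return np.concatenate([norm(strain), norm(temp), norm(press)], axis=1)


# Example: a hypothetical 20-frame grasp sequence with plausible ranges.
T = 20
gen = np.random.default_rng(0)
sample = fuse_frames(gen.uniform(0, 90, (T, N_STRAIN)),   # joint angles, deg
                     gen.uniform(20, 35, (T, N_TEMP)),    # temperature, deg C
                     gen.uniform(0, 50, (T, N_PRESS)))    # pressure, kPa
print(sample.shape)  # (20, 102): one fused spatio-temporal "image"
```

Each fused sample is a 2-D array (time steps by 102 sensor channels), which a deep convolutional network can then consume for object classification.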
Received: 01 July 2019
Published: 15 May 2020