Abstract:
Human grasping-feature learning and object recognition are studied using flexible wearable sensors and multi-modal information fusion, and the application of the resulting perceptual information to the human grasping process is explored. A data glove incorporating 10 strain sensors, 14 temperature sensors, and 78 pressure sensors is built and worn on the human hand to measure the bending angles of the finger joints, as well as the temperature and pressure distributions of the grasped object during human grasping behaviors. A cross-modal information representation is established over temporal and spatial sequences, and the multi-modal information is fused by a deep convolutional neural network to construct a learning model of human grasping features and achieve accurate recognition of the grasped object. Experiments and validity analyses are carried out for the joint-angle, temperature, and pressure features, respectively. The results show that 18 kinds of objects can be accurately recognized through multi-modal fusion of the data from the multiple sensors.
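The fusion pipeline summarized above can be sketched as follows. The array shapes, the mean/std temporal pooling, and the linear classifier are illustrative assumptions standing in for the paper's learned convolutional network; only the sensor channel counts (10 strain, 14 temperature, 78 pressure) and the 18 object classes come from the abstract.

```python
import numpy as np

# Sensor counts from the abstract: 10 strain, 14 temperature, and
# 78 pressure channels, sampled over T time steps during one grasp.
# T and the random data are illustrative assumptions.
T, N_CLASSES = 50, 18
rng = np.random.default_rng(0)

strain   = rng.normal(size=(T, 10))   # finger-joint bending angles
temp     = rng.normal(size=(T, 14))   # temperature distribution
pressure = rng.normal(size=(T, 78))   # pressure distribution

def temporal_features(x):
    """Collapse the time axis into per-channel mean and standard
    deviation - a simple stand-in for learned convolutional features."""
    return np.concatenate([x.mean(axis=0), x.std(axis=0)])

# Fuse the three modalities by concatenating their feature vectors.
fused = np.concatenate([temporal_features(m) for m in (strain, temp, pressure)])

# Illustrative linear classifier over the fused features; in the actual
# system these weights would be learned end-to-end by the deep CNN.
W = rng.normal(scale=0.1, size=(N_CLASSES, fused.size))
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(fused.shape)          # (204,) = 2 * (10 + 14 + 78)
print(int(probs.argmax()))  # index of the predicted object class
```

The key design point is late fusion: each modality is reduced to a fixed-length feature vector before concatenation, so sensors with very different channel counts and dynamics can be combined in one classifier input.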