Abstract:
To overcome the limitations of manually setting facial expression control parameters for a limited number of motors, an online facial expression imitation algorithm for a humanoid robot is proposed, based on a radial basis function (RBF) neural network combined with a Kinect-based active appearance model (AAM). In the offline facial expression learning phase, a forward mechanical model is built on RBF networks to capture the mapping between motor control values and facial deformation features, and an inverse prediction model is further developed to constrain the smoothness of continuous motor movements. In the online facial expression imitation phase, optimal motor values are solved for by minimizing the deformation deviation between the robot and the performer, based on the forward mechanical model and the inverse prediction model; moreover, a weighting factor is introduced to balance the instantaneous similarity of expression imitation against the smoothness of continuous motor motion. Finally, the rationality and generalization ability of the two models are validated in terms of mean statistics and prediction deviations, and the influence of the weighting factor on spatio-temporal similarity and smoothness is further discussed. The experimental results indicate that the deformation deviations of the forward mechanical model are less than 1%, and the motor control deviations of the inverse prediction model are less than 1.5%. Compared with the methods of Jaeckel, Trovato, and Magtanong, the proposed algorithm has advantages in the similarity of single-frame expression imitation and the smoothness of multi-frame expression motion.
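As an illustration of the trade-off described above, a minimal sketch of the per-frame objective might take the following form, assuming a forward RBF model f(·) mapping motor values m_t to deformation features, target deformation features d_t extracted by the Kinect-based AAM, and a weighting factor λ; the symbols are illustrative and not necessarily the paper's own notation:

\[
\mathbf{m}_t^{*} \;=\; \arg\min_{\mathbf{m}}\;
\underbrace{\lVert f(\mathbf{m}) - \mathbf{d}_t \rVert^{2}}_{\text{imitation similarity}}
\;+\;
\lambda\,\underbrace{\lVert \mathbf{m} - \mathbf{m}_{t-1}^{*} \rVert^{2}}_{\text{motion smoothness}}
\]

Under this reading, a larger λ favors smoother multi-frame motor trajectories at the cost of single-frame similarity, which is the balance the weighting factor is said to adjust.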