[1] Manohar V, Crandall J W. Programming robots to express emotions: Interaction paradigms, communication modalities, and context[J]. IEEE Transactions on Human-Machine Systems, 2014, 44(3): 362-373.
[2] Tadesse Y. Actuation technologies for humanoid robots with facial expressions (HRwFE)[J]. Transactions on Control and Mechanical Systems, 2013, 2(7): 337-349.
[3] Ren F J. Robotic cloud and robotic school[J]. Science & Technology Review, 2012, 30(9): 73-79. (in Chinese)
[4] Jaeckel P, Campbell N, Melhuish C. Facial behavior mapping – From video footage to a robot head[J]. Robotics and Autonomous Systems, 2008, 56(12): 1042-1049.
[5] Liu Y F, Wang Z L. Humanoid head robot based on emotional interaction[J]. Robot, 2009, 31(6): 493-500. (in Chinese)
[6] Shayganfar M, Rich C, Sidner C. A design methodology for expressing emotion on robot faces[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA: IEEE, 2012: 4577-4583.
[7] Ahn H S, Lee D W, Choi D, et al. Development of an incarnate announcing robot system using emotional interaction with humans[J]. International Journal of Humanoid Robotics, 2013, 10(2): 1-24.
[8] Wilbers F, Ishi C, Ishiguro H. A blendshape model for mapping facial motions to an android[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA: IEEE, 2007: 542-547.
[9] Magtanong E, Yamaguchi A, Takemura K, et al. Inverse kinematics solver for android faces with elastic skin[C]//Latest Advances in Robot Kinematics. Berlin, Germany: Springer-Verlag, 2012: 181-188.
[10] Trovato G, Zecca M, Kishi T, et al. Generation of humanoid robot's facial expressions for context-aware communication[J]. International Journal of Humanoid Robotics, 2013, 10(1): 1350013.
[11] Wang W, Wang Z L, Zheng S Y, et al. Joint attention for a kind of facial expression robots with the same structure[J]. Robot, 2012, 34(3): 265-274. (in Chinese)
[12] Microsoft. Develop for Kinect[EB/OL]. [2013-12-05]. http://www.microsoft.com/zh-cn/kinectforwindows/.
[13] Huang Z, Hu M, Wang X H. A similarity measurement method of facial expression based on geometric feature[J]. Pattern Recognition and Artificial Intelligence, 2015, 28(5): 443-451. (in Chinese)
[14] Fan J H, Jia S M, Li X Z. Adaptive control for omni-directional intelligent wheelchairs based on RBF neural network[J]. Journal of Huazhong University of Science and Technology: Natural Science Edition, 2014, 42(2): 111-115. (in Chinese)
[15] Fan Z F, Ma X P, Shao X G. RBF neural network multi-step predictive control for nonlinear systems[J]. Control and Decision, 2014, 29(7): 1274-1278. (in Chinese)
[16] Montoliu R, Pla F. Generalized least squares-based parametric motion estimation[J]. Computer Vision and Image Understanding, 2009, 113(7): 790-801.
[17] Tadesse Y, Priya S. Graphical facial expression analysis and design method: An approach to determine humanoid skin deformation[J]. Journal of Mechanisms and Robotics, 2012, 4(2): 1-16.
[18] Zhu T H, Zhao Q F, Xia Z Y. A visual perception algorithm for human motion by a Kinect[J]. Robot, 2014, 36(6): 647-653. (in Chinese)