[1] Aggarwal J K, Ryoo M S. Human activity analysis: A review[J]. ACM Computing Surveys, 2011, 43(3): No. 16.
[2] Gu J X, Ding X Q, Wang S J. A survey of activity analysis algorithms[J]. Journal of Image and Graphics, 2009, 14(3): 377-387. (in Chinese)
[3] Weinland D, Ronfard R, Boyer E. A survey of vision-based methods for action representation, segmentation and recognition[J]. Computer Vision and Image Understanding, 2011, 115(2): 224-241.
[4] Du Y T, Chen F, Xu W L, et al. A survey on the vision-based human motion recognition[J]. Acta Electronica Sinica, 2007, 35(1): 84-90. (in Chinese)
[5] Cheng L Y, Sun Q, Su H, et al. Design and implementation of human-robot interactive demonstration system based on Kinect[C]//Proceedings of the 24th Chinese Control and Decision Conference. Piscataway, USA: IEEE, 2012: 971-975.
[6] Yin J Q, Tian G H, Jiang H T, et al. Human action recognition oriented to family service[J]. Journal of Sichuan University: Engineering Science Edition, 2011, 43(4): 101-107. (in Chinese)
[7] Yin J Q, Tian G H, Zhou F Y. Human activity representation and recognition based on temporal order histogram in intelligent space[J]. Chinese Journal of Computers, 2014, 37(2): 470-479. (in Chinese)
[8] Wu J X, Osuntogun A, Choudhury T, et al. A scalable approach to activity recognition based on object use[C]//IEEE International Conference on Computer Vision. Piscataway, USA: IEEE, 2007: 1-8.
[9] Li C C, Chen Y Y. Human posture recognition by simple rules[C]//2006 IEEE International Conference on Systems, Man, and Cybernetics. Piscataway, USA: IEEE, 2007: 3237-3240.
[10] Rodriguez M D, Ahmed J, Shah M. Action MACH: A spatiotemporal maximum average correlation height filter for action recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2008: 3001-3008.
[11] Biswas J, Veloso M. Depth camera based indoor mobile robot localization and navigation[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2012: 1697-1702.
[12] Yang D F, Wang S C, Liu H P, et al. Scene modeling and autonomous navigation for robots based on Kinect system[J]. Robot, 2012, 34(5): 581-589. (in Chinese)
[13] Noel R R, Salekin A, Islam R. A natural user interface classroom based on Kinect[J]. IEEE Learning Technology, 2011, 13(4): 59-61.
[14] Rogers III J G, Christensen H I. A conditional random field model for place and object classification[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2012: 1766-1772.
[15] Izadi S, Kim D, Hilliges O, et al. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera[C]//24th Annual ACM Symposium on User Interface Software and Technology. New York, USA: ACM, 2011: 559-568.
[16] Thanh T T, Chen F, Kotani K, et al. Extraction of discriminative patterns from skeleton sequences for human action recognition[C]//IEEE International Conference on Computing and Communication Technologies, Research, Innovation, and Vision for the Future. Piscataway, USA: IEEE, 2012: 6 pp.
[17] Girshick R, Shotton J, Kohli P, et al. Efficient regression of general-activity human poses from depth images[C]//IEEE International Conference on Computer Vision. Piscataway, USA: IEEE, 2011: 415-422.
[18] Sempena S, Maulidevi N U, Aryan P R. Human action recognition using dynamic time warping[C]//2011 International Conference on Electrical Engineering and Informatics. Piscataway, USA: IEEE, 2011: 5 pp.
[19] Raptis M, Kirovski D, Hoppe H. Real-time classification of dance gestures from skeleton animation[C]//ACM SIGGRAPH/Eurographics Symposium on Computer Animation. New York, USA: ACM, 2011: 147-156.
[20] Xia L, Chen C C, Aggarwal J K. View invariant human action recognition using histograms of 3D joints[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2012: 20-27.
[21] Packer B, Saenko K, Koller D. A combined pose, object, and feature model for action understanding[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2012: 1378-1385.
[22] Lai K, Konrad J, Ishwar P. A gesture-driven computer interface using Kinect[C]//IEEE Southwest Symposium on Image Analysis and Interpretation. Piscataway, USA: IEEE, 2012: 185-188.
[23] Sanchez-Riera J, Cech J, Horaud R. Action recognition robust to background clutter by using stereo vision[C]//12th European Conference on Computer Vision. Berlin, Germany: Springer, 2012: 332-341.
[24] Misumi H, Fujimura W, Kosaka T, et al. GAMIC: Exaggerated real time character animation control method for full-body gesture interaction systems[C]//ACM SIGGRAPH 2011. New York, USA: ACM, 2011: 5.
[25] Lin S Y, Shie C K, Chen S C, et al. Human action recognition using action trait code[C]//21st International Conference on Pattern Recognition. Piscataway, USA: IEEE, 2012: 3456-3459.
[26] Patsadu O, Nukoolkit C, Watanapa B. Human gesture recognition using Kinect camera[C]//9th International Joint Conference on Computer Science and Software Engineering. Piscataway, USA: IEEE, 2012: 28-32.
[27] Yang X D, Zhang C Y, Tian Y L. Recognizing actions using depth motion maps-based histograms of oriented gradients[C]//Proceedings of the 20th ACM International Conference on Multimedia. New York, USA: ACM, 2012: 1057-1060.
[28] Lallée S, Lemaignan S, Lenz A, et al. Towards a platform-independent cooperative human-robot interaction system: I. Perception[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA: IEEE, 2010: 4444-4451.
[29] Lallée S, Pattacini U, Boucher J D, et al. Towards a platform-independent cooperative human-robot interaction system: II. Perception, execution and imitation of goal directed actions[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA: IEEE, 2011: 2895-2902.