
Video-Based Fusion Algorithm of the Spatial and Temporal Emotion Characteristics
Abstract: With the rapid development of computer technology and the wide application of human-computer interaction, video-based facial expression recognition has become a research hotspot and is gradually moving into practical use. This paper proposes a video-based algorithm that fuses spatial and temporal emotion features for expression recognition. First, the spatio-temporal interest points of an emotional video and their corresponding cuboids are obtained. Then, for each cuboid, the feature vector computed by the descriptor proposed by Piotr Dollar and the one computed by the CBP-TOP descriptor are fused into the final feature vector of the interest point. Finally, a bag-of-words model is applied to extract the final expression feature of the video, which is used for subsequent expression classification. Simulation results show that the proposed algorithm greatly improves recognition speed while maintaining recognition accuracy.
Source: Journal of East China University of Science and Technology (Natural Science Edition), 2015, No. 2: 236-243 (8 pages). Indexed in CAS, CSCD, and the Peking University Core Journals list.
Keywords: facial expression recognition; spatio-temporal features; CBP-TOP operator; bag-of-words model
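The pipeline summarized in the abstract (fuse two per-cuboid descriptors, then pool them into a bag-of-words histogram) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the descriptor values, their dimensions, and the codebook are placeholder data, and simple concatenation is assumed as the fusion step.

```python
import numpy as np

def fuse_descriptors(dollar_feats, cbp_top_feats):
    """Fuse the two per-cuboid descriptors into one vector.
    Simple concatenation is assumed here as the fusion rule."""
    return np.concatenate([dollar_feats, cbp_top_feats], axis=-1)

def bag_of_words_histogram(fused_feats, codebook):
    """Assign each fused cuboid descriptor to its nearest codeword and
    return the normalized histogram used as the final video feature."""
    # Pairwise distances, shape (n_cuboids, n_codewords)
    d = np.linalg.norm(fused_feats[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy example: 5 cuboids with hypothetical descriptor sizes.
rng = np.random.default_rng(0)
dollar = rng.normal(size=(5, 8))   # stand-in for the Dollar descriptor
cbp = rng.normal(size=(5, 6))      # stand-in for the CBP-TOP descriptor
fused = fuse_descriptors(dollar, cbp)          # shape (5, 14)
codebook = rng.normal(size=(4, 14))            # codewords, assumed precomputed
video_feature = bag_of_words_histogram(fused, codebook)  # shape (4,), sums to 1
```

In practice the codebook would be learned (e.g. by k-means over training descriptors) and the histogram fed to a classifier; both steps are outside this sketch.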

References (18)

1. Ekman P, Friesen W V. Facial Action Coding System: Investigator's Guide [M]. [s.l.]: Consulting Psychologists Press, 1978.
2. Ekman P. Strong evidence for universals in facial expressions: A reply to Russell's mistaken critique [J]. Psychological Bulletin, 1994, 115(2): 268-287.
3. Cohen I, Sebe N, Cozman F G, et al. Learning Bayesian network classifiers for facial expression recognition with both labeled and unlabeled data [J]. Computer Vision and Pattern Recognition, 2003, 1(1): 595-601.
4. Barron J, Fleet D, Beauchemin S. Performance of optical flow techniques [J]. International Journal of Computer Vision, 1994, 12(1): 43-77.
5. Smith S, Brady J. ASSET-2: Real-time motion segmentation and shape tracking [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995, 17(8): 814-820.
6. Blake A, Isard M. Condensation: Conditional density propagation for visual tracking [J]. International Journal of Computer Vision, 1998, 29(1): 5-28.
7. Laptev I, Lindeberg T. On space-time interest points [J]. International Journal of Computer Vision, 2005, 64(2/3): 107-123.
8. Ryoo M S, Aggarwal J K. Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities [C] // International Conference on Computer Vision. Kyoto: IEEE, 2009: 1593-1600.
9. Rapantzikos K, Avrithis Y, Kollias S. Dense saliency-based spatiotemporal feature points for action recognition [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL: IEEE, 2009: 1454-1461.
10. Dollar P, Rabaud V, Cottrell G, et al. Behavior recognition via sparse spatio-temporal features [C] // Visual Surveillance and Performance Evaluation of Tracking and Surveillance. [s.l.]: IEEE, 2005: 65-72.
