Journal Articles
Found 2 articles
Personalized Emotion Space for Video Affective Content Representation
1
Authors: SUN Kai, YU Junqing, HUANG Yue, HU Xiaoqiang, LIU Qing (College of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China). Wuhan University Journal of Natural Sciences (CAS), 2009, No. 5, pp. 393-398 (6 pages)
A personalized emotion space is proposed to bridge the "affective gap" in video affective content understanding. To unify the discrete and dimensional emotion models, the fuzzy C-means (FCM) clustering algorithm is adopted to divide the emotion space, and a Gaussian mixture model (GMM) is used to determine the membership functions of typical affective subspaces. At every step of modeling the space, the inputs rely entirely on the affective experiences recorded by the audiences. The advantages of the improved V-A (Valence-Arousal) emotion model are its personalization, its ability to define typical affective state areas in the V-A emotion space, and the convenience of explicitly expressing the intensity of each affective state. The experimental results validate the model and show it can be used as a personalized emotion space for video affective content representation.
Keywords: video affective computing; personalized emotion space; video affective content representation; fuzzy C-means clustering (FCM); Gaussian mixture model (GMM)
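The FCM-plus-GMM pipeline the abstract describes can be sketched as follows. This is not the paper's implementation; it is a minimal illustration on hypothetical viewer ratings in the Valence-Arousal plane, with all cluster positions, dimensions, and the `fcm` helper assumed for the example. FCM softly partitions the rated V-A points into affective subspaces, and one Gaussian per subspace (via scikit-learn's `GaussianMixture`) then serves as a density-based membership function for new affective experiences.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fcm(points, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means: returns cluster centers and a soft membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships of each point sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        inv = (dist + 1e-12) ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return centers, u

# Hypothetical per-viewer ratings in the V-A plane (valence, arousal in [-1, 1]).
rng = np.random.default_rng(1)
ratings = np.vstack([
    rng.normal([0.7, 0.6], 0.1, (40, 2)),        # a "joy"-like region (assumed)
    rng.normal([-0.7, 0.5], 0.1, (40, 2)),       # a "fear/anger"-like region (assumed)
    rng.normal([-0.1, -0.6], 0.1, (40, 2)),      # a "calm/sad"-like region (assumed)
])

centers, u = fcm(ratings, n_clusters=3)

# Fit one Gaussian per affective subspace on its hard-assigned points;
# score_samples then acts as a log-density membership function.
labels = u.argmax(axis=1)
gmms = [GaussianMixture(n_components=1, random_state=0).fit(ratings[labels == k])
        for k in range(3)]

query = np.array([[0.65, 0.55]])                 # a new affective experience
log_dens = [g.score_samples(query)[0] for g in gmms]
```

A new V-A point is assigned an intensity for each typical affective state by evaluating it under each subspace's Gaussian, which matches the abstract's goal of explicitly expressing the intensity of each affective state.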
Affective Video Content Analysis: Decade Review and New Perspectives
2
Authors: Junxiao Xue, Jie Wang, Xiaozhen Liu, Qian Zhang, Xuecheng Wu. Big Data Mining and Analytics, 2025, No. 1, pp. 118-144 (27 pages)
Video content is rich in semantics and can evoke various emotions in viewers. In recent years, with the rapid development of affective computing and the explosive growth of visual data, Affective Video Content Analysis (AVCA), an essential branch of affective computing, has become a widely researched topic. In this study, we comprehensively review the development of AVCA over the past decade, focusing in particular on the most advanced methods adopted to address its three major challenges: video feature extraction, expression subjectivity, and multimodal feature fusion. We first introduce the emotion representation models widely used in AVCA and describe commonly used datasets. We then summarize and compare representative methods in the following aspects: (1) unimodal AVCA models, including facial expression recognition and posture emotion recognition; (2) multimodal AVCA models, including feature fusion, decision fusion, and attention-based multimodal models; and (3) model performance evaluation standards. Finally, we discuss future challenges and promising research directions, such as emotion recognition and public opinion analysis, human-computer interaction, and emotional intelligence.
Keywords: affective computing; video emotion; video feature extraction; machine learning; emotional intelligence
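Of the multimodal approaches this survey covers, attention-based fusion can be sketched in a few lines. This is not any specific model from the survey; it is a minimal NumPy illustration in which the projection `W`, scoring vector `v`, and feature dimensions are all assumed (in practice they would be learned). Each modality's feature vector receives a relevance score, the scores are softmax-normalized, and the fused representation is the attention-weighted sum.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())                       # shift for numerical stability
    return e / e.sum()

def attention_fuse(features, W, v):
    """Score each modality, normalize with softmax, return the weighted sum."""
    scores = np.array([v @ np.tanh(W.T @ f) for f in features])
    alpha = softmax(scores)                       # one attention weight per modality
    fused = sum(a * f for a, f in zip(alpha, features))
    return fused, alpha

rng = np.random.default_rng(0)
d, k = 8, 4                                       # feature and attention dims (assumed)
visual, audio, text = (rng.normal(size=d) for _ in range(3))
W, v = rng.normal(size=(d, k)), rng.normal(size=k)

fused, alpha = attention_fuse([visual, audio, text], W, v)
```

The same skeleton covers the survey's other two fusion families: feature fusion corresponds to fixing equal weights and concatenating instead of summing, while decision fusion applies the weighting to per-modality predictions rather than to features.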