Funding: supported by two research programmes of Shanxi Province: a soft-science research project titled 'The Role of Internet Plus in Promoting the Innovative Development of Shanxi's Cultural Industry' (Project no. 2017041020-2), and a 2019 higher-education teaching reform project titled 'A Study on Higher Education Teaching Reform of Ideological and Political Theory Courses in the Big Data Era' (Project no. 2019JGSZ003).
Abstract: Current theories of artificial intelligence (AI) generally exclude human emotions. The idea at the core of such theories could be described as 'cognition is computing'; that is, that the psychological and symbolic representations involved in human thinking and intelligence, and the operations that structure them, can be converted by AI into a series of cognitive symbolic representations and calculations in a manner that simulates human intelligence. However, after decades of development, the cognitive-computing doctrine has encountered many difficulties, both in theory and in practice; in particular, it remains far from real human intelligence. Real human intelligence runs through the whole process of the emotions. The core and motivation of rational thinking are derived from the emotions; intelligence without emotion neither exists nor is meaningful. For example, the idea of 'hot thinking' proposed by Paul Thagard, a philosopher of cognitive science, discusses the mechanism of the emotions in human cognition and the thinking process. Through an analysis from the perspectives of cognitive neuroscience, cognitive psychology and social anthropology, this article notes that there may be a type of thinking that could be called 'emotional thinking', which incorporates complex emotional factors into cognitive processes. The term refers to the capacity to process information and use emotions to integrate information in order to arrive at the right decisions and reactions. According to the role of cognition, this type of thinking can be divided into two types, positive and negative emotional thinking, which reflect opposite forces in the cognitive process. In the future, 'emotional computing' will significantly accelerate the development of AI consciousness, whose foundation is emotional computing based on the simulation of emotional thinking.
Funding: supported in part by the STI 2030-Major Projects (2021ZD0202002); in part by the National Natural Science Foundation of China (Grant No. 62227807); in part by the Natural Science Foundation of Gansu Province, China (Grant No. 22JR5RA488); in part by the Fundamental Research Funds for the Central Universities (Grant No. lzujbky-2023-16); and by the Supercomputing Center of Lanzhou University.
Abstract: With the rapid growth of information transmission via the Internet, efforts have been made to reduce network load and promote efficiency. One such application is semantic computing, which can extract and process semantic information for communication. Social media has enabled users to share their current emotions, opinions, and life events through their mobile devices. Notably, people suffering from mental health problems are more willing to share their feelings on social networks. It is therefore valuable to extract semantic information from social media (vlog data) to identify abnormal emotional states, enabling early identification and intervention. Most studies do not consider spatio-temporal information when fusing multimodal information to identify abnormal emotional states such as depression. To address this problem, this paper proposes a spatio-temporal squeeze transformer method for extracting semantic features of depression. First, a module with spatio-temporal data is embedded into the transformer encoder and used to obtain a representation of spatio-temporal features. Second, a classifier with a voting mechanism is designed to encourage the model to classify depression and non-depression effectively. Experiments conducted on the D-Vlog dataset show that the method is effective, reaching an accuracy of 70.70%. This work provides scaffolding for future work on affect recognition in semantic communication based on social-media vlog data.
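The abstract does not spell out the voting mechanism. As an illustration only, a minimal majority-vote aggregation over per-segment predictions might look like the following sketch; the function name, label encoding, and sample predictions are assumptions for illustration, not taken from the paper:

```python
from collections import Counter

def majority_vote(segment_predictions):
    """Aggregate per-segment labels (e.g. 0 = non-depression, 1 = depression)
    into a single vlog-level decision by majority vote."""
    if not segment_predictions:
        raise ValueError("no predictions to aggregate")
    counts = Counter(segment_predictions)
    # most_common(1) returns [(label, count)]; ties resolve by first-seen order
    return counts.most_common(1)[0][0]

# Hypothetical per-segment outputs for one vlog
preds = [1, 0, 1, 1, 0]
print(majority_vote(preds))  # -> 1 (vlog-level decision: depression)
```

A vote over segments makes the vlog-level decision robust to a few misclassified clips, which is one plausible reason for pairing it with a segment-wise encoder.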
Funding: supported by the National Natural Science Foundation of China (No. U21B6001).
Abstract: Video content is rich in semantics and can evoke various emotions in viewers. In recent years, with the rapid development of affective computing and the explosive growth of visual data, Affective Video Content Analysis (AVCA), an essential branch of affective computing, has become a widely researched topic. In this study, we comprehensively review the development of AVCA over the past decade, focusing on the most advanced methods adopted to address its three major challenges: video feature extraction, expression subjectivity, and multimodal feature fusion. We first introduce the emotion representation models widely used in AVCA and describe commonly used datasets. We then summarize and compare representative methods in the following aspects: (1) unimodal AVCA models, including facial expression recognition and posture emotion recognition; (2) multimodal AVCA models, including feature fusion, decision fusion, and attention-based multimodal models; and (3) model performance evaluation standards. Finally, we discuss future challenges and promising research directions, such as emotion recognition and public-opinion analysis, human-computer interaction, and emotional intelligence.
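Decision fusion, mentioned in aspect (2), combines each modality's prediction rather than its features. A minimal sketch in plain Python of weighted-average decision-level fusion follows; the modality scores, weights, and names are hypothetical, not drawn from any surveyed method:

```python
def decision_fusion(modality_probs, weights=None):
    """Late (decision-level) fusion: combine per-modality class-probability
    vectors by weighted averaging; return (winning class index, fused vector)."""
    n = len(modality_probs)
    if weights is None:
        weights = [1.0 / n] * n  # default: equal weight per modality
    num_classes = len(modality_probs[0])
    fused = [0.0] * num_classes
    for w, probs in zip(weights, modality_probs):
        for i, p in enumerate(probs):
            fused[i] += w * p
    return max(range(num_classes), key=lambda i: fused[i]), fused

# Hypothetical outputs of a visual and an audio model over 3 emotion classes
visual = [0.7, 0.2, 0.1]
audio = [0.2, 0.5, 0.3]
label, fused = decision_fusion([visual, audio])
print(label)  # -> 0
```

Feature fusion, by contrast, would concatenate or attend over the modalities' representations before a single classifier; decision fusion keeps the per-modality models independent, which simplifies training but discards cross-modal interactions.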