Funding: Education and Teaching Reform Project of the First Clinical College of Chongqing Medical University, No. CMER202305; Natural Science Foundation of Tibet Autonomous Region, No. XZ2024ZR-ZY100(Z).
Abstract: This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain to significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Funding: Shaanxi Provincial Technical Innovation Guidance Plan (2023KXJ-279).
Abstract: Multimodal emotion recognition technology leverages the power of deep learning to address advanced visual and emotional tasks. While generic deep networks can handle simple emotion recognition tasks, their generalization capability in complex and noisy environments, such as multi-scene outdoor settings, remains limited. To overcome these challenges, this paper proposes a novel multimodal emotion recognition framework. First, we develop a robust network architecture based on the T5-small model, designed for dynamic-static fusion in complex scenarios and effectively mitigating the impact of noise. Second, we introduce a dynamic-static cross fusion network (D-SCFN) to enhance the integration and extraction of dynamic and static information, embedding it seamlessly within the T5 framework. Finally, we design and evaluate three distinct multi-task analysis frameworks to explore dependencies among tasks. The experimental results demonstrate that our model significantly outperforms other existing models, showing exceptional stability and remarkable adaptability to complex and dynamic scenarios.
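The abstract does not spell out how the dynamic and static streams are fused before entering T5, so the following is only a minimal sketch of one plausible dynamic-static cross-fusion block in PyTorch; the layer sizes, the bidirectional cross-attention wiring, and the `DynamicStaticCrossFusion` name are assumptions, not the authors' published D-SCFN.

```python
# Hypothetical dynamic-static cross-fusion block (D-SCFN-style sketch).
# Dimensions and the exact hookup into T5-small are assumptions.
import torch
import torch.nn as nn

class DynamicStaticCrossFusion(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Static features (e.g., per-frame appearance) attend to dynamic
        # features (e.g., motion/temporal cues), and vice versa.
        self.static_to_dynamic = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.dynamic_to_static = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, static_feats: torch.Tensor, dynamic_feats: torch.Tensor) -> torch.Tensor:
        # static_feats, dynamic_feats: (batch, seq_len, dim)
        s2d, _ = self.static_to_dynamic(static_feats, dynamic_feats, dynamic_feats)
        d2s, _ = self.dynamic_to_static(dynamic_feats, static_feats, static_feats)
        fused = torch.cat([self.norm(s2d + static_feats),
                           self.norm(d2s + dynamic_feats)], dim=-1)
        # The fused representation would then be fed to the T5 encoder.
        return self.proj(fused)

# Usage with random tensors standing in for real modality encoders.
block = DynamicStaticCrossFusion()
out = block(torch.randn(2, 16, 512), torch.randn(2, 16, 512))
print(out.shape)  # torch.Size([2, 16, 512])
```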
Funding: National Natural Science Foundation of China (Grant No. 72293583).
Abstract: Learning modality-fused representations and processing unaligned multimodal sequences are meaningful and challenging problems in multimodal emotion recognition. Existing approaches use directional pairwise attention or a message hub to fuse the language, visual, and audio modalities. However, these fusion methods are often quadratic in complexity with respect to the modal sequence length, bring in redundant information, and are not efficient. In this paper, we propose an efficient neural network that learns modality-fused representations with a CB-Transformer (LMR-CBT) for multimodal emotion recognition from unaligned multimodal sequences. Specifically, we first perform feature extraction for the three modalities separately to obtain the local structure of the sequences. Then, we design an innovative asymmetric transformer with cross-modal blocks (CB-Transformer) that enables complementary learning across modalities, organized into local temporal learning, cross-modal feature fusion, and global self-attention representations. In addition, we splice the fused features with the original features to classify the emotions of the sequences. Finally, we conduct word-aligned and unaligned experiments on three challenging datasets: IEMOCAP, CMU-MOSI, and CMU-MOSEI. The experimental results show the superiority and efficiency of our proposed method in both settings. Compared with mainstream methods, our approach reaches the state of the art with a minimum number of parameters.
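To make the three stages named in the abstract (local temporal learning, cross-modal feature fusion, global self-attention) concrete, here is an illustrative sketch only, not the LMR-CBT implementation: per-modality Conv1d local feature extraction, a text stream acting as the fusion hub, a global self-attention layer, and the final splice of fused and original features. The hub choice, dimensions, and class name `CrossModalBlock` are assumptions.

```python
# Sketch of a CB-Transformer-style cross-modal block; all sizes are assumptions.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Local temporal learning per modality (handles unaligned lengths).
        self.local = nn.ModuleDict({
            m: nn.Conv1d(dim, dim, kernel_size=3, padding=1)
            for m in ("text", "audio", "visual")
        })
        # Text stream queries audio and visual features (cross-modal fusion).
        self.fuse_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse_visual = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Global self-attention over the fused sequence.
        self.global_attn = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)

    def forward(self, text, audio, visual):
        # Each input: (batch, seq_len_m, dim); sequences may be unaligned.
        feats = {}
        for name, x in (("text", text), ("audio", audio), ("visual", visual)):
            feats[name] = self.local[name](x.transpose(1, 2)).transpose(1, 2)
        a, _ = self.fuse_audio(feats["text"], feats["audio"], feats["audio"])
        v, _ = self.fuse_visual(feats["text"], feats["visual"], feats["visual"])
        fused = self.global_attn(feats["text"] + a + v)
        # Splice fused features with the original text features for classification.
        return torch.cat([fused, feats["text"]], dim=-1)

block = CrossModalBlock()
out = block(torch.randn(2, 20, 128), torch.randn(2, 35, 128), torch.randn(2, 50, 128))
print(out.shape)  # torch.Size([2, 20, 256])
```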
Funding: National Natural Science Foundation of China (62201452).
Abstract: In the realm of human-computer interaction, accurately discerning the user's emotional state during a conversation has become increasingly critical, and multimodal emotion recognition has garnered considerable attention. However, the task still faces several challenges. In particular, existing models cannot effectively extract multimodal contextual and interaction information, which results in significant redundancy in the concatenated feature representation. To mitigate this issue, a cross-modal fusion network based on graph feature learning (CFNet-GFL) is proposed. First, the model employs a cross-modal module to integrate multiple feature representations, yielding more precise feature embeddings and alleviating information redundancy in the fusion process. Second, the model leverages graph feature learning to extract intra-modal contextual preferences and inter-modal information interactions; by capturing the diversity and consistency of multimodal information, it enhances the ability to understand emotions. Finally, the performance of the CFNet-GFL model is demonstrated through experiments on the IEMOCAP and MELD datasets, where the w-F1 scores improve by approximately 2.08% and 1.36%, respectively. These findings demonstrate the effectiveness of the model in multimodal emotion recognition.
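The abstract mentions intra-modal context and inter-modal interaction edges but not the concrete graph construction, so the following is a minimal, hypothetical sketch rather than the CFNet-GFL code: nodes are per-modality utterance features, intra-modal edges connect temporally adjacent utterances, inter-modal edges connect the same utterance across modalities, and a simple GCN-style layer propagates features. The adjacency scheme and the names `SimpleGraphLayer` and `build_adjacency` are assumptions.

```python
# Hypothetical graph feature learning over utterance nodes (not CFNet-GFL itself).
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """One GCN-style propagation step: H' = ReLU(D^-1 A H W)."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # row-normalize adjacency
        return torch.relu(self.linear((adj / deg) @ feats))

def build_adjacency(num_utts: int, num_modalities: int = 3) -> torch.Tensor:
    n = num_utts * num_modalities
    adj = torch.eye(n)
    for m in range(num_modalities):
        base = m * num_utts
        for t in range(num_utts - 1):        # intra-modal contextual edges
            adj[base + t, base + t + 1] = adj[base + t + 1, base + t] = 1.0
    for t in range(num_utts):                # inter-modal edges for the same utterance
        idx = [m * num_utts + t for m in range(num_modalities)]
        for i in idx:
            for j in idx:
                adj[i, j] = 1.0
    return adj

# Usage: 10 utterances x 3 modalities, 64-d features per node.
feats = torch.randn(30, 64)
layer = SimpleGraphLayer(64)
print(layer(feats, build_adjacency(10)).shape)  # torch.Size([30, 64])
```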
Abstract: With the development of artificial intelligence, emotion recognition has become a hot topic in the field of human-computer interaction. This paper focuses on the application and optimization of deep convolutional neural networks (CNNs) in multimodal emotion recognition. Multimodal emotion recognition involves analyzing data from different sources, such as voice, facial expressions, and text, to more accurately identify and interpret human emotional states. The paper first reviews the basic theories and methods of multimodal data processing, then details the structure and function of deep convolutional neural networks, particularly their advantages in handling various types of data. By innovating and optimizing the network structure, loss function, and training strategy, we improve the model's accuracy in emotion recognition. Experimental results show that the optimized CNN model demonstrates superior performance in multimodal emotion recognition tasks.
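As a schematic illustration of the general setup described above (one CNN branch per modality with fusion by concatenation), here is a minimal sketch; the branch shapes, feature sizes, and the `MultimodalCNN` name are assumptions and do not reflect the paper's optimized architecture.

```python
# Minimal multimodal CNN sketch: per-modality branches + concatenation fusion.
import torch
import torch.nn as nn

class MultimodalCNN(nn.Module):
    def __init__(self, num_emotions: int = 7):
        super().__init__()
        # Facial-expression branch: 2-D convolutions over face crops.
        self.face = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Voice branch: 1-D convolutions over acoustic features (e.g., MFCCs).
        self.voice = nn.Sequential(
            nn.Conv1d(40, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        # Text branch: 1-D convolutions over word embeddings.
        self.text = nn.Sequential(
            nn.Conv1d(300, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.classifier = nn.Linear(16 * 3, num_emotions)

    def forward(self, face_img, voice_seq, text_seq):
        fused = torch.cat(
            [self.face(face_img), self.voice(voice_seq), self.text(text_seq)], dim=-1)
        return self.classifier(fused)

model = MultimodalCNN()
logits = model(torch.randn(4, 3, 64, 64),   # face crops
               torch.randn(4, 40, 200),     # 40 MFCCs x 200 frames
               torch.randn(4, 300, 30))     # 300-d embeddings x 30 tokens
print(logits.shape)  # torch.Size([4, 7])
```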
Funding: Supported by the Beijing Key Laboratory of Behavior and Mental Health, Peking University.
Abstract: The fusion technique is the key to the multimodal emotion recognition task. Recently, cross-modal attention-based fusion methods have demonstrated high performance and strong robustness. However, cross-modal attention suffers from redundant features and does not capture complementary features well. We find that it is not necessary to use the entire information of one modality to reinforce the other during cross-modal interaction, and that the features capable of reinforcing a modality may comprise only a part of it. To this end, we design an innovative Transformer-based Adaptive Cross-modal Fusion Network (TACFN). Specifically, to address the redundant features, we make one modality perform intra-modal feature selection through a self-attention mechanism, so that the selected features can adaptively and efficiently interact with the other modality. To better capture the complementary information between the modalities, we obtain a fused weight vector by splicing and use it to reinforce the features of each modality. We apply TACFN to the RAVDESS and IEMOCAP datasets. For a fair comparison, we use the same unimodal representations to validate the effectiveness of the proposed fusion method. The experimental results show that TACFN brings a significant performance improvement over other methods and reaches state-of-the-art performance. All code and models can be accessed at https://github.com/shuzihuaiyu/TACFN.
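The sketch below captures the fusion idea as the abstract describes it (self-attention-based intra-modal selection, cross-modal reinforcement, then a spliced weight vector for gated fusion); it is an approximation with assumed dimensions and an assumed gating form, so consult https://github.com/shuzihuaiyu/TACFN for the authors' actual implementation.

```python
# Approximate sketch of adaptive cross-modal fusion; see the TACFN repo for the real code.
import torch
import torch.nn as nn

class AdaptiveCrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.select = nn.MultiheadAttention(dim, num_heads, batch_first=True)  # intra-modal selection
        self.cross = nn.MultiheadAttention(dim, num_heads, batch_first=True)   # cross-modal interaction
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())        # fused weight vector

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio, visual: (batch, seq_len, dim)
        selected, _ = self.select(visual, visual, visual)      # keep only salient visual features
        reinforced, _ = self.cross(audio, selected, selected)  # selected visual features reinforce audio
        weights = self.gate(torch.cat([audio, reinforced], dim=-1))  # splice, then compute weights
        return weights * reinforced + (1.0 - weights) * audio

fusion = AdaptiveCrossModalFusion()
out = fusion(torch.randn(2, 30, 256), torch.randn(2, 30, 256))
print(out.shape)  # torch.Size([2, 30, 256])
```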
Abstract: A Tsinghua-developed biometric recognition system, designed to bolster traditional public security identification measures, was highly commended in an appraisal by the Ministry of Education on June 22, 2005.