Abstract: Targeted multimodal sentiment classification (TMSC) aims to identify the sentiment polarity of a target mentioned in a multimodal post. The majority of current studies on this task focus on mapping the image and the text to a high-dimensional space in order to obtain and fuse implicit representations, ignoring the rich semantic information contained in the images and overlooking the contribution of the visual modality to the multimodal fusion representation, both of which can influence the results of TMSC tasks. This paper proposes a general model for Improving Targeted Multimodal Sentiment Classification with Semantic Description of Images (ITMSC) to tackle these issues and improve the accuracy of multimodal sentiment analysis. Specifically, the ITMSC model can automatically adjust the contribution of images in the fusion representation by exploiting semantic descriptions of images and text similarity relations. Further, we propose a target-based attention module to capture target-text relevance, an image-based attention module to capture image-text relevance, and a target-image matching module built on the former two modules to properly align the target with the image so that fine-grained semantic information can be extracted. Our experimental results demonstrate that our model achieves comparable performance to several state-of-the-art approaches on two multimodal sentiment datasets. Our findings indicate that incorporating semantic descriptions of images can enhance the understanding of multimodal content and lead to improved sentiment analysis performance.
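The caption-similarity gating idea in the ITMSC abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `weighted_fusion` helper, the additive fusion, and the use of plain cosine similarity as the relevance gate are all assumptions for the sketch.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def weighted_fusion(text_repr, image_repr, caption_repr):
    # Scale the image's contribution by how well its semantic
    # description (caption) matches the post text, then fuse additively.
    alpha = max(cosine(text_repr, caption_repr), 0.0)  # relevance gate in [0, 1]
    return [t + alpha * i for t, i in zip(text_repr, image_repr)]

text, image = [1.0, 0.0], [0.0, 1.0]
fused_relevant = weighted_fusion(text, image, caption_repr=[1.0, 0.0])
fused_irrelevant = weighted_fusion(text, image, caption_repr=[0.0, 1.0])
```

A caption aligned with the text (alpha near 1) lets the image signal through, while an unrelated caption (alpha near 0) suppresses it, so `fused_relevant` keeps the image component and `fused_irrelevant` reduces to the text representation alone.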
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62476216 and 62273272; in part by the Key Research and Development Program of Shaanxi Province under Grant 2024GX-YBXM-146; in part by the Scientific Research Program Funded by the Education Department of Shaanxi Provincial Government under Grant 23JP091; and by the Youth Innovation Team of Shaanxi Universities.
Abstract: Multimodal Aspect-Based Sentiment Analysis (MABSA) aims to detect sentiment polarity toward specific aspects by leveraging both textual and visual inputs. However, existing models suffer from weak aspect-image alignment, modality imbalance dominated by textual signals, and limited reasoning for implicit or ambiguous sentiments that require external knowledge. To address these issues, we propose a unified framework named Gated-Linear Aspect-Aware Multimodal Sentiment Network (GLAMSNet). First, an input encoding module is employed to construct modality-specific and aspect-aware representations. Subsequently, we introduce an image-aspect correlation matching module to provide hierarchical supervision for visual-textual alignment. Building upon these components, we further design a Gated-Linear Aspect-Aware Fusion (GLAF) module to enhance aspect-aware representation learning by adaptively filtering irrelevant textual information and refining semantic alignment under aspect guidance. Additionally, an external language model knowledge-guided mechanism is integrated to incorporate sentiment-aware prior knowledge from GPT-4o, enabling robust semantic reasoning, especially under noisy or ambiguous inputs. Experiments on the Twitter-15 and Twitter-17 datasets demonstrate that the proposed model outperforms most state-of-the-art methods, achieving 79.36% accuracy and 74.72% F1-score on Twitter-15, and 74.31% accuracy and 72.01% F1-score on Twitter-17.
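The gating idea behind a gated-linear fusion module can be sketched as below. The function name and the per-dimension linear gate are illustrative assumptions; the actual GLAF module is a learned neural component whose details are not given in the abstract.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_linear_fusion(text_feats, aspect_feats, w_gate, b_gate):
    # An element-wise gate computed from the aspect representation
    # decides how much of each text feature passes through,
    # filtering text dimensions irrelevant to the aspect.
    gates = [sigmoid(w * a + b_gate) for w, a in zip(w_gate, aspect_feats)]
    return [g * t for g, t in zip(gates, text_feats)]

# With zero gate parameters the gate is neutral (0.5 everywhere),
# so each text feature is simply halved.
out = gated_linear_fusion([2.0, 4.0], [1.0, -1.0], w_gate=[0.0, 0.0], b_gate=0.0)
```

In a trained model `w_gate` and `b_gate` would be learned, letting the gate open for aspect-relevant feature dimensions and close for irrelevant ones.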
Abstract: This paper studies a sentiment classification method based on a multimodal adversarial autoencoder. It introduces the multimodal adversarial autoencoder approach to emotion classification and reports experiments on the encoder-based classification method. The experimental analysis shows that this encoder achieves higher precision in emotion classification than competing encoders. It is hoped that this analysis can serve as a reference for emotion classification under current intelligent-algorithm paradigms.
Abstract: Multimodal Sentiment Analysis (SA) is gaining popularity due to its broad application potential. Existing studies have focused on the SA of single modalities, such as texts or photos, posing challenges in effectively handling social media data with multiple modalities. Moreover, most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations, leading to unsatisfactory sentiment classification results. Motivated by this, we propose a new visual-textual sentiment classification model named Multi-Model Fusion (MMF), which uses a mixed fusion framework for SA to effectively capture the essential information and the intrinsic relationship between visual and textual content. The proposed model comprises three deep neural networks. Two different neural networks are proposed to extract the most emotionally relevant aspects of image and text data, so that more discriminative features are gathered for accurate sentiment classification. Then, a multichannel joint fusion model with a self-attention technique is proposed to exploit the intrinsic correlation between visual and textual characteristics and obtain emotionally rich information for joint sentiment classification. Finally, the results of the three classifiers are integrated using a decision fusion scheme to improve the robustness and generalizability of the proposed model. An interpretable visual-textual sentiment classification model is further developed using Local Interpretable Model-agnostic Explanations (LIME) to ensure the model's explainability and resilience. The proposed MMF model has been tested on four real-world sentiment datasets, achieving 99.78% accuracy on Binary_Getty (BG), 99.12% on Binary_iStock (BIS), 95.70% on Twitter, and 79.06% on the Multi-View Sentiment Analysis (MVSA) dataset. These results demonstrate the superior performance of our MMF model compared to single-model approaches and current state-of-the-art techniques based on model evaluation criteria.
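The decision-fusion step described above, integrating the outputs of three classifiers, commonly amounts to a (weighted) average of class probabilities followed by an argmax. A minimal sketch, with the function name and the uniform default weights as assumptions:

```python
def decision_fusion(prob_lists, weights=None):
    # Late (decision-level) fusion: combine per-classifier class
    # probabilities into one distribution by weighted averaging,
    # then predict the class with the highest fused probability.
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n
    num_classes = len(prob_lists[0])
    fused = [sum(w * probs[c] for w, probs in zip(weights, prob_lists))
             for c in range(num_classes)]
    return fused.index(max(fused))

# Three classifiers (e.g., text-only, image-only, joint) vote on 3 classes.
pred = decision_fusion([
    [0.7, 0.2, 0.1],  # text-only classifier
    [0.2, 0.5, 0.3],  # image-only classifier
    [0.6, 0.3, 0.1],  # joint fusion classifier
])
```

Even though the image-only classifier disagrees, the averaged distribution still favors class 0, which is what makes late fusion robust to a single weak modality.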
Funding: Supported by the Science and Technology Research Project of the Jiangxi Education Department under Grant No. GJJ2203306.
Abstract: Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modalities, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation stems from their training on unimodal data and necessitates complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
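Multimodal contrastive objectives of the kind described above typically take an InfoNCE-like form, pulling matched image-text pairs together and pushing mismatched pairs apart within a batch. The sketch below is a generic symmetric InfoNCE loss over a precomputed similarity matrix, not this paper's exact formulation; the function name and temperature value are assumptions.

```python
import math

def info_nce(sim_matrix, temperature=0.1):
    # Symmetric image-text contrastive loss: matched pairs sit on the
    # diagonal of the similarity matrix; off-diagonal entries act as
    # in-batch negatives. Lower loss means better alignment.
    n = len(sim_matrix)

    def row_loss(row, idx):
        logits = [s / temperature for s in row]
        m = max(logits)  # subtract max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        return log_z - logits[idx]  # cross-entropy toward the matched index

    i2t = sum(row_loss(sim_matrix[i], i) for i in range(n)) / n
    t2i = sum(row_loss([sim_matrix[j][i] for j in range(n)], i)
              for i in range(n)) / n
    return 0.5 * (i2t + t2i)

aligned = info_nce([[1.0, 0.0], [0.0, 1.0]])     # correct pairs most similar
misaligned = info_nce([[0.0, 1.0], [1.0, 0.0]])  # pairs swapped
```

Minimizing this loss drives the diagonal similarities up relative to the off-diagonal ones, which is the alignment property the contrastive method exploits.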