Journal Articles
3 articles found
NLWSNet:a weakly supervised network for visual sentiment analysis in mislabeled web images
Authors: Luo-yang XUE, Qi-rong MAO, Xiao-hua HUANG, Jie CHEN. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2020, Issue 9, pp. 1321-1333 (13 pages)
Large-scale datasets are driving the rapid development of deep convolutional neural networks for visual sentiment analysis. However, annotating large-scale datasets is expensive and time consuming. Instead, it is easy to obtain weakly labeled web images from the Internet. However, noisy labels still lead to seriously degraded performance when images taken directly from the web are used to train networks. To address this drawback, we propose an end-to-end weakly supervised learning network that is robust to mislabeled web images. Specifically, the proposed attention module automatically eliminates the distraction of samples with incorrect labels by reducing their attention scores during training. On the other hand, the special-class activation map module is designed to stimulate the network by focusing on the significant regions of correctly labeled samples in a weakly supervised learning approach. Beyond feature learning, regularization is applied to the classifier to minimize the distance between samples within the same class and maximize the distance between different class centroids. Quantitative and qualitative evaluations on well- and mislabeled web image datasets demonstrate that the proposed algorithm outperforms related methods.
Keywords: visual sentiment analysis; weakly supervised learning; mislabeled samples; significant sentiment regions
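The attention-based down-weighting of noisy samples described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the softmax form of the attention, and the per-sample scalar losses are all assumptions.

```python
import math

def attention_weights(scores):
    # Softmax over per-sample attention logits. Samples the network
    # scores low (assumed here to indicate likely mislabeled images)
    # receive small weights and thus contribute less to training.
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_loss(losses, weights):
    # Attention-weighted training loss: distraction from noisy
    # samples is suppressed rather than removed outright.
    return sum(l * w for l, w in zip(losses, weights))
```

A clean sample with a high attention score then dominates the batch loss, while a suspected mislabeled one is softly discounted instead of being hard-filtered.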
Multimodal sentiment analysis for social media contents during public emergencies (cited by 1)
Authors: Tao Fan, Hao Wang, Peng Wu, Chen Ling, Milad Taleby Ahvanooey. Journal of Data and Information Science (CSCD), 2023, Issue 3, pp. 61-87 (27 pages)
Purpose: Nowadays, public opinion during public emergencies involves not only textual content but also images. However, existing works mainly focus on textual content and do not provide satisfactory sentiment analysis accuracy, lacking a combination of multimodal content. In this paper, we propose to combine texts and images generated on social media to perform sentiment analysis.
Design/methodology/approach: We propose a Deep Multimodal Fusion Model (DMFM), which combines textual and visual sentiment analysis. We first train a word2vec model on a large-scale public emergency corpus to obtain semantic-rich word vectors as the input for textual sentiment analysis. BiLSTM is employed to generate encoded textual embeddings. To fully excavate visual information from images, a modified pretrained VGG16-based sentiment analysis network is used with the best-performing fine-tuning strategy. A multimodal fusion method is implemented to fuse textual and visual embeddings completely, producing predicted labels.
Findings: We performed extensive experiments on Weibo and Twitter public emergency datasets to evaluate the performance of the proposed model. Experimental results demonstrate that the DMFM provides higher accuracy than baseline models. The introduction of images can boost the performance of sentiment analysis during public emergencies.
Research limitations: In the future, we will test our model on wider datasets and consider better ways to learn multimodal fusion information.
Practical implications: We build an efficient multimodal sentiment analysis model for social media content during public emergencies.
Originality/value: We consider the images posted by online users during public emergencies on social platforms. The proposed method presents a novel scope for sentiment analysis during public emergencies and provides decision support for the government when formulating policies in public emergencies.
Keywords: public emergency; multimodal sentiment analysis; social platform; textual sentiment analysis; visual sentiment analysis
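The multimodal fusion step described in the abstract can be sketched as a late fusion of the two modality embeddings followed by a linear classification head. This is a minimal illustration under stated assumptions, not the DMFM itself: concatenation as the fusion operator, and the `concat_fuse` / `linear_softmax` helpers, are choices made here for brevity.

```python
import math

def concat_fuse(text_emb, image_emb):
    # Late fusion by concatenation: the BiLSTM text embedding and the
    # VGG16-style image embedding are joined into one feature vector.
    return list(text_emb) + list(image_emb)

def linear_softmax(fused, weights, bias):
    # A single linear layer over the fused vector, followed by
    # softmax, yields a probability per sentiment class.
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

In practice the fusion layer and classifier would be trained jointly with the two encoders; the sketch only shows the forward pass shape of the pipeline.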
Survey of visual sentiment prediction for social media analysis (cited by 1)
Authors: Rongrong JI, Donglin CAO, Yiyi ZHOU, Fuhai CHEN. Frontiers of Computer Science (SCIE, EI, CSCD), 2016, Issue 4, pp. 602-611 (10 pages)
Recent years have witnessed a rapid spread of multi-modality microblogs like Twitter and Sina Weibo, composed of image, text, and emoticon. Visual sentiment prediction on such microblog-based social media has recently attracted ever-increasing research focus with broad application prospects. In this paper, we give a systematic review of the recent advances and cutting-edge techniques for visual sentiment analysis, providing detailed comparison as well as experimental evaluation of the cutting-edge methods. We further reveal and discuss future trends and potential directions for visual sentiment prediction.
Keywords: visual sentiment analysis; sentiment prediction; human emotion