Fund: Supported by the National Natural Science Foundation of China (Nos. U22A2034, 62177047), the High Caliber Foreign Experts Introduction Plan funded by MOST, and the Central South University Research Programme of Advanced Interdisciplinary Studies (No. 2023QYJC020).
Abstract: Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions in which visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
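The abstract does not include implementation details, so the following PyTorch sketch is purely illustrative: it shows one plausible way that region-level visual features could be reweighted by the decoder's current linguistic context, in the spirit of the VWM described above. The class name, feature dimensions, and the gating formulation are assumptions for illustration, not the authors' code.

```python
# Minimal sketch, assuming a region-feature captioner with a recurrent or
# transformer decoder whose hidden state serves as the linguistic context.
import torch
import torch.nn as nn

class VisualWeightingSketch(nn.Module):
    """Reweights region-level visual features using the decoder's current
    linguistic context, so the visual input evolves as the caption is generated."""
    def __init__(self, vis_dim: int = 2048, ctx_dim: int = 512, attn_dim: int = 512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, attn_dim)   # project visual regions
        self.ctx_proj = nn.Linear(ctx_dim, attn_dim)   # project linguistic context
        self.score = nn.Linear(attn_dim, 1)            # per-region gate score

    def forward(self, vis_feats: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # vis_feats: (batch, regions, vis_dim); ctx: (batch, ctx_dim) decoder state
        joint = torch.tanh(self.vis_proj(vis_feats) + self.ctx_proj(ctx).unsqueeze(1))
        weights = torch.sigmoid(self.score(joint))     # (batch, regions, 1)
        return vis_feats * weights                     # context-reweighted features

# Usage: at each decoding step, feed the current hidden state as context.
vwm = VisualWeightingSketch()
regions = torch.randn(2, 36, 2048)   # e.g., 36 detected regions per image
hidden = torch.randn(2, 512)         # decoder hidden state at step t
reweighted = vwm(regions, hidden)    # (2, 36, 2048)
```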
Fund: Funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. BR24993166).
Abstract: While automatic image captioning systems have made notable progress in the past few years, generating captions that fully convey sentiment remains a considerable challenge. Although existing models achieve strong performance in visual recognition and factual description, they often fail to account for the emotional context that is naturally present in human-generated captions. To address this gap, we propose the Sentiment-Driven Caption Generator (SDCG), which combines transformer-based visual and textual processing with multi-level fusion. RoBERTa is used to extract sentiment from textual input, while visual features are handled by the Vision Transformer (ViT). These features are fused using several fusion approaches, including Concatenation, Attention, Visual-Sentiment Co-Attention (VSCA), and Cross-Attention. Our experiments demonstrate that SDCG significantly outperforms baseline models in sentiment accuracy, reaching 94.52% compared with 82.01% for the Generalized Image Transformer (GIT) and 83.07% for Bootstrapping Language-Image Pre-training (BLIP), while also improving BLEU and ROUGE-L scores. More importantly, the captions are more natural: they incorporate emotional cues and contextual awareness, making them resemble those written by a human.
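SDCG's exact fusion layers are not described beyond the list of fusion options, so the sketch below only illustrates the cross-attention variant in generic PyTorch terms: RoBERTa token features attend to ViT patch features and are combined through a residual connection. The class name, dimensions, and the use of nn.MultiheadAttention are assumptions, not the published model.

```python
# Minimal sketch of cross-attention fusion, assuming both encoders emit
# 768-dimensional features (the standard ViT-Base / RoBERTa-base width).
import torch
import torch.nn as nn

class CrossAttentionFusionSketch(nn.Module):
    """Fuses ViT patch features with RoBERTa token features via cross-attention,
    one of the fusion options listed in the abstract."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats: torch.Tensor, vis_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (batch, tokens, dim) from RoBERTa
        # vis_feats:  (batch, patches, dim) from ViT
        fused, _ = self.cross_attn(query=text_feats, key=vis_feats, value=vis_feats)
        return self.norm(text_feats + fused)   # residual + norm, a common pattern

fusion = CrossAttentionFusionSketch()
text = torch.randn(2, 20, 768)       # RoBERTa token embeddings
patches = torch.randn(2, 197, 768)   # ViT patch embeddings (CLS + 196 patches)
out = fusion(text, patches)          # (2, 20, 768) sentiment-aware fused features
```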