Funding: Supported by the National Natural Science Foundation of China (Nos. U22A2034, 62177047), the High Caliber Foreign Experts Introduction Plan funded by MOST, and the Central South University Research Programme of Advanced Interdisciplinary Studies (No. 2023QYJC020).
Abstract: Image captioning has seen significant research effort over the last decade. The goal is to generate semantically meaningful, syntactically accurate sentences that describe the visual content depicted in photographs. Many real-world applications rely on image captioning, such as helping people with visual impairments perceive their surroundings. To formulate a coherent and relevant textual description, computer vision techniques are used to comprehend the visual content of an image, followed by natural language processing methods to generate the caption. Numerous approaches and models have been developed to address this multifaceted problem, several of which represent state-of-the-art solutions in the field. This work offers a focused perspective on the most critical strategies and techniques for enhancing image caption generation. Rather than reviewing all previous image captioning work, we analyze techniques that substantially improve caption generation, including visual attention methods, the exploration of semantic information in captions, and multi-caption generation. Further advancements such as neural architecture search, few-shot learning, multi-phase learning, and cross-modal embedding within image captioning networks are examined for their transformative effects. The comprehensive quantitative analysis conducted in this study identifies cutting-edge methodologies and sheds light on their impact, driving forward the frontier of image captioning technology.
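To make the encoder-decoder-with-attention pipeline mentioned in this abstract concrete, the following is a minimal sketch (not the survey's specific model): a PyTorch decoder that attends over precomputed image-region features at each word step. All names (AttentionDecoder, region_feats, the dimensions) are illustrative assumptions, and a real captioner would add beam search, gating of the context vector, and a proper CNN or Transformer encoder.

```python
# Minimal sketch of soft visual attention for image captioning (assumed
# PyTorch implementation; names and dimensions are illustrative only).
import torch
import torch.nn as nn

class AttentionDecoder(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # additive (Bahdanau-style) attention over image regions
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_out = nn.Linear(hidden_dim, 1)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, region_feats, captions):
        # region_feats: (B, R, feat_dim) region/grid features from a CNN encoder
        # captions:     (B, T) token ids, teacher-forced during training
        B, T = captions.shape
        h = region_feats.new_zeros(B, self.lstm.hidden_size)
        c = region_feats.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(T):
            # attention weights over regions, conditioned on the hidden state
            scores = self.att_out(torch.tanh(
                self.att_feat(region_feats) + self.att_hid(h).unsqueeze(1)))  # (B, R, 1)
            alpha = torch.softmax(scores, dim=1)
            context = (alpha * region_feats).sum(dim=1)                        # (B, feat_dim)
            x = torch.cat([self.embed(captions[:, t]), context], dim=-1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.fc(h))
        return torch.stack(logits, dim=1)  # (B, T, vocab_size)

# toy usage: 2 images, 36 regions each, captions of length 5
decoder = AttentionDecoder(vocab_size=1000)
out = decoder(torch.randn(2, 36, 2048), torch.randint(0, 1000, (2, 5)))
```

The per-step attention weights are what most visual-attention captioners expose for interpretability: the softmax over regions shows which part of the image drove each generated word.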
Abstract: Set-prediction-based dense video captioning methods lack explicit inter-event feature interaction and are not trained to account for inter-event differences, which leads the model to predict duplicate events or generate near-identical sentences. To address this, a dense video captioning method based on an event maximal margin (dense video captioning based on event maximal margin, EMM-DVC) is proposed. The event margin is a score that combines inter-event feature similarity, the temporal distance between events in the video, and the diversity of the generated descriptions. By maximizing the event margin, EMM-DVC keeps similar predictions far apart while keeping each prediction close to its corresponding ground-truth event. In addition, EMM-DVC introduces an event margin distance loss function that enlarges the margin distance between events, guiding the model to attend to different events. Experiments on the ActivityNet Captions dataset show that EMM-DVC generates more diverse captions than comparable dense video captioning models and achieves the best results on multiple metrics compared with mainstream dense video captioning models.
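The abstract does not give the exact scoring formula, so the following is only a hedged sketch in the spirit of a maximal-margin objective: pull each predicted event toward its matched ground-truth event, and push pairs of predictions apart when their features are similar and their temporal positions are close (the situation that tends to produce duplicate captions). The caption-diversity term and the precise combination used by EMM-DVC are not reproduced here; the weighting below is an assumption for illustration.

```python
# Hedged sketch of a margin-style loss for dense video captioning; not the
# EMM-DVC formulation, only an illustrative approximation of the idea.
import torch
import torch.nn.functional as F

def event_margin_loss(pred_feats, pred_times, gt_feats, margin=0.5):
    """
    pred_feats: (N, D) features of N predicted events, matched 1:1 to gt_feats
    pred_times: (N, 2) normalized (start, end) of each predicted event
    gt_feats:   (N, D) features of the matched ground-truth events
    """
    pred_feats = F.normalize(pred_feats, dim=-1)
    gt_feats = F.normalize(gt_feats, dim=-1)

    # (1) pull each prediction toward its matched ground-truth event
    pull = (1.0 - (pred_feats * gt_feats).sum(dim=-1)).mean()

    # (2) push similar predictions apart: pairwise feature similarity weighted
    #     by temporal closeness of the event centers
    sim = pred_feats @ pred_feats.t()                        # (N, N) cosine similarity
    centers = pred_times.mean(dim=-1)                        # event centers in [0, 1]
    time_close = 1.0 - (centers[:, None] - centers[None, :]).abs()
    off_diag = ~torch.eye(len(pred_feats), dtype=torch.bool)
    push = F.relu(sim * time_close - (1.0 - margin))[off_diag].mean()

    return pull + push

# toy usage: 4 predicted events with 256-d features
loss = event_margin_loss(torch.randn(4, 256),
                         torch.rand(4, 2).sort(dim=-1).values,
                         torch.randn(4, 256))
```

Minimizing the push term only incurs a penalty once two predictions exceed the similarity threshold, which is what encourages the set of predicted events (and hence their captions) to stay mutually distinct.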