Funding: National Social Science Fund of China, grant number 23BYY197.
Abstract: Image captioning, a pivotal research area at the intersection of image understanding, artificial intelligence, and linguistics, aims to generate natural language descriptions for images. This paper proposes an efficient image captioning model named Mob-IMWTC, which integrates improved wavelet convolution (IMWTC) with an enhanced MobileNet V3 architecture. The enhanced MobileNet V3 uses a transformer encoder as its encoding module and a transformer decoder as its decoding module. This network significantly reduces the required memory and model training time while maintaining high accuracy in generating image descriptions. IMWTC provides large receptive fields without significantly increasing the number of parameters or the computational overhead. The improved MobileNet V3 model has its classifier removed, and IMWTC layers replace its original convolutional layers. This makes Mob-IMWTC exceptionally well-suited for deployment on low-resource devices. Experimental results, based on objective evaluation metrics such as BLEU, ROUGE, CIDEr, METEOR, and SPICE, demonstrate that Mob-IMWTC outperforms state-of-the-art models, including three CNN architectures (CNN-LSTM, CNN-Att-LSTM, CNN-Tran), two mainstream methods (LCM-Captioner, ClipCap), and our previous work (Mob-Tran). Subjective evaluations further validate the model's superiority in terms of grammaticality, adequacy, logic, readability, and humanness. Mob-IMWTC offers a lightweight yet effective solution for image captioning suitable for deployment on resource-constrained devices.
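The abstract does not give IMWTC's exact formulation, but the receptive-field claim can be illustrated with a toy sketch: decomposing an image with a single-level Haar wavelet transform halves its spatial resolution, so a small convolution applied to the low-frequency subband covers twice as large a region of the original image at no extra parameter cost. The function names (`haar_dwt2`, `conv2d_valid`) and the single-channel NumPy setting below are hypothetical simplifications, not the paper's implementation.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar decomposition of an H x W array (H, W even).

    Returns the four half-resolution subbands: approximation (LL) and
    horizontal/vertical/diagonal details (LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def conv2d_valid(x, k):
    """Naive 'valid'-mode 2-D cross-correlation (illustration only)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)            # each subband is 4 x 4
feat = conv2d_valid(ll, np.ones((3, 3)) / 9.0)
print(ll.shape, feat.shape)                # (4, 4) (2, 2)
```

Each output value of `feat` aggregates a 6 x 6 window of the original 8 x 8 image while using only the 9 weights of a 3 x 3 kernel, which is the essence of how wavelet-based convolutions enlarge the receptive field cheaply; the paper's IMWTC presumably applies learnable kernels across all subbands within the MobileNet V3 backbone.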