Funding: supported by the Henan Provincial Science and Technology Research Projects 232102211017, 232102211006, 232102210044, 242102211020, and 242102211007, and by the Zhengzhou University of Light Industry Science and Technology Innovation Team Program Project 23XNKJTD0205.
Abstract: Within multimodal neural machine translation (MNMT), seamlessly integrating textual data with corresponding image data to improve translation accuracy has become a pressing challenge. Discrepancies between the text and its associated images can introduce visual noise that diverts the model's attention away from the textual content and degrades overall translation quality. To address this visual noise problem, we propose the KDNR-MNMT model. The model combines knowledge distillation with an anti-noise interaction mechanism, making full use of synthesized image-text knowledge and local image interaction masks to extract more effective visual features. In addition, KDNR-MNMT adopts a multimodal adaptive gating fusion strategy to strengthen the constructive interaction between the different modalities. By integrating a perceptual attention mechanism that exploits cross-modal interaction cues within the Transformer framework, our approach noticeably improves translation quality. To validate the model, we conducted extensive experiments on the widely used Multi30K dataset. The results show gains of 0.78 BLEU and 0.99 METEOR points over prevailing methods, confirming the effectiveness of our strategy for mitigating visual interference and advancing the state of multimodal NMT.
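The abstract does not give the exact formulation of the multimodal adaptive gating fusion. The sketch below is a minimal, hypothetical PyTorch realization of one common form of gated fusion, assuming a sigmoid gate computed from concatenated text and visual states; all class names, dimensions, and the residual-style mixing are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn as nn

class GatedMultimodalFusion(nn.Module):
    """Illustrative adaptive gating fusion of text and image features.

    A sigmoid gate, computed from the concatenated text and visual states,
    controls how much visual information is mixed into each text position,
    so noisy visual features can be suppressed per dimension.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, text_states: torch.Tensor, visual_states: torch.Tensor) -> torch.Tensor:
        # text_states:   (batch, seq_len, d_model) -- Transformer encoder output
        # visual_states: (batch, seq_len, d_model) -- image features aligned to tokens,
        #                e.g. obtained via cross-modal attention over region features
        lam = torch.sigmoid(self.gate(torch.cat([text_states, visual_states], dim=-1)))
        # Gate near 0 suppresses visual features; near 1 lets them through.
        return text_states + lam * visual_states


# Minimal usage example with random tensors
if __name__ == "__main__":
    fusion = GatedMultimodalFusion(d_model=512)
    text = torch.randn(2, 20, 512)
    vision = torch.randn(2, 20, 512)
    print(fusion(text, vision).shape)  # torch.Size([2, 20, 512])
```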