Funding: supported by the Scientific Research Fund of Hunan Provincial Natural Science Foundation under Grant 20231J60257, the Hunan Provincial Engineering Research Center for Intelligent Rehabilitation Robotics and Assistive Equipment under Grant 2025SH501, and Inha University and Design of a Conflict Detection and Validation Tool under Grant HX2024123.
Abstract: Image inpainting refers to synthesizing missing content in an image from known information in order to restore occluded or damaged regions. With the increasing complexity of image inpainting tasks and the growth of data scale, existing deep learning methods still have limitations: they lack the ability to capture long-range dependencies, and their performance on multi-scale image structures is suboptimal. To address these problems, this paper proposes an image inpainting method based on a parallel dual-branch learnable Transformer network. The encoder of the proposed generator consists of a dual-branch parallel structure of stacked CNN blocks and Transformer blocks, which extract local and global feature information from images, respectively. A dual-branch fusion module then combines the features obtained from the two branches. Additionally, a gated full-scale skip connection module is proposed to further enhance the coherence of the inpainting results and alleviate information loss. Finally, experimental results on three public datasets demonstrate the superior performance of the proposed method.
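The dual-branch encoder described in the abstract can be sketched roughly as follows in PyTorch. This is a minimal illustrative sketch only: the layer widths, patch size, number of blocks, and the 1×1-convolution fusion are all assumptions for demonstration, not the paper's actual configuration, and the class names are hypothetical.

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Local-feature branch: a stack of CNN blocks (sizes are illustrative)."""
    def __init__(self, dim=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class TransformerBranch(nn.Module):
    """Global-feature branch: patch embedding followed by Transformer blocks."""
    def __init__(self, dim=32, patch=8):
        super().__init__()
        # Non-overlapping patch embedding implemented as a strided convolution.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Bring the token map back to the input resolution for fusion.
        self.up = nn.Upsample(scale_factor=patch, mode="nearest")

    def forward(self, x):
        t = self.embed(x)                               # (B, C, H/p, W/p)
        b, c, h, w = t.shape
        t = self.encoder(t.flatten(2).transpose(1, 2))  # (B, H*W, C) tokens
        t = t.transpose(1, 2).reshape(b, c, h, w)
        return self.up(t)

class DualBranchEncoder(nn.Module):
    """Run both branches in parallel and fuse their feature maps.

    The paper describes a dedicated dual-branch fusion module; a plain
    concatenation plus 1x1 convolution is used here as a stand-in.
    """
    def __init__(self, dim=32):
        super().__init__()
        self.local_branch = ConvBranch(dim)
        self.global_branch = TransformerBranch(dim)
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x):
        local_feat = self.local_branch(x)
        global_feat = self.global_branch(x)
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))

x = torch.randn(1, 3, 64, 64)        # a dummy 64x64 RGB image
feat = DualBranchEncoder()(x)
print(feat.shape)                    # torch.Size([1, 32, 64, 64])
```

The key design point mirrored here is that the two branches see the same input in parallel, so the fused feature map carries both local texture detail (from the convolutions) and long-range dependencies (from self-attention) at every spatial position.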