Funding: National Natural Science Foundation of China (61703363); Shanxi Provincial Basic Research Program (202403021221206); Key Project of Shanxi Provincial Strategic Research on Science and Technology (202304031401011); Funding Project for Scientific Research Innovation Team on Data Mining and Industrial Intelligence Applications (YCXYTD-202402); Yuncheng University Research Project (YQ-2020021).
Abstract: In existing image manipulation localization methods, the receptive field of standard convolution is limited, and high-frequency information about manipulation traces is easily lost during feature transfer. In addition, the fixed sampling kernels used during feature fusion make it difficult to attend to local changes in features, which limits localization accuracy. This paper proposes an image manipulation localization method based on dual-branch hybrid convolution. First, a dual-branch hybrid convolution module is designed to expand the model's receptive field, strengthening the extraction of contextual semantic information while keeping the model focused on the high-frequency detail features of manipulation traces. Second, a multiscale content-aware feature fusion module dynamically generates an adaptive sampling kernel for each position in the feature map, allowing the model to attend to the details of local features while locating the manipulated area. Experimental results on multiple datasets show that this method not only improves the accuracy of image manipulation localization but also enhances the robustness of the model.
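The abstract does not give implementation details for either module, so the following PyTorch sketches are only one plausible reading of the described ideas. The class names, the choice of a dilated branch for context and a fixed Laplacian high-pass branch for manipulation traces, and all layer sizes are assumptions, not the authors' released code.

```python
# Illustrative sketch only: the branch designs (dilated convolution for context,
# a fixed high-pass filter for high-frequency traces) and the name
# DualBranchHybridConv are assumptions made for this example.
import torch
import torch.nn as nn


class DualBranchHybridConv(nn.Module):
    """Hypothetical dual-branch block: a dilated branch enlarges the receptive
    field for contextual semantics, while a high-pass branch preserves
    high-frequency manipulation traces; outputs are fused by a 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 3):
        super().__init__()
        # Context branch: dilated 3x3 convolution widens the receptive field.
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Detail branch: fixed Laplacian-style high-pass filter followed by a
        # learnable 3x3 convolution to emphasize high-frequency residuals.
        hf = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer("hf_kernel", hf.repeat(in_ch, 1, 1, 1))
        self.detail = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = self.context(x)
        # Depthwise high-pass filtering keeps per-channel residual structure.
        res = nn.functional.conv2d(x, self.hf_kernel, padding=1, groups=x.shape[1])
        det = self.detail(res)
        return self.fuse(torch.cat([ctx, det], dim=1))
```

For the content-aware fusion, the abstract only says that an adaptive sampling kernel is generated per position; the sketch below follows the familiar CARAFE-style reassembly pattern as a stand-in for the unspecified design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentAwareFusion(nn.Module):
    """Hypothetical content-aware fusion: a small conv head predicts a k x k
    sampling kernel for every spatial position, and features are reassembled
    with those per-position kernels instead of a fixed kernel."""

    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        self.k = kernel_size
        self.kernel_pred = nn.Conv2d(channels, kernel_size * kernel_size, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Per-position kernels, normalized so each sums to one.
        kernels = F.softmax(self.kernel_pred(x), dim=1)      # (B, k*k, H, W)
        patches = F.unfold(x, self.k, padding=self.k // 2)   # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        kernels = kernels.view(b, 1, self.k * self.k, h * w)
        # Weighted sum of each local neighbourhood with its predicted kernel.
        return (patches * kernels).sum(dim=2).view(b, c, h, w)
```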
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region under Grant No. 2023D01C21 and the National Natural Science Foundation of China under Grant No. 62362063.
Abstract: Existing image manipulation localization (IML) techniques require large, densely annotated sets of forged images. This requirement greatly increases labeling costs and limits a model's ability to handle manipulation types that are novel or absent from the training data. To address these issues, we present CLIP-IML, an IML framework that leverages contrastive language-image pre-training (CLIP). A lightweight feature-reconstruction module transforms CLIP token sequences into spatial tensors, after which a compact feature-pyramid network and a multi-scale fusion decoder work together to capture information from fine to coarse levels. We evaluated CLIP-IML on ten public datasets that cover copy-move, splicing, removal, and artificial intelligence (AI)-generated forgeries. The framework raises the average F1-score by 7.85% relative to the strongest recent baselines and secures either first- or second-place performance on every dataset. Ablation studies show that CLIP pre-training, higher-resolution inputs, and the multi-scale decoder each make complementary contributions. Under six common post-processing perturbations, as well as the compression pipelines used by Facebook, Weibo, and WeChat, the performance decline never exceeds 2.2%, confirming strong practical robustness. Moreover, CLIP-IML requires only a few thousand annotated images for training, which markedly reduces data-collection and labeling effort compared with previous methods. All of these results indicate that CLIP-IML generalizes well across a wide range of tampering scenarios.
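The abstract states only that a lightweight module reshapes CLIP token sequences into spatial tensors; one common way to do this for a ViT-based CLIP encoder is sketched below. The projection width, the 14x14 patch grid (from a 224x224 input to ViT-B/16), and the class name FeatureReconstruction are assumptions for illustration, not the paper's actual module.

```python
# Illustrative sketch, not the authors' code: drop the CLS token, project the
# patch tokens, and reshape them onto the ViT patch grid as a 2D feature map.
import torch
import torch.nn as nn


class FeatureReconstruction(nn.Module):
    """Hypothetical reconstruction head that turns a CLIP ViT token sequence
    into a spatial tensor usable by a feature-pyramid network."""

    def __init__(self, token_dim: int = 768, out_ch: int = 256, grid: int = 14):
        super().__init__()
        self.grid = grid
        self.proj = nn.Linear(token_dim, out_ch)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, 1 + grid*grid, token_dim) as produced by a CLIP ViT encoder.
        patch_tokens = tokens[:, 1:, :]              # remove the CLS token
        x = self.proj(patch_tokens)                  # (B, grid*grid, out_ch)
        b, n, c = x.shape
        return x.transpose(1, 2).reshape(b, c, self.grid, self.grid)


if __name__ == "__main__":
    # Assumed setup: 224x224 input to ViT-B/16 gives 1 CLS + 14*14 patch tokens of width 768.
    tokens = torch.randn(2, 1 + 14 * 14, 768)
    fmap = FeatureReconstruction()(tokens)
    print(fmap.shape)  # torch.Size([2, 256, 14, 14])
```

The resulting spatial map could then feed a standard feature-pyramid network and multi-scale decoder; those stages are not sketched here because the abstract gives no further detail about them.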