To address image authenticity verification in model copyright protection and deepfake detection, a high-quality, highly robust watermarking method for diffusion-model outputs, DeWM (Decoder-driven WaterMarking for diffusion model), is proposed. First, a decoder-driven watermark embedding network is proposed to share features directly between the encoder and the decoder, producing watermarks that are both highly robust and invisible. Second, a fine-tuning strategy is designed for the decoder of a pre-trained diffusion model so that every generated image implicitly carries a specific watermark, achieving simple and effective watermark embedding without changing the model architecture or the diffusion process. Experimental results show that, compared with the latent-diffusion watermarking method Stable Signature on the MS-COCO dataset, when the watermark length is increased to 64 bits, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the watermarked images generated by the proposed method increase by 14.87% and 9.41%, respectively, and the bit accuracy of watermark extraction under cropping, brightness adjustment, and image-reconstruction attacks improves by 3% on average, a significant gain in robustness.
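The PSNR figure reported above compares a watermarked image against its clean counterpart. A minimal sketch of that metric on toy 8-bit arrays (generic formula, not the authors' evaluation code):

```python
import numpy as np

def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-shaped 8-bit images."""
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy check: a uniform +1 perturbation has MSE = 1, so PSNR = 20*log10(255) ≈ 48.13 dB
img = np.full((8, 8), 100, dtype=np.uint8)
marked = (img + 1).astype(np.uint8)
print(round(psnr(img, marked), 2))  # → 48.13
```

Higher PSNR means the watermark perturbs the pixels less, which is why it is the standard invisibility metric in these comparisons.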
Abstract: Objective: In recent years, deep-learning-based watermarking methods have been studied extensively. Existing methods typically treat the low- and high-frequency components of feature maps identically, ignoring the important differences between frequency components; as a result, the models lack flexibility against diverse attacks and struggle to achieve high watermark fidelity and strong robustness at the same time. To address this, this paper proposes a deep robust image watermarking technique driven by frequency awareness (RIWFP). Method: Low- and high-frequency components are processed by differentiated mechanisms to improve watermark performance. Specifically, the low-frequency component is modeled by a wavelet convolutional neural network, whose wide-receptive-field convolutions efficiently learn global structure and contextual information at a coarse granularity; the high-frequency component is refined by a feature distillation block composed of depthwise separable convolutions and an attention mechanism, which sharpens image detail and efficiently captures high-frequency information at a fine granularity. In addition, a multi-frequency wavelet loss function guides the model to focus on the feature distributions of the different frequency bands, further improving the quality of the generated images. Results: Experiments show that the proposed technique performs well across multiple datasets. On the COCO (common objects in context) dataset, RIWFP reaches 91.4% accuracy under the random-dropout attack, and under salt-and-pepper noise and median filtering it achieves the best results at 100% and 99.5% accuracy, respectively, demonstrating its efficient learning of high-frequency information. On the ImageNet dataset, RIWFP attains 93.4% accuracy under cropping and 99.6% under JPEG compression, both significantly better than the compared methods. Overall, RIWFP averages 96.7% and 96.9% accuracy on COCO and ImageNet, respectively, higher than all compared methods. Conclusion: Through a frequency-aware coarse-to-fine processing strategy, the proposed method markedly improves watermark invisibility and robustness and performs well under a wide range of attacks.
Funding: Supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province, China (Grant No. SJCX24_1332); the Jiangsu Province Education Science Planning Project in 2024 (Grant No. B-b/2024/01/122); and the High-Level Talent Scientific Research Foundation of Jinling Institute of Technology, China (Grant No. jit-b-201918).
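The low/high-frequency split at the heart of RIWFP can be illustrated with a one-level 2-D Haar transform. This is a generic sketch of separating an image or feature map into a low-frequency (LL) band and three high-frequency detail bands; it is not the paper's wavelet CNN:

```python
import numpy as np

def haar_dwt2(x: np.ndarray):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.
    x must have even height and width."""
    a = x[0::2, 0::2].astype(np.float64)  # top-left of each 2x2 block
    b = x[0::2, 1::2].astype(np.float64)  # top-right
    c = x[1::2, 0::2].astype(np.float64)  # bottom-left
    d = x[1::2, 1::2].astype(np.float64)  # bottom-right
    LL = (a + b + c + d) / 2.0  # low frequency: local averages (global structure)
    LH = (a + b - c - d) / 2.0  # horizontal detail
    HL = (a - b + c - d) / 2.0  # vertical detail
    HH = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    x[0::2, 1::2] = (LL + LH - HL - HH) / 2.0
    x[1::2, 0::2] = (LL - LH + HL - HH) / 2.0
    x[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return x
```

A flat region puts all of its energy into LL, while edges and texture land in LH/HL/HH; treating these bands with different mechanisms, as the abstract describes, is what distinguishes frequency-aware designs from frequency-agnostic ones.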
Abstract: Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright. However, in practical applications the technology faces problems such as severe image distortion, inaccurate localization of tampered regions, and difficulty in recovering content. Given these shortcomings, a fragile image watermarking algorithm for blind tampering detection and content self-recovery is proposed. A multi-feature watermarking authentication code (AC) is constructed from the texture feature of local binary patterns (LBP), the DC coefficient of the discrete cosine transform (DCT), and the contrast feature of the gray-level co-occurrence matrix (GLCM) to detect tampered regions, and the recovery code (RC) is derived from the average grayscale value of the pixels in each image block to recover the tampered content. The optimal pixel adjustment process (OPAP) and least-significant-bit (LSB) algorithms embed the recovery code and the authentication code into the image in a staggered manner. When checking image integrity, an authentication-code comparison method and a threshold judgment method perform two rounds of tampering detection and blindly recover the tampered content. Experimental results show that the algorithm offers good transparency and strong blind-detection and self-recovery performance against four types of malicious attack and several conventional signal-processing operations. Against copy-paste, text addition, cropping, and vector quantization at a tampering rate (TR) of 10%, the average tampering detection rate reaches 94.09%, and the peak signal-to-noise ratios (PSNR) of the watermarked image and the recovered image exceed 41.47 dB and 40.31 dB, respectively, demonstrating clear advantages over other related algorithms in recent years.
Funding: Supported in part by the Science and Technology Innovation Program of Hunan Province under Grant 2025RC3166; the National Natural Science Foundation of China under Grant 62572176; and the National Key R&D Program of China under Grant 2024YFF0618800.
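The building blocks the fragile-watermarking abstract combines can be sketched generically: per-block average grayscale as the raw material of a recovery code, plus LSB embedding and extraction. The names and block size here are hypothetical illustrations, not the paper's AC/RC construction:

```python
import numpy as np

def block_means(img: np.ndarray, bs: int = 4) -> np.ndarray:
    """Average grayscale per bs×bs block — the raw material of a recovery code."""
    h, w = img.shape
    return img.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

def embed_lsb(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a flat 0/1 bit array into the least significant bits, raster order."""
    out = img.flatten().copy()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # clear LSB, then set it
    return out.reshape(img.shape)

def extract_lsb(img: np.ndarray, n: int) -> np.ndarray:
    """Read the first n least significant bits back out."""
    return img.flatten()[:n] & 1
```

Because only the LSB plane changes, no pixel moves by more than 1, which preserves transparency; at verification time, recomputing the codes and comparing them with the extracted ones localizes tampered blocks, and the stored block means allow coarse content recovery.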
Abstract: The rapid advancement of large language models (LLMs) has driven the pervasive adoption of AI-generated content (AIGC), while also raising concerns about misinformation, academic misconduct, biased or harmful content, and other risks. Detecting AI-generated text has thus become essential to safeguard the authenticity and reliability of digital information. This survey reviews recent progress in detection methods, categorizing approaches as passive or active based on their reliance on intrinsic textual features or embedded signals. Passive detection is further divided into surface-linguistic-feature-based and language-model-based methods, whereas active detection encompasses watermarking-based and semantic-retrieval-based approaches. This taxonomy enables systematic comparison of methodological differences in model dependency, applicability, and robustness. A key challenge for AI-generated text detection is that existing detectors are highly vulnerable to adversarial attacks, particularly paraphrasing, which substantially compromises their effectiveness. Addressing this gap highlights the need for future research on enhancing robustness and cross-domain generalization. By synthesizing current advances and limitations, this survey provides a structured reference for the field and outlines pathways toward more reliable and scalable detection solutions.
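The watermarking-based active detection the survey describes is typically tested statistically: generation biases token choices toward a pseudo-random "green list", and the detector checks whether green tokens occur far more often than chance. A toy sketch of that idea, with a made-up vocabulary and a pair-keyed green list standing in for a real LLM's vocabulary partition:

```python
import math
import random

def is_green(prev: str, cur: str, rate: float = 0.5) -> bool:
    """Deterministic pseudo-random 'green list' membership, keyed on the previous token."""
    return random.Random(f"{prev}|{cur}").random() < rate

def zscore(tokens, rate: float = 0.5) -> float:
    """Deviation of the observed green-token count from chance (binomial null)."""
    n = len(tokens) - 1
    hits = sum(is_green(p, c, rate) for p, c in zip(tokens, tokens[1:]))
    return (hits - rate * n) / math.sqrt(rate * (1 - rate) * n)

# A 'watermarking' generator always picks a green continuation, so the detector
# sees far more green tokens than the ~50% an unwatermarked text would hit.
vocab = [f"w{i}" for i in range(50)]
tokens = ["w0"]
for _ in range(60):
    tokens.append(next(t for t in vocab if is_green(tokens[-1], t)))
print(zscore(tokens))  # large positive z ⇒ watermark detected
```

The sketch also illustrates the survey's robustness concern: paraphrasing replaces tokens, breaking the green-list hits and pulling the z-score back toward zero.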
Abstract: To address the problems that deep-learning-based watermarking methods do not sufficiently highlight key image features or effectively exploit the features output by intermediate convolutional layers, and to improve the visual quality of watermarked images and their resistance to noise attacks, an image watermarking method that fuses an attention mechanism with multi-scale features is proposed. In the encoder, an attention module is designed to focus on important image features and reduce the distortion caused by watermark embedding; in the decoder, a multi-scale feature extraction module is designed to capture image detail at different levels. Experimental results show that, compared with the deep watermarking model HiDDeN (Hiding Data with Deep Networks) on the COCO dataset, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the watermarked images generated by the proposed method increase by 11.63% and 1.29%, respectively, and the average bit error rate (BER) of watermark extraction under dropout, cropout, crop, Gaussian blur, and JPEG compression attacks decreases by 53.85%. In addition, ablation results verify that adding the attention module and the multi-scale feature extraction module yields better invisibility and robustness.
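The BER cited above is simply the fraction of recovered watermark bits that differ from the embedded ones; a minimal sketch on toy bit arrays (generic metric, not the paper's pipeline):

```python
import numpy as np

def bit_error_rate(embedded: np.ndarray, extracted: np.ndarray) -> float:
    """Fraction of mismatched bits between the embedded and extracted watermarks."""
    assert embedded.shape == extracted.shape
    return float(np.mean(embedded != extracted))

sent = np.array([1, 0, 1, 1, 0, 0, 1, 0])
recv = np.array([1, 0, 0, 1, 0, 0, 1, 1])  # two of eight bits flipped by an attack
print(bit_error_rate(sent, recv))  # → 0.25
```

A BER of 0 means perfect extraction, while 0.5 is chance level for random bits, so the reported 53.85% relative reduction in average BER is a direct measure of improved robustness.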