文摘 (Abstract): To address the problem that traditional methods ignore audio-modality tampering, this study explores an effective deepfake video detection technique that improves detection precision and reliability by fusing lip images and audio signals. The core method is lip-audio matching detection based on a Siamese neural network, combined with band-pass-filtered MFCC (Mel-Frequency Cepstral Coefficient) feature extraction, an improved dual-branch Siamese network structure, and a two-stream network design. First, the video stream is preprocessed to extract lip images and the audio stream is preprocessed to extract MFCC features. These features are then processed separately by the two branches of the Siamese network. Finally, the model is trained and optimized through fully connected layers and a loss function. Experimental results show that the model reaches 92.3% test accuracy, 94.3% recall, and an F1 score of 93.3% on the LRW (Lip Reading in the Wild) dataset, significantly outperforming CNN (Convolutional Neural Network) and LSTM (Long Short-Term Memory) baselines. In the multi-resolution image-stream validation, the dual-resolution image stream achieves the highest accuracy of 94%. Band-pass filters effectively improve the signal-to-noise ratio of deepfake video detection when processing different types of audio signals. The model also shows strong real-time processing performance and achieves an average score of up to 5 in the user study. These results demonstrate that the proposed method can effectively fuse visual and audio information for deepfake video detection, accurately identify inconsistencies between video and audio, and thereby verify the effectiveness of lip-audio modality fusion in improving detection performance.
基金 (Funding): This study is supported by the Fundamental Research Funds for the Central Universities of PPSUC under Grant 2022JKF02009.
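The dual-branch pipeline described in this abstract lends itself to a compact implementation. Below is a minimal sketch in PyTorch: band-limited MFCCs stand in for the band-pass filtering step, each modality is embedded by its own branch, and a contrastive loss compares the pair. All layer sizes, the 13-coefficient MFCC setting, and the filterbank cut-offs are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

# Band-limited MFCCs: restricting the mel filterbank to a speech band
# approximates the band-pass filtering step (cut-offs are assumptions).
mfcc_tf = torchaudio.transforms.MFCC(
    sample_rate=16000, n_mfcc=13,
    melkwargs={"n_fft": 400, "hop_length": 160, "f_min": 80.0, "f_max": 7600.0},
)

def branch(embed_dim=128):
    # Same topology for both branches; weights are NOT shared because the
    # inputs live in different domains (lip images vs. MFCC maps).
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim),
    )

class LipAudioSiamese(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.lip_branch = branch(embed_dim)    # grayscale lip crops
        self.audio_branch = branch(embed_dim)  # MFCC matrices as 1-ch images

    def forward(self, lip, mfcc):
        return self.lip_branch(lip), self.audio_branch(mfcc)

def contrastive_loss(z_lip, z_audio, label, margin=1.0):
    # label = 1 for genuine (matching) pairs, 0 for tampered pairs.
    d = F.pairwise_distance(z_lip, z_audio)
    return (label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2)).mean()

lips = torch.randn(4, 1, 64, 64)      # batch of lip crops
wavs = torch.randn(4, 16000)          # 1 s of 16 kHz audio per clip
mfccs = mfcc_tf(wavs).unsqueeze(1)    # (4, 1, 13, frames)
model = LipAudioSiamese()
z_l, z_a = model(lips, mfccs)
loss = contrastive_loss(z_l, z_a, torch.tensor([1., 0., 1., 0.]))
```

At inference time the pair distance would be thresholded: small distances indicate a lip stream consistent with the audio, large distances flag a likely forgery.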
文摘 (Abstract): Face forgery detection is drawing ever-increasing attention in the academic community owing to security concerns. Despite the considerable progress of existing methods, we note that previous works overlooked fine-grained forgery cues with high transferability, even though such cues positively impact the model's accuracy and generalizability. Moreover, a single modality often causes the model to overfit, and relying on the Red-Green-Blue (RGB) modality alone is not conducive to extracting more detailed forgery traces. To cope with these issues, we propose a novel framework for mining fine-grained forgery cues with fused modalities. First, we propose two functional modules to reveal and locate deeper forged features: a dual-modality progressive fusion module and a noise adaptive enhancement module, which excavate the associations between dual-modal spatial and channel dimensions and enhance the learning of subtle noise features. On this foundation, a sensitive patch branch is introduced to strengthen the mining of subtle forgery traces under the fused modality. Experimental results demonstrate that the proposed framework can effectively expose the differences between authentic and forged images with supervised learning. Comprehensive evaluations on several mainstream datasets show that our method outperforms state-of-the-art detection methods with remarkable detection ability and generalizability.
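To make the dual-modal idea concrete, here is a rough PyTorch sketch of fusing RGB features with noise-domain features: a fixed high-pass residual supplies the noise view, and a squeeze-excitation-style gate weights the concatenated channels. Both modules are simplified stand-ins for the paper's dual-modality progressive fusion and noise adaptive enhancement modules; the kernel and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class HighPassNoise(nn.Module):
    """Fixed high-pass residual filter that exposes a noise-domain view."""
    def __init__(self):
        super().__init__()
        k = torch.tensor([[-1., 2., -1.],
                          [ 2., -4., 2.],
                          [-1., 2., -1.]]) / 4.0
        # One frozen filter per RGB channel (depthwise convolution).
        self.conv = nn.Conv2d(3, 3, 3, padding=1, groups=3, bias=False)
        self.conv.weight.data.copy_(k.repeat(3, 1, 1, 1))
        self.conv.weight.requires_grad_(False)

    def forward(self, x):
        return self.conv(x)

class ChannelFusion(nn.Module):
    """Fuses RGB and noise feature maps with squeeze-excitation channel gates."""
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * channels, channels), nn.ReLU(),
            nn.Linear(channels, 2 * channels), nn.Sigmoid(),
        )

    def forward(self, f_rgb, f_noise):
        f = torch.cat([f_rgb, f_noise], dim=1)        # (B, 2C, H, W)
        w = self.gate(f).unsqueeze(-1).unsqueeze(-1)  # learned channel weights
        return f * w

noise_view = HighPassNoise()(torch.randn(2, 3, 224, 224))
fused = ChannelFusion(64)(torch.randn(2, 64, 56, 56), torch.randn(2, 64, 56, 56))
```

The gating step is what lets the network emphasize whichever modality carries the stronger forgery signal for a given input, rather than weighting RGB and noise channels uniformly.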
文摘 (Abstract): With the rapid development of social networking platforms, cyberbullying has become an increasingly serious problem, and diverse forms of online expression that combine text and images make cyberbullying harder to detect and govern. This work constructs a Chinese multimodal cyberbullying dataset containing both text and images, combines a BERT (bidirectional encoder representations from transformers) model with a ResNet50 model to extract unimodal features from text and images respectively, fuses them at the decision level, and classifies the fused output, achieving accurate recognition of text-image content in two categories: cyberbullying and non-cyberbullying. Experimental results show that the proposed multimodal cyberbullying detection model can effectively identify social network posts or comments of a cyberbullying nature that contain both text and images, improving the practicality, accuracy, and efficiency of multimodal cyberbullying detection. It offers a new approach to cyberbullying detection and governance on social networking platforms and helps build a healthier, more civil online environment.
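A decision-level (late) fusion of BERT and ResNet50 can be sketched as follows, assuming PyTorch, torchvision, and Hugging Face transformers. The checkpoint name, the two-class heads, and the probability-averaging rule are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import BertModel

class LateFusionClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Text branch: Chinese BERT encoder + linear classification head.
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.text_head = nn.Linear(self.bert.config.hidden_size, num_classes)
        # Image branch: ResNet50 with its classifier resized to two classes.
        self.cnn = resnet50(weights=None)
        self.cnn.fc = nn.Linear(self.cnn.fc.in_features, num_classes)

    def forward(self, input_ids, attention_mask, image):
        text_feat = self.bert(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        text_probs = self.text_head(text_feat).softmax(-1)
        image_probs = self.cnn(image).softmax(-1)
        # Decision-level fusion: each branch votes with its own class
        # probabilities, and the votes are averaged.
        return (text_probs + image_probs) / 2
```

Because fusion happens after each branch produces a prediction, either branch can be trained, swapped, or calibrated independently, which is the usual appeal of decision-level over feature-level fusion.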
文摘 (Abstract): The rapid growth of image and text data on social media has drawn increasing attention to multimodal sarcasm detection. However, existing detection methods based on feature extraction and fusion have several shortcomings: most lack the low-level modality alignment needed for multimodal detection, the fusion process ignores the dynamic relationships between modalities, and modality complementarity is not fully exploited. To address this, a detection model based on unimodal supervised contrastive learning, multimodal fusion, and multi-view aggregated prediction is proposed. The CLIP (contrastive language-image pre-training) model serves as the encoder to strengthen the alignment of low-level image and text encodings. Unimodal supervised contrastive learning is combined with unimodal predictions to guide the dynamic relationships between modalities. A global-local cross-modal fusion method is then designed, in which the semantic-level representation of each modality acts as the global multimodal context that interacts with local unimodal features; multiple cross-modal fusion layers improve the fusion effect while reducing the time and space costs of previous local-local fusion methods. Finally, multi-view aggregated prediction fully exploits the complementarity of the image, text, and image-text views. Overall, the model effectively captures the cross-modal semantic inconsistency of multimodal sarcasm data and outperforms DMSD-CL, the previous best method, on the public MSD dataset.
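The global-local fusion step can be illustrated with a single cross-attention layer, assuming PyTorch and CLIP-style encoders that produce one semantic-level (global) vector plus local token/patch features per modality; dimensions, the residual design, and the single-layer setup are assumptions. The cost argument is visible in the shapes: with only two global context tokens, attention scales linearly in the number of local tokens rather than quadratically as in local-local fusion.

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, g_text, g_image, local_tokens):
        # Global context = one semantic-level token per modality (2 tokens),
        # so attending over L local tokens costs O(2L) instead of the
        # O(L_text * L_image) of local-local cross-modal fusion.
        ctx = torch.stack([g_text, g_image], dim=1)   # (B, 2, D)
        fused, _ = self.attn(local_tokens, ctx, ctx)  # locals query globals
        return self.norm(local_tokens + fused)

B, L, D = 4, 49, 512
g_t, g_i = torch.randn(B, D), torch.randn(B, D)  # CLIP-style global vectors
patches = torch.randn(B, L, D)                   # local image patch features
out = GlobalLocalFusion(D)(g_t, g_i, patches)    # (B, 49, 512)
```

In the multi-view setup described above, a layer like this would be applied per view, and the per-view predictions (image, text, image-text) would then be aggregated for the final sarcasm decision.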