Abstract: Recent advances in artificial intelligence and the availability of large-scale benchmarks have made deepfake video generation and manipulation easier. Therefore, developing reliable and robust deepfake video detection mechanisms is paramount. This research introduces a real-time deepfake video detection framework that analyzes gaze and blink patterns, addressing the spatio-temporal challenges unique to these anomalies using a TimeSformer and a hybrid Transformer-CNN model. The TimeSformer architecture leverages spatio-temporal attention to capture fine-grained blinking intervals and gaze-direction anomalies. Unlike state-of-the-art convolutional models such as MesoNet and EfficientNet, which focus primarily on global facial features, our approach emphasizes localized eye-region analysis, significantly improving detection accuracy. We evaluate the framework on four standard datasets: FaceForensics, CelebDF-V2, DFDC, and FakeAVCeleb. The TimeSformer model achieves accuracies of 97.5%, 96.3%, 95.8%, and 97.1%, and the hybrid Transformer-CNN model achieves 92.8%, 91.5%, 90.9%, and 93.2% on FaceForensics, CelebDF-V2, DFDC, and FakeAVCeleb, respectively, demonstrating robustness in distinguishing manipulated videos from authentic ones. Our research provides a robust, state-of-the-art framework for real-time deepfake video detection and contributes to video forensics with scalable, accurate solutions for real-world applications.
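As a rough illustration of the divided space-time attention the abstract describes, the PyTorch sketch below applies temporal attention across frames and then spatial attention within each frame to patch embeddings of eye-region crops, followed by a binary real/fake head. The clip shape (8 frames of 64x64 eye crops), embedding width, depth, and all module names are illustrative assumptions for exposition, not the paper's exact configuration.

```python
# Minimal sketch of TimeSformer-style "divided space-time" attention on
# eye-region crops. Hyperparameters and the two-class head are assumptions.
import torch
import torch.nn as nn


class DividedSpaceTimeBlock(nn.Module):
    """One Transformer block that attends over time first, then over space."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.time_norm = nn.LayerNorm(dim)
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_norm = nn.LayerNorm(dim)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, dim * 4),
            nn.GELU(), nn.Linear(dim * 4, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, patches, dim)
        b, t, p, d = x.shape

        # Temporal attention: each spatial patch attends across frames,
        # which is where irregular blink intervals show up.
        xt = x.permute(0, 2, 1, 3).reshape(b * p, t, d)
        h = self.time_norm(xt)
        xt = xt + self.time_attn(h, h, h, need_weights=False)[0]
        x = xt.reshape(b, p, t, d).permute(0, 2, 1, 3)

        # Spatial attention: patches within a frame attend to each other,
        # capturing gaze-direction inconsistencies around the eyes.
        xs = x.reshape(b * t, p, d)
        h = self.space_norm(xs)
        xs = xs + self.space_attn(h, h, h, need_weights=False)[0]
        x = xs.reshape(b, t, p, d)

        return x + self.mlp(x)


class EyeRegionTimeSformer(nn.Module):
    """Patch-embed eye crops per frame, stack divided blocks, classify real/fake."""

    def __init__(self, dim: int = 256, depth: int = 4, patch: int = 16, size: int = 64):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, 1, n_patches, dim))
        self.blocks = nn.ModuleList(DividedSpaceTimeBlock(dim) for _ in range(depth))
        self.head = nn.Linear(dim, 2)  # logits over {real, fake}

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W) eye-region crops
        b, t = clips.shape[:2]
        x = self.embed(clips.flatten(0, 1))           # (b*t, dim, h', w')
        x = x.flatten(2).transpose(1, 2)              # (b*t, patches, dim)
        x = x.reshape(b, t, *x.shape[1:]) + self.pos  # (b, t, patches, dim)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x.mean(dim=(1, 2)))


# Usage: a batch of two 8-frame clips of 64x64 eye crops.
logits = EyeRegionTimeSformer()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```

Attending over time before space mirrors the intuition in the abstract: blink anomalies are primarily temporal (irregular intervals between frames), while gaze anomalies are primarily spatial (inconsistent eye orientation within a frame).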