Journal Articles
2 articles found
Research on Human Abnormal Behavior Recognition Based on an Improved TimeSformer Algorithm
1
Authors: 廖晓群, 徐清钏, 杨浩东, 李丹, 薛亚楠. 《计算机工程》 (Computer Engineering), 2025, No. 11, pp. 112-122 (11 pages)
Research on human abnormal behavior is an important task for safeguarding against potential dangers and emergencies. To address the vague definition of human abnormal behaviors and the lack of standard datasets, six high-frequency abnormal behaviors, namely headache, falling, convulsion, back pain, punching, and kicking, are defined from everyday life scenes, and a dataset, HABDataset-6, is built. The attention-based TimeSformer algorithm exhibits high loss and incomplete temporal-sequence modeling on the self-built HABDataset-6 and struggles to extract features from complex samples. To better handle human abnormal behavior data, an improved algorithm, TS-AT, is proposed. First, the accelerated stochastic gradient descent (ASGD) optimization algorithm is used to improve the cross-entropy loss function, yielding a CAS module that lowers the loss of the original algorithm; second, a temporal shift module (TSM) is embedded into the Backbone network of the original algorithm to improve its perception of temporal sequences and extract better features for model training. Experimental results show that TS-AT performs well on the self-built HABDataset-6, with an average inference accuracy above 80% for every behavior category; on the public UCF-101 dataset and an elderly abnormal behavior dataset, the average test accuracies reach 99% and 84%, respectively, exceeding the comparison algorithms. These results indicate that TS-AT offers higher accuracy and good robustness for human abnormal behavior recognition, and is expected to improve the ability to respond to potential dangers and emergencies, further safeguarding people's safety and health.
Keywords: human abnormal behavior, TimeSformer algorithm, temporal sequence, optimization algorithm, temporal shift module
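The two modifications described in this abstract, ASGD-driven cross-entropy training and a temporal shift module (TSM) inserted into the video backbone, can be illustrated with a minimal PyTorch sketch. The class names, the tiny convolutional stand-in for the TimeSformer backbone, and all shapes below are assumptions for illustration only; this is not the paper's TS-AT implementation, and the abstract's CAS module is represented here only by pairing torch.optim.ASGD with nn.CrossEntropyLoss.

```python
# Minimal sketch (assumed PyTorch setup): TSM-style temporal shift inside a toy
# video backbone, trained with cross-entropy loss and the ASGD optimizer.
import torch
import torch.nn as nn


class TemporalShift(nn.Module):
    """Shift a fraction of feature channels one step along the time axis (TSM-style)."""

    def __init__(self, fold_div: int = 8):
        super().__init__()
        self.fold_div = fold_div

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        fold = x.shape[2] // self.fold_div
        out = torch.zeros_like(x)
        out[:, 1:, :fold] = x[:, :-1, :fold]                  # shift forward in time
        out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]  # shift backward in time
        out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # remaining channels untouched
        return out


class TinyVideoClassifier(nn.Module):
    """Stand-in backbone: per-frame conv features, temporal shift, pooling, linear head."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.shift = TemporalShift()
        self.head = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = torch.relu(self.conv(clip.reshape(b * t, c, h, w)))
        feats = self.shift(feats.reshape(b, t, -1, h, w))      # mix information across frames
        return self.head(feats.mean(dim=(1, 3, 4)))            # pool over time and space


if __name__ == "__main__":
    model = TinyVideoClassifier(num_classes=6)                 # the 6 defined behaviors
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.ASGD(model.parameters(), lr=0.01)  # stand-in for the ASGD setup

    clips = torch.randn(2, 8, 3, 64, 64)                       # toy batch: 2 clips x 8 frames
    labels = torch.randint(0, 6, (2,))
    loss = criterion(model(clips), labels)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.4f}")
```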
Real-Time Deepfake Detection via Gaze and Blink Patterns: A Transformer Framework
2
Authors: Muhammad Javed, Zhaohui Zhang, Fida Hussain Dahri, Asif Ali Laghari, Martin Krajčík, Ahmad Almadhor. 《Computers, Materials & Continua》, 2025, No. 10, pp. 1457-1493 (37 pages)
Recent advances in artificial intelligence and the availability of large-scale benchmarks have made deepfake video generation and manipulation easier. Therefore, developing reliable and robust deepfake video detection mechanisms is paramount. This research introduces a novel real-time deepfake video detection framework by analyzing gaze and blink patterns, addressing the spatial-temporal challenges unique to gaze and blink anomalies using the TimeSformer and hybrid Transformer-CNN models. The TimeSformer architecture leverages spatial-temporal attention mechanisms to capture fine-grained blinking intervals and gaze direction anomalies. Compared to state-of-the-art traditional convolutional models like MesoNet and EfficientNet, which primarily focus on global facial features, our approach emphasizes localized eye-region analysis, significantly enhancing detection accuracy. We evaluate our framework on four standard datasets: FaceForensics, CelebDF-V2, DFDC, and FakeAVCeleb. The proposed framework results reveal higher accuracy, with the TimeSformer model achieving accuracies of 97.5%, 96.3%, 95.8%, and 97.1%, and the hybrid Transformer-CNN model demonstrating accuracies of 92.8%, 91.5%, 90.9%, and 93.2%, on the FaceForensics, CelebDF-V2, DFDC, and FakeAVCeleb datasets, respectively, showing robustness in distinguishing manipulated from authentic videos. Our research provides a robust state-of-the-art framework for real-time deepfake video detection. This novel study significantly contributes to video forensics, presenting scalable and accurate real-world application solutions.
Keywords: deepfake detection, deep learning, video forensics, gaze and blink patterns, Transformers, TimeSformer, MesoNet4
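The hybrid Transformer-CNN idea in this abstract, a per-frame CNN encoding of eye-region crops followed by a temporal Transformer that models blink and gaze dynamics, can be sketched roughly as follows. The class name EyeRegionDeepfakeDetector, the crop size, and the layer widths are hypothetical choices, not the authors' architecture; eye-region cropping (e.g., via facial landmarks) is assumed to happen upstream.

```python
# Rough sketch (assumed PyTorch setup): CNN per frame on eye-region crops,
# Transformer encoder over time, binary real-vs-fake head. Illustrative only.
import torch
import torch.nn as nn


class EyeRegionDeepfakeDetector(nn.Module):
    def __init__(self, embed_dim: int = 64, num_frames: int = 16):
        super().__init__()
        # Per-frame CNN encoder for a cropped eye region (3 x 32 x 64 assumed).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed_dim),
        )
        # Temporal Transformer over the sequence of per-frame embeddings.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.pos = nn.Parameter(torch.zeros(1, num_frames, embed_dim))
        self.classifier = nn.Linear(embed_dim, 2)  # real vs. fake

    def forward(self, eye_clips: torch.Tensor) -> torch.Tensor:
        # eye_clips: (batch, time, 3, 32, 64) eye-region crops
        b, t, c, h, w = eye_clips.shape
        tokens = self.cnn(eye_clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        tokens = self.temporal(tokens + self.pos[:, :t])   # model blink/gaze dynamics
        return self.classifier(tokens.mean(dim=1))         # temporal average pooling


if __name__ == "__main__":
    model = EyeRegionDeepfakeDetector()
    clips = torch.randn(2, 16, 3, 32, 64)   # toy batch of eye-region crops
    print(model(clips).shape)               # torch.Size([2, 2])
```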