Funding: Supported in part by the National Natural Science Foundation of China under Grants 61972267 and 61772070, and in part by the Natural Science Foundation of Hebei Province under Grant F2024210005.
Abstract: Face Presentation Attack Detection (fPAD) plays a vital role in securing face recognition systems against various presentation attacks. While supervised learning-based methods demonstrate effectiveness, they are prone to overfitting to known attack types and struggle to generalize to novel attack scenarios. Recent studies have explored formulating fPAD as an anomaly detection problem or a one-class classification task, enabling the training of generalized models for unknown attack detection. However, conventional anomaly detection approaches encounter difficulties in precisely delineating the boundary between bonafide samples and unknown attacks. To address this challenge, we propose a novel framework focusing on unknown attack detection using exclusively bonafide facial data during training. The core innovation lies in our pseudo-negative sample synthesis (PNSS) strategy, which facilitates learning of compact decision boundaries between bonafide faces and potential attack variations. Specifically, PNSS generates synthetic negative samples within low-likelihood regions of the bonafide feature space to represent diverse unknown attack patterns. To overcome the inherent imbalance between positive and synthetic negative samples during iterative training, we implement a dual-loss mechanism combining focal loss for classification optimization with pairwise confusion loss as a regularizer. This design effectively mitigates model bias towards bonafide samples while maintaining discriminative power. Comprehensive evaluations across three benchmark datasets validate the framework's superior performance. Notably, PNSS achieves an 8%–18% reduction in average classification error rate (ACER) compared with state-of-the-art one-class fPAD methods in cross-dataset evaluations on the Idiap Replay-Attack and MSU-MFSD datasets.
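To make the described strategy concrete, below is a minimal PyTorch-style sketch of (a) sampling pseudo-negatives from low-likelihood regions of the bonafide feature space and (b) a focal loss plus pairwise confusion objective. The Gaussian feature model, the sampling inflation factor, the quantile cut-off, and the weight lambda_pc are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def synthesize_pseudo_negatives(bonafide_feats, n_samples=256, quantile=0.25):
    """One plausible reading of PNSS: fit a Gaussian to the bonafide feature
    cloud, draw candidates from an inflated version of it, and keep only the
    candidates with low likelihood under the bonafide model, i.e. points in
    low-likelihood regions that stand in for unknown attacks."""
    mu = bonafide_feats.mean(dim=0)
    std = bonafide_feats.std(dim=0) + 1e-6
    # Draw from a wider Gaussian so some candidates fall outside the bonafide mass.
    candidates = mu + 3.0 * std * torch.randn(n_samples * 4, bonafide_feats.shape[1])
    log_lik = (-0.5 * ((candidates - mu) / std) ** 2 - std.log()).sum(dim=1)
    threshold = torch.quantile(log_lik, quantile)  # hypothetical cut-off
    return candidates[log_lik <= threshold][:n_samples]

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy examples so the minority class
    (synthetic negatives) is not drowned out by bonafide samples."""
    probs = torch.sigmoid(logits)
    p_t = probs * targets + (1 - probs) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def pairwise_confusion_loss(probs):
    """Pairwise confusion regularizer: penalizes the distance between the
    predicted distributions of randomly paired samples in the batch,
    discouraging overconfident predictions biased towards one class."""
    half = probs.shape[0] // 2
    return ((probs[:half] - probs[half:2 * half]) ** 2).sum(dim=-1).mean()

def dual_loss(logits, targets, lambda_pc=0.1):
    """Combined objective: focal loss plus a pairwise confusion term
    (lambda_pc is a hypothetical weight)."""
    p = torch.sigmoid(logits)
    probs = torch.stack([1 - p, p], dim=-1)
    return focal_loss(logits, targets) + lambda_pc * pairwise_confusion_loss(probs)
```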
Funding: Supported by the Shanghai Engineering Research Center of Cyber and Information Security Evaluation (The Third Research Institute of the Ministry of Public Security), and by the project "Analysis and Research of Attack Detection Technology in IoT Smart Devices" under Project No. KFKT2021-009.
Abstract: The integration of AI technology with IoT devices, as in the case of the Artificial Intelligence of Things (AIoT), has enabled more efficient and intelligent processing and analysis of data than traditional IoT systems. However, the use of biometric information by AIoT devices can pose new security risks, such as presentation attacks and privacy breaches, particularly for immutable features such as iris information, which can lead to long-term security vulnerabilities when compromised. Most existing iris recognition system security models are designed to address only direct presentation attack algorithms and therefore cannot counter other security threats. To address these challenges, this study proposes a hybrid iris recognition system security protection model that employs presentation attack detection, flow monitoring, and blacklist restrictions to enhance the overall security of AIoT devices and improve the efficiency of protection. Specifically, the model aims to prevent presentation attacks and flow attacks against the iris recognition system, which may compromise the security of biometric information. The proposed method is expected to increase the security of AIoT devices against potential threats to sensitive information.
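As an illustration of how the three protection components could fit together, the sketch below combines a presentation attack check, a simple flow-monitoring rate limit, and a blacklist in one access gate. The class name, interfaces, thresholds, and window sizes are hypothetical; the paper's actual model and parameters are not specified in this abstract.

```python
import time
from collections import defaultdict, deque

class IrisAccessGate:
    """A minimal sketch of the hybrid protection idea: pad_check is any
    presentation attack detector returning True for bonafide input; the
    flow-monitoring thresholds and window are illustrative defaults."""

    def __init__(self, pad_check, max_requests=10, window_s=60.0):
        self.pad_check = pad_check          # presentation attack detector
        self.max_requests = max_requests    # flow-monitoring threshold
        self.window_s = window_s            # sliding window in seconds
        self.blacklist = set()              # blocked source/device IDs
        self.history = defaultdict(deque)   # source_id -> request timestamps

    def allow(self, source_id, iris_sample):
        # 1. Blacklist restriction: reject previously blocked sources outright.
        if source_id in self.blacklist:
            return False
        # 2. Flow monitoring: rate-limit bursts that look like flow attacks.
        now = time.time()
        q = self.history[source_id]
        q.append(now)
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) > self.max_requests:
            self.blacklist.add(source_id)
            return False
        # 3. Presentation attack detection on the iris sample itself.
        if not self.pad_check(iris_sample):
            self.blacklist.add(source_id)
            return False
        return True
```

Usage would follow the pattern gate = IrisAccessGate(pad_check=my_detector) and then gate.allow("device-01", sample) per authentication request, with blocked sources accumulating in the blacklist.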