Deep neural networks (DNNs) and generative AI (GenAI) are increasingly vulnerable to backdoor attacks, where adversaries embed triggers into inputs to cause models to misclassify or misinterpret them as target labels. Beyond traditional single-trigger scenarios, attackers may inject multiple triggers across various object classes, forming unseen backdoor-object configurations that evade standard detection pipelines. In this paper, we introduce DBOM (Disentangled Backdoor-Object Modeling), a proactive framework that leverages structured disentanglement to identify and neutralize both seen and unseen backdoor threats at the dataset level. Specifically, DBOM factorizes input image representations by modeling triggers and objects as independent primitives in the embedding space through the use of Vision-Language Models (VLMs). By leveraging the frozen, pre-trained encoders of VLMs, our approach decomposes the latent representations into distinct components through a learnable visual prompt repository and prompt prefix tuning, ensuring that the relationships between triggers and objects are explicitly captured. To separate trigger and object representations in the visual prompt repository, we introduce trigger-object separation and diversity losses that aid in disentangling trigger and object visual features. Next, by aligning image features with feature decomposition and fusion, as well as learned contextual prompt tokens, in a shared multimodal space, DBOM enables zero-shot generalization to novel trigger-object pairings that were unseen during training, thereby offering deeper insights into adversarial attack patterns. Experimental results on CIFAR-10 and GTSRB demonstrate that DBOM robustly detects poisoned images prior to downstream training, significantly enhancing the security of DNN training pipelines.
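The abstract does not include an implementation, so the following is a minimal PyTorch sketch of how the trigger-object separation and diversity losses over a learnable visual prompt repository could look. The tensor names (`trigger_prompts`, `object_prompts`), the embedding dimension, and the exact loss forms (cosine push-apart between the two repositories, pairwise decorrelation within each) are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of trigger-object separation and diversity losses
# over a learnable visual prompt repository (not the authors' code).
import torch
import torch.nn.functional as F

def separation_loss(trigger_prompts: torch.Tensor,
                    object_prompts: torch.Tensor) -> torch.Tensor:
    """Push trigger and object prompt embeddings apart.

    trigger_prompts: (T, D) learnable trigger primitives
    object_prompts:  (O, D) learnable object primitives
    """
    t = F.normalize(trigger_prompts, dim=-1)
    o = F.normalize(object_prompts, dim=-1)
    # Penalize any cosine similarity between the two repositories.
    return (t @ o.T).abs().mean()

def diversity_loss(prompts: torch.Tensor) -> torch.Tensor:
    """Keep prompts within one repository mutually distinct."""
    p = F.normalize(prompts, dim=-1)
    sim = p @ p.T
    # Zero out self-similarity, penalize off-diagonal similarity.
    off_diag = sim - torch.eye(len(p), device=p.device)
    return off_diag.abs().mean()

# Example: 8 trigger and 10 object prompts in a 512-d CLIP-like space.
triggers = torch.randn(8, 512, requires_grad=True)
objects = torch.randn(10, 512, requires_grad=True)
loss = (separation_loss(triggers, objects)
        + diversity_loss(triggers) + diversity_loss(objects))
loss.backward()
```

In practice these terms would be weighted and summed with the multimodal alignment objective described in the abstract, while the VLM encoders stay frozen and only the prompt repositories receive gradients.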
With the ever-increasing adoption of Industrial Internet of Things (IoT) technologies, security concerns have grown exponentially, especially regarding securing critical infrastructures. This is primarily due to the potential for backdoors to provide unauthorized access, disrupt operations, and compromise sensitive data. Backdoors pose a significant threat to the integrity and security of Industrial IoT setups by exploiting vulnerabilities and bypassing standard authentication processes. Hence, their detection becomes of paramount importance. This paper not only investigates the capabilities of Machine Learning (ML) models in identifying backdoor malware but also evaluates the impact on model performance of balancing the dataset via resampling techniques, including the Synthetic Minority Oversampling Technique (SMOTE), Synthetic Data Vault (SDV), and Conditional Tabular Generative Adversarial Network (CTGAN), as well as of feature reduction based on the Pearson correlation coefficient. Experimental evaluation on the CCCS-CIC-AndMal-2020 dataset demonstrates that the Random Forest (RF) classifier produced the best model, with 99.98% accuracy, when using a balanced dataset created by SMOTE. Additionally, training and testing time was reduced by approximately 50% when switching from the full feature set to the reduced feature set, without significant performance loss.
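A minimal sketch of the kind of pipeline evaluated above, assuming the CCCS-CIC-AndMal-2020 features have been exported to a tabular CSV with a `label` column; the file name and the 0.9 correlation threshold are illustrative assumptions, not values taken from the paper.

```python
# Illustrative pipeline: Pearson-correlation feature reduction,
# SMOTE balancing of the training split, Random Forest classification.
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("andmal2020_features.csv")  # hypothetical path
X, y = df.drop(columns=["label"]), df["label"]

# Drop one feature from every highly correlated pair (|Pearson r| > 0.9).
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
to_drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
X_red = X.drop(columns=to_drop)

X_train, X_test, y_train, y_test = train_test_split(
    X_red, y, test_size=0.2, stratify=y, random_state=42)

# Oversample only the training split so synthetic samples never leak
# into evaluation.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Swapping SMOTE for SDV or CTGAN synthesizers would follow the same pattern: generate minority-class rows from the training split only, then refit the classifier on the balanced data.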
The Unintentional Insider Threat (UIT) concept highlights that insider threats do not always stem from malicious intent and can occur across various domains. This research examines how individuals with medical or psychological issues might unintentionally become insider threats due to their perception of being targeted. Insights from the survey "A Survey of Unintentional Medical Insider Threat Category" indicate that such perceptions can be linked to underlying health conditions. The study "Emotion Analysis Based on Belief of Targeted Individual Supporting Insider Threat Detection" reveals that anger is a common emotion among these individuals. Together, the findings suggest that UITs are often linked to medical or psychological issues, with anger being the prevalent emotion. To mitigate these risks, it is recommended that Insider Threat programs integrate expertise from medicine, psychology, and cybersecurity. Additionally, handwriting analysis is proposed as a potential tool for detecting insider threats, reflecting the evolving nature of threat assessment methodologies.
Funding: This work was supported by the UWF Argo Cyber Emerging Scholars (ACES) program, funded by the National Science Foundation (NSF) CyberCorps® Scholarship for Service (SFS) award under grant number 1946442.