In federated learning, backdoor attacks have become an important research topic, given the wide application of federated learning in processing sensitive datasets. Since federated learning detects or modifies local models through defense mechanisms during aggregation, it is difficult to conduct effective backdoor attacks. In addition, existing backdoor attack methods face challenges such as low backdoor accuracy, poor ability to evade anomaly detection, and unstable model training. To address these challenges, a method called adaptive simulation backdoor attack (ASBA) is proposed. Specifically, ASBA improves the stability of model training by manipulating the local training process with an adaptive mechanism, improves the malicious model's ability to evade anomaly detection by combining large simulation training with clipping, and improves backdoor accuracy by introducing a stimulus model that amplifies the impact of the backdoor in the global model. Extensive comparative experiments under five advanced defense scenarios show that ASBA can effectively evade anomaly detection and achieve high backdoor accuracy in the global model. Furthermore, it exhibits excellent stability and effectiveness after multiple rounds of attacks, outperforming state-of-the-art backdoor attack methods.
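The clipping step mentioned above can be illustrated with a minimal sketch. The function name and the plain L2-norm bound are illustrative assumptions, not ASBA's actual procedure:

```python
def clip_update(update, bound):
    """Scale a model update so its L2 norm does not exceed `bound`.
    A server-side norm check that rejects oversized updates will accept
    the clipped one; this is a simplified stand-in for the clipping
    ASBA combines with simulation training."""
    norm = sum(u * u for u in update) ** 0.5
    if norm <= bound:
        return list(update)
    return [u * bound / norm for u in update]
```

A malicious update scaled this way keeps its direction while its magnitude falls inside the range the aggregator tolerates.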
The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to significantly enhance smishing detection performance. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% when compared to LLMs across datasets. Furthermore, we present a real-time, user-friendly application to operationalize our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
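The dual-layer voting described above can be sketched as follows. The function names, the per-model weights, and the tie-breaking rule are assumptions for illustration; PhishNet's exact weighting scheme is not reproduced here:

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Aggregate label predictions with per-model weights; ties go to
    the label that reached the top score first."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

def dual_layer_vote(llm_preds, llm_weights, transformer_pred):
    """Layer 1: weighted majority vote among the LLMs.
    Layer 2: final ensemble vote between the LLM consensus and the
    transformer prediction (equal weights; tie favors the consensus)."""
    llm_consensus = weighted_vote(llm_preds, llm_weights)
    return weighted_vote([llm_consensus, transformer_pred], [1.0, 1.0])
```

With three LLM votes of "ham", "smishing", "smishing" at weights 0.4/0.3/0.3, the consensus is "smishing" regardless of the transformer's dissent under this tie rule.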
According to the dynamic interaction process between cyber flow and power flow in grid cyber-physical systems (GCPS), attackers could gradually trigger large-scale power failures through cooperative cyber-attacks, subsequently forming cross-domain cascading failures (CDCF) that cross the cyber domain and power domain and endanger the stable operation of GCPS. To reveal the evolutionary mechanism of CDCF, an optimal attack scheme evaluation method is proposed, considering the spatiotemporal synergy of multiple attack-event-chains. First, in accordance with the spatiotemporal synergy of multiple attack-event-chains, the CDCF evolutionary mechanism is analyzed from the attackers' perspective, and a CDCF mathematical model is established. Furthermore, an attack graph model of CDCF evolution and its hazard calculation method are proposed. Then, the attackers' decision-making process for the optimal attack scheme of CDCF is deduced based on the attack graph model. Finally, both the evaluation and implementation processes of the optimal attack scheme are simulated in the GCPS experimental system based on the IEEE 39-bus system.
Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying original data, significantly enhancing stealthiness. Comparative experiments with Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is also validated.
Dear Editor, This letter studies the problem of stealthy attacks targeting stochastic event-based estimation, alongside proposing measures for their mitigation. A general attack framework is introduced, and the corresponding stealthiness condition is analyzed. To enhance system security, we advocate for a single-dimensional encryption method, showing that securing a singular data element is sufficient to shield the system from the perils of stealthy attacks.
Zero-click attacks represent an advanced cybersecurity threat, capable of compromising devices without user interaction. High-profile examples such as Pegasus, Simjacker, Bluebugging, and Bluesnarfing exploit hidden vulnerabilities in software and communication protocols to silently gain access, exfiltrate data, and enable long-term surveillance. Their stealth and ability to evade traditional defenses make detection and mitigation highly challenging. This paper addresses these threats by systematically mapping the tactics and techniques of zero-click attacks using the MITRE ATT&CK framework, a widely adopted standard for modeling adversarial behavior. Through this mapping, we categorize real-world attack vectors and better understand how such attacks operate across the cyber kill chain. To support threat detection efforts, we propose an Active Learning-based method to efficiently label the Pegasus spyware dataset in alignment with the MITRE ATT&CK framework. This approach reduces the effort of manually annotating data while improving the quality of the labeled data, which is essential to train robust cybersecurity models. In addition, our analysis highlights the structured execution paths of zero-click attacks and reveals gaps in current defense strategies. The findings emphasize the importance of forward-looking strategies such as continuous surveillance, dynamic threat profiling, and security education. By bridging zero-click attack analysis with the MITRE ATT&CK framework and leveraging machine learning for dataset annotation, this work provides a foundation for more accurate threat detection and the development of more resilient and structured cybersecurity frameworks.
The emergence of large language models (LLMs) has brought about revolutionary social value. However, concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse. Consequently, a crucial research question arises: how can we differentiate between AI-generated and human-authored text? Existing detectors face challenges such as operating as black boxes, relying on supervised training, and being vulnerable to manipulation and misinformation. To tackle these challenges, we propose an innovative unsupervised white-box detection method that utilizes a "dual-driven verification mechanism" to achieve high-performance detection, even in the presence of obfuscation attacks on the text content. More specifically, we initially employ the SpaceInfi strategy to increase the difficulty of detecting the text content. Subsequently, we randomly select vulnerable spots in the text and perturb them using another pre-trained language model (e.g., T5). Finally, we apply a dual-driven defense mechanism (D3M) that validates whether the perturbed text content was generated by a model or authored by a human, based on the dimensions of Information Transmission Quality and Information Transmission Density. Through experimental validation, our proposed method demonstrates state-of-the-art (SOTA) performance when exposed to equivalent levels of perturbation intensity across multiple benchmarks, showcasing the effectiveness of our strategies.
The performance of deep recommendation models degrades significantly under data poisoning attacks. While adversarial training methods such as Vulnerability-Aware Training (VAT) enhance robustness by injecting perturbations into embeddings, they remain limited by coarse-grained noise and a static defense strategy, leaving models susceptible to adaptive attacks. This study proposes a novel framework, Self-Purification Data Sanitization (SPD), which integrates vulnerability-aware adversarial training with dynamic label correction. Specifically, SPD first identifies high-risk users through a fragility scoring mechanism, then applies self-purification by replacing suspicious interactions with model-predicted high-confidence labels during training. This closed-loop process continuously sanitizes the training data and breaks the protection ceiling of conventional adversarial training. Experiments demonstrate that SPD significantly improves the robustness of both Matrix Factorization (MF) and LightGCN models against various poisoning attacks. We show that SPD effectively suppresses malicious gradient propagation and maintains recommendation accuracy. Evaluations on Gowalla and Yelp2018 confirm that SPD-trained models withstand multiple attack strategies, including Random, Bandwagon, DP, and Rev attacks, while preserving performance.
This paper proposes a tamper detection technique for semi-fragile watermarking using Quantization-based Discrete Cosine Transform (DCT) for tamper localization. In this study, the proposed embedding strategy is investigated through experimental tests over the diagonal order of the DCT coefficients. The cover image is divided into non-overlapping blocks of 8×8 pixels. The DCT is applied to each block, and the coefficients are arranged in a zig-zag pattern within the block. The low-frequency coefficients are selected to examine the impact on the imperceptibility score and tamper detection accuracy. High tamper detection accuracy is achieved by checking the surrounding blocks to determine whether the corresponding block has been tampered with. The proposed tamper detection is tested under various malicious, incidental, and hybrid attacks (both incidental and malicious). The experimental results demonstrate that the proposed technique achieves a Peak Signal-to-Noise Ratio (PSNR) value of 41.2318 dB and an average Structural Similarity Index Measure (SSIM) value of 0.9768. The proposed scheme is also evaluated against malicious attacks such as copy-move, object deletion, object manipulation, and collage attacks, and can localize malicious tampering under various tampering rates. In addition, the proposed scheme can still detect tampered pixels under a hybrid attack, such as a combination of malicious and incidental attacks, with an average accuracy of 96.44%.
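The zig-zag ordering of an 8×8 DCT block can be sketched as below, using the JPEG-style scan in which low-frequency coefficients come first; the paper's actual quantization and embedding steps are not reproduced here:

```python
def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in zig-zag scan order, so the
    low-frequency DCT coefficients (top-left corner) come first. Within
    each anti-diagonal the traversal direction alternates, as in JPEG."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
```

Selecting the first few entries of this order gives the low-frequency coefficients the embedding strategy targets.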
Network attacks have become a critical issue in the internet security domain. Artificial intelligence-based detection methodologies have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to logically explain detection results produced by artificial intelligence. We propose a method for classifying network attacks using graph models that can explain its detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes by randomly masking neighbors and calculating their importance, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining the detection. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments, and that the importance of graph neighbors can be calculated to efficiently analyze the results. This approach can contribute to real-world network security analyses and provides a new direction for the field.
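The neighbor-masking idea can be sketched with a deterministic leave-one-out variant of the random masking described; `score_fn` stands in for the trained graph model and is an assumption for illustration:

```python
def neighbor_importance(score_fn, neighbors):
    """Leave-one-out masking: each neighbor's importance is the drop in
    the edge classifier's score when that neighbor is masked out.
    `score_fn` (placeholder for the trained model) maps a list of
    active neighbors to a prediction score."""
    base = score_fn(neighbors)
    return {v: base - score_fn([u for u in neighbors if u != v])
            for v in neighbors}
```

Neighbors with the largest score drops are the ones kept in the explanatory subgraph.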
The large-scale deployment of Internet of Things (IoT) technology across various aspects of daily life has significantly propelled the intelligent development of society. In particular, the integration of IoT and named data networks (NDNs) reduces network complexity and provides practical directions for content-oriented network design. However, ensuring data integrity in NDN-IoT applications remains a challenging issue. Very recently, Wang et al. (Entropy, 27(5), 471 (2025)) designed a certificateless aggregate signature (CLAS) scheme for NDN-IoT environments and stated that their construction was provably secure under various types of security attacks. Using theoretical analysis methods, in this work we reveal that their CLAS design fails to meet unforgeability, a core security requirement for CLAS schemes. In particular, we demonstrate that their scheme is vulnerable to a malicious public-key replacement attack, enabling an adversary to produce authentic signatures for arbitrary fraudulent messages. Therefore, Wang et al.'s design cannot achieve its goal. To address the issue, we systematically examine the root causes of the vulnerability and propose a security-enhanced CLAS construction for NDN-IoT environments. We prove the security of our improved design under the standard security assumption and also analyze its practical performance by comparing its computational and communication costs with several related works. The comparison results show the practicality of our design.
In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated from Anchor-Based models often exhibit poor transferability to these new model architectures. Furthermore, the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, meticulously engineered to facilitate the extraction of more comprehensive semantic features during the backpropagation process. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed. This approach leverages a dense target bounding box loss in the calculation of the adversarial loss function. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared to ensemble adversarial attack and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, representing an approximately 14.5% improvement in efficiency under equivalent conditions.
The convergence of Software Defined Networking (SDN) and the Internet of Vehicles (IoV) enables a flexible, programmable, and globally visible network control architecture across Road Side Units (RSUs), cloud servers, and automobiles. While this integration enhances scalability and safety, it also exposes the system to sophisticated cyberthreats, particularly Distributed Denial of Service (DDoS) attacks. Traditional rule-based anomaly detection methods often struggle to detect modern low-and-slow DDoS patterns, leading to higher false-positive rates. To this end, this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV (SDN-IoV). The hybrid framework utilizes a Residual Network (ResNet) to capture spatial correlations and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both forward and backward temporal dependencies in high-dimensional input patterns. To ensure transparency and trustworthiness, the model integrates an Explainable AI (XAI) technique, SHapley Additive exPlanations (SHAP). SHAP highlights the contribution of each feature to the decision-making process, helping security analysts understand the rationale behind each attack classification decision. The SDN-IoV environment is created in Mininet-WiFi and SUMO, and the hybrid model is trained on the CICDDoS2019 security dataset. The simulation results reveal the efficacy of the proposed model on standard performance metrics compared to similar baseline methods.
At inference time, deep neural networks are susceptible to backdoor attacks, which can produce attacker-controlled outputs when inputs contain carefully crafted triggers. Existing defense methods often focus on specific attack types or incur high costs, such as data cleaning or model fine-tuning. In contrast, we argue that it is possible to achieve effective and generalizable defense without removing triggers or incurring high model-cleaning costs. From the attacker's perspective, and based on the characteristic activation anomalies of vulnerable neurons, we propose an Adaptive Feature Injection (AFI) method for black-box backdoor detection. AFI employs a pre-trained image encoder to extract multi-level deep features and constructs a dynamic weight fusion mechanism for precise identification and interception of poisoned samples. Specifically, we select the control samples with the largest feature differences from the clean dataset via feature-space analysis, and generate blended sample pairs with the test sample using dynamic linear interpolation. The detection statistic is computed by measuring the divergence G(x) in the model's output responses. We systematically evaluate the effectiveness of AFI against representative backdoor attacks, including BadNets, Blend, WaNet, and IAB, on three benchmark datasets: MNIST, CIFAR-10, and ImageNet. Experimental results show that AFI can effectively detect poisoned samples, achieving average detection rates of 95.20%, 94.15%, and 86.49% on these datasets, respectively. Compared with existing methods, AFI demonstrates strong cross-domain generalization ability and robustness to unknown attacks.
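The blending-and-divergence idea can be sketched minimally. Here `model`, the blend coefficients, and the plain L1 divergence are illustrative assumptions; AFI's dynamic weight fusion and its exact form of G(x) are not reproduced:

```python
def blend(x, c, alpha):
    """Linear interpolation between test sample x and control sample c."""
    return [alpha * xi + (1 - alpha) * ci for xi, ci in zip(x, c)]

def detection_statistic(model, x, controls, alphas=(0.25, 0.5, 0.75)):
    """Rough stand-in for G(x): average L1 divergence between the
    model's output on x and on its blends with control samples. A
    trigger that dominates the blend keeps the output anomalously
    stable, which the statistic can flag."""
    base = model(x)
    total, n = 0.0, 0
    for c in controls:
        for a in alphas:
            out = model(blend(x, c, a))
            total += sum(abs(b - o) for b, o in zip(base, out))
            n += 1
    return total / n
```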
Internet of Things (IoT) devices are bringing about a revolutionary change in our society by enabling connectivity regardless of time and location. However, the extensive deployment of these devices also makes them attractive victims for the malicious actions of adversaries. Within the spectrum of existing threats, Side-Channel Attacks (SCAs) have established themselves as an effective way to compromise cryptographic implementations. These attacks exploit unintended physical leakage that occurs during the cryptographic execution of devices, bypassing the theoretical strength of the cryptographic design. In recent times, the advancement of deep learning has provided SCAs with a powerful ally. Well-trained deep-learning models demonstrate an exceptional capacity to identify correlations between side-channel measurements and sensitive data, thereby significantly enhancing such attacks. To further understand the security threats posed by deep-learning SCAs and to aid in formulating robust countermeasures in the future, this paper undertakes an exhaustive investigation of leading-edge SCAs targeting Advanced Encryption Standard (AES) implementations. The study specifically focuses on attacks that exploit power consumption and electromagnetic (EM) emissions as primary leakage sources, systematically evaluating the extent to which diverse deep learning techniques enhance SCAs across multiple critical dimensions. These dimensions include: (i) the characteristics of publicly available datasets derived from various hardware and software platforms; (ii) the formalization of leakage models tailored to different attack scenarios; and (iii) the architectural suitability and performance of state-of-the-art deep learning models. Furthermore, the survey provides a systematic synthesis of current research findings, identifies significant unresolved issues in the existing literature, and suggests promising directions for future work, including cross-device attack transferability and the impact of quantum-classical hybrid computing on side-channel security.
This article investigates the distributed recursive filtering problem for discrete-time stochastic cyber-physical systems. A particular feature of our work is that we consider systems in which the state is constrained by saturation. Measurements are transmitted to nodes of a sensor network over unreliable wireless channels. We propose a linear coding mechanism, together with a distributed method for obtaining a state estimate at each node. These designs aim to minimize the state estimation error covariance. In addition, we derive a bound on this covariance and tune the design parameters to minimize this bound. The resulting design depends on the packet loss probabilities of the wireless channels. This permits applying the proposed scheme to systems whose communications suffer from denial-of-service attacks, as such attacks typically affect those probabilities. Finally, we present a numerical example illustrating this application.
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, ranging from non-encrypted to fully encrypted ones. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three Deep Learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. On the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that for unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall on the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and on the CICIoT-2023 (Encrypted) dataset by 20.26%, a similar level of improvement. Notably, on CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
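One of the simplest candidate techniques for the class-imbalance step, random oversampling, can be sketched as below. The paper's exact list of eight techniques is not reproduced, and the function is an illustrative assumption:

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class records at random until every class
    matches the majority-class count, yielding a balanced training set."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(g) for g in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        out_s += group + [rng.choice(group) for _ in range(target - len(group))]
        out_y += [y] * target
    return out_s, out_y
```

More sophisticated samplers (e.g., synthetic-minority approaches) replace the duplication step with interpolation, but the balancing goal is the same.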
As attack techniques evolve and data volumes increase, the integration of artificial intelligence-based security solutions into industrial control systems has become increasingly essential. Artificial intelligence holds significant potential to improve the operational efficiency and cybersecurity of these systems. However, its dependence on cyber-based infrastructures expands the attack surface and introduces the risk that adversarial manipulations of artificial intelligence models may cause physical harm. To address these concerns, this study presents a comprehensive review of artificial intelligence-driven threat detection methods and adversarial attacks targeting artificial intelligence within industrial control environments, examining both their benefits and associated risks. A systematic literature review was conducted across major scientific databases, including IEEE, Elsevier, Springer Nature, ACM, MDPI, and Wiley, covering peer-reviewed journal and conference papers published between 2017 and 2026. Studies were selected based on predefined inclusion and exclusion criteria following a structured screening process. Based on an analysis of 101 selected studies, this survey categorizes artificial intelligence-based threat detection approaches across the physical, control, and application layers of industrial control systems and examines poisoning, evasion, and extraction attacks targeting industrial artificial intelligence. The findings identify key research trends, highlight unresolved security challenges, and discuss implications for the secure deployment of artificial intelligence-enabled cybersecurity solutions in industrial control systems.
The escalating complexity of modern malware continues to undermine the effectiveness of traditional signature-based detection techniques, which are often unable to adapt to rapidly evolving attack patterns. To address these challenges, this study proposes X-MalNet, a lightweight Convolutional Neural Network (CNN) framework designed for static malware classification through image-based representations of binary executables. By converting malware binaries into grayscale images, the model extracts distinctive structural and texture-level features that signify malicious intent, thereby eliminating the dependence on manual feature engineering or dynamic behavioral analysis. Built upon a modified AlexNet architecture, X-MalNet employs transfer learning to enhance generalization and reduce computational cost, enabling efficient training and deployment on limited hardware resources. To promote interpretability and transparency, the framework integrates Gradient-weighted Class Activation Mapping (Grad-CAM) and Deep SHapley Additive exPlanations (DeepSHAP), offering spatial and pixel-level visualizations that reveal how specific image regions influence classification outcomes. These explainability components support security analysts in validating the model's reasoning, strengthening confidence in AI-assisted malware detection. Comprehensive experiments on the Malimg and Malevis benchmark datasets confirm the superior performance of X-MalNet, achieving classification accuracies of 99.15% and 98.72%, respectively. Further robustness evaluations using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks demonstrate the model's resilience against perturbed inputs. In conclusion, X-MalNet emerges as a scalable, interpretable, and robust malware detection framework that effectively balances accuracy, efficiency, and explainability. Its lightweight design and adversarial stability position it as a promising solution for real-world cybersecurity deployments, advancing the development of trustworthy, automated, and transparent malware classification systems.
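The binary-to-grayscale conversion behind this family of detectors can be sketched in a few lines. This is a minimal stand-in, not X-MalNet's actual preprocessing: the fixed row width of 64 bytes follows the common Malimg convention for small binaries, and the paper's pipeline may resize or pad differently.

```python
def bytes_to_grayscale(blob: bytes, width: int = 64) -> list[list[int]]:
    """Reshape a raw binary into a 2D grid of 0-255 pixel intensities.

    Each byte becomes one grayscale pixel. Trailing bytes that do not
    fill a complete row are dropped so every row has the same width,
    yielding a rectangular image suitable for a CNN.
    """
    rows = len(blob) // width
    return [list(blob[r * width:(r + 1) * width]) for r in range(rows)]

# 256 bytes at width 64 -> a 4 x 64 grayscale image
img = bytes_to_grayscale(bytes(range(256)), width=64)
```

Structural features (code sections, packed regions, padding) show up as visually distinct textures in such images, which is what the CNN learns to discriminate.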
Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips edges with the top-K highest scores, determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets (NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1) to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves the highest accuracy recovery of 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
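The top-K edge-flipping step described above can be sketched as follows. The estimator itself is the paper's contribution and is not reproduced here; `scores` below is a hypothetical stand-in for its per-edge impact scores.

```python
def flip_top_k_edges(edges, scores, k):
    """Flip (remove if present, add if absent) the k candidate edges
    with the highest estimated attack impact.

    `edges` is a set of undirected edges as sorted (u, v) tuples;
    `scores` maps each candidate edge to its estimated impact.
    """
    perturbed = set(edges)
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    for e in top_k:
        if e in perturbed:
            perturbed.discard(e)   # delete an existing edge
        else:
            perturbed.add(e)       # insert a new adversarial edge
    return perturbed

edges = {(0, 1), (1, 2), (2, 3)}
scores = {(0, 1): 0.9, (0, 2): 0.8, (1, 2): 0.1}
# budget k=2: (0,1) is present -> removed, (0,2) is absent -> added
out = flip_top_k_edges(edges, scores, k=2)
```

The attack budget (here k=3 in the paper's experiments) bounds how many structural flips the adversary may make per graph.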
Abstract: In federated learning, backdoor attacks have become an important research topic with their wide application in processing sensitive datasets. Since federated learning detects or modifies local models through defense mechanisms during aggregation, it is difficult to conduct effective backdoor attacks. In addition, existing backdoor attack methods face challenges such as low backdoor accuracy, poor ability to evade anomaly detection, and unstable model training. To address these challenges, a method called adaptive simulation backdoor attack (ASBA) is proposed. Specifically, ASBA improves the stability of model training by manipulating the local training process and using an adaptive mechanism, the ability of the malicious model to evade anomaly detection by combining large simulation training and clipping, and the backdoor accuracy by introducing a stimulus model to amplify the impact of the backdoor in the global model. Extensive comparative experiments under five advanced defense scenarios show that ASBA can effectively evade anomaly detection and achieve high backdoor accuracy in the global model. Furthermore, it exhibits excellent stability and effectiveness after multiple rounds of attacks, outperforming state-of-the-art backdoor attack methods.
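The clipping that ASBA's simulation step anticipates is the standard L2 norm bound that robust aggregators apply to client updates. A minimal sketch of that bound is below; it is generic norm clipping, not the paper's own code, and a malicious client that pre-clips its update to this bound passes such a check unmodified.

```python
import math

def clip_update(update, max_norm):
    """Scale a model update so its L2 norm does not exceed max_norm.

    If the update is already within the bound it is returned unchanged;
    otherwise it is scaled down uniformly, preserving its direction.
    """
    norm = math.sqrt(sum(w * w for w in update))
    if norm <= max_norm:
        return list(update)
    scale = max_norm / norm
    return [w * scale for w in update]

clipped = clip_update([3.0, 4.0], max_norm=1.0)   # norm 5 -> direction kept, norm 1
```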
Funding: Funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under Grant No. (GPIP:1074-612-2024).
Abstract: The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to significantly enhance smishing detection performance. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% when compared to LLMs across datasets. Furthermore, we present a real-time, user-friendly application to operationalize our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
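The first layer of the dual-layer voting mechanism, weighted majority voting among the LLMs, can be sketched as below. The model names and weights are illustrative assumptions, not PhishNet's tuned values.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine per-model labels into one decision by weighted tally.

    `predictions` maps model name -> predicted label ('ham', 'spam',
    or 'smishing'); `weights` maps model name -> voting weight.
    Unknown models default to weight 1.0.
    """
    tally = defaultdict(float)
    for model, label in predictions.items():
        tally[label] += weights.get(model, 1.0)
    return max(tally, key=tally.get)

preds = {"gpt-oss": "smishing", "llama": "smishing", "qwen": "ham"}
w = {"gpt-oss": 0.4, "llama": 0.3, "qwen": 0.3}
label = weighted_vote(preds, w)   # smishing wins 0.7 to 0.3
```

The second layer would apply the same tally again over the LLM-ensemble decision and the transformer's output to produce the final ham/spam/smishing label.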
Funding: Supported by the National Natural Science Foundation of China (51977155 and 61833008).
Abstract: Based on the dynamic interaction process between cyber flow and power flow in grid cyber-physical systems (GCPS), attackers can gradually trigger large-scale power failures through cooperative cyber-attacks, subsequently forming cross-domain cascading failures (CDCF) that cross the cyber domain and power domain and endanger the stable operation of the GCPS. To reveal the evolutionary mechanism of CDCF, an optimal attack scheme evaluation method is proposed, considering the spatiotemporal synergy of multiple attack-event-chains. First, in accordance with this spatiotemporal synergy, the CDCF evolutionary mechanism is analyzed from the attackers' perspective, and a CDCF mathematical model is established. Furthermore, an attack graph model of CDCF evolution and its hazard calculation method are proposed. Then, the attackers' decision-making process for the optimal attack scheme of CDCF is deduced based on the attack graph model. Finally, both the evaluation and implementation processes of the optimal attack scheme are simulated in a GCPS experimental system based on the IEEE 39-bus system.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62172123) and the Key Research and Development Program of Heilongjiang Province, China (Grant No. 2022ZX01A36).
Abstract: Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying original data, significantly enhancing stealthiness. Comparative experiments with the Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is also validated.
Funding: Supported by the National Natural Science Foundation of China (62303353, 62273030, 62573320).
Abstract: Dear Editor, this letter studies the problem of stealthy attacks targeting stochastic event-based estimation, alongside proposing measures for their mitigation. A general attack framework is introduced, and the corresponding stealthiness condition is analyzed. To enhance system security, we advocate a single-dimensional encryption method, showing that securing a single data element is sufficient to shield the system from the perils of stealthy attacks.
Abstract: Zero-click attacks represent an advanced cybersecurity threat, capable of compromising devices without user interaction. High-profile examples such as Pegasus, Simjacker, Bluebugging, and Bluesnarfing exploit hidden vulnerabilities in software and communication protocols to silently gain access, exfiltrate data, and enable long-term surveillance. Their stealth and ability to evade traditional defenses make detection and mitigation highly challenging. This paper addresses these threats by systematically mapping the tactics and techniques of zero-click attacks using the MITRE ATT&CK framework, a widely adopted standard for modeling adversarial behavior. Through this mapping, we categorize real-world attack vectors and better understand how such attacks operate across the cyber kill chain. To support threat detection efforts, we propose an Active Learning-based method to efficiently label the Pegasus spyware dataset in alignment with the MITRE ATT&CK framework. This approach reduces the effort of manually annotating data while improving the quality of the labeled data, which is essential for training robust cybersecurity models. In addition, our analysis highlights the structured execution paths of zero-click attacks and reveals gaps in current defense strategies. The findings emphasize the importance of forward-looking strategies such as continuous surveillance, dynamic threat profiling, and security education. By bridging zero-click attack analysis with the MITRE ATT&CK framework and leveraging machine learning for dataset annotation, this work provides a foundation for more accurate threat detection and the development of more resilient and structured cybersecurity frameworks.
Abstract: The emergence of large language models (LLMs) has brought about revolutionary social value. However, concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse. Consequently, a crucial research question arises: how can we differentiate between AI-generated and human-authored text? Existing detectors face challenges such as operating as black boxes, relying on supervised training, and being vulnerable to manipulation and misinformation. To tackle these challenges, we propose an innovative unsupervised white-box detection method that utilizes a “dual-driven verification mechanism” to achieve high-performance detection, even in the presence of obfuscated attacks on the text content. More specifically, we initially employ the SpaceInfi strategy to increase the difficulty of detecting the text content. Subsequently, we randomly select vulnerable spots from the text and perturb them using another pre-trained language model (e.g., T5). Finally, we apply a dual-driven defense mechanism (D3M) that validates perturbed text content, whether generated by a model or authored by a human, along the dimensions of Information Transmission Quality and Information Transmission Density. Through experimental validation, our proposed novel method demonstrates state-of-the-art (SOTA) performance when exposed to equivalent levels of perturbation intensity across multiple benchmarks, thereby showcasing the effectiveness of our strategies.
Abstract: The performance of deep recommendation models degrades significantly under data poisoning attacks. While adversarial training methods such as Vulnerability-Aware Training (VAT) enhance robustness by injecting perturbations into embeddings, they remain limited by coarse-grained noise and a static defense strategy, leaving models susceptible to adaptive attacks. This study proposes a novel framework, Self-Purification Data Sanitization (SPD), which integrates vulnerability-aware adversarial training with dynamic label correction. Specifically, SPD first identifies high-risk users through a fragility scoring mechanism, then applies self-purification by replacing suspicious interactions with model-predicted high-confidence labels during training. This closed-loop process continuously sanitizes the training data and breaks the protection ceiling of conventional adversarial training. Experiments demonstrate that SPD significantly improves the robustness of both Matrix Factorization (MF) and LightGCN models against various poisoning attacks. We show that SPD effectively suppresses malicious gradient propagation and maintains recommendation accuracy. Evaluations on Gowalla and Yelp2018 confirm that SPD-trained models withstand multiple attack strategies, including Random, Bandwagon, DP, and Rev attacks, while preserving performance.
Funding: Funded by the Ministry of Higher Education Malaysia through Universiti Malaysia Pahang Al-Sultan Abdullah under Internal Research Grant (RDU233003).
Abstract: This paper proposes a tamper detection technique for semi-fragile watermarking using Quantization-based Discrete Cosine Transform (DCT) for tamper localization. In this study, the proposed embedding strategy is investigated through experimental tests over the diagonal order of the DCT coefficients. The cover image is divided into non-overlapping blocks of size 8×8 pixels. The DCT is applied to each block, and the coefficients are arranged in a zig-zag pattern within the block. The low-frequency coefficients are selected to examine the impact on the imperceptibility score and tamper detection accuracy. High tamper detection accuracy can be achieved by checking the surrounding blocks to determine whether the corresponding block has been tampered with. The proposed tamper detection is tested under various malicious, incidental, and hybrid attacks (both incidental and malicious attacks). The experimental results demonstrate that the proposed technique achieves a Peak Signal-to-Noise Ratio (PSNR) value of 41.2318 dB and an average Structural Similarity Index Measure (SSIM) value of 0.9768. The proposed scheme is also evaluated against malicious attacks such as copy-move, object deletion, object manipulation, and collage attacks, and can localize such attacks under various tampering rates. In addition, the proposed scheme can still detect tampered pixels under a hybrid attack, such as a combination of malicious and incidental attacks, with an average accuracy of 96.44%.
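The zig-zag ordering used to pick low-frequency coefficients from each 8×8 DCT block follows the standard JPEG traversal, which can be generated as below. Only the coefficient ordering is shown; the DCT itself and the paper's quantization-based embedding are not reproduced.

```python
def zigzag_indices(n=8):
    """Return (row, col) pairs in JPEG-style zig-zag order for an n x n
    block: the DC coefficient comes first and low-frequency (small r+c)
    coefficients precede high-frequency ones. Anti-diagonals alternate
    direction, giving the characteristic zig-zag path.
    """
    order = []
    for s in range(2 * n - 1):                     # s = r + c indexes each anti-diagonal
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

# Selecting, say, the first 10 entries of zigzag_indices() gives the
# low-frequency coefficient positions a scheme like this would embed into.
```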
Funding: Supported by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) support program (IITP-2025-RS-2023-00259497) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Republic of Korea government (MSIT) (No. IITP-2025-RS-2023-00254129, Graduate School of Metaverse Convergence (Sungkyunkwan University)); and by the Basic Science Research Program of the National Research Foundation (NRF) funded by the Republic of Korea government (MSIT) (No. RS-2024-00346737).
Abstract: Network attacks have become a critical issue in the internet security domain. Artificial intelligence technology-based detection methodologies have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain detection results logically using artificial intelligence. We propose a method for classifying network attacks using graph models that can explain its detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes by randomly masking neighbors and calculating their importance, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining each detection. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments and that the importance of graph neighbors can be calculated to efficiently analyze the results. This approach can contribute to real-world network security analyses and provide a new direction for the field.
Funding: Supported in part by the Hubei Engineering Research Center for BDS-Cloud High-Precision Deformation Monitoring Open Funding (No. HBBDGJ202507Y) and the National Natural Science Foundation of China (No. 62377037).
Abstract: The large-scale deployment of Internet of Things (IoT) technology across various aspects of daily life has significantly propelled the intelligent development of society. Among these developments, the integration of IoT and named data networks (NDNs) reduces network complexity and provides practical directions for content-oriented network design. However, ensuring data integrity in NDN-IoT applications remains a challenging issue. Very recently, Wang et al. (Entropy, 27(5), 471 (2025)) designed a certificateless aggregate signature (CLAS) scheme for NDN-IoT environments and stated that their construction was provably secure under various types of security attacks. Using theoretical analysis methods, in this work, we reveal that their CLAS design fails to meet unforgeability, a core security requirement for CLAS schemes. In particular, we demonstrate that their scheme is vulnerable to a malicious public-key replacement attack, enabling an adversary to produce authentic signatures for arbitrary fraudulent messages. Therefore, Wang et al.'s design cannot achieve its goal. To address the issue, we systematically examine the root causes behind the vulnerability and propose a security-enhanced CLAS construction for NDN-IoT environments. We prove the security of our improved design under the standard security assumption and also analyze its practical performance by comparing its computational and communication costs with several related works. The comparison results show the practicality of our design.
Abstract: In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated from Anchor-Based models often exhibit poor transferability to these new model architectures. Furthermore, the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, meticulously engineered to facilitate the extraction of more comprehensive semantic features during the backpropagation process. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed. This approach leverages a dense target bounding box loss in the calculation of adversarial loss functions. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared to ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, an approximately 14.5% improvement in efficiency under equivalent conditions.
Funding: The authors extend their appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2026R760), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors also extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through small group research under grant number RGP2/714/46.
Abstract: The convergence of Software Defined Networking (SDN) and the Internet of Vehicles (IoV) enables a flexible, programmable, and globally visible network control architecture across Road Side Units (RSUs), cloud servers, and automobiles. While this integration enhances scalability and safety, it also invites sophisticated cyberthreats, particularly Distributed Denial of Service (DDoS) attacks. Traditional rule-based anomaly detection methods often struggle to detect modern low-and-slow DDoS patterns, leading to higher false positives. To this end, this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV (SDN-IoV). The hybrid framework utilizes a Residual Network (ResNet) to capture spatial correlations and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both forward and backward temporal dependencies in high-dimensional input patterns. To ensure transparency and trustworthiness, the model integrates the Explainable AI (XAI) technique SHapley Additive exPlanations (SHAP). SHAP highlights the contribution of each feature during the decision-making process, helping security analysts understand the rationale behind each attack classification decision. The SDN-IoV environment is created in Mininet-WiFi and SUMO, and the hybrid model is trained on the CICDDoS2019 security dataset. The simulation results reveal the efficacy of the proposed model in terms of standard performance metrics compared to similar baseline methods.
Funding: Supported by the National Natural Science Foundation of China Grant (No. 61972133), the Project of Leading Talents in Science and Technology Innovation for Thousands of People Plan in Henan Province Grant (No. 204200510021), and the Key Research and Development Plan Special Project of Henan Province Grant (No. 241111211400).
Abstract: At inference time, deep neural networks are susceptible to backdoor attacks, which can produce attacker-controlled outputs when inputs contain carefully crafted triggers. Existing defense methods often focus on specific attack types or incur high costs, such as data cleaning or model fine-tuning. In contrast, we argue that effective and generalizable defense is possible without removing triggers or incurring high model-cleaning costs. From the attacker's perspective, and based on the characteristic activation anomalies of vulnerable neurons, we propose an Adaptive Feature Injection (AFI) method for black-box backdoor detection. AFI employs a pre-trained image encoder to extract multi-level deep features and constructs a dynamic weight fusion mechanism for precise identification and interception of poisoned samples. Specifically, we select the control samples with the largest feature differences from the clean dataset via feature-space analysis, and generate blended sample pairs with the test sample using dynamic linear interpolation. The detection statistic is computed by measuring the divergence G(x) in model output responses. We systematically evaluate the effectiveness of AFI against representative backdoor attacks, including BadNets, Blend, WaNet, and IAB, on three benchmark datasets: MNIST, CIFAR-10, and ImageNet. Experimental results show that AFI can effectively detect poisoned samples, achieving average detection rates of 95.20%, 94.15%, and 86.49% on these datasets, respectively. Compared with existing methods, AFI demonstrates strong cross-domain generalization ability and robustness to unknown attacks.
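The blending step that AFI applies to a test sample and its control sample is plain linear interpolation over feature vectors. A toy version is below; the dynamic choice of alpha and the divergence statistic G(x) are the paper's contributions and are not reproduced here.

```python
def blend(sample, control, alpha):
    """Linearly interpolate between a test sample and a control sample
    (both flat feature vectors of equal length).

    alpha=1 returns the original sample, alpha=0 the control; values in
    between produce the blended pairs whose output divergence a detector
    like AFI would measure.
    """
    return [alpha * s + (1 - alpha) * c for s, c in zip(sample, control)]

pair = blend([1.0, 0.0], [0.0, 1.0], alpha=0.25)   # -> [0.25, 0.75]
```

The intuition: a backdoored model's response to a triggered input collapses to the target label far more abruptly under such blending than its response to a clean input, and that asymmetry is what the detection statistic captures.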
Funding: The Key R&D Program of Hunan Province (Grant No. 2025AQ2024) of the Department of Science and Technology of Hunan Province, and the Distinguished Young Scientists Fund (Grant No. 24B0446) of the Hunan Education Department.
Abstract: Internet of Things (IoT) devices are bringing about a revolutionary change in our society by enabling connectivity regardless of time and location. However, the extensive deployment of these devices also makes them attractive targets for the malicious actions of adversaries. Within the spectrum of existing threats, Side-Channel Attacks (SCAs) have established themselves as an effective way to compromise cryptographic implementations. These attacks exploit unintended physical leakage that occurs during the cryptographic execution of devices, bypassing the theoretical strength of the crypto design. In recent times, the advancement of deep learning has provided SCAs with a powerful ally. Well-trained deep-learning models demonstrate an exceptional capacity to identify correlations between side-channel measurements and sensitive data, thereby significantly enhancing such attacks. To further understand the security threats posed by deep-learning SCAs and to aid in formulating robust countermeasures in the future, this paper undertakes an exhaustive investigation of leading-edge SCAs targeting Advanced Encryption Standard (AES) implementations. The study specifically focuses on attacks that exploit power consumption and electromagnetic (EM) emissions as primary leakage sources, systematically evaluating the extent to which diverse deep learning techniques enhance SCAs across multiple critical dimensions. These dimensions include: (i) the characteristics of publicly available datasets derived from various hardware and software platforms; (ii) the formalization of leakage models tailored to different attack scenarios; and (iii) the architectural suitability and performance of state-of-the-art deep learning models. Furthermore, the survey provides a systematic synthesis of current research findings, identifies significant unresolved issues in the existing literature, and suggests promising directions for future work, including cross-device attack transferability and the impact of quantum-classical hybrid computing on side-channel security.
Funding: Supported by the KGJ Basic Research Fund (JCKY2023110C080), the National Natural Science Foundation of China (62322306, 62173057, 62033006), the Aviation Science Foundation Project (2022Z018063001), the Argentinean Agency for Scientific and Technological Promotion (PICT-2021-I-A-00730), and the National Foreign Expert Individual Project (H20240983).
Abstract: This article investigates the distributed recursive filtering problem for discrete-time stochastic cyber-physical systems. A particular feature of our work is that we consider systems in which the state is constrained by saturation. Measurements are transmitted to nodes of a sensor network over unreliable wireless channels. We propose a linear coding mechanism, together with a distributed method for obtaining a state estimate at each node. These designs aim to minimize the state estimation error covariance. In addition, we derive a bound on this covariance and tune the design parameters to minimize this bound. The resulting design depends on the packet loss probabilities of the wireless channels. This permits applying the proposed scheme to systems whose communications suffer from denial-of-service attacks, as such attacks typically affect those probabilities. Finally, we present a numerical example illustrating this application.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00235509, Development of security monitoring technology based on network behavior against encrypted cyber threats in ICT convergence environment).
Abstract: With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted devices to fully encrypted ones. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three Deep Learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall on the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and on the CICIoT-2023 (Encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
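One of the simpler class-rebalancing techniques such a comparison would include is random oversampling, sketched below. This is a generic illustration of the idea, not necessarily one of the paper's eight specific techniques, and the class labels are made up for the example.

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Random oversampling: duplicate randomly chosen minority-class
    samples until every class matches the majority-class count.

    Returns rebalanced (samples, labels); the seed makes the
    duplication reproducible.
    """
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y

# 3 benign flows vs 1 attack flow -> 3 of each after oversampling
xs, ys = oversample_minority([[0], [1], [2], [3]],
                             ["benign", "benign", "benign", "attack"])
```

Oversampling improves recall on rare attack classes at the cost of possible overfitting to duplicated rows, which is why the paper compares several alternatives rather than fixing one.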
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-00242528, 50%) and by the Korea Internet & Security Agency (KISA) through the Information Security Specialized University Support Project (50%).
Abstract: As attack techniques evolve and data volumes increase, the integration of artificial intelligence-based security solutions into industrial control systems has become increasingly essential. Artificial intelligence holds significant potential to improve the operational efficiency and cybersecurity of these systems. However, its dependence on cyber-based infrastructures expands the attack surface and introduces the risk that adversarial manipulations of artificial intelligence models may cause physical harm. To address these concerns, this study presents a comprehensive review of artificial intelligence-driven threat detection methods and adversarial attacks targeting artificial intelligence within industrial control environments, examining both their benefits and associated risks. A systematic literature review was conducted across major scientific databases, including IEEE, Elsevier, Springer Nature, ACM, MDPI, and Wiley, covering peer-reviewed journal and conference papers published between 2017 and 2026. Studies were selected based on predefined inclusion and exclusion criteria following a structured screening process. Based on an analysis of 101 selected studies, this survey categorizes artificial intelligence-based threat detection approaches across the physical, control, and application layers of industrial control systems and examines poisoning, evasion, and extraction attacks targeting industrial artificial intelligence. The findings identify key research trends, highlight unresolved security challenges, and discuss implications for the secure deployment of artificial intelligence-enabled cybersecurity solutions in industrial control systems.
Abstract: The escalating complexity of modern malware continues to undermine the effectiveness of traditional signature-based detection techniques, which are often unable to adapt to rapidly evolving attack patterns. To address these challenges, this study proposes X-MalNet, a lightweight Convolutional Neural Network (CNN) framework designed for static malware classification through image-based representations of binary executables. By converting malware binaries into grayscale images, the model extracts distinctive structural and texture-level features that signify malicious intent, thereby eliminating the dependence on manual feature engineering or dynamic behavioral analysis. Built upon a modified AlexNet architecture, X-MalNet employs transfer learning to enhance generalization and reduce computational cost, enabling efficient training and deployment on limited hardware resources. To promote interpretability and transparency, the framework integrates Gradient-weighted Class Activation Mapping (Grad-CAM) and Deep SHapley Additive exPlanations (DeepSHAP), offering spatial and pixel-level visualizations that reveal how specific image regions influence classification outcomes. These explainability components support security analysts in validating the model's reasoning, strengthening confidence in AI-assisted malware detection. Comprehensive experiments on the Malimg and Malevis benchmark datasets confirm the superior performance of X-MalNet, achieving classification accuracies of 99.15% and 98.72%, respectively. Further robustness evaluations using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks demonstrate the model's resilience against perturbed inputs. In conclusion, X-MalNet emerges as a scalable, interpretable, and robust malware detection framework that effectively balances accuracy, efficiency, and explainability. Its lightweight design and adversarial stability position it as a promising solution for real-world cybersecurity deployments, advancing the development of trustworthy, automated, and transparent malware classification systems.
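The binary-to-grayscale conversion described above can be sketched as follows: each byte of the executable becomes one pixel intensity (0-255), and the byte stream is reshaped into rows of a chosen width. The exact width-selection and padding rules of X-MalNet are not given in the abstract, so this sketch uses the common convention of a square-ish layout with zero-padding on the last row:

```python
import math

def bytes_to_grayscale(blob, width=None):
    """Map a raw binary into a 2-D grayscale image (one byte = one pixel).

    Width selection and zero-padding here follow a common Malimg-style
    convention; the sizing rules actually used by X-MalNet are an
    assumption, not taken from the paper.
    """
    data = list(blob)  # each byte is already a 0-255 intensity value
    if width is None:
        width = max(1, int(math.sqrt(len(data))))  # roughly square image
    height = math.ceil(len(data) / width)
    data += [0] * (width * height - len(data))  # zero-pad the last row
    return [data[r * width:(r + 1) * width] for r in range(height)]

img = bytes_to_grayscale(b"\x00\x7f\xff\x10\x20", width=2)
# 5 bytes at width 2 give a 3x2 image with one padded pixel.
```

A CNN such as the modified AlexNet can then consume these images directly, which is what removes the need for hand-crafted features.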
Funding: funded by the National Key Research and Development Program of China (Grant No. 2024YFE0209000) and the NSFC (Grant No. U23B2019).
Abstract: Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips edges with the top-K highest scores, determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets: NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1, to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves the highest accuracy recovery of 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
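The top-K edge-flipping step described above can be sketched on a dense adjacency matrix. The estimator that produces the per-pair scores is learned in the paper; here it is treated as a given score matrix, and the flip simply toggles the K highest-scored node pairs (adding a missing edge or deleting an existing one) while preserving symmetry:

```python
def flip_top_k_edges(adj, scores, k):
    """Flip the k undirected node pairs with the highest estimator scores.

    `scores[i][j]` stands in for the paper's learned implicit estimator;
    how those scores are computed is not reproduced here.
    """
    n = len(adj)
    # Enumerate each undirected pair once (upper triangle, no self-loops).
    pairs = [(scores[i][j], i, j) for i in range(n) for j in range(i + 1, n)]
    pairs.sort(reverse=True)
    perturbed = [row[:] for row in adj]
    for _, i, j in pairs[:k]:
        perturbed[i][j] = 1 - perturbed[i][j]  # add or remove the edge
        perturbed[j][i] = perturbed[i][j]      # keep the matrix symmetric
    return perturbed

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
S = [[0.0, 0.1, 0.9],
     [0.1, 0.0, 0.2],
     [0.9, 0.2, 0.0]]
A2 = flip_top_k_edges(A, S, k=1)  # highest score is pair (0, 2): edge added
```

The parameter k corresponds to the attack budget (3 in the experiments above), which bounds how many structural modifications the adversary may make per graph.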