Journal Articles
1,633 articles found
1. Adaptive Simulation Backdoor Attack Based on Federated Learning
Authors: SHI Xiujin, XIA Kaixiong, YAN Guoying, TAN Xuan, SUN Yanxu, ZHU Xiaolong. Journal of Donghua University (English Edition), 2026, No. 1, pp. 50–58.
In federated learning, backdoor attacks have become an important research topic with their wide application in processing sensitive datasets. Since federated learning detects or modifies local models through defense mechanisms during aggregation, it is difficult to conduct effective backdoor attacks. In addition, existing backdoor attack methods face challenges such as low backdoor accuracy, poor ability to evade anomaly detection, and unstable model training. To address these challenges, a method called adaptive simulation backdoor attack (ASBA) is proposed. Specifically, ASBA improves the stability of model training by manipulating the local training process with an adaptive mechanism, improves the malicious model's ability to evade anomaly detection by combining large simulation training with clipping, and raises backdoor accuracy by introducing a stimulus model to amplify the impact of the backdoor in the global model. Extensive comparative experiments under five advanced defense scenarios show that ASBA can effectively evade anomaly detection and achieve high backdoor accuracy in the global model. Furthermore, it exhibits excellent stability and effectiveness after multiple rounds of attacks, outperforming state-of-the-art backdoor attack methods.
Keywords: federated learning, backdoor attack, privacy, adaptive attack, simulation
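The clipping step mentioned in this abstract can be illustrated with a minimal sketch: a malicious update is rescaled so its L2 norm stays under a bound, the kind of step used to slip past magnitude-based anomaly detection during aggregation. The function name, the flat-list weight representation, and the numbers are illustrative assumptions, not the paper's implementation.

```python
import math

def clip_update(update, bound):
    """Scale `update` down so its L2 norm does not exceed `bound` (illustrative sketch)."""
    norm = math.sqrt(sum(w * w for w in update))
    if norm <= bound:
        return list(update)
    scale = bound / norm
    return [w * scale for w in update]

malicious = [3.0, 4.0]               # L2 norm = 5.0
clipped = clip_update(malicious, 2.5)
print(clipped)                        # rescaled to norm 2.5: [1.5, 2.0]
```

A benign-looking update within the bound passes through unchanged, which is why clipping alone cannot distinguish a carefully scaled malicious update from an honest one.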
2. PhishNet: A Real-Time, Scalable Ensemble Framework for Smishing Attack Detection Using Transformers and LLMs
Authors: Abeer Alhuzali, Qamar Al-Qahtani, Asmaa Niyazi, Lama Alshehri, Fatemah Alharbi. Computers, Materials & Continua, 2026, No. 1, pp. 2194–2212.
The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to significantly enhance smishing detection performance. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% when compared to LLMs across datasets. Furthermore, we present a real-time, user-friendly application to operationalize our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
Keywords: smishing attack detection, phishing attacks, ensemble learning, cybersecurity, deep learning, transformer-based models, large language models
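The dual-layer voting described in this abstract can be sketched in a few lines: a weighted majority vote among LLM predictions, whose winner then joins the transformer's prediction in a final ensemble vote. Model names, weights, and labels below are placeholders, not PhishNet's actual configuration.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """predictions: model_name -> label; weights: model_name -> vote weight."""
    tally = defaultdict(float)
    for model, label in predictions.items():
        tally[label] += weights.get(model, 1.0)
    return max(tally, key=tally.get)

# Layer 1: weighted majority vote among the LLMs (hypothetical weights).
llm_preds = {"gpt-oss": "smishing", "llama3": "smishing", "qwen3": "ham"}
llm_weights = {"gpt-oss": 0.9, "llama3": 0.8, "qwen3": 0.7}
llm_label = weighted_vote(llm_preds, llm_weights)

# Layer 2: final ensemble vote combining the transformer and the LLM consensus.
final = weighted_vote({"roberta": "smishing", "llm-ensemble": llm_label},
                      {"roberta": 1.0, "llm-ensemble": 1.0})
print(final)  # smishing
```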
3. Optimal Cyber-attack Evaluation for Cross-domain Cascading Failures Considering Spatiotemporal Synergy of Multiple Attack-event-chains
Authors: Yihan Liu, Yufei Wang, Hongru Wang, Qi Wang. CSEE Journal of Power and Energy Systems, 2026, No. 1, pp. 495–507.
According to the dynamic interaction process between cyber flow and power flow in grid cyber-physical systems (GCPS), attackers could gradually trigger large-scale power failures through cooperative cyber-attacks, subsequently forming cross-domain cascading failures (CDCF) that cross the cyber domain and power domain and endanger the stable operation of GCPS. To reveal the evolutionary mechanism of CDCF, an optimal attack scheme evaluation method is proposed, considering the spatiotemporal synergy of multiple attack-event-chains. First, in accordance with the spatiotemporal synergy of multiple attack-event-chains, the CDCF evolutionary mechanism is analyzed from the attackers' perspective, and a CDCF mathematical model is established. Furthermore, an attack graph model of CDCF evolution and its hazard calculation method are proposed. Then, the attackers' decision-making process for the optimal attack scheme of CDCF is deduced based on the attack graph model. Finally, both the evaluation and implementation processes of the optimal attack scheme are simulated in the GCPS experimental system based on the IEEE 39-bus system.
Keywords: attack graph, cascading failure, cyber-attacks, grid cyber-physical system, optimal attack scheme
4. CASBA: Capability-Adaptive Shadow Backdoor Attack against Federated Learning
Authors: Hongwei Wu, Guojian Li, Hanyun Zhang, Zi Ye, Chao Ma. Computers, Materials & Continua, 2026, No. 3, pp. 1139–1163.
Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying original data, significantly enhancing stealthiness. Comparative experiments with Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is also validated.
Keywords: federated learning, backdoor attack, generative adversarial network, adaptive attack strategy, distributed machine learning
5. Single-Dimensional Encryption Against Stealthy Attacks on Stochastic Event-Based Estimation
Authors: Jun Shang, Di Zhao, Hanwen Zhang, Dawei Shi. IEEE/CAA Journal of Automatica Sinica, 2026, No. 1, pp. 233–235.
Dear Editor, this letter studies the problem of stealthy attacks targeting stochastic event-based estimation, alongside proposing measures for their mitigation. A general attack framework is introduced, and the corresponding stealthiness condition is analyzed. To enhance system security, we advocate for a single-dimensional encryption method, showing that securing a single data element is sufficient to shield the system from the perils of stealthy attacks.
Keywords: single-dimensional encryption, stochastic event-based estimation, stealthiness condition, stealthy attacks, attack framework, security mitigation
6. Unveiling Zero-Click Attacks: Mapping MITRE ATT&CK Framework for Enhanced Cybersecurity
Authors: Md Shohel Rana, Tonmoy Ghosh, Mohammad Nur Nobi, Anichur Rahman, Andrew H. Sung. Computers, Materials & Continua, 2026, No. 1, pp. 29–66.
Zero-click attacks represent an advanced cybersecurity threat, capable of compromising devices without user interaction. High-profile examples such as Pegasus, Simjacker, Bluebugging, and Bluesnarfing exploit hidden vulnerabilities in software and communication protocols to silently gain access, exfiltrate data, and enable long-term surveillance. Their stealth and ability to evade traditional defenses make detection and mitigation highly challenging. This paper addresses these threats by systematically mapping the tactics and techniques of zero-click attacks using the MITRE ATT&CK framework, a widely adopted standard for modeling adversarial behavior. Through this mapping, we categorize real-world attack vectors and better understand how such attacks operate across the cyber kill chain. To support threat detection efforts, we propose an active learning-based method to efficiently label the Pegasus spyware dataset in alignment with the MITRE ATT&CK framework. This approach reduces the effort of manually annotating data while improving the quality of the labeled data, which is essential to train robust cybersecurity models. In addition, our analysis highlights the structured execution paths of zero-click attacks and reveals gaps in current defense strategies. The findings emphasize the importance of forward-looking strategies such as continuous surveillance, dynamic threat profiling, and security education. By bridging zero-click attack analysis with the MITRE ATT&CK framework and leveraging machine learning for dataset annotation, this work provides a foundation for more accurate threat detection and the development of more resilient and structured cybersecurity frameworks.
Keywords: Bluebugging, Bluesnarfing, cybersecurity, MITRE ATT&CK, Pegasus, Simjacker, zero-click attacks
7. Enhancing Detection of AI-Generated Text: A Retrieval-Augmented Dual-Driven Defense Mechanism
Authors: Xiaoyu Li, Jie Zhang, Wen Shi. Computers, Materials & Continua, 2026, No. 4, pp. 877–895.
The emergence of large language models (LLMs) has brought about revolutionary social value. However, concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse. Consequently, a crucial research question arises: how can we differentiate between AI-generated and human-authored text? Existing detectors face challenges such as operating as black boxes, relying on supervised training, and being vulnerable to manipulation and misinformation. To tackle these challenges, we propose an innovative unsupervised white-box detection method that utilizes a "dual-driven verification mechanism" to achieve high-performance detection, even in the presence of obfuscation attacks on the text content. More specifically, we initially employ the SpaceInfi strategy to increase the difficulty of detecting the text content. Subsequently, we randomly select vulnerable spots in the text and perturb them using another pre-trained language model (e.g., T5). Finally, we apply a dual-driven defense mechanism (D3M) that validates whether the perturbed text content was generated by a model or authored by a human, based on the dimensions of Information Transmission Quality and Information Transmission Density. Through experimental validation, our proposed method demonstrates state-of-the-art (SOTA) performance when exposed to equivalent levels of perturbation intensity across multiple benchmarks, thereby showcasing the effectiveness of our strategies.
Keywords: large language models, machine-written, perturbation, detection, attacks
8. Robust Recommendation Adversarial Training Based on Self-Purification Data Sanitization
Authors: Haiyan Long, Gang Chen, Hai Chen. Computers, Materials & Continua, 2026, No. 4, pp. 840–859.
The performance of deep recommendation models degrades significantly under data poisoning attacks. While adversarial training methods such as Vulnerability-Aware Training (VAT) enhance robustness by injecting perturbations into embeddings, they remain limited by coarse-grained noise and a static defense strategy, leaving models susceptible to adaptive attacks. This study proposes a novel framework, Self-Purification Data Sanitization (SPD), which integrates vulnerability-aware adversarial training with dynamic label correction. Specifically, SPD first identifies high-risk users through a fragility scoring mechanism, then applies self-purification by replacing suspicious interactions with model-predicted high-confidence labels during training. This closed-loop process continuously sanitizes the training data and breaks the protection ceiling of conventional adversarial training. Experiments demonstrate that SPD significantly improves the robustness of both Matrix Factorization (MF) and LightGCN models against various poisoning attacks. We show that SPD effectively suppresses malicious gradient propagation and maintains recommendation accuracy. Evaluations on Gowalla and Yelp2018 confirm that SPD-trained models withstand multiple attack strategies, including Random, Bandwagon, DP, and Rev attacks, while preserving performance.
Keywords: robustness, adversarial defense, recommendation system, poisoning attack, self-purification
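The self-purification loop described in this abstract can be sketched minimally: users whose fragility score exceeds a threshold have their suspicious interactions replaced by the model's high-confidence predicted labels. All names, thresholds, and data shapes below are hypothetical stand-ins, not SPD's actual code.

```python
def purify(interactions, fragility, predict_conf, threshold=0.8, min_conf=0.9):
    """interactions: user -> observed label; fragility: user -> score in [0, 1];
    predict_conf: user -> (predicted_label, confidence). Illustrative sketch."""
    cleaned = {}
    for user, label in interactions.items():
        pred, conf = predict_conf[user]
        if fragility[user] > threshold and conf >= min_conf:
            cleaned[user] = pred   # replace the suspicious interaction
        else:
            cleaned[user] = label  # keep the original label
    return cleaned

cleaned = purify({"u1": 1, "u2": 0},
                 {"u1": 0.95, "u2": 0.2},
                 {"u1": (0, 0.97), "u2": (1, 0.99)})
print(cleaned)  # u1's label is replaced by the model's confident prediction
```

In the paper's closed loop this replacement happens repeatedly during training, so each round of purification feeds cleaner data into the next round of adversarial training.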
9. Semi-Fragile Image Watermarking Using Quantization-Based DCT for Tamper Localization
Authors: Agit Amrullah, Ferda Ernawan. Computers, Materials & Continua, 2026, No. 2, pp. 1967–1982.
This paper proposes a tamper detection technique for semi-fragile watermarking using quantization-based Discrete Cosine Transform (DCT) for tamper localization. In this study, the proposed embedding strategy is investigated through experimental tests over the diagonal order of the DCT coefficients. The cover image is divided into non-overlapping blocks of 8×8 pixels. The DCT is applied to each block, and the coefficients are arranged in a zig-zag pattern within the block. The low-frequency coefficients are selected to examine the impact on the imperceptibility score and tamper detection accuracy. High tamper detection accuracy is achieved by checking the surrounding blocks to determine whether the corresponding block has been tampered with. The proposed tamper detection is tested under various malicious, incidental, and hybrid attacks (both incidental and malicious attacks). The experimental results demonstrate that the proposed technique achieves a Peak Signal-to-Noise Ratio (PSNR) value of 41.2318 dB and an average Structural Similarity Index Measure (SSIM) value of 0.9768. The proposed scheme is also evaluated against malicious attacks such as copy-move, object deletion, object manipulation, and collage attacks, and it can localize malicious tampering under various tampering rates. In addition, the proposed scheme can still detect tampered pixels under a hybrid attack, such as a combination of malicious and incidental attacks, with an average accuracy of 96.44%.
Keywords: image watermarking, semi-fragile, DCT, tamper localization, hybrid attack
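Two building blocks of this kind of scheme can be sketched: the zig-zag scan order of an 8×8 DCT block, and an odd/even quantization rule (QIM-style) that a quantization-based embedder commonly uses to carry one watermark bit per coefficient. The abstract does not give the paper's exact quantization rule, so the `embed_bit`/`extract_bit` pair and the step size `q` are illustrative assumptions.

```python
def zigzag_indices(n=8):
    """Return (row, col) pairs of an n x n block in JPEG zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def embed_bit(coeff, bit, q=16.0):
    """Quantize a DCT coefficient to an even (bit=0) or odd (bit=1) multiple of q."""
    k = round(coeff / q)
    if k % 2 != bit:
        k += 1 if coeff >= k * q else -1  # move toward the original value
    return k * q

def extract_bit(coeff, q=16.0):
    """Recover the embedded bit from the parity of the quantized coefficient."""
    return round(coeff / q) % 2

order = zigzag_indices()
print(order[:6])                      # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
print(embed_bit(37.0, 1))             # 48.0 (nearest odd multiple of 16)
```

Extraction then checks parity: a tampered block no longer reproduces the expected bits, and cross-checking surrounding blocks refines the localization.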
10. Graph-Based Intrusion Detection with Explainable Edge Classification Learning
Authors: Jaeho Shin, Jaekwang Kim. Computers, Materials & Continua, 2026, No. 1, pp. 610–635.
Network attacks have become a critical issue in the internet security domain. Artificial intelligence-based detection methodologies have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to logically explain detection results produced by artificial intelligence. We propose a method for classifying network attacks using graph models that can explain its detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes under random masking and calculate the importance of neighbors, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining detection. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments, and the importance of graph neighbors can be calculated to efficiently analyze the results. This approach can contribute to real-world network security analyses and provide a new direction in the field.
Keywords: intrusion detection, graph neural network, explainable AI, network attacks, GraphSAGE
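The neighbor-importance idea in this abstract can be sketched as leave-one-out masking: mask each neighbor in turn and record how much the model's score drops. The toy weighted-sum `score` below stands in for a real GNN's output; all names and numbers are illustrative, not the paper's implementation.

```python
def neighbor_importance(score_fn, neighbors):
    """Importance of each neighbor = drop in score when that neighbor is masked out."""
    base = score_fn(neighbors)
    drops = {}
    for n in neighbors:
        masked = [m for m in neighbors if m != n]
        drops[n] = base - score_fn(masked)
    return drops

# Toy stand-in for a GNN edge-classification score (hypothetical edge weights).
edge_weight = {"a": 6, "b": 3, "c": 1}
score = lambda ns: sum(edge_weight[n] for n in ns)

importance = neighbor_importance(score, ["a", "b", "c"])
print(importance)  # {'a': 6, 'b': 3, 'c': 1}
```

Keeping only the highest-importance neighbors yields the significant subgraph used for the visual explanation.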
11. An Efficient Certificateless Authentication Scheme with Enhanced Security for NDN-IoT Environments
Authors: Feihong Xu, Jianbo Wu, Qing An, Fei Zhu, Zhaoyang Han, Saru Kumari. Computers, Materials & Continua, 2026, No. 4, pp. 1788–1801.
The large-scale deployment of Internet of Things (IoT) technology across various aspects of daily life has significantly propelled the intelligent development of society. Among these developments, the integration of IoT and named data networks (NDNs) reduces network complexity and provides practical directions for content-oriented network design. However, ensuring data integrity in NDN-IoT applications remains a challenging issue. Very recently, Wang et al. (Entropy, 27(5), 471 (2025)) designed a certificateless aggregate signature (CLAS) scheme for NDN-IoT environments. Wang et al. stated that their construction was provably secure under various types of security attacks. Using theoretical analysis methods, in this work, we reveal that their CLAS design fails to meet unforgeability, a core security requirement for CLAS schemes. In particular, we demonstrate that their scheme is vulnerable to a malicious public-key replacement attack, enabling an adversary to produce authentic signatures for arbitrary fraudulent messages. Therefore, Wang et al.'s design cannot achieve its goal. To address the issue, we systematically examine the root causes behind the vulnerability and propose a security-enhanced CLAS construction for NDN-IoT environments. We prove the security of our improved design under the standard security assumption and also analyze its practical performance by comparing the computational and communication costs with several related works. The comparison results show the practicality of our design.
Keywords: IoT, certificateless signature, public-key replacement attack, data integrity, aggregation
12. AdvYOLO: An Improved Cross-Conv-Block Feature Fusion-Based YOLO Network for Transferable Adversarial Attacks on ORSIs Object Detection
Authors: Leyu Dai, Jindong Wang, Ming Zhou, Song Guo, Hengwei Zhang. Computers, Materials & Continua, 2026, No. 4, pp. 767–792.
In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research and applications of anchor-free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to anchor-free models remains challenging. Adversarial examples generated against anchor-based models often exhibit poor transferability to these new model architectures. Furthermore, the growing diversity of anchor-free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, meticulously engineered to facilitate the extraction of more comprehensive semantic features during the backpropagation process. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed. This approach leverages a dense target bounding box loss in the calculation of adversarial loss functions. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared to ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, representing an approximately 14.5% improvement in efficiency under equivalent conditions.
Keywords: remote sensing, object detection, transferable adversarial attack, feature fusion, cross-conv-block
13. Explainable Hybrid AI Model for DDoS Detection in SDN-Enabled Internet of Vehicles
Authors: Oumaima Saidani, Nazia Azim, Ateeq Ur Rehman, Akbayan Bekarystankyzy, Hala Abdel Hameed Mostafa, Mohamed R. Abonazel, Ehab Ebrahim Mohamed Ebrahim, Sarah Abu Ghazalah. Computers, Materials & Continua, 2026, No. 5, pp. 499–526.
The convergence of Software Defined Networking (SDN) and the Internet of Vehicles (IoV) enables a flexible, programmable, and globally visible network control architecture across Road Side Units (RSUs), cloud servers, and automobiles. While this integration enhances scalability and safety, it also raises sophisticated cyberthreats, particularly Distributed Denial of Service (DDoS) attacks. Traditional rule-based anomaly detection methods often struggle to detect modern low-and-slow DDoS patterns, leading to higher false positives. To this end, this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV (SDN-IoV). The hybrid framework utilizes a Residual Network (ResNet) to capture spatial correlations and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both forward and backward temporal dependencies in high-dimensional input patterns. To ensure transparency and trustworthiness, the model integrates an Explainable AI (XAI) technique, namely SHapley Additive exPlanations (SHAP). SHAP highlights the contribution of each feature during the decision-making process, helping security analysts understand the rationale behind the attack classification decision. The SDN-IoV environment is created in Mininet-WiFi and SUMO, and the hybrid model is trained on the CICDDoS2019 security dataset. The simulation results reveal the efficacy of the proposed model in terms of standard performance metrics compared to similar baseline methods.
Keywords: explainable AI, software defined networking, Internet of Vehicles, DDoS attack, ResNet, BiLSTM
14. AFI: Blackbox Backdoor Detection Method Based on Adaptive Feature Injection
Authors: Simin Tang, Zhiyong Zhang, Junyan Pan, Gaoyuan Quan, Weiguo Wang, Junchang Jing. Computers, Materials & Continua, 2026, No. 4, pp. 1890–1908.
At inference time, deep neural networks are susceptible to backdoor attacks, which can produce attacker-controlled outputs when inputs contain carefully crafted triggers. Existing defense methods often focus on specific attack types or incur high costs, such as data cleaning or model fine-tuning. In contrast, we argue that it is possible to achieve effective and generalizable defense without removing triggers or incurring high model-cleaning costs. From the attacker's perspective and based on the characteristics of vulnerable neuron activation anomalies, we propose an Adaptive Feature Injection (AFI) method for black-box backdoor detection. AFI employs a pre-trained image encoder to extract multi-level deep features and constructs a dynamic weight fusion mechanism for precise identification and interception of poisoned samples. Specifically, we select the control samples with the largest feature differences from the clean dataset via feature-space analysis, and generate blended sample pairs with the test sample using dynamic linear interpolation. The detection statistic is computed by measuring the divergence G(x) in model output responses. We systematically evaluate the effectiveness of AFI against representative backdoor attacks, including BadNets, Blend, WaNet, and IAB, on three benchmark datasets: MNIST, CIFAR-10, and ImageNet. Experimental results show that AFI can effectively detect poisoned samples, achieving average detection rates of 95.20%, 94.15%, and 86.49% on these datasets, respectively. Compared with existing methods, AFI demonstrates strong cross-domain generalization ability and robustness to unknown attacks.
Keywords: deep learning, backdoor attacks, universal detection, feature fusion, backward reasoning
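The blending-and-divergence idea in this abstract can be sketched as follows: interpolate the test sample with a control sample at several weights and average how far the model's output moves from its response on the unblended input. The scalar toy "model" and all values are illustrative assumptions; AFI's actual statistic G(x) is defined over feature-space responses.

```python
def blend(x, control, alpha):
    """Linear interpolation between a test sample and a control sample."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(x, control)]

def divergence_statistic(model, x, control, alphas=(0.25, 0.5, 0.75)):
    """Average output shift of `model` across blended copies of `x` (sketch)."""
    base = model(x)
    return sum(abs(model(blend(x, control, a)) - base) for a in alphas) / len(alphas)

toy_model = lambda v: sum(v)  # stand-in for a real classifier's output score
stat = divergence_statistic(toy_model, [1.0, 1.0], [3.0, 3.0])
print(stat)  # 2.0
```

A poisoned input whose trigger is diluted by blending tends to produce a larger output shift than a clean input, so thresholding the statistic flags suspicious samples.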
15. Recent Advances in Deep-Learning Side-Channel Attacks on AES Implementations
Authors: Junnian Wang, Xiaoxia Wang, Zexin Luo, Qixiang Ouyang, Chao Zhou, Huanyu Wang. Computers, Materials & Continua, 2026, No. 4, pp. 95–133.
Internet of Things (IoT) devices are bringing about a revolutionary change in our society by enabling connectivity regardless of time and location. However, the extensive deployment of these devices also makes them attractive targets for the malicious actions of adversaries. Within the spectrum of existing threats, Side-Channel Attacks (SCAs) have established themselves as an effective way to compromise cryptographic implementations. These attacks exploit unintended physical leakage that occurs during the cryptographic execution of devices, bypassing the theoretical strength of the cryptographic design. In recent times, the advancement of deep learning has provided SCAs with a powerful ally. Well-trained deep-learning models demonstrate an exceptional capacity to identify correlations between side-channel measurements and sensitive data, thereby significantly enhancing such attacks. To further understand the security threats posed by deep-learning SCAs and to aid in formulating robust countermeasures in the future, this paper undertakes an exhaustive investigation of leading-edge SCAs targeting Advanced Encryption Standard (AES) implementations. The study specifically focuses on attacks that exploit power consumption and electromagnetic (EM) emissions as primary leakage sources, systematically evaluating the extent to which diverse deep learning techniques enhance SCAs across multiple critical dimensions. These dimensions include: (i) the characteristics of publicly available datasets derived from various hardware and software platforms; (ii) the formalization of leakage models tailored to different attack scenarios; (iii) the architectural suitability and performance of state-of-the-art deep learning models. Furthermore, the survey provides a systematic synthesis of current research findings, identifies significant unresolved issues in the existing literature, and suggests promising directions for future work, including cross-device attack transferability and the impact of quantum-classical hybrid computing on side-channel security.
Keywords: side-channel attacks, deep learning, Advanced Encryption Standard, power analysis, EM analysis
16. Coding-based distributed filtering for cyber–physical systems under denial of service attacks
Authors: Shuang Feng, Damián Marelli, Chen Wang, Minyue Fu, Tianju Sui. Journal of Automation and Intelligence, 2026, No. 1, pp. 13–23.
This article investigates the distributed recursive filtering problem for discrete-time stochastic cyber–physical systems. A particular feature of our work is that we consider systems in which the state is constrained by saturation. Measurements are transmitted to nodes of a sensor network over unreliable wireless channels. We propose a linear coding mechanism, together with a distributed method for obtaining a state estimate at each node. These designs aim to minimize the state estimation error covariance. In addition, we derive a bound on this covariance and tune the design parameters to minimize this bound. The resulting design depends on the packet loss probabilities of the wireless channels. This permits applying the proposed scheme to systems in which communications suffer from denial-of-service attacks, as such attacks typically affect those probabilities. Finally, we present a numerical example illustrating this application.
Keywords: cyber–physical system, sensor network, denial-of-service attack, linear coding, state saturation
17. Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim. Computers, Materials & Continua, 2026, No. 1, pp. 247–274.
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three deep learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (Encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
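The class-imbalance step described in the abstract above can be illustrated with the simplest of the sampling family, random oversampling: minority-class rows are duplicated until every class matches the majority count. This is a minimal sketch, not the paper's pipeline (which compares eight techniques on real flow metadata); the function name, the toy metadata rows, and the two-feature layout are illustrative assumptions.

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows until every class reaches the
    majority-class count. Illustrative only; the surveyed study compares
    eight sampling techniques, of which this is the most basic."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            j = rng.choice(idx)          # resample an existing minority row
            X_out.append(X[j])
            y_out.append(label)
    return X_out, y_out

# Toy encrypted-flow metadata rows: [duration, packet_count]; label 1 = attack
X = [[0.2, 10], [0.3, 12], [0.1, 8], [5.0, 900]]
y = [0, 0, 0, 1]
X_bal, y_bal = random_oversample(X, y)
print(Counter(y_bal))  # both classes now have 3 samples
```

Techniques such as SMOTE differ only in synthesizing interpolated rows instead of duplicating existing ones; the balancing loop has the same shape.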
Cybersecurity Opportunities and Risks of Artificial Intelligence in Industrial Control Systems:A Survey
18
Authors: Ka-Kyung Kim, Joon-Seok Kim, Dong-Hyuk Shin, Ieck-Chae Euom. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 186-233 (48 pages)
As attack techniques evolve and data volumes increase, the integration of artificial intelligence-based security solutions into industrial control systems has become increasingly essential. Artificial intelligence holds significant potential to improve the operational efficiency and cybersecurity of these systems. However, its dependence on cyber-based infrastructures expands the attack surface and introduces the risk that adversarial manipulations of artificial intelligence models may cause physical harm. To address these concerns, this study presents a comprehensive review of artificial intelligence-driven threat detection methods and adversarial attacks targeting artificial intelligence within industrial control environments, examining both their benefits and associated risks. A systematic literature review was conducted across major scientific databases, including IEEE, Elsevier, Springer Nature, ACM, MDPI, and Wiley, covering peer-reviewed journal and conference papers published between 2017 and 2026. Studies were selected based on predefined inclusion and exclusion criteria following a structured screening process. Based on an analysis of 101 selected studies, this survey categorizes artificial intelligence-based threat detection approaches across the physical, control, and application layers of industrial control systems and examines poisoning, evasion, and extraction attacks targeting industrial artificial intelligence. The findings identify key research trends, highlight unresolved security challenges, and discuss implications for the secure deployment of artificial intelligence-enabled cybersecurity solutions in industrial control systems.
Keywords: industrial control system; industrial Internet of Things; cyber-physical systems; artificial intelligence; machine learning; adversarial attacks; cybersecurity; cyber threat; survey
X-MalNet: A CNN-Based Malware Detection Model with Visual and Structural Interpretability
19
Authors: Kirubavathi Ganapathiyappan, Heba G. Mohamed, Abhishek Yadav, Guru Akshya Chinnaswamy, Ateeq Ur Rehman, Habib Hamam. Computers, Materials & Continua, 2026, Issue 2, pp. 1506-1523 (18 pages)
The escalating complexity of modern malware continues to undermine the effectiveness of traditional signature-based detection techniques, which are often unable to adapt to rapidly evolving attack patterns. To address these challenges, this study proposes X-MalNet, a lightweight convolutional neural network (CNN) framework designed for static malware classification through image-based representations of binary executables. By converting malware binaries into grayscale images, the model extracts distinctive structural and texture-level features that signify malicious intent, thereby eliminating the dependence on manual feature engineering or dynamic behavioral analysis. Built upon a modified AlexNet architecture, X-MalNet employs transfer learning to enhance generalization and reduce computational cost, enabling efficient training and deployment on limited hardware resources. To promote interpretability and transparency, the framework integrates Gradient-weighted Class Activation Mapping (Grad-CAM) and Deep SHapley Additive exPlanations (DeepSHAP), offering spatial and pixel-level visualizations that reveal how specific image regions influence classification outcomes. These explainability components support security analysts in validating the model's reasoning, strengthening confidence in AI-assisted malware detection. Comprehensive experiments on the Malimg and MaleVis benchmark datasets confirm the superior performance of X-MalNet, achieving classification accuracies of 99.15% and 98.72%, respectively. Further robustness evaluations using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks demonstrate the model's resilience against perturbed inputs. In conclusion, X-MalNet emerges as a scalable, interpretable, and robust malware detection framework that effectively balances accuracy, efficiency, and explainability. Its lightweight design and adversarial stability position it as a promising solution for real-world cybersecurity deployments, advancing the development of trustworthy, automated, and transparent malware classification systems.
Keywords: malware detection; CNNs; AlexNet; image classification; transfer learning techniques; cybersecurity measures; adversarial attack strategies
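The binary-to-grayscale conversion that the X-MalNet abstract relies on can be sketched as follows: each byte of the executable becomes one pixel intensity (0-255), and the byte stream is cut into fixed-width rows. This is a generic Malimg-style sketch under stated assumptions, not the paper's exact preprocessing; the fixed width of 16 and the zero-padding of the last row are illustrative choices (published pipelines usually pick the width from the file size).

```python
def bytes_to_grayscale(data: bytes, width: int = 16):
    """Map a raw binary to a 2-D grayscale image: one byte per pixel,
    rows are fixed-width slices of the file. Width 16 is an assumption
    for illustration; real pipelines scale width with file size."""
    padded = data + b"\x00" * (-len(data) % width)  # zero-pad the last row
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

# A 40-byte stand-in for an executable yields 3 rows of 16 pixels.
img = bytes_to_grayscale(bytes(range(40)), width=16)
print(len(img), len(img[0]))  # prints: 3 16
```

The resulting 2-D array is what a CNN such as the modified AlexNet consumes, after resizing to the network's fixed input resolution.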
A Novel Unsupervised Structural Attack and Defense for Graph Classification
20
Authors: Yadong Wang, Zhiwei Zhang, Pengpeng Qiao, Ye Yuan, Guoren Wang. Computers, Materials & Continua, 2026, Issue 1, pp. 1761-1782 (22 pages)
Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification that operates without label information, enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications to the graph structure. The proposed strategy identifies and flips the edges with the top-K highest scores, as determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism specifically tailored to mitigate the structural perturbations introduced by our attack strategy; it enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets (NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1) to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across the datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves accuracy recoveries of up to 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
Keywords: graph classification; graph neural networks; adversarial attack
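The top-K edge-flipping step at the core of the attack described above can be sketched in isolation: given precomputed impact scores for candidate node pairs, flip the K highest-scoring pairs, deleting the edge if it exists and inserting it otherwise. This is a minimal sketch, not the paper's implementation; the paper's scores come from its implicit estimator over contrastive embeddings, whereas here `scores` is an assumed precomputed dictionary and the toy graph is invented for illustration.

```python
def flip_top_k_edges(edges, scores, k):
    """Greedy structural perturbation: flip the k candidate node pairs
    with the highest scores. In the surveyed attack the scores come from
    an implicit estimator; here they are assumed to be precomputed."""
    edge_set = {tuple(sorted(e)) for e in edges}
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    for u, v in ranked:
        e = tuple(sorted((u, v)))
        if e in edge_set:
            edge_set.remove(e)   # delete an existing edge
        else:
            edge_set.add(e)      # insert a new adversarial edge
    return edge_set

# Toy path graph 0-1-2-3 with hypothetical estimator scores per pair.
edges = [(0, 1), (1, 2), (2, 3)]
scores = {(0, 1): 0.9, (0, 3): 0.8, (1, 2): 0.1}
print(sorted(flip_top_k_edges(edges, scores, k=2)))
# → [(0, 3), (1, 2), (2, 3)]: edge (0, 1) removed, edge (0, 3) added
```

Note that `k` plays the role of the attack budget (the paper reports results at a budget of 3), so the perturbation is bounded by construction.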