In this paper, we study the decentralized federated learning problem, which involves the collaborative training of a global model among multiple devices while ensuring data privacy. In classical federated learning, the communication channel between the devices poses a potential risk of compromising private information. To reduce the risk of adversary eavesdropping on the communication channel, we propose the TRADE (transmit difference weight) concept. This concept replaces the weight parameters transmitted by the decentralized federated learning algorithm with differential weight parameters, enhancing the protection of private data against eavesdropping. By integrating the TRADE concept with the primal-dual stochastic gradient descent (SGD) algorithm, we then propose a decentralized TRADE primal-dual SGD algorithm. We demonstrate that our proposed algorithm's convergence properties are the same as those of the primal-dual SGD algorithm while providing enhanced privacy protection. We validate the algorithm's performance on a fault diagnosis task using the Case Western Reserve University dataset and on image classification tasks using the CIFAR-10 and CIFAR-100 datasets, obtaining model accuracy comparable to centralized federated learning. Additionally, the experiments confirm the algorithm's privacy protection capability.
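As a rough illustration of the TRADE idea, a device can transmit only the difference between successive weight vectors; a peer that tracks the running state reconstructs the current weights, while an eavesdropper who joins mid-stream observes only increments. A minimal NumPy sketch follows; the function names and the single-peer setting are our own illustration, not the paper's algorithm:

```python
import numpy as np

def trade_message(w_new, w_prev):
    """Build the transmitted message: the weight *difference*,
    not the weights themselves (the TRADE idea, sketched)."""
    return w_new - w_prev

def trade_recover(w_prev_estimate, delta):
    """A receiver that already holds the previous state recovers
    the current weights by adding the transmitted difference."""
    return w_prev_estimate + delta

rng = np.random.default_rng(0)
w_prev = rng.normal(size=4)
w_new = w_prev + 0.1 * rng.normal(size=4)  # one local SGD step

delta = trade_message(w_new, w_prev)
# A synchronized peer reconstructs the full model state exactly;
# an eavesdropper who missed earlier rounds sees only a small update.
recovered = trade_recover(w_prev, delta)
```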
Recently, the Internet of Things (IoT) has been increasingly integrated into the automotive sector, enabling the development of diverse applications such as the Internet of Vehicles (IoV) and intelligent connected vehicles. Leveraging IoV technologies, operational data from core vehicle components can be collected and analyzed to construct fault diagnosis models, thereby enhancing vehicle safety. However, automakers often struggle to acquire sufficient fault data to support effective model training. To address this challenge, a robust and efficient federated learning method (REFL) is constructed for machinery fault diagnosis in collaborative IoV, which can organize multiple companies to collaboratively develop a comprehensive fault diagnosis model while keeping their data local. In REFL, a gradient-based adversary algorithm is introduced to the fault diagnosis field for the first time to enhance the robustness of the deep learning model. Moreover, an adaptive gradient processing step is designed to improve the model training speed and ensure model accuracy under unbalanced-data scenarios. The proposed REFL is evaluated on a non-independent and identically distributed (non-IID) real-world machinery fault dataset. Experimental results demonstrate that REFL achieves better performance than traditional learning methods and is promising for real industrial fault diagnosis.
Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incur high computational costs and offer limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since the token IDs of instructions are discrete and non-differentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token. The most critical tokens are identified by calculating the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% in state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
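The token-ranking step described above can be sketched numerically: score each token by the L2 norm of its embedding gradient, then slide a fixed-width window to find the highest-scoring contiguous segment. The gradient array, window width, and function names below are illustrative stand-ins, not AIMA's actual interface:

```python
import numpy as np

# `grads` stands in for d(similarity score)/d(token embedding):
# 12 tokens, embedding dimension 8.
rng = np.random.default_rng(1)
grads = rng.normal(size=(12, 8))

# One importance score per token: L2 norm of its embedding gradient.
token_importance = np.linalg.norm(grads, axis=1)

def best_window(scores, width):
    """Sliding window: the contiguous segment of `width` tokens
    with the largest total importance."""
    sums = np.convolve(scores, np.ones(width), mode="valid")
    start = int(np.argmax(sums))
    return start, start + width

start, end = best_window(token_importance, width=4)
```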
Most existing secret sharing schemes are constructed to realize a general access structure, which is defined in terms of authorized groups of participants and cannot be applied directly to the design of intrusion-tolerant systems, which often concern corruptible groups of participants rather than authorized ones. Instead, the generalized adversary structure, which specifies the corruptible subsets of participants, can be determined directly by exploiting the system setting and the attributes of all participants. In this paper, an efficient secret sharing scheme realizing a generalized adversary structure is proposed, and it is proved that the scheme satisfies both properties of a secret sharing scheme, i.e., the reconstruction property and the perfect property. The main features of this scheme are that it performs modular additions and subtractions only, and that each share appears in multiple share sets and is thus replicated. The former is an advantage in terms of computational complexity, and the latter is an advantage when recovery of some corrupted participants is necessary. Our scheme can therefore achieve lower computation cost and higher availability. Some reduction of the scheme is also performed, based on an equivalence relation defined over the adversary structure. Analysis shows that the reduced scheme still preserves the properties of the original one.
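A toy version of the additions-and-subtractions-only mechanism can be written in a few lines: the secret is split into additive parts modulo M, and each part is handed to every participant outside one corruptible subset, so the part is replicated and any honest complement of an adversary set can still supply it during recovery. The modulus, the example adversary structure, and the helper names are our own illustration, not the paper's construction:

```python
import random

M = 2**16  # modulus; the scheme needs only additions/subtractions mod M

def share(secret, n_parts, rng):
    """Split `secret` into n_parts additive shares modulo M."""
    parts = [rng.randrange(M) for _ in range(n_parts - 1)]
    parts.append((secret - sum(parts)) % M)
    return parts

def reconstruct(parts):
    """Reconstruction is a single modular addition over all parts."""
    return sum(parts) % M

# Hypothetical toy adversary structure: among participants {1, 2, 3},
# the corruptible subsets are {1} and {2, 3}.  One additive part is
# created per corruptible subset and replicated to every participant
# OUTSIDE it, so each honest complement can contribute it.
adversary_sets = [{1}, {2, 3}]
participants = {1, 2, 3}
rng = random.Random(0)
parts = share(1234, len(adversary_sets), rng)
holders = [participants - bad for bad in adversary_sets]
```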
The performance of deep recommendation models degrades significantly under data poisoning attacks. While adversarial training methods such as Vulnerability-Aware Training (VAT) enhance robustness by injecting perturbations into embeddings, they remain limited by coarse-grained noise and a static defense strategy, leaving models susceptible to adaptive attacks. This study proposes a novel framework, Self-Purification Data Sanitization (SPD), which integrates vulnerability-aware adversarial training with dynamic label correction. Specifically, SPD first identifies high-risk users through a fragility scoring mechanism, then applies self-purification by replacing suspicious interactions with model-predicted high-confidence labels during training. This closed-loop process continuously sanitizes the training data and breaks the protection ceiling of conventional adversarial training. Experiments demonstrate that SPD significantly improves the robustness of both Matrix Factorization (MF) and LightGCN models against various poisoning attacks. We show that SPD effectively suppresses malicious gradient propagation and maintains recommendation accuracy. Evaluations on Gowalla and Yelp2018 confirm that SPD-trained models withstand multiple attack strategies, including Random, Bandwagon, DP, and Rev attacks, while preserving performance.
Over the years, Generative Adversarial Networks (GANs) have revolutionized the medical imaging industry for applications such as image synthesis, denoising, super-resolution, data augmentation, and cross-modality translation. The objective of this review is to evaluate the advances, relevance, and limitations of GANs in medical imaging. An organized literature review was conducted following the guidelines of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). The literature considered included peer-reviewed papers published between 2020 and 2025 across databases including PubMed, IEEE Xplore, and Scopus. Studies on applications of GAN architectures in medical imaging with reported experimental outcomes, published in English in reputable journals and conferences, were considered for the review. Theses, white papers, communication letters, and non-English articles were excluded. CLAIM-based quality assessment criteria were applied to the included studies to assess their quality. The study classifies diverse GAN architectures, summarizing their clinical applications, technical performance, and implementation challenges. Key findings reveal the increasing use of GANs for enhancing diagnostic accuracy, reducing data scarcity through synthetic data generation, and supporting modality translation. However, concerns such as limited generalizability, lack of clinical validation, and regulatory constraints persist. This review provides a comprehensive study of the prevailing scenario of GANs in medical imaging and highlights crucial research gaps and future directions. Though GANs hold transformative capability for medical imaging, their integration into clinical use demands further validation, interpretability, and regulatory alignment.
The growing use of Portable Document Format (PDF) files across various sectors such as education, government, and business has inadvertently turned them into a major target for cyberattacks. Cybercriminals take advantage of the inherent flexibility and layered structure of PDFs to inject malicious content, often employing advanced obfuscation techniques to evade detection by traditional signature-based security systems. These conventional methods are no longer adequate, especially against sophisticated threats like zero-day exploits and polymorphic malware. In response to these challenges, this study introduces a machine learning-based detection framework specifically designed to combat such threats. Central to the proposed solution is a stacked ensemble learning model that combines the strengths of four high-performing classifiers: Random Forest (RF), Extreme Gradient Boosting (XGB), LightGBM (LGBM), and CatBoost (CB). These models operate in parallel as base learners, each capturing different aspects of the data. Their outputs are then refined by a Gradient Boosting Classifier (GBC), which serves as a meta-learner to enhance prediction accuracy. To ensure the model remains both efficient and effective, Principal Component Analysis (PCA) is applied to reduce feature dimensionality while preserving the critical information necessary for malware classification. The model is trained and validated using the CIC-Evasive PDFMalware2022 dataset, which includes a wide range of both malicious and benign PDF samples. The results demonstrate that the framework achieves impressive performance, with 97.10% accuracy and a 97.39% F1-score, surpassing several existing techniques. To enhance trust and interpretability, the system incorporates Local Interpretable Model-agnostic Explanations (LIME), which provides user-friendly insights into the rationale behind each prediction. This research emphasizes how the integration of ensemble learning, feature reduction, and explainable AI can lead to a practical and scalable solution for detecting complex PDF-based threats. The proposed framework lays the foundation for the next generation of intelligent, resilient cybersecurity systems that can address ever-evolving attack strategies.
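The stacking mechanics described above can be sketched independently of the specific learners: each base learner's predictions become one level-1 feature column, and a meta-learner combines those columns. The snippet below substitutes toy threshold rules for RF/XGB/LGBM and an accuracy-weighted vote for the GBC meta-learner, purely to show the data flow, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic binary labels

# Toy stand-ins for the base learners (real ones: RF, XGB, LGBM, CB).
base_learners = [
    lambda X: (X[:, 0] > 0).astype(float),
    lambda X: (X[:, 1] > 0).astype(float),
    lambda X: (X.sum(axis=1) > 0).astype(float),
]

# Level-1 feature matrix: one column per base learner's prediction.
Z = np.column_stack([f(X) for f in base_learners])

# Toy meta-learner (stand-in for the GBC): weight each base learner's
# vote by its training accuracy, then threshold the weighted average.
weights = np.array([(f(X) == y).mean() for f in base_learners])
meta_pred = (Z @ weights / weights.sum() > 0.5).astype(int)
accuracy = (meta_pred == y).mean()
```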
In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research on and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated from Anchor-Based models often exhibit poor transferability to these new model architectures. Furthermore, the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, meticulously engineered to facilitate the extraction of more comprehensive semantic features during the backpropagation process. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed. This approach leverages a dense target bounding box loss in the calculation of the adversarial loss function. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared to ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, an approximately 14.5% improvement in efficiency under equivalent conditions.
Evaluating the adversarial robustness of classification algorithms in machine learning is a crucial domain. However, current methods lack measurable and interpretable metrics. To address this issue, this paper introduces a visual evaluation index named the confidence centroid skewing quadrilateral, which is based on a classification confidence-based confusion matrix. It offers a quantitative and visual comparison of the adversarial robustness of different classification algorithms and enhances the intuitiveness and interpretability of attack impacts. We first conduct a validity test and sensitivity analysis of the method. We then prove its effectiveness through experiments on five classification algorithms, namely artificial neural network (ANN), logistic regression (LR), support vector machine (SVM), convolutional neural network (CNN), and transformer, against three adversarial attacks: the fast gradient sign method (FGSM), DeepFool, and the projected gradient descent (PGD) attack.
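Of the three attacks mentioned, FGSM is the simplest to state: perturb the input by eps times the sign of the loss gradient with respect to the input. A minimal sketch for a logistic-regression classifier, where that gradient has the closed form (sigmoid(w.x + b) - y) * w; the weights and example point below are arbitrary illustrations:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One FGSM step against a logistic-regression classifier:
    x_adv = x + eps * sign(dL/dx), with the cross-entropy gradient
    dL/dx = (sigmoid(w.x + b) - y) * w computed analytically."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])          # clean score: w @ x + b = 1.5 > 0
x_adv = fgsm(x, w, b, y=1.0, eps=1.0)
# The perturbation pushes the score of the true class down,
# here flipping the sign of the decision value.
```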
Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may contain patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines: (i) adversarial learning, and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task. The suggested channel pruning, in turn, enables the model to adaptively adjust and reactivate pruned channels while conducting adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead, making it well suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The results showcased its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods. These findings demonstrate the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks. In fact, the classification accuracy attained by the attackers decreased drastically, by 70%.
High-resolution remote sensing imagery is essential for critical applications such as precision agriculture, urban management planning, and military reconnaissance. Although significant progress has been made in single-image super-resolution (SISR) using generative adversarial networks (GANs), existing approaches still face challenges in recovering high-frequency details, effectively utilizing features, maintaining structural integrity, and ensuring training stability, particularly when dealing with the complex textures characteristic of remote sensing imagery. To address these limitations, this paper proposes the Improved Residual Module and Attention Mechanism Network (IRMANet), a novel architecture specifically designed for remote sensing image reconstruction. IRMANet builds upon the Super-Resolution Generative Adversarial Network (SRGAN) framework and introduces several key innovations. First, the Enhanced Residual Unit (ERU) enhances feature reuse and stabilizes training through deep residual connections. Second, the Self-Attention Residual Block (SARB) incorporates a self-attention mechanism into the Improved Residual Module (IRM) to effectively model long-range dependencies and automatically emphasize salient features. Additionally, the IRM adopts a multi-scale feature fusion strategy to facilitate synergistic interactions between local detail and global semantic information. The effectiveness of each component is validated through ablation studies, while comprehensive comparative experiments on standard remote sensing datasets demonstrate that IRMANet significantly outperforms both the baseline and state-of-the-art methods in terms of perceptual quality and quantitative metrics. Specifically, compared to the baseline model, at a magnification factor of 2, IRMANet achieves an improvement of 0.24 dB in peak signal-to-noise ratio (PSNR) and 0.54 in structural similarity index (SSIM); at a magnification factor of 4, it achieves gains of 0.22 dB in PSNR and 0.51 in SSIM. These results confirm that the proposed method effectively enhances detail representation and structural reconstruction accuracy in complex remote sensing scenarios, offering robust technical support for high-precision detection and identification of both military and civilian aircraft.
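The PSNR figures quoted above follow the standard definition, 10 * log10(peak^2 / MSE), which is easy to compute directly (the toy images below are our own example):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a reconstruction, per the standard definition."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

ref = np.full((4, 4), 128.0)
noisy = ref + 2.0      # every pixel off by 2, so MSE = 4
value = psnr(ref, noisy)  # 10 * log10(255^2 / 4), about 42.11 dB
```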
The escalating complexity of modern malware continues to undermine the effectiveness of traditional signature-based detection techniques, which are often unable to adapt to rapidly evolving attack patterns. To address these challenges, this study proposes X-MalNet, a lightweight Convolutional Neural Network (CNN) framework designed for static malware classification through image-based representations of binary executables. By converting malware binaries into grayscale images, the model extracts distinctive structural and texture-level features that signify malicious intent, thereby eliminating the dependence on manual feature engineering or dynamic behavioral analysis. Built upon a modified AlexNet architecture, X-MalNet employs transfer learning to enhance generalization and reduce computational cost, enabling efficient training and deployment on limited hardware resources. To promote interpretability and transparency, the framework integrates Gradient-weighted Class Activation Mapping (Grad-CAM) and Deep SHapley Additive exPlanations (DeepSHAP), offering spatial and pixel-level visualizations that reveal how specific image regions influence classification outcomes. These explainability components support security analysts in validating the model's reasoning, strengthening confidence in AI-assisted malware detection. Comprehensive experiments on the Malimg and Malevis benchmark datasets confirm the superior performance of X-MalNet, achieving classification accuracies of 99.15% and 98.72%, respectively. Further robustness evaluations using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks demonstrate the model's resilience against perturbed inputs. In conclusion, X-MalNet emerges as a scalable, interpretable, and robust malware detection framework that effectively balances accuracy, efficiency, and explainability. Its lightweight design and adversarial stability position it as a promising solution for real-world cybersecurity deployments, advancing the development of trustworthy, automated, and transparent malware classification systems.
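The binary-to-grayscale conversion underlying this family of models is commonly implemented by treating each byte of the executable as one pixel intensity and reshaping the byte stream into rows of fixed width; the width choice and function name below are sketch assumptions, not X-MalNet's exact preprocessing:

```python
import numpy as np

def binary_to_grayscale(data: bytes, width: int = 8) -> np.ndarray:
    """Map a raw binary to a 2-D grayscale image: each byte becomes
    one pixel in [0, 255]; trailing bytes that do not fill a full row
    are dropped (a common recipe for Malimg-style pipelines)."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = len(buf) // width
    return buf[: rows * width].reshape(rows, width)

img = binary_to_grayscale(bytes(range(32)), width=8)  # a 4 x 8 "image"
```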
Common strong noise interferences such as metal splashes, smoke, and arc light during welding can seriously pollute laser stripe images, causing the tracking model to drift and leading to tracking failure. There are already many mature methods for identifying and extracting the feature points of linear laser stripes, but when the laser stripe forms a curved shape on the surface of the workpiece, these linear methods are no longer applicable. To eliminate interference sources, enhance the robustness of the weld tracking model, and effectively extract the feature points of curved laser stripes under strong noise conditions, this paper proposes a Conditional Generative Adversarial Network (CGAN)-based anti-interference recognition method for welding images. The generator adopts an improved U-Net++ structure, adds a Multi-Scale Channel Attention Module (MS-CAM), introduces deep supervision, and applies a Multi-Output Fusion Strategy (MOFS) to the output to enhance the image inpainting effect; the discriminator uses PatchGAN. The center of the laser stripe is obtained using the grayscale center-of-mass method and then combined with polynomial fitting to extract the feature points of the weld seam. The experimental results show that the PSNR of the inpainted image is 26.24 dB, the SSIM is 0.98, and the LPIPS is 0.032. Curves are fitted to the centerline of the inpainted image and the centerline of the laser stripe in the noise-free image; the error of the centerline feature points is no more than 5%, confirming the superiority and feasibility of the method.
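The grayscale center-of-mass step combined with polynomial fitting can be sketched as follows: for each image column, the stripe center is the intensity-weighted mean row index, and a polynomial is then fitted to those centers. The synthetic parabolic stripe below is our own test pattern, not the paper's data:

```python
import numpy as np

def stripe_centerline(img):
    """Grayscale center of mass per column: the stripe center row is
    the intensity-weighted mean of the row indices."""
    rows = np.arange(img.shape[0])[:, None]
    weight = img.sum(axis=0)
    return (rows * img).sum(axis=0) / np.maximum(weight, 1e-9)

# Synthetic curved stripe: one bright pixel per column on a parabola.
h, w = 40, 30
cols = np.arange(w)
true_center = 0.02 * (cols - w / 2) ** 2 + 10
img = np.zeros((h, w))
img[np.round(true_center).astype(int), cols] = 255.0

centers = stripe_centerline(img)
coeffs = np.polyfit(cols, centers, deg=2)  # polynomial fit of centerline
```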
To address the issues of insufficient and imbalanced data samples in proton exchange membrane fuel cell (PEMFC) performance degradation prediction, this study proposes a data augmentation-based model to predict PEMFC performance degradation. First, an improved generative adversarial network (IGAN) with an adaptive gradient penalty coefficient is proposed to address the problems of excessively fast gradient descent and insufficient diversity of generated samples. Then, the IGAN is used to generate data with a distribution analogous to real data, thereby mitigating the insufficiency and imbalance of the original PEMFC samples and providing the prediction model with training data rich in feature information. Finally, a convolutional neural network-bidirectional long short-term memory (CNN-BiLSTM) model is adopted to predict PEMFC performance degradation. Experimental results show that the data generated by the proposed IGAN exhibits higher quality than that generated by the original GAN and can fully characterize and enrich the original data's features. Using the augmented data, the prediction accuracy of the CNN-BiLSTM model is significantly improved, rendering it applicable to the task of predicting PEMFC performance degradation.
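The adaptive gradient penalty can be illustrated with a WGAN-GP-style term, lam * E[(||grad|| - 1)^2], whose coefficient scales with how far the discriminator's gradient norms drift from 1. The adaptation rule below is an illustrative guess at the flavor of such a mechanism, not the IGAN's actual formula:

```python
import numpy as np

def gradient_penalty(grad_norms, lam):
    """WGAN-GP-style penalty: lam * mean((||grad|| - 1)^2)."""
    return lam * np.mean((grad_norms - 1.0) ** 2)

def adaptive_lambda(grad_norms, base=10.0):
    """Hypothetical adaptation rule: grow the penalty coefficient
    as gradient norms drift further from the target norm of 1."""
    return base * (1.0 + np.mean(np.abs(grad_norms - 1.0)))

norms = np.array([0.8, 1.0, 1.4])   # sampled discriminator gradient norms
lam = adaptive_lambda(norms)        # 10 * (1 + 0.2) = 12.0
penalty = gradient_penalty(norms, lam)  # 12 * mean([0.04, 0, 0.16]) = 0.8
```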
As attack techniques evolve and data volumes increase, the integration of artificial intelligence-based security solutions into industrial control systems has become increasingly essential. Artificial intelligence holds significant potential to improve the operational efficiency and cybersecurity of these systems. However, its dependence on cyber-based infrastructures expands the attack surface and introduces the risk that adversarial manipulations of artificial intelligence models may cause physical harm. To address these concerns, this study presents a comprehensive review of artificial intelligence-driven threat detection methods and adversarial attacks targeting artificial intelligence within industrial control environments, examining both their benefits and associated risks. A systematic literature review was conducted across major scientific databases, including IEEE, Elsevier, Springer Nature, ACM, MDPI, and Wiley, covering peer-reviewed journal and conference papers published between 2017 and 2026. Studies were selected based on predefined inclusion and exclusion criteria following a structured screening process. Based on an analysis of 101 selected studies, this survey categorizes artificial intelligence-based threat detection approaches across the physical, control, and application layers of industrial control systems and examines poisoning, evasion, and extraction attacks targeting industrial artificial intelligence. The findings identify key research trends, highlight unresolved security challenges, and discuss implications for the secure deployment of artificial intelligence-enabled cybersecurity solutions in industrial control systems.
Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to competing examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity. The accuracy and timeliness of the IDS are crucial to the network's availability and reliability. This paper analyzes ARL applications in NIDS, reviewing State-of-The-Art (SoTA) methodologies, issues, and future research prospects. This includes Reinforcement Machine Learning (RML)-based NIDS, which enables an agent to interact with the environment to achieve a goal, and Deep Reinforcement Learning (DRL)-based NIDS, which can solve complex decision-making problems. Additionally, this survey addresses adversarial circumstances in cybersecurity and their importance for ARL and NIDS. Architectural design, RL algorithms, feature representation, and training methodologies are examined across the ARL-NIDS studies. This comprehensive survey evaluates ARL for intelligent NIDS research, benefiting cybersecurity researchers, practitioners, and policymakers, and promotes cybersecurity defense research and innovation.
Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying the original data, significantly enhancing stealthiness. Comparative experiments with the Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is also validated.
In Human-Robot Interaction (HRI), generating robot trajectories that accurately reflect user intentions while ensuring physical realism remains challenging, especially in unstructured environments. In this study, we develop a multimodal framework that integrates symbolic task reasoning with continuous trajectory generation. The approach employs transformer models and adversarial training to map high-level intent to robotic motion. Information from multiple data sources, such as voice traits, hand and body keypoints, visual observations, and recorded paths, is integrated simultaneously. These signals are mapped into a shared representation that supports interpretable reasoning while enabling smooth and realistic motion generation. Based on this design, two learning strategies are investigated. In the first, grammar-constrained Linear Temporal Logic (LTL) expressions are created from multimodal human inputs; these expressions are subsequently decoded into robot trajectories. The second method generates trajectories directly from symbolic intent and linguistic data, bypassing an intermediate logical representation. Transformer encoders combine multiple types of information, and autoregressive transformer decoders generate motion sequences. Adding smoothness and speed limits during training increases the likelihood of physical feasibility. To improve the realism and stability of the generated trajectories during training, an adversarial discriminator is also included to guide them toward the distribution of actual robot motion. Tests on the NATSGLD dataset indicate that the complete system exhibits stable training behaviour and performance. In normalised coordinates, the logic-based pipeline has an Average Displacement Error (ADE) of 0.040 and a Final Displacement Error (FDE) of 0.036. The adversarial generator performs substantially better, reducing ADE to 0.021 and FDE to 0.018. Visual examination confirms that the generated trajectories closely align with observed motion patterns while preserving smooth temporal dynamics.
User identity linkage (UIL) across online social networks seeks to match accounts belonging to the same real-world individual. This cross-platform mapping enables accurate user modeling but also raises serious privacy risks. Over the past decade, the research community has developed a wide range of UIL methods, from structural embeddings to multimodal fusion architectures. However, corresponding adversarial and defensive approaches remain fragmented and comparatively understudied. In this survey, we provide a unified overview of both mapping and anti-mapping methods for UIL. We categorize representative mapping models by learning paradigm and data modality, and systematically compare them with emerging countermeasures including adversarial injection, structural perturbation, and identity obfuscation. To bridge these two threads, we introduce a modality-oriented taxonomy and a formal game-theoretic framing that casts cross-network mapping as a contest between mappers and anti-mappers. This framing allows us to construct a cross-modality dependency matrix, which reveals structural information as the most contested signal, identifies node injection as the most robust defensive strategy, and points to multimodal integration as a promising direction. Our survey underscores the need for balanced, privacy-preserving identity inference and provides a foundation for future research on the adversarial dynamics of social identity mapping and defense.
Funding: supported by the National Key Research and Development Program of China (2022YFB3305904), the National Natural Science Foundation of China (62133003, 61991403, 61991400), the Open Project of the State Key Laboratory of Synthetical Automation for Process Industries (SAPI-2024-KFKT-05, SAPI-2024-KFKT-08), the China Academy of Engineering Institute of Land Cooperation Consulting Project (2023-DFZD-60-02, N2424004), the Fundamental Research Funds for the Central Universities, the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100), and the Key Research and Development Program of Liaoning Province (2023JH26/10200011).
Abstract: In this paper, we study the decentralized federated learning problem, which involves the collaborative training of a global model among multiple devices while ensuring data privacy. In classical federated learning, the communication channel between the devices poses a potential risk of compromising private information. To reduce the risk of an adversary eavesdropping on the communication channel, we propose the TRADE (transmit difference weight) concept, which replaces the weight parameters transmitted by a decentralized federated learning algorithm with differential weight parameters, better protecting private data against eavesdropping. By integrating the TRADE concept with the primal-dual stochastic gradient descent (SGD) algorithm, we then propose a decentralized TRADE primal-dual SGD algorithm. We demonstrate that the proposed algorithm's convergence properties are the same as those of the primal-dual SGD algorithm while providing enhanced privacy protection. We validate the algorithm's performance on a fault diagnosis task using the Case Western Reserve University dataset and on image classification tasks using the CIFAR-10 and CIFAR-100 datasets, observing model accuracy comparable to centralized federated learning. The experiments also confirm the algorithm's privacy protection capability.
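The transmit-difference idea can be sketched in a few lines. This is an illustrative reading of the abstract (toy weight vectors, not the paper's protocol): each node sends only the change in its weights since the last round, so an eavesdropper who never observed the starting weights sees only increments, while a peer holding the previous state reconstructs the new one exactly.

```python
# Hedged sketch of the TRADE idea: transmit weight *differences*, not weights.
# Names and values below are illustrative, not taken from the paper.

def to_transmit(prev_w, new_w):
    """Difference message sent over the channel."""
    return [n - p for p, n in zip(prev_w, new_w)]

def apply_received(local_copy, delta):
    """Receiver updates its running copy from the difference."""
    return [c + d for c, d in zip(local_copy, delta)]

w0 = [0.5, -1.2, 3.0]        # shared initial state
w1 = [0.4, -1.0, 2.8]        # state after one local SGD step
msg = to_transmit(w0, w1)     # only the small deltas cross the channel
rebuilt = apply_received(w0, msg)
print(msg, rebuilt)           # rebuilt matches w1 for a peer holding w0
```

A peer that missed earlier rounds cannot invert `msg` alone back to `w1`, which is the eavesdropping-resistance argument the abstract makes.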
Funding: supported in part by the National Key R&D Projects (2024YFB4207203), the National Natural Science Foundation of China (52401376), the Zhejiang Provincial Natural Science Foundation of China under Grant No. LTGG24F030004, the Hangzhou Key Scientific Research Plan Project (2024SZD1A24), the “Pioneer” and “Leading Goose” R&D Program of Zhejiang (2024C03254, 2023C03154), and the Jiangxi Provincial Gan-Po Elite Support Program (Major Academic and Technical Leaders Cultivation Project, 20243BCE51180).
Abstract: Recently, the Internet of Things (IoT) has been increasingly integrated into the automotive sector, enabling the development of diverse applications such as the Internet of Vehicles (IoV) and intelligent connected vehicles. Leveraging IoV technologies, operational data from core vehicle components can be collected and analyzed to construct fault diagnosis models, thereby enhancing vehicle safety. However, automakers often struggle to acquire sufficient fault data to support effective model training. To address this challenge, a robust and efficient federated learning method (REFL) is constructed for machinery fault diagnosis in collaborative IoV, which can organize multiple companies to collaboratively develop a comprehensive fault diagnosis model while keeping their data local. In REFL, a gradient-based adversary algorithm is introduced to the fault diagnosis field for the first time to enhance the robustness of the deep learning model. Moreover, an adaptive gradient processing procedure is designed to improve model training speed and ensure model accuracy under imbalanced data scenarios. The proposed REFL is evaluated on a non-independent and identically distributed (non-IID) real-world machinery fault dataset. Experimental results demonstrate that REFL achieves better performance than traditional learning methods and is promising for real industrial fault diagnosis.
Funding: supported by the Key Laboratory of Cyberspace Security, Ministry of Education, China.
Abstract: Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incurs high computational costs and offers limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since the token IDs of instructions are discrete and non-differentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token. The most critical tokens are identified by calculating the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding-window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing “deletion” effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% in state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
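The selection step described above (L2 norms of embedding gradients aggregated per instruction, then a sliding window picking the most influential contiguous segment) can be sketched as follows. The gradient vectors here are made-up stand-ins for real embedding gradients, and the window length is an arbitrary example value:

```python
# Hedged sketch: score instructions by the L2 norm of their (hypothetical)
# embedding gradients, then find the contiguous k-instruction window with the
# largest aggregated importance via a running-sum sliding window.
import math

def l2(vec):
    return math.sqrt(sum(v * v for v in vec))

def best_segment(importance, k):
    """Return (start, total) of the highest-importance window of length k."""
    total = sum(importance[:k])
    best_start, best_total = 0, total
    for i in range(k, len(importance)):
        total += importance[i] - importance[i - k]   # slide the window by one
        if total > best_total:
            best_start, best_total = i - k + 1, total
    return best_start, best_total

grads = [[0.1, 0.2], [0.05, 0.0], [1.0, 0.8], [0.9, 1.1], [0.2, 0.1]]
imp = [l2(g) for g in grads]
start, score = best_segment(imp, 2)
print(start)  # 2 -> instructions 2..3 form the most influential segment
```

The running-sum update makes the scan linear in the number of instructions, which is the "without expensive search operations" property the abstract claims.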
Abstract: Most existing secret sharing schemes are constructed to realize a general access structure, which is defined in terms of authorized groups of participants and cannot be applied directly to the design of intrusion-tolerant systems, which often concern corruptible groups of participants rather than authorized ones. In contrast, the generalized adversary structure, which specifies the corruptible subsets of participants, can be determined directly by exploiting the system setting and the attributes of all participants. In this paper, an efficient secret sharing scheme realizing a generalized adversary structure is proposed, and it is proved that the scheme satisfies both properties of a secret sharing scheme, i.e., the reconstruction property and the perfect property. The main features of this scheme are that it performs modular additions and subtractions only, and that each share appears in multiple share sets and is thus replicated. The former is an advantage in terms of computational complexity, and the latter is an advantage when recovery of some corrupted participants is necessary. Our scheme can therefore achieve lower computation cost and higher availability. Finally, some reduction of the scheme is carried out, based on an equivalence relation defined over the adversary structure. Analysis shows that the reduced scheme still preserves the properties of the original one.
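The add/subtract-only flavour of secret sharing can be illustrated with plain n-of-n additive sharing modulo M. The paper's scheme extends this with replicated share sets to realize a generalized adversary structure; this sketch shows only the modular-arithmetic core, not that construction:

```python
# Hedged sketch of additive secret sharing mod M: splitting and reconstruction
# use only modular addition and subtraction. n-of-n only; the paper's
# replicated-share-set construction for adversary structures is not shown.
import random

M = 2 ** 16

def share(secret, n, rng):
    """Split `secret` into n additive shares mod M."""
    parts = [rng.randrange(M) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % M)   # subtraction only
    return parts

def reconstruct(parts):
    return sum(parts) % M                     # addition only

rng = random.Random(42)
shares = share(12345, 4, rng)
print(reconstruct(shares))  # 12345
```

Any proper subset of the shares is uniformly distributed, which is the "perfect property"; the full set sums back to the secret, which is the "reconstruction property".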
Abstract: The performance of deep recommendation models degrades significantly under data poisoning attacks. While adversarial training methods such as Vulnerability-Aware Training (VAT) enhance robustness by injecting perturbations into embeddings, they remain limited by coarse-grained noise and a static defense strategy, leaving models susceptible to adaptive attacks. This study proposes a novel framework, Self-Purification Data Sanitization (SPD), which integrates vulnerability-aware adversarial training with dynamic label correction. Specifically, SPD first identifies high-risk users through a fragility scoring mechanism, then applies self-purification by replacing suspicious interactions with model-predicted high-confidence labels during training. This closed-loop process continuously sanitizes the training data and breaks the protection ceiling of conventional adversarial training. Experiments demonstrate that SPD significantly improves the robustness of both Matrix Factorization (MF) and LightGCN models against various poisoning attacks. We show that SPD effectively suppresses malicious gradient propagation and maintains recommendation accuracy. Evaluations on Gowalla and Yelp2018 confirm that SPD-trained models withstand multiple attack strategies, including Random, Bandwagon, DP, and Rev attacks, while preserving performance.
Funding: supported by the Deanship of Research and Graduate Studies at King Khalid University through the Large Research Project under grant number RGP2/540/46.
Abstract: Over the years, Generative Adversarial Networks (GANs) have revolutionized medical imaging in applications such as image synthesis, denoising, super-resolution, data augmentation, and cross-modality translation. The objective of this review is to evaluate the advances, relevance, and limitations of GANs in medical imaging. An organized literature review was conducted following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The literature considered included peer-reviewed papers published between 2020 and 2025 across databases including PubMed, IEEE Xplore, and Scopus. Studies on applications of GAN architectures in medical imaging with reported experimental outcomes, published in English in reputable journals and conferences, were considered for the review; theses, white papers, communication letters, and non-English articles were excluded. CLAIM-based quality assessment criteria were applied to the included studies. The review classifies diverse GAN architectures, summarizing their clinical applications, technical performance, and implementation challenges. Key findings reveal the growing use of GANs for enhancing diagnostic accuracy, reducing data scarcity through synthetic data generation, and supporting modality translation. However, concerns such as limited generalizability, lack of clinical validation, and regulatory constraints persist. This review provides a comprehensive account of the current state of GANs in medical imaging and highlights crucial research gaps and future directions. Though GANs hold transformative potential for medical imaging, their integration into clinical use demands further validation, interpretability, and regulatory alignment.
Abstract: The growing use of Portable Document Format (PDF) files across sectors such as education, government, and business has inadvertently turned them into a major target for cyberattacks. Cybercriminals take advantage of the inherent flexibility and layered structure of PDFs to inject malicious content, often employing advanced obfuscation techniques to evade detection by traditional signature-based security systems. These conventional methods are no longer adequate, especially against sophisticated threats like zero-day exploits and polymorphic malware. In response to these challenges, this study introduces a machine learning-based detection framework specifically designed to combat such threats. Central to the proposed solution is a stacked ensemble learning model that combines the strengths of four high-performing classifiers: Random Forest (RF), Extreme Gradient Boosting (XGB), LightGBM (LGBM), and CatBoost (CB). These models operate in parallel as base learners, each capturing different aspects of the data. Their outputs are then refined by a Gradient Boosting Classifier (GBC), which serves as a meta-learner to enhance prediction accuracy. To keep the model both efficient and effective, Principal Component Analysis (PCA) is applied to reduce feature dimensionality while preserving the information critical for malware classification. The model is trained and validated on the CIC-Evasive PDFMalware2022 dataset, which includes a wide range of both malicious and benign PDF samples. The results demonstrate that the framework achieves strong performance, with 97.10% accuracy and a 97.39% F1-score, surpassing several existing techniques. To enhance trust and interpretability, the system incorporates Local Interpretable Model-agnostic Explanations (LIME), which provides user-friendly insights into the rationale behind each prediction. This research emphasizes how the integration of ensemble learning, feature reduction, and explainable AI can lead to a practical and scalable solution for detecting complex PDF-based threats. The proposed framework lays the foundation for the next generation of intelligent, resilient cybersecurity systems that can address ever-evolving attack strategies.
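The stacked-generalization data flow (parallel base learners whose outputs become features for a meta-level combiner) can be sketched with deliberately tiny stand-in learners. The real framework uses RF/XGB/LGBM/CatBoost bases and a gradient-boosting meta-learner; here those are replaced by one-feature threshold stumps and a simple vote so the sketch stays self-contained:

```python
# Hedged sketch of stacking with stand-in learners (not the paper's models).
import random

class Stump:
    """One-feature threshold classifier used as a stand-in base learner."""
    def __init__(self, feat):
        self.feat, self.thr = feat, 0.0
    def fit(self, X, y):
        # pick the threshold with the best training accuracy on this feature
        best = -1.0
        for t in sorted(x[self.feat] for x in X):
            acc = sum((x[self.feat] > t) == bool(lab)
                      for x, lab in zip(X, y)) / len(y)
            if acc > best:
                best, self.thr = acc, t
        return self
    def predict(self, X):
        return [1 if x[self.feat] > self.thr else 0 for x in X]

def stack_features(bases, X):
    # meta-features: one column of predictions per base learner
    return list(zip(*[b.predict(X) for b in bases]))

# toy data: label is 1 when the sum of the two features is positive
random.seed(0)
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [1 if a + b > 0 else 0 for a, b in X]
train_X, train_y, hold_X, hold_y = X[:100], y[:100], X[100:], y[100:]

bases = [Stump(0).fit(train_X, train_y), Stump(1).fit(train_X, train_y)]
meta_in = stack_features(bases, hold_X)
# stand-in meta-learner: a vote over base outputs (the paper trains a GBC here)
final = [1 if sum(row) >= 1 else 0 for row in meta_in]
acc = sum(p == t for p, t in zip(final, hold_y)) / len(hold_y)
print(round(acc, 2))
```

The point of the sketch is the wiring, not the learners: base predictions are turned into a new feature matrix (`meta_in`) that the meta-learner consumes, which is exactly the RF/XGB/LGBM/CB-into-GBC structure the abstract describes.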
Abstract: In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research on and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). In the realm of adversarial attacks, however, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated from Anchor-Based models often exhibit poor transferability to these new model architectures, and the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, engineered to facilitate the extraction of more comprehensive semantic features during backpropagation. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed, which incorporates a dense target bounding box loss into the adversarial loss function. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared to ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, an efficiency improvement of approximately 14.5% under equivalent conditions.
Abstract: Evaluating the adversarial robustness of classification algorithms is a crucial domain of machine learning, yet current methods lack measurable and interpretable metrics. To address this issue, this paper introduces a visual evaluation index named the confidence centroid skewing quadrilateral, based on a classification-confidence confusion matrix. It offers a quantitative and visual comparison of adversarial robustness among different classification algorithms and makes the impact of attacks more intuitive and interpretable. We first conduct a validity test and a sensitivity analysis of the method, and then demonstrate its effectiveness through experiments on five classification algorithms, artificial neural network (ANN), logistic regression (LR), support vector machine (SVM), convolutional neural network (CNN), and transformer, against three adversarial attacks: the fast gradient sign method (FGSM), DeepFool, and the projected gradient descent (PGD) attack.
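Of the three attacks named above, FGSM has the simplest update rule, x' = x + ε·sign(∇ₓL). A toy sketch on a linear score, where the input gradient is known analytically, illustrates the rule without any deep-learning framework (weights and inputs are made-up values):

```python
# Hedged FGSM sketch: for a linear score s(x) = w.x, the gradient of s with
# respect to x is simply w, so the perturbation rule can be shown exactly.

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, grad, eps):
    """x' = x + eps * sign(grad), applied element-wise."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [0.7, -0.3, 0.1]          # toy weights; also the input gradient of s
x = [1.0, 2.0, -1.0]
score = sum(wi * xi for wi, xi in zip(w, x))
x_adv = fgsm(x, w, eps=0.25)  # push each input along the gradient sign
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))
print(score, adv_score)       # score rises by exactly eps * sum(|w_i|)
```

For a linear score the shift is exactly ε·Σ|wᵢ|, which is why even tiny ε can flip a decision when weights are large, the effect the robustness index above is designed to visualize.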
Funding: supported by a grant (No. CRPG-25-2054) under the Cybersecurity Research and Innovation Pioneers Initiative, provided by the National Cybersecurity Authority (NCA) in the Kingdom of Saudi Arabia.
Abstract: Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server sub-networks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches: the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, achieving a balance between model utility and data privacy has emerged as a challenging problem. In this article, we propose a novel defense approach that combines (i) adversarial learning and (ii) network channel pruning. In particular, the proposed adversarial learning approach is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task, while the suggested channel pruning enables the model to adaptively adjust and reactivate pruned channels during adversarial training. The integration of these two techniques reduces the informativeness of the intermediate data transmitted by the client sub-model, thereby enhancing its robustness against attribute inference attacks without adding significant computational overhead and making it well suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense approach was evaluated using EfficientNet-B0, a widely adopted compact model, along with three benchmark datasets. The results showcase its superior defense capability against attribute inference attacks compared to existing state-of-the-art methods, demonstrating the effectiveness of the proposed channel pruning-based adversarial training approach in achieving the intended compromise between utility and privacy within SL frameworks: the classification accuracy attained by attackers decreases drastically, by 70%.
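Magnitude-based channel selection is one plausible building block behind the channel pruning described above (the article's adaptive prune-and-reactivate schedule during adversarial training is more involved than this). A minimal sketch, with made-up filter weights:

```python
# Hedged sketch of magnitude-based channel pruning: channels whose filters
# have the smallest L1 norm are masked out; relaxing the mask later would
# "reactivate" a channel. Filter values are illustrative only.

def l1_norms(filters):
    return [sum(abs(w) for w in f) for f in filters]

def prune_mask(filters, keep_ratio):
    """1 = channel kept, 0 = channel pruned, ranked by L1 norm."""
    norms = l1_norms(filters)
    k = max(1, int(len(filters) * keep_ratio))
    keep = set(sorted(range(len(filters)),
                      key=lambda i: norms[i], reverse=True)[:k])
    return [1 if i in keep else 0 for i in range(len(filters))]

filters = [[0.9, -1.1], [0.01, 0.02], [0.5, 0.4], [0.03, -0.02]]
mask = prune_mask(filters, keep_ratio=0.5)
print(mask)  # [1, 0, 1, 0] -> the two low-magnitude channels are zeroed
```

Fewer active channels mean a less informative intermediate activation crossing the split boundary, which is the mechanism the defense leans on.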
Funding: funded by the Henan Province Key R&D Program Project, “Research and Application Demonstration of Class II Superlattice Medium Wave High Temperature Infrared Detector Technology”, grant number 231111210400.
Abstract: High-resolution remote sensing imagery is essential for critical applications such as precision agriculture, urban management planning, and military reconnaissance. Although significant progress has been made in single-image super-resolution (SISR) using generative adversarial networks (GANs), existing approaches still face challenges in recovering high-frequency details, effectively utilizing features, maintaining structural integrity, and ensuring training stability, particularly when dealing with the complex textures characteristic of remote sensing imagery. To address these limitations, this paper proposes the Improved Residual Module and Attention Mechanism Network (IRMANet), a novel architecture specifically designed for remote sensing image reconstruction. IRMANet builds upon the Super-Resolution Generative Adversarial Network (SRGAN) framework and introduces several key innovations. First, the Enhanced Residual Unit (ERU) enhances feature reuse and stabilizes training through deep residual connections. Second, the Self-Attention Residual Block (SARB) incorporates a self-attention mechanism into the Improved Residual Module (IRM) to effectively model long-range dependencies and automatically emphasize salient features. Additionally, the IRM adopts a multi-scale feature fusion strategy to facilitate synergistic interactions between local detail and global semantic information. The effectiveness of each component is validated through ablation studies, while comprehensive comparative experiments on standard remote sensing datasets demonstrate that IRMANet significantly outperforms both the baseline and state-of-the-art methods in terms of perceptual quality and quantitative metrics. Specifically, compared to the baseline model, at a magnification factor of 2, IRMANet achieves an improvement of 0.24 dB in peak signal-to-noise ratio (PSNR) and 0.54 in structural similarity index (SSIM); at a magnification factor of 4, it achieves gains of 0.22 dB in PSNR and 0.51 in SSIM. These results confirm that the proposed method effectively enhances detail representation and structural reconstruction accuracy in complex remote sensing scenarios, offering robust technical support for high-precision detection and identification of both military and civilian aircraft.
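For reference, the PSNR figures quoted above follow the standard definition, 10·log10(peak²/MSE). A minimal sketch on toy 8-bit pixel values (not from the paper's datasets):

```python
# Hedged PSNR sketch on flattened pixel lists; values are illustrative.
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / mse)

ref  = [52, 55, 61, 59, 79, 61, 76, 61]
test = [50, 55, 60, 59, 80, 60, 77, 61]
print(round(psnr(ref, test), 2))     # 48.13 (MSE here is exactly 1.0)
```

Because PSNR is logarithmic in MSE, a 0.24 dB gain corresponds to a small but consistent multiplicative reduction in mean squared error.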
Abstract: The escalating complexity of modern malware continues to undermine the effectiveness of traditional signature-based detection techniques, which are often unable to adapt to rapidly evolving attack patterns. To address these challenges, this study proposes X-MalNet, a lightweight Convolutional Neural Network (CNN) framework designed for static malware classification through image-based representations of binary executables. By converting malware binaries into grayscale images, the model extracts distinctive structural and texture-level features that signify malicious intent, eliminating the dependence on manual feature engineering or dynamic behavioral analysis. Built upon a modified AlexNet architecture, X-MalNet employs transfer learning to enhance generalization and reduce computational cost, enabling efficient training and deployment on limited hardware resources. To promote interpretability and transparency, the framework integrates Gradient-weighted Class Activation Mapping (Grad-CAM) and Deep SHapley Additive exPlanations (DeepSHAP), offering spatial and pixel-level visualizations that reveal how specific image regions influence classification outcomes. These explainability components support security analysts in validating the model's reasoning, strengthening confidence in AI-assisted malware detection. Comprehensive experiments on the Malimg and Malevis benchmark datasets confirm the strong performance of X-MalNet, which achieves classification accuracies of 99.15% and 98.72%, respectively. Further robustness evaluations using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks demonstrate the model's resilience against perturbed inputs. In conclusion, X-MalNet is a scalable, interpretable, and robust malware detection framework that effectively balances accuracy, efficiency, and explainability. Its lightweight design and adversarial stability position it as a promising solution for real-world cybersecurity deployments, advancing the development of trustworthy, automated, and transparent malware classification systems.
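The binary-to-grayscale conversion that image-based classifiers of this kind rely on is straightforward: each byte becomes one 0-255 pixel and the stream is wrapped into fixed-width rows. A sketch (the width and zero-padding policy are assumptions for illustration, not X-MalNet's exact settings):

```python
# Hedged sketch of the bytes-to-grayscale step used by image-based malware
# classifiers: one byte per pixel, reshaped into rows of a fixed width.

def bytes_to_grayscale(blob, width):
    pixels = list(blob)                 # each byte -> intensity 0..255
    pad = (-len(pixels)) % width        # zero-pad so the last row is complete
    pixels += [0] * pad
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

# ten example bytes (starting with the familiar "MZ" executable magic)
img = bytes_to_grayscale(b"\x4d\x5a\x90\x00\x03\x00\x00\x00\xff\x21", 4)
print(img)  # [[77, 90, 144, 0], [3, 0, 0, 0], [255, 33, 0, 0]]
```

Sections of a binary (headers, code, packed data) produce visually distinct textures in such images, which is what the CNN then learns to discriminate.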
Funding: supported by the “14th Five-Year Plan” Hubei Provincial Advantaged Characteristic Disciplines (Groups) Project of Wuhan University of Science and Technology (Grant No. 2023B0404), the National Natural Science Foundation of China (Grant Nos. 52275503 and 72471181), the Hubei Provincial Outstanding Youth Fund of China (Grant No. 2023AFA092), the Hubei Provincial Natural Science Foundation of China (Grant No. 2023AFB915), and the Hubei Provincial Key Research and Development Plan Project of China (Grant No. 2023BAB048).
Abstract: Common strong noise interference during welding, such as metal splashes, smoke, and arc light, can seriously pollute laser stripe images, causing the tracking model to drift and leading to tracking failure. Many mature methods already exist for identifying and extracting the feature points of linear laser stripes, but when the laser stripe forms a curved shape on the surface of the workpiece, these linear methods no longer apply. To eliminate interference sources, enhance the robustness of the weld tracking model, and effectively extract the feature points of curved laser stripes under strong noise conditions, this paper proposes a Conditional Generative Adversarial Network (CGAN)-based anti-interference recognition method for welding images. The generator adopts an improved U-Net++ structure, adds a Multi-Scale Channel Attention Module (MS-CAM), introduces deep supervision, and applies a Multi-Output Fusion Strategy (MOFS) to the output to enhance the image inpainting effect; the discriminator uses PatchGAN. The center of the laser stripe is obtained using the grayscale center-of-mass method and then combined with polynomial fitting to extract the feature points of the weld seam. Experimental results show that the inpainted image achieves a PSNR of 26.24 dB, an SSIM of 0.98, and an LPIPS of 0.032. Fitting curves to the centerline of the inpainted image and to the centerline of the noise-free laser stripe image yields a centerline feature point error of no more than 5%, confirming the superiority and feasibility of the method.
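The grayscale centre-of-mass step mentioned above reduces, per image column, to an intensity-weighted mean of row indices. A sketch on a toy stripe image (the tiny matrix stands in for a real laser stripe frame):

```python
# Hedged sketch of grayscale centre-of-mass stripe extraction: for each
# column, the stripe centre is the intensity-weighted mean row index.

def stripe_centerline(img):
    h, w = len(img), len(img[0])
    centers = []
    for c in range(w):
        col = [img[r][c] for r in range(h)]
        mass = sum(col)
        # None marks columns with no stripe signal at all
        centers.append(sum(r * v for r, v in enumerate(col)) / mass
                       if mass else None)
    return centers

img = [
    [0,     0, 0],
    [10,  200, 0],
    [200,  10, 0],
    [10,    0, 0],
]
print(stripe_centerline(img))  # bright mass sits near row 2, then row 1
```

The resulting sub-pixel centre points are what the polynomial fit in the method above consumes to locate the weld seam feature points.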
Funding: supported by the Jiangsu Engineering Research Center of the Key Technology for Intelligent Manufacturing Equipment and the Suqian Key Laboratory of Intelligent Manufacturing (Grant No. M202108).
Abstract: To address the issues of insufficient and imbalanced data samples in proton exchange membrane fuel cell (PEMFC) performance degradation prediction, this study proposes a data augmentation-based model for predicting PEMFC performance degradation. First, an improved generative adversarial network (IGAN) with an adaptive gradient penalty coefficient is proposed to address the problems of excessively fast gradient descent and insufficient diversity of generated samples. The IGAN is then used to generate data with a distribution analogous to the real data, mitigating the insufficiency and imbalance of the original PEMFC samples and providing the prediction model with training data rich in feature information. Finally, a convolutional neural network-bidirectional long short-term memory (CNN-BiLSTM) model is adopted to predict PEMFC performance degradation. Experimental results show that the data generated by the proposed IGAN is of higher quality than that generated by the original GAN and can fully characterize and enrich the features of the original data. Using the augmented data, the prediction accuracy of the CNN-BiLSTM model is significantly improved, rendering it applicable to PEMFC performance degradation prediction tasks.
Funding: supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-00242528, 50%) and by the Korea Internet & Security Agency (KISA) through the Information Security Specialized University Support Project (50%).
Abstract: As attack techniques evolve and data volumes increase, the integration of artificial intelligence-based security solutions into industrial control systems has become increasingly essential. Artificial intelligence holds significant potential to improve the operational efficiency and cybersecurity of these systems. However, its dependence on cyber-based infrastructures expands the attack surface and introduces the risk that adversarial manipulations of artificial intelligence models may cause physical harm. To address these concerns, this study presents a comprehensive review of artificial intelligence-driven threat detection methods and of adversarial attacks targeting artificial intelligence within industrial control environments, examining both their benefits and associated risks. A systematic literature review was conducted across major scientific databases, including IEEE, Elsevier, Springer Nature, ACM, MDPI, and Wiley, covering peer-reviewed journal and conference papers published between 2017 and 2026. Studies were selected based on predefined inclusion and exclusion criteria following a structured screening process. Based on an analysis of 101 selected studies, this survey categorizes artificial intelligence-based threat detection approaches across the physical, control, and application layers of industrial control systems and examines poisoning, evasion, and extraction attacks targeting industrial artificial intelligence. The findings identify key research trends, highlight unresolved security challenges, and discuss implications for the secure deployment of artificial intelligence-enabled cybersecurity solutions in industrial control systems.
Abstract: Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to adversarial examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity; the accuracy and timeliness of the IDS are crucial to the network's availability and reliability. This paper analyzes ARL applications in NIDS, reviewing state-of-the-art (SoTA) methodologies, open issues, and future research prospects. This includes Reinforcement Machine Learning (RML)-based NIDS, which enables an agent to interact with the environment to achieve a goal, and Deep Reinforcement Learning (DRL)-based NIDS, which can solve complex decision-making problems. Additionally, this survey addresses adversarial circumstances in cybersecurity and their importance for ARL and NIDS. Architectural design, RL algorithms, feature representation, and training methodologies are examined across ARL-NIDS studies. This comprehensive study evaluates ARL for intelligent NIDS research, benefiting cybersecurity researchers, practitioners, and policymakers, and promotes research and innovation in cybersecurity defense.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62172123) and the Key Research and Development Program of Heilongjiang Province, China (Grant No. 2022ZX01A36).
Abstract: Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients' computational and communication capabilities, CASBA employs a dynamic hierarchical attack strategy that adaptively aligns attack intensity with available resources. Furthermore, an improved deep convolutional generative adversarial network (DCGAN) is integrated into the attack pipeline to embed triggers without modifying original data, significantly enhancing stealthiness. Comparative experiments with Shadow Backdoor Attack (SBA) across multiple scenarios demonstrate that CASBA dynamically adjusts resource consumption based on device capabilities, reducing average memory usage per iteration by 5.8%. CASBA improves resource efficiency while keeping the drop in attack success rate within 3%. Additionally, the effectiveness of CASBA against three robust FL algorithms is also validated.
Funding: The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through project number PSAU/2024/01/32082.
Abstract: In Human–Robot Interaction (HRI), generating robot trajectories that accurately reflect user intentions while ensuring physical realism remains challenging, especially in unstructured environments. In this study, we develop a multimodal framework that integrates symbolic task reasoning with continuous trajectory generation. The approach employs transformer models and adversarial training to map high-level intent to robotic motion. Information from multiple data sources, such as voice traits, hand and body keypoints, visual observations, and recorded paths, is integrated simultaneously. These signals are mapped into a shared representation that supports interpretable reasoning while enabling smooth and realistic motion generation. Based on this design, two learning strategies are investigated. In the first, grammar-constrained Linear Temporal Logic (LTL) expressions are created from multimodal human inputs and subsequently decoded into robot trajectories. The second generates trajectories directly from symbolic intent and linguistic data, bypassing the intermediate logical representation. Transformer encoders combine the multiple types of information, and autoregressive transformer decoders generate motion sequences. Adding smoothness and speed limits during training increases the likelihood of physical feasibility. To improve the realism and stability of the generated trajectories, an adversarial discriminator is also included to guide them toward the distribution of actual robot motion. Tests on the NATSGLD dataset indicate that the complete system exhibits stable training behaviour and performance. In normalised coordinates, the logic-based pipeline achieves an Average Displacement Error (ADE) of 0.040 and a Final Displacement Error (FDE) of 0.036. The adversarial generator performs substantially better, reducing ADE to 0.021 and FDE to 0.018. Visual examination confirms that the generated trajectories closely align with observed motion patterns while preserving smooth temporal dynamics.
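The abstract mentions adding smoothness and speed limits during training to encourage physical feasibility. The sketch below shows one common way such penalties can be computed on a 2-D trajectory; the paper's exact loss terms, weights, and the limit `v_max` are not given, so these forms are illustrative assumptions.

```python
# Smoothness penalty: sum of squared second differences, a discrete
# proxy for acceleration along the trajectory.
def smoothness_penalty(traj):
    total = 0.0
    for t in range(1, len(traj) - 1):
        for d in range(len(traj[0])):
            acc = traj[t + 1][d] - 2 * traj[t][d] + traj[t - 1][d]
            total += acc * acc
    return total

# Speed penalty: penalize per-step displacement beyond an assumed
# limit v_max (hinge-squared, zero when the step is within the limit).
def speed_penalty(traj, v_max=0.05):
    total = 0.0
    for t in range(1, len(traj)):
        step = sum((traj[t][d] - traj[t - 1][d]) ** 2
                   for d in range(len(traj[0]))) ** 0.5
        total += max(0.0, step - v_max) ** 2
    return total

straight = [[0.01 * t, 0.0] for t in range(10)]    # smooth, slow path
jerky = [[0.1 * (t % 2), 0.0] for t in range(10)]  # oscillating path
print(smoothness_penalty(straight), smoothness_penalty(jerky))
```

In training, terms like these would be added to the decoder's reconstruction and adversarial losses so that jerky or over-fast generated motions are penalized.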
Funding: Funded by the National Key R&D Program of China (Grant No. 2022YFB3102901), the National Natural Science Foundation of China (Nos. 62072115 and 62102094), and the Shanghai Science and Technology Innovation Action Plan (Project No. 22510713600).
Abstract: User identity linkage (UIL) across online social networks seeks to match accounts belonging to the same real-world individual. This cross-platform mapping enables accurate user modeling but also raises serious privacy risks. Over the past decade, the research community has developed a wide range of UIL methods, from structural embeddings to multimodal fusion architectures. However, corresponding adversarial and defensive approaches remain fragmented and comparatively understudied. In this survey, we provide a unified overview of both mapping and anti-mapping methods for UIL. We categorize representative mapping models by learning paradigm and data modality, and systematically compare them with emerging countermeasures, including adversarial injection, structural perturbation, and identity obfuscation. To bridge these two threads, we introduce a modality-oriented taxonomy and a formal game-theoretic framing that casts cross-network mapping as a contest between mappers and anti-mappers. This framing allows us to construct a cross-modality dependency matrix, which reveals structural information as the most contested signal, identifies node injection as the most robust defensive strategy, and points to multimodal integration as a promising direction. Our survey underscores the need for balanced, privacy-preserving identity inference and provides a foundation for future research on the adversarial dynamics of social identity mapping and defense.
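The mapper/anti-mapper contest over structural signals can be made concrete with a toy example (an illustrative setup, not a method from the survey): a naive structural mapper scores candidate account pairs by neighbor overlap, and a node-injection defense adds sybil neighbors on one platform to lower that score for the true pair.

```python
# Jaccard similarity of neighbor sets: a stand-in for the structural
# signal that mappers exploit and anti-mappers try to degrade.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Assumed data: the same user's neighbor labels on two platforms.
neighbors_net1 = {"alice", "bob", "carol", "dave"}
neighbors_net2 = {"alice", "bob", "carol", "erin"}
before = jaccard(neighbors_net1, neighbors_net2)

# Node-injection defense: add sybil neighbors on platform 2, which
# grows the union without growing the intersection.
defended = neighbors_net2 | {"sybil1", "sybil2", "sybil3"}
after = jaccard(neighbors_net1, defended)
print(round(before, 3), round(after, 3))
```

Real mappers use learned embeddings rather than raw overlap, but the game-theoretic shape is the same: the defense spends a perturbation budget to push the true pair's score below the mapper's decision threshold.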