Large language models(LLMs)have revolutionized AI applications across diverse domains.However,their widespread deployment has introduced critical security vulnerabilities,particularly prompt injection attacks that man...Large language models(LLMs)have revolutionized AI applications across diverse domains.However,their widespread deployment has introduced critical security vulnerabilities,particularly prompt injection attacks that manipulate model behavior through malicious instructions.Following Kitchenham’s guidelines,this systematic review synthesizes 128 peer-reviewed studies from 2022 to 2025 to provide a unified understanding of this rapidly evolving threat landscape.Our findings reveal a swift progression from simple direct injections to sophisticated multimodal attacks,achieving over 90%success rates against unprotected systems.In response,defense mechanisms show varying effectiveness:input preprocessing achieves 60%–80%detection rates and advanced architectural defenses demonstrate up to 95%protection against known patterns,though significant gaps persist against novel attack vectors.We identified 37 distinct defense approaches across three categories,but standardized evaluation frameworks remain limited.Our analysis attributes these vulnerabilities to fundamental LLM architectural limitations,such as the inability to distinguish instructions from data and attention mechanism vulnerabilities.This highlights critical research directions such as formal verification methods,standardized evaluation protocols,and architectural innovations for inherently secure LLM designs.展开更多
Graph Neural Networks(GNNs)have proven highly effective for graph classification across diverse fields such as social networks,bioinformatics,and finance,due to their capability to learn complex graph structures.Howev...Graph Neural Networks(GNNs)have proven highly effective for graph classification across diverse fields such as social networks,bioinformatics,and finance,due to their capability to learn complex graph structures.However,despite their success,GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy.Existing adversarial attack strategies primarily rely on label information to guide the attacks,which limits their applicability in scenarios where such information is scarce or unavailable.This paper introduces an innovative unsupervised attack method for graph classification,which operates without relying on label information,thereby enhancing its applicability in a broad range of scenarios.Specifically,our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs.To effectively perturb the graphs,we then introduce an implicit estimator that measures the impact of various modifications on graph structures.The proposed strategy identifies and flips edges with the top-K highest scores,determined by the estimator,to maximize the degradation of the model’s performance.In addition,to defend against such attack,we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy.It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training.We conduct experiments on six public TU graph classification datasets:NCI1,NCI109,Mutagenicity,ENZYMES,COLLAB,and DBLP_v1,to evaluate the effectiveness of our attack and defense strategies.Under an attack budget of 3,the maximum reduction in model accuracy reaches 6.67%on the Graph Convolutional Network(GCN)and 11.67%on the Graph Attention Network(GAT)across different datasets,indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks.Meanwhile,our defense achieves the highest accuracy recovery of 3.89%(GCN)and 5.00%(GAT),demonstrating improved robustness against structural perturbations.展开更多
Federated Learning(FL)enables joint training over distributed devices without data exchange but is highly vulnerable to attacks by adversaries in the form of model poisoning and malicious update injection.This work pr...Federated Learning(FL)enables joint training over distributed devices without data exchange but is highly vulnerable to attacks by adversaries in the form of model poisoning and malicious update injection.This work proposes Secured-FL,a blockchain-based defensive framework that combines smart contract-based authentication,clustering-driven outlier elimination,and dynamic threshold adjustment to defend against adversarial attacks.The framework was implemented on a private Ethereum network with a Proof-of-Authority consensus algorithm to ensure tamper-resistant and auditable model updates.Large-scale simulation on the Cyber Data dataset,under up to 50%malicious client settings,demonstrates Secured-FL achieves 6%-12%higher accuracy,9%-15%lower latency,and approximately 14%less computational expense compared to the PPSS benchmark framework.Additional tests,including confusion matrices,ROC and Precision-Recall curves,and ablation tests,confirm the interpretability and robustness of the defense.Tests for scalability also show consistent performance up to 500 clients,affirming appropriateness to reasonably large deployments.These results make Secured-FL a feasible,adversarially resilient FL paradigm with promising potential for application in smart cities,medicine,and other mission-critical IoT deployments.展开更多
The increasing intelligence of power systems is transforming distribution networks into Cyber-Physical Distribution Systems(CPDS).While enabling advanced functionalities,the tight interdependence between cyber and phy...The increasing intelligence of power systems is transforming distribution networks into Cyber-Physical Distribution Systems(CPDS).While enabling advanced functionalities,the tight interdependence between cyber and physical layers introduces significant security challenges and amplifies operational risks.To address these critical issues,this paper proposes a comprehensive risk assessment framework that explicitly incorporates the physical dependence of information systems.A Bayesian attack graph is employed to quantitatively evaluate the likelihood of successful cyber attacks.By analyzing the critical scenario of fault current path misjudgment,we define novel system-level and node-level risk coupling indices to preciselymeasure the cascading impacts across cyber and physical domains.Furthermore,an attack-responsive power recovery optimization model is established,integrating DistFlowbased physical constraints and sophisticated modeling of information-dependent interference.To enhance resilience against varying attack scenarios,a defense resource allocation model is constructed,where the complex Mixed-Integer Nonlinear Programming(MINLP)problem is efficiently linearized into a Mixed-Integer Linear Programming(MILP)formulation.Finally,to mitigate the impact of targeted attacks,the optimal deployment of terminal defense resources is determined using a Stackelberg game-theoretic approach,aiming to minimize overall system risk.The robustness and effectiveness of the proposed integrated framework are rigorously validated through extensive simulations under diverse attack intensities and defense resource constraints.展开更多
Human Resource(HR)operations increasingly rely on cloud-based platforms that provide hiring,payroll,employee management,and compliance services.These systems,typically built on multi-tenant microservice architectures,...Human Resource(HR)operations increasingly rely on cloud-based platforms that provide hiring,payroll,employee management,and compliance services.These systems,typically built on multi-tenant microservice architectures,offer scalability and efficiency but also expand the attack surface for adversaries.Ransomware has emerged as a leading threat in this domain,capable of halting workflows and exposing sensitive employee records.Traditional defenses such as static hardening and signature-based detection often fail to address the dynamic requirements of HR Software as a Service(SaaS),where continuous availability and privacy compliance are critical.This paper presents a Moving Target Defense(MTD)framework for HR SaaS that combines container mutation,IP hopping,and node reassignment to randomize the attack surface without pausing services.Many prior defenses for cloud or IoT rely on static hardening or signature-driven detection and do not meet HR SaaS needs such as uninterrupted sessions,privacy compliance,and live service continuity.This paper presents a MTD framework for HR SaaS that combines container mutation,IP hopping,and node reassignment to randomize the attack surface without pausing services.The framework runs on Kubernetes and uses a KL-divergence-based anomaly detector that monitors HR access logs across five modules(onboarding,employee records,leave,payroll,and exit).In simulation with realistic HR traffic,the approach reaches 96.9% average detection accuracy with AUC 0.94-0.98,cuts mean time to containment to 91.4 s,and lowers the ransomware encryption rate to 13.2%.Measured overheads for CPU,memory,and per-mutation latency remainmodest.Comparedwith priorMTDand non-MTD baselines,the design provides stronger containment without service interruption and aligns with zero-trust and compliance goals.Its modular implementation and control-plane orchestration support stepwise,enterprise-scale deployment in HR SaaS environments.展开更多
The emergence of large language models(LLMs)has brought about revolutionary social value.However,concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse.Consequently,a...The emergence of large language models(LLMs)has brought about revolutionary social value.However,concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse.Consequently,a crucial research question arises:How can we differentiate between AI-generated and human-authored text?Existing detectors face some challenges,such as operating as black boxes,relying on supervised training,and being vulnerable to manipulation and misinformation.To tackle these challenges,we propose an innovative unsupervised white-box detection method that utilizes a“dual-driven verification mechanism”to achieve high-performance detection,even in the presence of obfuscated attacks in the text content.To be more specific,we initially employ the SpaceInfi strategy to enhance the difficulty of detecting the text content.Subsequently,we randomly select vulnerable spots from the text and perturb them using another pre-trained language model(e.g.,T5).Finally,we apply a dual-driven defense mechanism(D3M)that validates text content with perturbations,whether generated by a model or authored by a human,based on the dimensions of Information TransmissionQuality and Information TransmissionDensity.Through experimental validation,our proposed novelmethod demonstrates state-of-the-art(SOTA)performancewhen exposed to equivalent levels of perturbation intensity across multiple benchmarks,thereby showcasing the effectiveness of our strategies.展开更多
Brown spot(BS)of rice,caused by Bipolaris oryzae,is a serious concern that not only causes quantitative losses but also affects grain quality.To manage this disease,the use of resistant genetic sources and QTLs is an ...Brown spot(BS)of rice,caused by Bipolaris oryzae,is a serious concern that not only causes quantitative losses but also affects grain quality.To manage this disease,the use of resistant genetic sources and QTLs is an eco-friendly and economical option.In the current study,F_(3) progenies derived from a cross of susceptible parent PMS-18-B(PAU 10845-1-1-1-1)×resistant parent RP Path 77(RP patho-17)were used to identify potential QTLs linked to BS resistance and to associate this resistance with a temporal spike in defense-related enzymes.展开更多
The Industrial Internet of Things(IIoT)is increasingly vulnerable to sophisticated cyber threats,particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures.To address th...The Industrial Internet of Things(IIoT)is increasingly vulnerable to sophisticated cyber threats,particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures.To address this critical challenge,this paper proposes a dynamic defense framework named Zero-day-aware Stackelberg Game-based Multi-Agent Distributed Deep Deterministic Policy Gradient(ZSG-MAD3PG).The framework integrates Stackelberg game modeling with the Multi-Agent Distributed Deep Deterministic Policy Gradient(MAD3PG)algorithm and incorporates defensive deception(DD)strategies to achieve adaptive and efficient protection.While conventional methods typically incur considerable resource overhead and exhibit higher latency due to static or rigid defensive mechanisms,the proposed ZSG-MAD3PG framework mitigates these limitations through multi-stage game modeling and adaptive learning,enabling more efficient resource utilization and faster response times.The Stackelberg-based architecture allows defenders to dynamically optimize packet sampling strategies,while attackers adjust their tactics to reach rapid equilibrium.Furthermore,dynamic deception techniques reduce the time required for the concealment of attacks and the overall system burden.A lightweight behavioral fingerprinting detection mechanism further enhances real-time zero-day attack identification within industrial device clusters.ZSG-MAD3PG demonstrates higher true positive rates(TPR)and lower false alarm rates(FAR)compared to existing methods,while also achieving improved latency,resource efficiency,and stealth adaptability in IIoT zero-day defense scenarios.展开更多
Recent years have witnessed the ever-increasing performance of Deep Neural Networks(DNNs)in computer vision tasks.However,researchers have identified a potential vulnerability:carefully crafted adversarial examples ca...Recent years have witnessed the ever-increasing performance of Deep Neural Networks(DNNs)in computer vision tasks.However,researchers have identified a potential vulnerability:carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modification to the input data.In this survey,we focus on(1)adversarial attack algorithms to generate adversarial examples,(2)adversarial defense techniques to secure DNNs against adversarial examples,and(3)important problems in the realm of adversarial examples beyond attack and defense,including the theoretical explanations,trade-off issues and benign attacks in adversarial examples.Additionally,we draw a brief comparison between recently published surveys on adversarial examples,and identify the future directions for the research of adversarial examples,such as the generalization of methods and the understanding of transferability,that might be solutions to the open problems in this field.展开更多
Dear Editor,The attacker is always going to intrude covertly networked control systems(NCSs)by dynamically changing false data injection attacks(FDIAs)strategy,while the defender try their best to resist attacks by de...Dear Editor,The attacker is always going to intrude covertly networked control systems(NCSs)by dynamically changing false data injection attacks(FDIAs)strategy,while the defender try their best to resist attacks by designing defense strategy on the basis of identifying attack strategy,maintaining stable operation of NCSs.To solve this attack-defense game problem,this letter investigates optimal secure control of NCSs under FDIAs.First,for the alterations of energy caused by false data,a novel attack-defense game model is constructed,which considers the changes of energy caused by the actions of the defender and attacker in the forward and feedback channels.展开更多
Deep neural networks(DNNs)have found extensive applications in safety-critical artificial intelligence systems,such as autonomous driving and facial recognition systems.However,recent research has revealed their susce...Deep neural networks(DNNs)have found extensive applications in safety-critical artificial intelligence systems,such as autonomous driving and facial recognition systems.However,recent research has revealed their susceptibility to backdoors maliciously injected by adversaries.This vulnerability arises due to the intricate architecture and opacity of DNNs,resulting in numerous redundant neurons embedded within the models.Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs,thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications.This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them.Initially,we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs,highlighting the feasibility and practicality of generating backdoor attacks against DNNs.Subsequently,we provide an overview of notable works encompassing various attack and defense strategies,facilitating a comparative analysis of their approaches.Through these discussions,we offer constructive insights aimed at refining these techniques.Finally,we extend our research perspective to the domain of large language models(LLMs)and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs.Through a systematic review of existing studies on backdoor vulnerabilities in LLMs,we identify critical open challenges in this field and propose actionable directions for future research.展开更多
With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algor...With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algorithms to adversarial samples has been widely recognized.The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans.Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality.Hence,adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years.In this paper,we first introduce the theoretical foundations,algorithms,and applications of adversarial attack techniques.We then describe a few research efforts on the defense techniques,which cover the broad frontier in the field.Several open problems and challenges are subsequently discussed,which we hope will provoke further research efforts in this critical area.展开更多
The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the rob...The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the robustness of the deep learning based spectrum sensing model.PUAA adds a care-fully manufactured perturbation to the benign primary user signal,which greatly reduces the probability of detection of the spectrum sensing model.We design three PUAA methods in black box scenario.In or-der to defend against PUAA,we propose a defense method based on autoencoder named DeepFilter.We apply the long short-term memory network and the convolutional neural network together to DeepFilter,so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense.Extensive experiments are conducted to eval-uate the attack effect of the designed PUAA method and the defense effect of DeepFilter.Results show that the three PUAA methods designed can greatly reduce the probability of detection of the deep learning-based spectrum sensing model.In addition,the experimen-tal results of the defense effect of DeepFilter show that DeepFilter can effectively defend against PUAA with-out affecting the detection performance of the model.展开更多
Deep neural networks(DNN)have achieved unprecedented success in numerous machine learning tasks in various domains.However,the existence of adversarial examples raises our concerns in adopting deep learning to safety-...Deep neural networks(DNN)have achieved unprecedented success in numerous machine learning tasks in various domains.However,the existence of adversarial examples raises our concerns in adopting deep learning to safety-critical applications.As a result,we have witnessed increasing interests in studying attack and defense mechanisms for DNN models on different data types,such as images,graphs and text.Thus,it is necessary to provide a systematic and comprehensive overview of the main threats of attacks and the success of corresponding countermeasures.In this survey,we review the state of the art algorithms for generating adversarial examples and the countermeasures against adversarial examples,for three most popular data types,including images,graphs and text.展开更多
As modern communication technology advances apace,the digital communication signals identification plays an important role in cognitive radio networks,the communication monitoring and management systems.AI has become ...As modern communication technology advances apace,the digital communication signals identification plays an important role in cognitive radio networks,the communication monitoring and management systems.AI has become a promising solution to this problem due to its powerful modeling capability,which has become a consensus in academia and industry.However,because of the data-dependence and inexplicability of AI models and the openness of electromagnetic space,the physical layer digital communication signals identification model is threatened by adversarial attacks.Adversarial examples pose a common threat to AI models,where well-designed and slight perturbations added to input data can cause wrong results.Therefore,the security of AI models for the digital communication signals identification is the premise of its efficient and credible applications.In this paper,we first launch adversarial attacks on the end-to-end AI model for automatic modulation classifi-cation,and then we explain and present three defense mechanisms based on the adversarial principle.Next we present more detailed adversarial indicators to evaluate attack and defense behavior.Finally,a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model,which should be paid more attention in future research.展开更多
This paper is concerned with a scenario of multiple attackers trying to intercept a target with active defense.Three types of agents are considered in the guidance:The multiple attackers,the target and the defender,wh...This paper is concerned with a scenario of multiple attackers trying to intercept a target with active defense.Three types of agents are considered in the guidance:The multiple attackers,the target and the defender,where the attackers aim to pursuit the target from different directions and evade from the defender simultaneously.The guidance engagement is formulated in the framework of a zero-sum two-person differential game between the two opposing teams,such that the measurements on the maneuver of the target or estimations on the defending strategy of the defender can be absent.Cooperation of the attackers resides in two aspects:redundant interception under the threat of the defender and the relative intercept geometry with the target.The miss distances,the relative intercept angle errors and the costs of the agents are combined into a single performance index of the game.Such formulation enables a unitary approach to the design of guidance laws for the agents.To minimize the control efforts and miss distances for the attackers,an optimization method is proposed to find the best anticipated miss distances to the defender under the constraint that the defender is endowed with a capture radius.Numerical simulations with two cases are conducted to illustrate the effectiveness of the proposed cooperative guidance law.展开更多
Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassificatio...Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassification of the images.Researchers have demonstrated these attacks to make production self-driving cars misclassify StopRoad signs as 45 Miles Per Hour(MPH)road signs and a turtle being misclassified as AK47.Three primary types of defense approaches exist which can safeguard against such attacks i.e.,Gradient Masking,Robust Optimization,and Adversarial Example Detection.Very few approaches use Generative Adversarial Networks(GAN)for Defense against Adversarial Attacks.In this paper,we create a new approach to defend against adversarial attacks,dubbed Chained Dual-Generative Adversarial Network(CD-GAN)that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling using GANs.CD-GAN is created using two GANs,i.e.,CDGAN’s Sub-ResolutionGANandCDGAN’s Super-ResolutionGAN.The first is CDGAN’s Sub-Resolution GAN which takes the original resolution input image and oversamples it to generate a lower resolution neutralized image.The second is CDGAN’s Super-Resolution GAN which takes the output of the CDGAN’s Sub-Resolution and undersamples,it to generate the higher resolution image which removes any remaining perturbations.Chained Dual GAN is formed by chaining these two GANs together.Both of these GANs are trained independently.CDGAN’s Sub-Resolution GAN is trained using higher resolution adversarial images as inputs and lower resolution neutralized images as output image examples.Hence,this GAN downscales the image while removing adversarial attack noise.CDGAN’s Super-Resolution GAN is trained using lower resolution adversarial images as inputs and higher resolution neutralized images as output images.Because of this,it acts as an Upscaling GAN while removing the adversarial attak noise.Furthermore,CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort,and 2542 CMC,2023,vol.74,no.2 can defend any classifier model against adversarial attack.In this way,it is a Generalized Defense against adversarial attacks,capable of defending any classifier model against any attacks.This enables the user to directly integrate CD-GANwith an existing production deployed classifier smoothly.CD-GAN iteratively removes the adversarial noise using a multi-step approach in a modular approach.It performs comparably to the state of the arts with mean accuracy of 33.67 while using minimal compute resources in training.展开更多
For better reflecting the interactive defense between targets in practical combat scenarios,the basic weapon-target allocation(WTA)framework needs to be improved.A multi-stage attack WTA method is proposed.First,a def...For better reflecting the interactive defense between targets in practical combat scenarios,the basic weapon-target allocation(WTA)framework needs to be improved.A multi-stage attack WTA method is proposed.First,a defense area analysis is presented according to the targets’positions and the radii of the defense areas to analyze the interactive coverage and protection between targets’defense areas.Second,with the coverage status and coverage layer number,a multi-stage attack planning method is proposed and the multi-stage attack objective function model is established.Simulation is conducted with interactive defense combat scenarios,the traditional WTA method and the multi-stage WTA method are compared,and the objective function model is validated with the Monte-Carlo method.The results suggest that if the combat scenario involves interactive coverage of targets’defense areas,it is imperative to analyze the defense areas and apply the multi-stage attack method to weakening the target defense progressively for better combat effectiveness.展开更多
These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications li...These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications like image classification,speech recognition,self-driving vehicles,disease diagnostics,and many more.Despite success in various applications,it is found that these learning algorithms face severe threats due to adversarial attacks.Adversarial examples are inputs like images in the computer vision field,which are intentionally slightly changed or perturbed.These changes are humanly imperceptible.But are misclassified by a model with high probability and severely affects the performance or prediction.In this scenario,we present a deep image restoration model that restores adversarial examples so that the target model is classified correctly again.We proved that our defense method against adversarial attacks based on a deep image restoration model is simple and state-of-the-art by providing strong experimental results evidence.We have used MNIST and CIFAR10 datasets for experiments and analysis of our defense method.In the end,we have compared our method to other state-ofthe-art defense methods and proved that our results are better than other rival methods.展开更多
Heat stress hinders the growth and productivity of sweetpotato plants,predominantly through oxidative damage to cellular membranes.Therefore,the development of efficient approaches for mitigating heat-related impairme...Heat stress hinders the growth and productivity of sweetpotato plants,predominantly through oxidative damage to cellular membranes.Therefore,the development of efficient approaches for mitigating heat-related impairments is essential for the long-term production of sweetpotatoes.Melatonin has been recognised for its capacity to assist plants in dealing with abiotic stress conditions.This research aimed to investigate how different doses of exogenous melatonin influence heat damage in sweetpotato plants.Heat stress drastically affected shoot and root fresh weight by 31.8 and 44.5%,respectively.This reduction resulted in oxidative stress characterised by increased formation of hydrogen peroxide(H_(2)O_(2))by 804.4%,superoxide ion(O_(2)^(·-))by 211.5%and malondialdehyde(MDA)by 234.2%.Heat stress also reduced chlorophyll concentration,photosystemⅡefficiency(F_v/F_m)by 15.3%and gaseous exchange.However,pre-treatment with 100μmol L^(-1)melatonin increased growth and reduced oxidative damage to sweetpotato plants under heat stress.In particular,melatonin decreased H_(2)O_(2),O_(2)^(·-)and MDA by 64.8%,42.7%and 38.2%,respectively.Melatonin also mitigated the decline in chlorophyll levels and improved stomatal traits,gaseous exchange and F_(v)/F_(m)(13%).Results suggested that the favorable outcomes of melatonin treatment can be associated with elevated antioxidant enzyme activity and an increase in non-enzymatic antioxidants and osmo-protectants.Overall,these findings indicate that exogenous melatonin can improve heat stress tolerance in sweetpotatoes.This stu dy will assist re searchers in further investigating how melatonin makes sweetpotatoes more resistant to heat stress.展开更多
基金supported by 2023 Higher Education Scientific Research Planning Project of China Society of Higher Education(No.23PG0408)2023 Philosophy and Social Science Research Programs in Jiangsu Province(No.2023SJSZ0993)+2 种基金Nantong Science and Technology Project(No.JC2023070)Key Project of Jiangsu Province Education Science 14th Five-Year Plan(Grant No.B-b/2024/02/41)the Open Fund of Advanced Cryptography and System Security Key Laboratory of Sichuan Province(Grant No.SKLACSS-202407).
文摘Large language models(LLMs)have revolutionized AI applications across diverse domains.However,their widespread deployment has introduced critical security vulnerabilities,particularly prompt injection attacks that manipulate model behavior through malicious instructions.Following Kitchenham’s guidelines,this systematic review synthesizes 128 peer-reviewed studies from 2022 to 2025 to provide a unified understanding of this rapidly evolving threat landscape.Our findings reveal a swift progression from simple direct injections to sophisticated multimodal attacks,achieving over 90%success rates against unprotected systems.In response,defense mechanisms show varying effectiveness:input preprocessing achieves 60%–80%detection rates and advanced architectural defenses demonstrate up to 95%protection against known patterns,though significant gaps persist against novel attack vectors.We identified 37 distinct defense approaches across three categories,but standardized evaluation frameworks remain limited.Our analysis attributes these vulnerabilities to fundamental LLM architectural limitations,such as the inability to distinguish instructions from data and attention mechanism vulnerabilities.This highlights critical research directions such as formal verification methods,standardized evaluation protocols,and architectural innovations for inherently secure LLM designs.
基金funded by the National Key Research and Development Program of China(Grant No.2024YFE0209000)the NSFC(Grant No.U23B2019).
文摘Graph Neural Networks(GNNs)have proven highly effective for graph classification across diverse fields such as social networks,bioinformatics,and finance,due to their capability to learn complex graph structures.However,despite their success,GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy.Existing adversarial attack strategies primarily rely on label information to guide the attacks,which limits their applicability in scenarios where such information is scarce or unavailable.This paper introduces an innovative unsupervised attack method for graph classification,which operates without relying on label information,thereby enhancing its applicability in a broad range of scenarios.Specifically,our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs.To effectively perturb the graphs,we then introduce an implicit estimator that measures the impact of various modifications on graph structures.The proposed strategy identifies and flips edges with the top-K highest scores,determined by the estimator,to maximize the degradation of the model’s performance.In addition,to defend against such attack,we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy.It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training.We conduct experiments on six public TU graph classification datasets:NCI1,NCI109,Mutagenicity,ENZYMES,COLLAB,and DBLP_v1,to evaluate the effectiveness of our attack and defense strategies.Under an attack budget of 3,the maximum reduction in model accuracy reaches 6.67%on the Graph Convolutional Network(GCN)and 11.67%on the Graph Attention Network(GAT)across different datasets,indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks.Meanwhile,our defense achieves the highest accuracy recovery of 3.89%(GCN)and 5.00%(GAT),demonstrating improved robustness against structural perturbations.
文摘Federated Learning(FL)enables joint training over distributed devices without data exchange but is highly vulnerable to attacks by adversaries in the form of model poisoning and malicious update injection.This work proposes Secured-FL,a blockchain-based defensive framework that combines smart contract-based authentication,clustering-driven outlier elimination,and dynamic threshold adjustment to defend against adversarial attacks.The framework was implemented on a private Ethereum network with a Proof-of-Authority consensus algorithm to ensure tamper-resistant and auditable model updates.Large-scale simulation on the Cyber Data dataset,under up to 50%malicious client settings,demonstrates Secured-FL achieves 6%-12%higher accuracy,9%-15%lower latency,and approximately 14%less computational expense compared to the PPSS benchmark framework.Additional tests,including confusion matrices,ROC and Precision-Recall curves,and ablation tests,confirm the interpretability and robustness of the defense.Tests for scalability also show consistent performance up to 500 clients,affirming appropriateness to reasonably large deployments.These results make Secured-FL a feasible,adversarially resilient FL paradigm with promising potential for application in smart cities,medicine,and other mission-critical IoT deployments.
基金supported by China Southern Power Grid Company Limited(066500KK52222006).
文摘The increasing intelligence of power systems is transforming distribution networks into Cyber-Physical Distribution Systems(CPDS).While enabling advanced functionalities,the tight interdependence between cyber and physical layers introduces significant security challenges and amplifies operational risks.To address these critical issues,this paper proposes a comprehensive risk assessment framework that explicitly incorporates the physical dependence of information systems.A Bayesian attack graph is employed to quantitatively evaluate the likelihood of successful cyber attacks.By analyzing the critical scenario of fault current path misjudgment,we define novel system-level and node-level risk coupling indices to preciselymeasure the cascading impacts across cyber and physical domains.Furthermore,an attack-responsive power recovery optimization model is established,integrating DistFlowbased physical constraints and sophisticated modeling of information-dependent interference.To enhance resilience against varying attack scenarios,a defense resource allocation model is constructed,where the complex Mixed-Integer Nonlinear Programming(MINLP)problem is efficiently linearized into a Mixed-Integer Linear Programming(MILP)formulation.Finally,to mitigate the impact of targeted attacks,the optimal deployment of terminal defense resources is determined using a Stackelberg game-theoretic approach,aiming to minimize overall system risk.The robustness and effectiveness of the proposed integrated framework are rigorously validated through extensive simulations under diverse attack intensities and defense resource constraints.
文摘Human Resource(HR)operations increasingly rely on cloud-based platforms that provide hiring,payroll,employee management,and compliance services.These systems,typically built on multi-tenant microservice architectures,offer scalability and efficiency but also expand the attack surface for adversaries.Ransomware has emerged as a leading threat in this domain,capable of halting workflows and exposing sensitive employee records.Traditional defenses such as static hardening and signature-based detection often fail to address the dynamic requirements of HR Software as a Service(SaaS),where continuous availability and privacy compliance are critical.This paper presents a Moving Target Defense(MTD)framework for HR SaaS that combines container mutation,IP hopping,and node reassignment to randomize the attack surface without pausing services.Many prior defenses for cloud or IoT rely on static hardening or signature-driven detection and do not meet HR SaaS needs such as uninterrupted sessions,privacy compliance,and live service continuity.This paper presents a MTD framework for HR SaaS that combines container mutation,IP hopping,and node reassignment to randomize the attack surface without pausing services.The framework runs on Kubernetes and uses a KL-divergence-based anomaly detector that monitors HR access logs across five modules(onboarding,employee records,leave,payroll,and exit).In simulation with realistic HR traffic,the approach reaches 96.9% average detection accuracy with AUC 0.94-0.98,cuts mean time to containment to 91.4 s,and lowers the ransomware encryption rate to 13.2%.Measured overheads for CPU,memory,and per-mutation latency remainmodest.Comparedwith priorMTDand non-MTD baselines,the design provides stronger containment without service interruption and aligns with zero-trust and compliance goals.Its modular implementation and control-plane orchestration support stepwise,enterprise-scale deployment in HR SaaS environments.
文摘The emergence of large language models(LLMs)has brought about revolutionary social value.However,concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse.Consequently,a crucial research question arises:How can we differentiate between AI-generated and human-authored text?Existing detectors face some challenges,such as operating as black boxes,relying on supervised training,and being vulnerable to manipulation and misinformation.To tackle these challenges,we propose an innovative unsupervised white-box detection method that utilizes a“dual-driven verification mechanism”to achieve high-performance detection,even in the presence of obfuscated attacks in the text content.To be more specific,we initially employ the SpaceInfi strategy to enhance the difficulty of detecting the text content.Subsequently,we randomly select vulnerable spots from the text and perturb them using another pre-trained language model(e.g.,T5).Finally,we apply a dual-driven defense mechanism(D3M)that validates text content with perturbations,whether generated by a model or authored by a human,based on the dimensions of Information TransmissionQuality and Information TransmissionDensity.Through experimental validation,our proposed novelmethod demonstrates state-of-the-art(SOTA)performancewhen exposed to equivalent levels of perturbation intensity across multiple benchmarks,thereby showcasing the effectiveness of our strategies.
基金supported by Punjab Agricultural University,Ludhiana,India,for providing the infrastructure and other facilities for conducting experiments.All other forms of support and financial assistance are duly acknowledged.
文摘Brown spot(BS)of rice,caused by Bipolaris oryzae,is a serious concern that not only causes quantitative losses but also affects grain quality.To manage this disease,the use of resistant genetic sources and QTLs is an eco-friendly and economical option.In the current study,F_(3) progenies derived from a cross of susceptible parent PMS-18-B(PAU 10845-1-1-1-1)×resistant parent RP Path 77(RP patho-17)were used to identify potential QTLs linked to BS resistance and to associate this resistance with a temporal spike in defense-related enzymes.
基金funded in part by the Humanities and Social Sciences Planning Foundation of Ministry of Education of China under Grant No.24YJAZH123National Undergraduate Innovation and Entrepreneurship Training Program of China under Grant No.202510347069the Huzhou Science and Technology Planning Foundation under Grant No.2023GZ04.
文摘The Industrial Internet of Things(IIoT)is increasingly vulnerable to sophisticated cyber threats,particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures.To address this critical challenge,this paper proposes a dynamic defense framework named Zero-day-aware Stackelberg Game-based Multi-Agent Distributed Deep Deterministic Policy Gradient(ZSG-MAD3PG).The framework integrates Stackelberg game modeling with the Multi-Agent Distributed Deep Deterministic Policy Gradient(MAD3PG)algorithm and incorporates defensive deception(DD)strategies to achieve adaptive and efficient protection.While conventional methods typically incur considerable resource overhead and exhibit higher latency due to static or rigid defensive mechanisms,the proposed ZSG-MAD3PG framework mitigates these limitations through multi-stage game modeling and adaptive learning,enabling more efficient resource utilization and faster response times.The Stackelberg-based architecture allows defenders to dynamically optimize packet sampling strategies,while attackers adjust their tactics to reach rapid equilibrium.Furthermore,dynamic deception techniques reduce the time required for the concealment of attacks and the overall system burden.A lightweight behavioral fingerprinting detection mechanism further enhances real-time zero-day attack identification within industrial device clusters.ZSG-MAD3PG demonstrates higher true positive rates(TPR)and lower false alarm rates(FAR)compared to existing methods,while also achieving improved latency,resource efficiency,and stealth adaptability in IIoT zero-day defense scenarios.
基金Supported by the National Natural Science Foundation of China(U1903214,62372339,62371350,61876135)the Ministry of Education Industry University Cooperative Education Project(202102246004,220800006041043,202002142012)the Fundamental Research Funds for the Central Universities(2042023kf1033)。
文摘Recent years have witnessed the ever-increasing performance of Deep Neural Networks(DNNs)in computer vision tasks.However,researchers have identified a potential vulnerability:carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modification to the input data.In this survey,we focus on(1)adversarial attack algorithms to generate adversarial examples,(2)adversarial defense techniques to secure DNNs against adversarial examples,and(3)important problems in the realm of adversarial examples beyond attack and defense,including the theoretical explanations,trade-off issues and benign attacks in adversarial examples.Additionally,we draw a brief comparison between recently published surveys on adversarial examples,and identify the future directions for the research of adversarial examples,such as the generalization of methods and the understanding of transferability,that might be solutions to the open problems in this field.
基金supported in part by the National Science Foundation of China(62373240,62273224,U24A20259).
文摘Dear Editor,The attacker is always going to intrude covertly networked control systems(NCSs)by dynamically changing false data injection attacks(FDIAs)strategy,while the defender try their best to resist attacks by designing defense strategy on the basis of identifying attack strategy,maintaining stable operation of NCSs.To solve this attack-defense game problem,this letter investigates optimal secure control of NCSs under FDIAs.First,for the alterations of energy caused by false data,a novel attack-defense game model is constructed,which considers the changes of energy caused by the actions of the defender and attacker in the forward and feedback channels.
基金supported in part by the National Natural Science Foundation of China under Grants No.62372087 and No.62072076the Research Fund of State Key Laboratory of Processors under Grant No.CLQ202310the CSC scholarship.
文摘Deep neural networks(DNNs)have found extensive applications in safety-critical artificial intelligence systems,such as autonomous driving and facial recognition systems.However,recent research has revealed their susceptibility to backdoors maliciously injected by adversaries.This vulnerability arises due to the intricate architecture and opacity of DNNs,resulting in numerous redundant neurons embedded within the models.Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs,thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications.This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them.Initially,we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs,highlighting the feasibility and practicality of generating backdoor attacks against DNNs.Subsequently,we provide an overview of notable works encompassing various attack and defense strategies,facilitating a comparative analysis of their approaches.Through these discussions,we offer constructive insights aimed at refining these techniques.Finally,we extend our research perspective to the domain of large language models(LLMs)and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs.Through a systematic review of existing studies on backdoor vulnerabilities in LLMs,we identify critical open challenges in this field and propose actionable directions for future research.
基金Ant Financial,Zhejiang University Financial Technology Research Center.
文摘With the rapid developments of artificial intelligence(AI)and deep learning(DL)techniques,it is critical to ensure the security and robustness of the deployed algorithms.Recently,the security vulnerability of DL algorithms to adversarial samples has been widely recognized.The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans.Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality.Hence,adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years.In this paper,we first introduce the theoretical foundations,algorithms,and applications of adversarial attack techniques.We then describe a few research efforts on the defense techniques,which cover the broad frontier in the field.Several open problems and challenges are subsequently discussed,which we hope will provoke further research efforts in this critical area.
基金the National Nat-ural Science Foundation of China under Grant No.62072406,No.U19B2016,No.U20B2038 and No.61871398the Natural Science Foundation of Zhejiang Province under Grant No.LY19F020025the Major Special Funding for“Science and Tech-nology Innovation 2025”in Ningbo under Grant No.2018B10063.
文摘The spectrum sensing model based on deep learning has achieved satisfying detection per-formence,but its robustness has not been verified.In this paper,we propose primary user adversarial attack(PUAA)to verify the robustness of the deep learning based spectrum sensing model.PUAA adds a care-fully manufactured perturbation to the benign primary user signal,which greatly reduces the probability of detection of the spectrum sensing model.We design three PUAA methods in black box scenario.In or-der to defend against PUAA,we propose a defense method based on autoencoder named DeepFilter.We apply the long short-term memory network and the convolutional neural network together to DeepFilter,so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense.Extensive experiments are conducted to eval-uate the attack effect of the designed PUAA method and the defense effect of DeepFilter.Results show that the three PUAA methods designed can greatly reduce the probability of detection of the deep learning-based spectrum sensing model.In addition,the experimen-tal results of the defense effect of DeepFilter show that DeepFilter can effectively defend against PUAA with-out affecting the detection performance of the model.
基金supported by National Science Foundation(NSF),USA(Nos.IIS-1845081 and CNS-1815636).
文摘Deep neural networks(DNN)have achieved unprecedented success in numerous machine learning tasks in various domains.However,the existence of adversarial examples raises our concerns in adopting deep learning to safety-critical applications.As a result,we have witnessed increasing interests in studying attack and defense mechanisms for DNN models on different data types,such as images,graphs and text.Thus,it is necessary to provide a systematic and comprehensive overview of the main threats of attacks and the success of corresponding countermeasures.In this survey,we review the state of the art algorithms for generating adversarial examples and the countermeasures against adversarial examples,for three most popular data types,including images,graphs and text.
基金supported by the National Natural Science Foundation of China(61771154)the Fundamental Research Funds for the Central Universities(3072022CF0601)supported by Key Laboratory of Advanced Marine Communication and Information Technology,Ministry of Industry and Information Technology,Harbin Engineering University,Harbin,China.
文摘As modern communication technology advances apace,the digital communication signals identification plays an important role in cognitive radio networks,the communication monitoring and management systems.AI has become a promising solution to this problem due to its powerful modeling capability,which has become a consensus in academia and industry.However,because of the data-dependence and inexplicability of AI models and the openness of electromagnetic space,the physical layer digital communication signals identification model is threatened by adversarial attacks.Adversarial examples pose a common threat to AI models,where well-designed and slight perturbations added to input data can cause wrong results.Therefore,the security of AI models for the digital communication signals identification is the premise of its efficient and credible applications.In this paper,we first launch adversarial attacks on the end-to-end AI model for automatic modulation classifi-cation,and then we explain and present three defense mechanisms based on the adversarial principle.Next we present more detailed adversarial indicators to evaluate attack and defense behavior.Finally,a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model,which should be paid more attention in future research.
基金supported by the Science and Technology Innovation 2030-Key Project of “New Generation Artificial Intelligence”,China(No.2020AAA0108200)the National Natural Science Foundation of China(Nos.61873011,61922008,61973013 and 61803014)+3 种基金the Defense Industrial Technology Development Program,China(No.JCKY2019601C106)the Innovation Zone Project,China(No.18-163-00-TS-001-00134)the Foundation Strengthening Program Technology Field Fund,China(No.2019-JCJQ-JJ-243)the Fund from Key Laboratory of Dependable Service Computing in Cyber Physical Society,China(No.CPSDSC202001)。
文摘This paper is concerned with a scenario of multiple attackers trying to intercept a target with active defense.Three types of agents are considered in the guidance:The multiple attackers,the target and the defender,where the attackers aim to pursuit the target from different directions and evade from the defender simultaneously.The guidance engagement is formulated in the framework of a zero-sum two-person differential game between the two opposing teams,such that the measurements on the maneuver of the target or estimations on the defending strategy of the defender can be absent.Cooperation of the attackers resides in two aspects:redundant interception under the threat of the defender and the relative intercept geometry with the target.The miss distances,the relative intercept angle errors and the costs of the agents are combined into a single performance index of the game.Such formulation enables a unitary approach to the design of guidance laws for the agents.To minimize the control efforts and miss distances for the attackers,an optimization method is proposed to find the best anticipated miss distances to the defender under the constraint that the defender is endowed with a capture radius.Numerical simulations with two cases are conducted to illustrate the effectiveness of the proposed cooperative guidance law.
基金Taif University,Taif,Saudi Arabia through Taif University Researchers Supporting Project Number(TURSP-2020/115).
文摘Neural networks play a significant role in the field of image classification.When an input image is modified by adversarial attacks,the changes are imperceptible to the human eye,but it still leads to misclassification of the images.Researchers have demonstrated these attacks to make production self-driving cars misclassify StopRoad signs as 45 Miles Per Hour(MPH)road signs and a turtle being misclassified as AK47.Three primary types of defense approaches exist which can safeguard against such attacks i.e.,Gradient Masking,Robust Optimization,and Adversarial Example Detection.Very few approaches use Generative Adversarial Networks(GAN)for Defense against Adversarial Attacks.In this paper,we create a new approach to defend against adversarial attacks,dubbed Chained Dual-Generative Adversarial Network(CD-GAN)that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling using GANs.CD-GAN is created using two GANs,i.e.,CDGAN’s Sub-ResolutionGANandCDGAN’s Super-ResolutionGAN.The first is CDGAN’s Sub-Resolution GAN which takes the original resolution input image and oversamples it to generate a lower resolution neutralized image.The second is CDGAN’s Super-Resolution GAN which takes the output of the CDGAN’s Sub-Resolution and undersamples,it to generate the higher resolution image which removes any remaining perturbations.Chained Dual GAN is formed by chaining these two GANs together.Both of these GANs are trained independently.CDGAN’s Sub-Resolution GAN is trained using higher resolution adversarial images as inputs and lower resolution neutralized images as output image examples.Hence,this GAN downscales the image while removing adversarial attack noise.CDGAN’s Super-Resolution GAN is trained using lower resolution adversarial images as inputs and higher resolution neutralized images as output images.Because of this,it acts as an Upscaling GAN while removing the adversarial attak noise.Furthermore,CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort,and 2542 CMC,2023,vol.74,no.2 can defend any classifier model against adversarial attack.In this way,it is a Generalized Defense against adversarial attacks,capable of defending any classifier model against any attacks.This enables the user to directly integrate CD-GANwith an existing production deployed classifier smoothly.CD-GAN iteratively removes the adversarial noise using a multi-step approach in a modular approach.It performs comparably to the state of the arts with mean accuracy of 33.67 while using minimal compute resources in training.
基金the National Natural Science Foundation of China(41871376,41971416,41631072).
文摘For better reflecting the interactive defense between targets in practical combat scenarios,the basic weapon-target allocation(WTA)framework needs to be improved.A multi-stage attack WTA method is proposed.First,a defense area analysis is presented according to the targets’positions and the radii of the defense areas to analyze the interactive coverage and protection between targets’defense areas.Second,with the coverage status and coverage layer number,a multi-stage attack planning method is proposed and the multi-stage attack objective function model is established.Simulation is conducted with interactive defense combat scenarios,the traditional WTA method and the multi-stage WTA method are compared,and the objective function model is validated with the Monte-Carlo method.The results suggest that if the combat scenario involves interactive coverage of targets’defense areas,it is imperative to analyze the defense areas and apply the multi-stage attack method to weakening the target defense progressively for better combat effectiveness.
文摘These days,deep learning and computer vision are much-growing fields in this modern world of information technology.Deep learning algorithms and computer vision have achieved great success in different applications like image classification,speech recognition,self-driving vehicles,disease diagnostics,and many more.Despite success in various applications,it is found that these learning algorithms face severe threats due to adversarial attacks.Adversarial examples are inputs like images in the computer vision field,which are intentionally slightly changed or perturbed.These changes are humanly imperceptible.But are misclassified by a model with high probability and severely affects the performance or prediction.In this scenario,we present a deep image restoration model that restores adversarial examples so that the target model is classified correctly again.We proved that our defense method against adversarial attacks based on a deep image restoration model is simple and state-of-the-art by providing strong experimental results evidence.We have used MNIST and CIFAR10 datasets for experiments and analysis of our defense method.In the end,we have compared our method to other state-ofthe-art defense methods and proved that our results are better than other rival methods.
基金supported jointly by the earmarked fund for CARS-10-GW2the key research and development program of Hainan Province(Grant No.ZDYF2020226)+1 种基金Collaborative innovation center of Nanfan and high-efficiency tropical agriculture,Hainan University(Grant No.XTCX2022NYC21)funding of Hainan University[Grant No.KYQD(ZR)22123]。
文摘Heat stress hinders the growth and productivity of sweetpotato plants,predominantly through oxidative damage to cellular membranes.Therefore,the development of efficient approaches for mitigating heat-related impairments is essential for the long-term production of sweetpotatoes.Melatonin has been recognised for its capacity to assist plants in dealing with abiotic stress conditions.This research aimed to investigate how different doses of exogenous melatonin influence heat damage in sweetpotato plants.Heat stress drastically affected shoot and root fresh weight by 31.8 and 44.5%,respectively.This reduction resulted in oxidative stress characterised by increased formation of hydrogen peroxide(H_(2)O_(2))by 804.4%,superoxide ion(O_(2)^(·-))by 211.5%and malondialdehyde(MDA)by 234.2%.Heat stress also reduced chlorophyll concentration,photosystemⅡefficiency(F_v/F_m)by 15.3%and gaseous exchange.However,pre-treatment with 100μmol L^(-1)melatonin increased growth and reduced oxidative damage to sweetpotato plants under heat stress.In particular,melatonin decreased H_(2)O_(2),O_(2)^(·-)and MDA by 64.8%,42.7%and 38.2%,respectively.Melatonin also mitigated the decline in chlorophyll levels and improved stomatal traits,gaseous exchange and F_(v)/F_(m)(13%).Results suggested that the favorable outcomes of melatonin treatment can be associated with elevated antioxidant enzyme activity and an increase in non-enzymatic antioxidants and osmo-protectants.Overall,these findings indicate that exogenous melatonin can improve heat stress tolerance in sweetpotatoes.This stu dy will assist re searchers in further investigating how melatonin makes sweetpotatoes more resistant to heat stress.