1 Introduction
Constrained Reinforcement Learning (CRL), modeled as a Constrained Markov Decision Process (CMDP) [1,2], is commonly used to address applications with security restrictions. Previous works [3] primarily focused on the single-constraint issue, overlooking the more common multi-constraint setting, which involves extensive computations and combinatorial optimization of multiple Lagrange multipliers.
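For readers unfamiliar with the setting, the multi-constraint difficulty mentioned above can be made concrete with the standard CMDP Lagrangian. This is a textbook formulation, not taken from this paper:

```latex
% Maximize the reward return J_r subject to k cost returns J_{c_i}
% staying under thresholds d_i:
\max_{\pi} \; J_r(\pi)
\quad \text{s.t.} \quad
J_{c_i}(\pi) \le d_i, \qquad i = 1, \dots, k
% Lagrangian relaxation introduces one multiplier per constraint; the
% multi-constraint burden comes from the joint min-max over all \lambda_i:
\min_{\boldsymbol{\lambda} \ge 0} \; \max_{\pi} \;
L(\pi, \boldsymbol{\lambda})
= J_r(\pi) - \sum_{i=1}^{k} \lambda_i \bigl( J_{c_i}(\pi) - d_i \bigr)
```

With a single constraint, the inner problem reduces to tuning one scalar multiplier; with k constraints, the multipliers interact and must be optimized jointly, which is the combinatorial burden the abstract refers to.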
Lunar core samples are the key materials for accurately assessing and developing lunar resources. However, the difficulty of maintaining borehole stability in the lunar coring process limits the depth of lunar coring. Here, a strategy of using a reinforcement fluid that undergoes a phase transition spontaneously in a vacuum environment to reinforce the borehole is proposed. Based on this strategy, a reinforcement liquid suitable for a wide temperature range and a high-vacuum environment was developed. A feasibility study on reinforcing the borehole with the reinforcement liquid was carried out, and it was found that the cohesion of the simulated lunar soil can be increased from 2 to 800 kPa after using the reinforcement liquid. Further, a series of coring experiments was conducted using a self-developed high-vacuum (vacuum degree of 5 Pa) and low-temperature (between −30 and 50 °C) simulation platform. It is confirmed that the high-boiling-point reinforcement liquid pre-placed in the drill pipe can be released spontaneously during the drilling process and finally complete the reinforcement of the borehole. The reinforcement effect of the borehole is better when the solute concentration is between 0.15 and 0.25 g/mL.
This paper investigates the challenges associated with Unmanned Aerial Vehicle (UAV) collaborative search and target tracking in dynamic and unknown environments characterized by a limited field of view. The primary objective is to explore the unknown environments to locate and track targets effectively. To address this problem, we propose a novel Multi-Agent Reinforcement Learning (MARL) method based on a Graph Neural Network (GNN). Firstly, a method is introduced for encoding continuous-space multi-UAV problem data into spatial graphs that establish essential relationships among agents, obstacles, and targets. Secondly, a Graph AttenTion network (GAT) model is presented, which focuses exclusively on adjacent nodes, learns attention weights adaptively, and allows agents to better process information in dynamic environments. Reward functions are specifically designed to tackle exploration challenges in environments with sparse rewards. A framework that integrates centralized training and distributed execution is introduced to facilitate model improvement. Simulation results show that the proposed method outperforms the existing MARL method in search rate and tracking performance with fewer collisions. The experiments show that the proposed method can be extended to applications with a larger number of agents, which provides a potential solution to the challenging problem of multi-UAV autonomous tracking in dynamic unknown environments.
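As a concrete illustration of the neighbor-restricted attention described above, the sketch below implements a minimal single-head graph attention layer masked so each node attends only to adjacent nodes. The layer sizes, adjacency encoding, and toy graph are illustrative assumptions, not the authors' implementation:

```python
# Minimal single-head GAT layer restricted to adjacent nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacentOnlyGAT(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency (1 = neighbor).
        h = self.proj(x)                                      # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),
             h.unsqueeze(0).expand(n, n, -1)], dim=-1)        # (N, N, 2*out_dim)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)   # (N, N) raw scores
        scores = scores.masked_fill(adj == 0, float("-inf"))  # neighbors only
        alpha = torch.softmax(scores, dim=-1)                 # adaptive weights
        return alpha @ h                                      # aggregate neighbors

# Toy usage: 3 UAVs where UAV 0 can observe UAVs 1 and 2.
adj = torch.tensor([[1., 1., 1.], [1., 1., 0.], [1., 0., 1.]])
out = AdjacentOnlyGAT(4, 8)(torch.randn(3, 4), adj)
```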
To solve the problems of poor security guarantees and insufficient training efficiency in conventional reinforcement learning methods for decision-making, this study proposes a hybrid framework that combines deep reinforcement learning with rule-based decision-making methods. A risk assessment model for lane-change maneuvers that considers uncertain predictions of surrounding vehicles is established as a safety filter to improve learning efficiency while correcting dangerous actions for safety enhancement. On this basis, a Risk-fused DDQN is constructed utilizing the model-based risk assessment and supervision mechanism. The proposed reinforcement learning algorithm sets up a separate experience buffer for dangerous trials and punishes such actions, which is shown to improve the sampling efficiency and training outcomes. Compared with conventional DDQN methods, the proposed algorithm improves the convergence value of the cumulative reward by 7.6% and 2.2% in the two constructed scenarios in the simulation study, and reduces the number of training episodes by 52.2% and 66.8%, respectively. The success rate of lane change is improved by 57.3%, while the time headway is increased by at least 16.5% in real-vehicle tests, which confirms the higher training efficiency, scenario adaptability, and security of the proposed Risk-fused DDQN.
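The separate-buffer idea lends itself to a small sketch: transitions flagged as dangerous by the safety filter are stored apart, penalized, and mixed into each minibatch. The buffer sizes, penalty value, and mixing ratio below are assumptions for illustration, not the paper's settings:

```python
# Dual replay buffer: dangerous trials are stored separately and punished.
import random
from collections import deque

class RiskAwareReplay:
    def __init__(self, capacity=50_000, danger_penalty=-10.0, danger_frac=0.25):
        self.safe = deque(maxlen=capacity)
        self.danger = deque(maxlen=capacity)
        self.danger_penalty = danger_penalty    # assumed penalty magnitude
        self.danger_frac = danger_frac          # assumed minibatch mixing ratio

    def push(self, s, a, r, s_next, done, dangerous: bool):
        if dangerous:
            # Punish the filtered action so the policy learns to avoid it.
            self.danger.append((s, a, r + self.danger_penalty, s_next, done))
        else:
            self.safe.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        # Each minibatch mixes dangerous and safe transitions.
        k = min(int(batch_size * self.danger_frac), len(self.danger))
        batch = random.sample(self.danger, k)
        batch += random.sample(self.safe, min(batch_size - k, len(self.safe)))
        return batch
```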
Granite residual soil (GRS) is a type of weathering soil that can decompose upon contact with water, potentially causing geological hazards. In this study, cement, an alkaline solution, and glass fiber were used to reinforce GRS. The effects of cement content and the SiO₂/Na₂O ratio of the alkaline solution on the static and dynamic strengths of GRS were discussed. Microscopically, the reinforcement mechanism and coupling effect were examined using X-ray diffraction (XRD), micro-computed tomography (micro-CT), and scanning electron microscopy (SEM). The results indicated that the addition of 2% cement and an alkaline solution with an SiO₂/Na₂O ratio of 0.5 led to the densest matrix, lowest porosity, and highest static compressive strength, which was 4994 kPa with a dynamic impact resistance of 75.4 kN after adding glass fiber. The compressive strength and dynamic impact resistance were a result of the coupling effect of cement hydration, a pozzolanic reaction of clay minerals in the GRS, and the alkali activation of clay minerals. Excessive cement addition or an excessively high SiO₂/Na₂O ratio in the alkaline solution can have negative effects, such as the destruction of C-(A)-S-H gels by the alkaline solution and hindering the production of N-A-S-H gels. This can result in damage to the matrix of reinforced GRS, leading to a decrease in both static and dynamic strengths. This study suggests that further research is required to gain a more precise understanding of the effects of this mixture in terms of reducing the carbon footprint and optimizing its properties. The findings indicate that cement and alkaline solution are appropriate for GRS and that the reinforced GRS can be used for high-strength foundation and embankment construction. The study provides an analysis of strategies for mitigating and managing GRS slope failures, as well as enhancing roadbed performance.
Exo-atmospheric vehicles are constrained by limited maneuverability, which leads to a contradiction between evasive maneuvering and precision strike. To address the problem of Integrated Evasion and Impact (IEI) decision-making under multi-constraint conditions, a hierarchical intelligent decision-making method based on Deep Reinforcement Learning (DRL) was proposed. First, an intelligent decision-making framework of “DRL evasion decision” + “impact prediction guidance decision” was established: it takes the impact-point deviation correction ability as the constraint and the maximum miss distance as the objective, and effectively solves the problem of poor decision-making performance caused by the large IEI decision space. Second, to solve the sparse-reward problem faced by evasion decision-making, a hierarchical decision-making method consisting of a maneuver timing decision and a maneuver duration decision was proposed, and the corresponding Markov Decision Process (MDP) was designed. A detailed simulation experiment was designed to analyze the advantages and computational complexity of the proposed method. Simulation results show that the proposed model has good performance and low computational resource requirements. The minimum miss distance is 21.3 m under the condition of guaranteeing the impact-point accuracy, and the single decision-making time is 4.086 ms on an STM32F407 single-chip microcomputer, which has engineering application value.
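A minimal sketch of the two-level structure described above, with an upper policy deciding when to maneuver and a lower policy deciding for how long. The threshold, duration grid, and state fields are placeholder assumptions, not the paper's design:

```python
# Two-level maneuver decision: timing policy, then duration policy.
import random

DURATIONS = [1.0, 2.0, 4.0]   # candidate maneuver durations in seconds (assumed)

def timing_policy(state) -> bool:
    # Placeholder for the learned timing decision (maneuver now or wait).
    return state["time_to_intercept"] < 8.0

def duration_policy(state) -> float:
    # Placeholder for the learned duration decision over a discrete grid.
    return random.choice(DURATIONS)

def decide(state):
    if not timing_policy(state):
        return ("coast", 0.0)   # keep correcting the impact point
    return ("evade", duration_policy(state))

print(decide({"time_to_intercept": 5.2}))
```

Splitting the decision this way shrinks each sub-policy's action space, which is one standard remedy for the sparse-reward, large-decision-space issue the abstract raises.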
Grouting has been the most effective approach to mitigate water inrush disasters in underground engineering due to its ability to plug groundwater and enhance rock strength. Nevertheless, there is a lack of potent numerical tools for assessing grouting effectiveness in water-rich fractured strata. In this study, hydro-mechanical coupled discontinuous deformation analysis (HM-DDA) is extended for the first time to simulate the grouting process in a water-rich discrete fracture network (DFN), including slurry migration, fracture dilation, water plugging in a seepage field, and joint reinforcement after coagulation. To validate the capabilities of the developed method, several numerical examples are conducted incorporating a Newtonian fluid and a Bingham slurry. The simulation results closely align with the analytical solutions. Additionally, a set of compression tests is conducted on fresh and grouted rock specimens to verify the reinforcement method and calibrate rational properties of the reinforced joints. An engineering-scale model based on a real water inrush case of the Yonglian tunnel in a water-rich fractured zone has been established. The model demonstrates the effectiveness of grouting reinforcement in mitigating water inrush disasters. The results indicate that increased grouting pressure greatly affects the regulation of water outflow from the tunnel face and the prevention of rock detachment at the face after excavation.
Cooperative multi-agent reinforcement learning (MARL) is a key technology for enabling cooperation in complex multi-agent systems. It has achieved remarkable progress in areas such as gaming, autonomous driving, and multi-robot control. Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope. In multi-task scenarios, cooperative MARL algorithms need to address three types of multi-task problems: reward-related multi-task problems, arising from different reward functions; multi-domain multi-task problems, caused by differences in state and action spaces and in state transition functions; and scalability-related multi-task problems, resulting from dynamic variation in the number of agents. Most existing studies focus on scalability-related multi-task problems. However, with the increasing integration between large language models (LLMs) and multi-agent systems, a growing number of LLM-based multi-agent systems have emerged, enabling more complex multi-task cooperation. This paper provides a comprehensive review of the latest advances in this field. By combining multi-task reinforcement learning with cooperative MARL, we categorize and analyze the three major types of multi-task problems under multi-agent settings, offering more fine-grained classifications and summarizing key insights for each. In addition, we summarize commonly used benchmarks and discuss future directions of research in this area, which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and a dynamic network topology, which have brought many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which guarantees that services with high assurance demands are prioritized under limited satellite resources, while considering the load-balancing performance of the satellite network for services with low assurance demands to ensure full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
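One plausible reading of the auxiliary path search is to warm-start the routing agent from a classical shortest-path result. The sketch below seeds a Q-table along a Dijkstra path; the graph encoding and seeding bonus are assumptions, not the paper's algorithm:

```python
# Warm-start a routing Q-table from an auxiliary shortest-path search.
import heapq
from collections import defaultdict

def dijkstra(graph, src):
    # graph: {node: [(neighbor, cost), ...]}; returns distances and predecessors.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

def seed_q_table(graph, src, dst, bonus=1.0):
    q = defaultdict(float)             # q[(node, next_hop)]
    _, prev = dijkstra(graph, src)
    node = dst
    while node in prev:                # walk the auxiliary path back to src
        q[(prev[node], node)] = bonus  # bias the agent toward known-good hops
        node = prev[node]
    return q

# Toy usage: 3 satellite nodes with inter-satellite link costs.
graph = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.0)], 2: []}
q0 = seed_q_table(graph, src=0, dst=2)
```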
Small modular reactors (SMRs) belong to the research forefront of nuclear reactor technology. Nowadays, the advancement of intelligent control technologies paves a new way to the design and construction of unmanned SMRs. The autonomous control process of an SMR can be divided into three stages, namely state diagnosis, autonomous decision-making, and coordinated control. In this paper, the autonomous state recognition and task planning of unmanned SMRs are investigated. An operating-condition recognition method based on the knowledge base of SMR operation is proposed using artificial neural network (ANN) technology, which constructs a basis for the state judgment of intelligent reactor control path planning. An improved reinforcement learning path planning algorithm is utilized to implement the path transfer decision-making. This algorithm performs condition transitions with minimal cost under specified modes. In summary, the full-range control-path intelligent decision-planning technology of SMRs is realized, thus providing a theoretical basis for the design and construction of unmanned SMRs in the future.
This review provides a comprehensive overview of natural rubber (NR) composites, focusing on their properties, compounding aspects, and renewable practices involving natural fibre reinforcement. The properties of NR are influenced by the compounding process, which incorporates ingredients such as elastomers, vulcanizing agents, accelerators, activators, and fillers like carbon black and silica. While effective in enhancing properties, these fillers lack biodegradability, prompting the exploration of sustainable alternatives. The potential of natural fibres as renewable reinforcements in NR composites is thoroughly covered in this review, highlighting both their advantages, such as improved sustainability, and the challenges they present, such as compatibility with the rubber matrix. Surface treatment methods, including alkali and silane treatments, are also discussed as solutions to improve fibre-matrix adhesion and mitigate these challenges. Additionally, the review highlights the potential of oil palm empty fruit bunch (EFB) fibres as a natural fibre reinforcement. The abundance of EFB fibres and their alignment with sustainable practices make them promising substitutes for conventional fillers, contributing valuable knowledge and supporting the broader move towards renewable reinforcement to improve sustainability without compromising the key properties of rubber composites.
AIM: To investigate the refractive and histological changes in guinea pig eyes after posterior scleral reinforcement with scleral allografts. METHODS: Four-week-old guinea pigs were implanted with scleral allografts, and the changes in refraction, corneal curvature, and axial length were monitored for 51 days. The effects of methylprednisolone (MPS) on refraction parameters were also evaluated, and the microstructure and ultra-microstructure of the eyes were observed on days 9 and 51 after the operation. Repeated-measures analysis of variance and one-way analysis of variance were used. RESULTS: The refraction of the implanted eye decreased after the operation, and the refraction change of the 3 mm scleral allograft group was significantly different from the control group (P=0.005) and the sham surgical group (P=0.004). After the application of the MPS solution, the reduction in refraction was statistically suppressed (P=0.008). Inflammatory encapsulation appeared 9 days after surgery. On day 51 after the operation, the loose implanted materials were absorbed, while the adherent implanted materials in the MPS group were still tightly attached to the recipient's eyeball. CONCLUSION: After implantation of scleral allografts, the refraction of guinea pig eyes fluctuated from a decrease to an increase. The outcome of the scleral allografts is affected by the implantation method and the inflammatory response. The stability of the material can be improved by MPS.
Unmanned Aerial Vehicles (UAVs) have become integral components in smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of relay nodes by using an adaptive reward function that takes into account energy consumption, delay, and link quality. Additionally, a Kalman filter is integrated to predict UAV mobility, improving the stability of communication links under dynamic network conditions. Simulation experiments were conducted using realistic scenarios, varying the number of UAVs to assess scalability. An analysis was conducted on key performance metrics, including the packet delivery ratio, end-to-end delay, and total energy consumption. The results demonstrate that the proposed approach significantly improves the packet delivery ratio by 12%–15% and reduces delay by up to 25.5% when compared to conventional GEO and QGEO protocols. However, this improvement comes at the cost of higher energy consumption due to additional computations and control overhead. Despite this trade-off, the proposed solution ensures reliable and efficient communication, making it well-suited for large-scale UAV networks operating in complex urban environments.
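The adaptive reward can be sketched as a weighted blend of normalized link-quality, delay, and energy terms feeding a standard tabular Q-learning update. The weights, normalization ranges, and field names below are assumptions, not the protocol's parameters:

```python
# Composite relay-selection reward plus a standard Q-learning update.
def relay_reward(link_quality, delay_ms, energy_mj,
                 w_q=0.5, w_d=0.3, w_e=0.2,
                 max_delay_ms=100.0, max_energy_mj=50.0):
    """Higher is better; each term is normalized to [0, 1]."""
    q = max(0.0, min(1.0, link_quality))           # e.g., RSSI-derived quality
    d = 1.0 - min(delay_ms / max_delay_ms, 1.0)    # penalize slow links
    e = 1.0 - min(energy_mj / max_energy_mj, 1.0)  # penalize costly hops
    return w_q * q + w_d * d + w_e * e

def q_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Tabular Q-learning toward the best next relay choice.
    best_next = max(q.get((s_next, b), 0.0) for b in actions)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))

q_table = {}
r = relay_reward(link_quality=0.8, delay_ms=20.0, energy_mj=5.0)
q_update(q_table, s="n1", a="n2", r=r, s_next="n2", actions=["n3", "n4"])
```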
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
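A minimal sketch of deadline-aware dynamic priority assignment in the spirit of innovation (1): a learned score shifts each transaction's deadline slack, and a heap orders the ready queue. The feature set and blending rule are assumptions, not the framework's exact mechanism:

```python
# Dynamic priority: deadline slack adjusted by a learned urgency score.
import heapq
import time

def priority(task, learned_score: float, now: float) -> float:
    slack = task["deadline"] - now   # seconds remaining until the deadline
    # Lower value = scheduled sooner; the learned score nudges the ordering.
    return slack - learned_score

ready = []
now = time.time()
for task, score in [({"id": 1, "deadline": now + 2.0}, 0.4),
                    ({"id": 2, "deadline": now + 0.5}, 0.1)]:
    heapq.heappush(ready, (priority(task, score, now), task["id"]))

print(heapq.heappop(ready))   # task 2 runs first: tighter deadline
```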
In multiple Unmanned Aerial Vehicle (UAV) systems, achieving efficient navigation is essential for executing complex tasks and enhancing autonomy. Traditional navigation methods depend on predefined control strategies and trajectory planning and often perform poorly in complex environments. To improve the UAV-environment interaction efficiency, this study proposes a multi-UAV integrated navigation algorithm based on Deep Reinforcement Learning (DRL). This algorithm integrates the Inertial Navigation System (INS), Global Navigation Satellite System (GNSS), and Visual Navigation System (VNS) for comprehensive information fusion. Specifically, an improved multi-UAV integrated navigation algorithm called Information Fusion with Multi-Agent Deep Deterministic Policy Gradient (IF-MADDPG) was developed. This algorithm enables UAVs to learn collaboratively and optimize their flight trajectories in real time. Through simulations and experiments, test scenarios in GNSS-denied environments were constructed to evaluate the effectiveness of the algorithm. The experimental results demonstrate that the IF-MADDPG algorithm significantly enhances the collaborative navigation capabilities of multiple UAVs in formation maintenance and GNSS-denied environments. Additionally, it has advantages in terms of mission completion time. This study provides a novel approach for efficient collaboration in multi-UAV systems, which significantly improves the robustness and adaptability of navigation systems.
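The INS/GNSS/VNS fusion step can be illustrated with the standard inverse-variance weighting rule. The variances and the GNSS-denied fallback below are assumptions for illustration, not the IF-MADDPG design:

```python
# Inverse-variance fusion of multi-sensor position estimates.
import numpy as np

def fuse(estimates, variances):
    # estimates: list of (3,) position vectors; variances: per-sensor scalars.
    w = 1.0 / np.asarray(variances)   # less noisy sensors get more weight
    w = w / w.sum()
    return (w[:, None] * np.asarray(estimates)).sum(axis=0)

ins  = np.array([10.2, 5.1, 100.3])
gnss = np.array([10.0, 5.0, 100.0])   # unavailable in GNSS-denied mode
vns  = np.array([10.1, 5.2, 100.1])

# When GNSS is denied, fuse only INS and VNS, trusting VNS more (assumed).
print(fuse([ins, vns], [4.0, 1.0]))
```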
The rapid advancement of Industry 4.0 has revolutionized manufacturing, shifting production from centralized control to decentralized, intelligent systems. Smart factories are now expected to achieve high adaptability and resource efficiency, particularly in mass customization scenarios where production schedules must accommodate dynamic and personalized demands. To address the challenges of dynamic task allocation, uncertainty, and real-time decision-making, this paper proposes Pathfinder, a deep reinforcement learning-based scheduling framework. Pathfinder models scheduling data through three key matrices: execution time (the time required for a job to complete), completion time (the actual time at which a job is finished), and efficiency (the performance of executing a single job). By leveraging neural networks, Pathfinder extracts essential features from these matrices, enabling intelligent decision-making in dynamic production environments. Unlike traditional approaches with fixed scheduling rules, Pathfinder dynamically selects from ten diverse scheduling rules, optimizing decisions based on real-time environmental conditions. To further enhance scheduling efficiency, a specialized reward function is designed to support dynamic task allocation and real-time adjustments. This function helps Pathfinder continuously refine its scheduling strategy, improving machine utilization and minimizing job completion times. Through reinforcement learning, Pathfinder adapts to evolving production demands, ensuring robust performance in real-world applications. Experimental results demonstrate that Pathfinder outperforms traditional scheduling approaches, offering improved coordination and efficiency in smart factories. By integrating deep reinforcement learning, adaptable scheduling strategies, and an innovative reward function, Pathfinder provides an effective solution to the growing challenges of multi-robot job scheduling in mass customization environments.
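Rule selection as described can be sketched as a policy network scoring a fixed menu of dispatching rules, with the winner choosing the next job. Only two of the ten rules are shown, and the job fields and scores are assumptions:

```python
# Dispatch by selecting among a fixed menu of scheduling rules.
def shortest_processing_time(jobs):
    return min(jobs, key=lambda j: j["exec_time"])

def earliest_due_date(jobs):
    return min(jobs, key=lambda j: j["due"])

RULES = [shortest_processing_time, earliest_due_date]   # Pathfinder uses ten

def dispatch(jobs, rule_scores):
    # rule_scores: per-rule values, assumed to come from the policy network.
    best = max(range(len(RULES)), key=lambda i: rule_scores[i])
    return RULES[best](jobs)

jobs = [{"id": "A", "exec_time": 3, "due": 9},
        {"id": "B", "exec_time": 5, "due": 4}]
print(dispatch(jobs, [0.2, 0.8]))   # EDD scores higher -> job B runs next
```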
The knapsack problem is a classical combinatorial optimization problem widely encountered in areas such as logistics, resource allocation, and portfolio optimization. Traditional methods, including dynamic programming (DP) and greedy algorithms, have been effective in solving small problem instances but often struggle with scalability and efficiency as the problem size increases. DP, for instance, has pseudo-polynomial time complexity that grows with the knapsack capacity and can become computationally prohibitive for large problem instances. On the other hand, greedy algorithms offer faster solutions but may not always yield optimal results, especially when the problem involves complex constraints or large numbers of items. This paper introduces a novel reinforcement learning (RL) approach to solve the knapsack problem by enhancing the state representation within the learning environment. We propose a representation where item weights and volumes are expressed as ratios relative to the knapsack's capacity, and item values are normalized to represent their percentage of the total value across all items. This state modification leads to a 5% improvement in accuracy compared to state-of-the-art RL-based algorithms, while significantly reducing execution time. Our RL-based method outperforms DP by over 9000 times in terms of speed, making it highly scalable for larger problem instances. Furthermore, we improve the performance of the RL model by incorporating Noisy layers into the neural network architecture. The addition of Noisy layers enhances the exploration capabilities of the agent, resulting in an additional accuracy boost of 0.2%–0.5%. The results demonstrate that our approach not only outperforms existing RL techniques, such as the Transformer model, in terms of accuracy, but also provides a substantial improvement over DP in computational efficiency. This combination of enhanced accuracy and speed presents a promising solution for tackling large-scale optimization problems in real-world applications, where both precision and time are critical factors.
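The proposed state representation is easy to sketch; the field ordering and toy numbers below are illustrative assumptions:

```python
# Normalized knapsack state: capacity ratios plus value shares.
def encode_state(items, cap_weight, cap_volume):
    total_value = sum(value for _, _, value in items) or 1.0
    state = []
    for weight, volume, value in items:
        state.append((weight / cap_weight,    # ratio of weight capacity
                      volume / cap_volume,    # ratio of volume capacity
                      value / total_value))   # share of total value
    return state

items = [(2.0, 1.0, 30.0), (5.0, 3.0, 70.0)]  # (weight, volume, value)
print(encode_state(items, cap_weight=10.0, cap_volume=6.0))
# e.g. [(0.2, 0.17, 0.3), (0.5, 0.5, 0.7)]
```

Expressing every item relative to the capacity and the total value keeps the state in a fixed numeric range regardless of instance scale, which is the usual motivation for this kind of normalization.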
The high maneuverability of modern fighters in close air combat imposes significant cognitive demands on pilots, making rapid, accurate decision-making challenging. While reinforcement learning (RL) has shown promise in this domain, existing methods often lack strategic depth and generalization in complex, high-dimensional environments. To address these limitations, this paper proposes an optimized self-play method enhanced by advancements in fighter modeling, neural network design, and algorithmic frameworks. This study employs a six-degree-of-freedom (6-DOF) F-16 fighter model based on open-source aerodynamic data, featuring airborne equipment and a realistic visual simulation platform, unlike traditional 3-DOF models. To capture temporal dynamics, Long Short-Term Memory (LSTM) layers are integrated into the neural network, complemented by delayed input stacking. The RL environment incorporates expert strategies, curiosity-driven rewards, and curriculum learning to improve adaptability and strategic decision-making. Experimental results demonstrate that the proposed approach achieves a winning rate exceeding 90% against classical single-agent methods. Additionally, through enhanced 3D visual platforms, we conducted human-agent confrontation experiments, where the agent attained an average winning rate of over 75%. The agent's maneuver trajectories closely align with human pilot strategies, showcasing its potential in decision-making and pilot training applications. This study highlights the effectiveness of integrating advanced modeling and self-play techniques in developing robust air combat decision-making systems.
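A minimal sketch of the described backbone, with stacked (delayed) observations fed through an LSTM before the action head. Layer sizes, the stack depth, and the action count are assumptions, not the paper's architecture:

```python
# LSTM policy over stacked (delayed) observations.
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim=12, stack=4, hidden=128, n_actions=9):
        super().__init__()
        # Each input step concatenates the current and `stack-1` delayed frames.
        self.lstm = nn.LSTM(obs_dim * stack, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, stacked_obs, state=None):
        # stacked_obs: (batch, seq_len, obs_dim * stack)
        out, state = self.lstm(stacked_obs, state)
        return self.head(out[:, -1]), state   # action logits for latest step

obs = torch.randn(1, 16, 12 * 4)   # 16 decision steps, 4-frame stack
logits, _ = RecurrentPolicy()(obs)
```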
Current damage detection methods based on model updating and sensitivity Jacobian matrices show a low convergence ratio and low computational efficiency for online calculations. The aim of this paper is to construct a real-time automated damage detection method by developing a theory-assisted adaptive multi-agent twin delayed deep deterministic (TA2-MATD3) policy gradient algorithm. First, the theoretical framework of reinforcement-learning-driven damage detection is established. To address the disadvantages of the traditional multi-agent twin delayed deep deterministic (MATD3) method, a theory-assisted mechanism and an adaptive experience playback mechanism are introduced. Moreover, a historical residential house built in 1889 was taken as an example, using its 12-month structural health monitoring data. TA2-MATD3 was compared with existing damage detection methods in terms of the convergence ratio, online computing efficiency, and damage detection accuracy. The results show that the computational efficiency of TA2-MATD3 is approximately 117–160 times that of the traditional methods. The convergence ratio of damage detection on the training set is approximately 97%, and that on the test set is in the range of 86.2%–91.9%. In addition, the main apparent damages found in the field survey were identified by TA2-MATD3. The results indicate that the proposed method can significantly improve online computing efficiency and damage detection accuracy. This research can provide novel perspectives for the use of reinforcement learning methods to conduct damage detection in online structural health monitoring.
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability. It also models a strategic layout between attackers and defenders that takes uncertainty into account. DDQL is incorporated to optimize decision-making and aid in learning optimal defense policies at the edge, thereby ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data collection was performed from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a high detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
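The DDQL component follows the standard double deep Q-learning target, in which the online network selects the next action and the target network evaluates it. The sketch below shows that target computation; the network shapes and discount are assumptions:

```python
# Double deep Q-learning (DDQL) target: online net selects, target net evaluates.
import torch

def ddql_target(online, target, reward, next_state, done, gamma=0.99):
    # online, target: nn.Modules mapping states (B, obs) to Q-values (B, actions).
    # reward, done: (B,) tensors; done is 1.0 for terminal transitions.
    with torch.no_grad():
        next_a = online(next_state).argmax(dim=1, keepdim=True)   # select
        next_q = target(next_state).gather(1, next_a).squeeze(1)  # evaluate
        return reward + gamma * (1.0 - done) * next_q
```

Decoupling action selection from evaluation is what reduces the Q-value overestimation that plain DQN suffers from, which matters when the defense policy must trade off detection against resource consumption.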
基金supported by the Fundamental Research Funds for the Central Universities(No.2023JBZX011)the Aeronautical Science Foundation of China(No.202300010M5001).
文摘1 Introduction Constrained Reinforcement Learning(CRL),modeled as a Constrained Markov Decision Process(CMDP)[1,2],is commonly used to address applications with security restrictions.Previous works[3]primarily focused on the single-constraint issue,overlooking the more common multi-constraint setting which involves extensive computations and combinatorial optimization of multiple Lagrange multipliers.
基金National Natural Science Foundation of China (Nos.U2013603,51827901,and 52403383)Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2019ZT08G315)+1 种基金Institute of New Energy and Low-Carbon Technology (Sichuan University)State Key Laboratory of Coal Mine Disaster Dynamics and Control of Chongqing University。
文摘Lunar core samples are the key materials for accurately assessing and developing lunar resources.However,the difficulty of maintaining borehole stability in the lunar coring process limits the depth of lunar coring.Here,a strategy of using a reinforcement fluid that undergoes a phase transition spontaneously in a vacuum environment to reinforce the borehole is proposed.Based on this strategy,a reinforcement liquid suitable for a wide temperature range and a high vacuum environment was developed.A feasibility study on reinforcing the borehole with the reinforcement liquid was carried out,and it is found that the cohesion of the simulated lunar soil can be increased from 2 to 800 kPa after using the reinforcement liquid.Further,a series of coring experiments are conducted using a selfdeveloped high vacuum(vacuum degree of 5 Pa)and low-temperature(between-30 and 50℃)simulation platform.It is confirmed that the high-boiling-point reinforcement liquid pre-placed in the drill pipe can be released spontaneously during the drilling process and finally complete the reinforcement of the borehole.The reinforcement effect of the borehole is better when the solute concentration is between0.15 and 0.25 g/mL.
基金supported by the National Natural Science Foundation of China(Nos.12272104,U22B2013).
文摘This paper investigates the challenges associated with Unmanned Aerial Vehicle (UAV) collaborative search and target tracking in dynamic and unknown environments characterized by limited field of view. The primary objective is to explore the unknown environments to locate and track targets effectively. To address this problem, we propose a novel Multi-Agent Reinforcement Learning (MARL) method based on Graph Neural Network (GNN). Firstly, a method is introduced for encoding continuous-space multi-UAV problem data into spatial graphs which establish essential relationships among agents, obstacles, and targets. Secondly, a Graph AttenTion network (GAT) model is presented, which focuses exclusively on adjacent nodes, learns attention weights adaptively and allows agents to better process information in dynamic environments. Reward functions are specifically designed to tackle exploration challenges in environments with sparse rewards. By introducing a framework that integrates centralized training and distributed execution, the advancement of models is facilitated. Simulation results show that the proposed method outperforms the existing MARL method in search rate and tracking performance with less collisions. The experiments show that the proposed method can be extended to applications with a larger number of agents, which provides a potential solution to the challenging problem of multi-UAV autonomous tracking in dynamic unknown environments.
基金Supported by National Key Research and Development Program of China(Grant No.2022YFE0117100)National Science Foundation of China(Grant No.52102468,52325212)Fundamental Research Funds for the Central Universities。
文摘To solve problems of poor security guarantee and insufficient training efficiency in the conventional reinforcement learning methods for decision-making,this study proposes a hybrid framework to combine deep reinforcement learning with rule-based decision-making methods.A risk assessment model for lane-change maneuvers considering uncertain predictions of surrounding vehicles is established as a safety filter to improve learning efficiency while correcting dangerous actions for safety enhancement.On this basis,a Risk-fused DDQN is constructed utilizing the model-based risk assessment and supervision mechanism.The proposed reinforcement learning algorithm sets up a separate experience buffer for dangerous trials and punishes such actions,which is shown to improve the sampling efficiency and training outcomes.Compared with conventional DDQN methods,the proposed algorithm improves the convergence value of cumulated reward by 7.6%and 2.2%in the two constructed scenarios in the simulation study and reduces the number of training episodes by 52.2%and 66.8%respectively.The success rate of lane change is improved by 57.3%while the time headway is increased at least by 16.5%in real vehicle tests,which confirms the higher training efficiency,scenario adaptability,and security of the proposed Risk-fused DDQN.
基金the support provided by the National Natural Science Foundation of China(Grant Nos.52278336 and 42302032)Guangdong Basic and Applied Research Foundation(Grant Nos.2023B1515020061).
文摘Granite residual soil (GRS) is a type of weathering soil that can decompose upon contact with water, potentially causing geological hazards. In this study, cement, an alkaline solution, and glass fiber were used to reinforce GRS. The effects of cement content and SiO_(2)/Na2O ratio of the alkaline solution on the static and dynamic strengths of GRS were discussed. Microscopically, the reinforcement mechanism and coupling effect were examined using X-ray diffraction (XRD), micro-computed tomography (micro-CT), and scanning electron microscopy (SEM). The results indicated that the addition of 2% cement and an alkaline solution with an SiO_(2)/Na2O ratio of 0.5 led to the densest matrix, lowest porosity, and highest static compressive strength, which was 4994 kPa with a dynamic impact resistance of 75.4 kN after adding glass fiber. The compressive strength and dynamic impact resistance were a result of the coupling effect of cement hydration, a pozzolanic reaction of clay minerals in the GRS, and the alkali activation of clay minerals. Excessive cement addition or an excessively high SiO_(2)/Na2O ratio in the alkaline solution can have negative effects, such as the destruction of C-(A)-S-H gels by the alkaline solution and hindering the production of N-A-S-H gels. This can result in damage to the matrix of reinforced GRS, leading to a decrease in both static and dynamic strengths. This study suggests that further research is required to gain a more precise understanding of the effects of this mixture in terms of reducing our carbon footprint and optimizing its properties. The findings indicate that cement and alkaline solution are appropriate for GRS and that the reinforced GRS can be used for high-strength foundation and embankment construction. The study provides an analysis of strategies for mitigating and managing GRS slope failures, as well as enhancing roadbed performance.
基金co-supported by the National Natural Science Foundation of China(No.62103432)the China Postdoctoral Science Foundation(No.284881)the Young Talent fund of University Association for Science and Technology in Shaanxi,China(No.20210108)。
文摘Exo-atmospheric vehicles are constrained by limited maneuverability,which leads to the contradiction between evasive maneuver and precision strike.To address the problem of Integrated Evasion and Impact(IEI)decision under multi-constraint conditions,a hierarchical intelligent decision-making method based on Deep Reinforcement Learning(DRL)was proposed.First,an intelligent decision-making framework of“DRL evasion decision”+“impact prediction guidance decision”was established:it takes the impact point deviation correction ability as the constraint and the maximum miss distance as the objective,and effectively solves the problem of poor decisionmaking effect caused by the large IEI decision space.Second,to solve the sparse reward problem faced by evasion decision-making,a hierarchical decision-making method consisting of maneuver timing decision and maneuver duration decision was proposed,and the corresponding Markov Decision Process(MDP)was designed.A detailed simulation experiment was designed to analyze the advantages and computational complexity of the proposed method.Simulation results show that the proposed model has good performance and low computational resource requirement.The minimum miss distance is 21.3 m under the condition of guaranteeing the impact point accuracy,and the single decision-making time is 4.086 ms on an STM32F407 single-chip microcomputer,which has engineering application value.
基金supported by the China Scholarship Council(CSC,Grant No.202108050072)JSPS KAKENHI(Grant No.JP19KK0121)。
文摘Grouting has been the most effective approach to mitigate water inrush disasters in underground engineering due to its ability to plug groundwater and enhance rock strength.Nevertheless,there is a lack of potent numerical tools for assessing the grouting effectiveness in water-rich fractured strata.In this study,the hydro-mechanical coupled discontinuous deformation analysis(HM-DDA)is inaugurally extended to simulate the grouting process in a water-rich discrete fracture network(DFN),including the slurry migration,fracture dilation,water plugging in a seepage field,and joint reinforcement after coagulation.To validate the capabilities of the developed method,several numerical examples are conducted incorporating the Newtonian fluid and Bingham slurry.The simulation results closely align with the analytical solutions.Additionally,a set of compression tests is conducted on the fresh and grouted rock specimens to verify the reinforcement method and calibrate the rational properties of reinforced joints.An engineering-scale model based on a real water inrush case of the Yonglian tunnel in a water-rich fractured zone has been established.The model demonstrates the effectiveness of grouting reinforcement in mitigating water inrush disaster.The results indicate that increased grouting pressure greatly affects the regulation of water outflow from the tunnel face and the prevention of rock detachment face after excavation.
基金The National Natural Science Foundation of China(62136008,62293541)The Beijing Natural Science Foundation(4232056)The Beijing Nova Program(20240484514).
文摘Cooperative multi-agent reinforcement learning(MARL)is a key technology for enabling cooperation in complex multi-agent systems.It has achieved remarkable progress in areas such as gaming,autonomous driving,and multi-robot control.Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope.In multi-task scenarios,cooperative MARL algorithms need to address 3 types of multi-task problems:reward-related multi-task,arising from different reward functions;multi-domain multi-task,caused by differences in state and action spaces,state transition functions;and scalability-related multi-task,resulting from the dynamic variation in the number of agents.Most existing studies focus on scalability-related multitask problems.However,with the increasing integration between large language models(LLMs)and multi-agent systems,a growing number of LLM-based multi-agent systems have emerged,enabling more complex multi-task cooperation.This paper provides a comprehensive review of the latest advances in this field.By combining multi-task reinforcement learning with cooperative MARL,we categorize and analyze the 3 major types of multi-task problems under multi-agent settings,offering more fine-grained classifications and summarizing key insights for each.In addition,we summarize commonly used benchmarks and discuss future directions of research in this area,which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
基金National Key Research and Development Program(2021YFB2900604)。
文摘Low Earth orbit(LEO)satellite networks exhibit distinct characteristics,e.g.,limited resources of individual satellite nodes and dynamic network topology,which have brought many challenges for routing algorithms.To satisfy quality of service(QoS)requirements of various users,it is critical to research efficient routing strategies to fully utilize satellite resources.This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks,which guarantees high level assurance demand services to be prioritized under limited satellite resources while considering the load balancing performance of the satellite networks for low level assurance demand services to ensure the full and effective utilization of satellite resources.An auxiliary path search algorithm is proposed to accelerate the convergence of satellite routing algorithm.Simulation results show that the generated routing strategy can timely process and fully meet the QoS demands of high assurance services while effectively improving the load balancing performance of the link.
文摘Small modular reactor(SMR)belongs to the research forefront of nuclear reactor technology.Nowadays,advancement of intelligent control technologies paves a new way to the design and build of unmanned SMR.The autonomous control process of SMR can be divided into three stages,say,state diagnosis,autonomous decision-making and coordinated control.In this paper,the autonomous state recognition and task planning of unmanned SMR are investigated.An operating condition recognition method based on the knowledge base of SMR operation is proposed by using the artificial neural network(ANN)technology,which constructs a basis for the state judgment of intelligent reactor control path planning.An improved reinforcement learning path planning algorithm is utilized to implement the path transfer decision-makingThis algorithm performs condition transitions with minimal cost under specified modes.In summary,the full range control path intelligent decision-planning technology of SMR is realized,thus provides some theoretical basis for the design and build of unmanned SMR in the future.
基金funded under the Collaborative Research Initiative Grant Scheme(C-RIGS),grant number C-RIGS24-016-0022 from IIUM.
文摘This review provides a comprehensive overview of natural rubber(NR)composites,focusing on their properties,compounding aspects,and renewable practices involving natural fibre reinforcement.The properties of NR are influenced by the compounding process,which incorporates ingredients such as elastomers,vulcanizing agents,accelerators,activators,and fillers like carbon black and silica.While effective in enhancing properties,these fillers lack biodegradability,prompting the exploration of sustainable alternatives.The potential of natural fibres as renewable reinforcements in NR composites is thoroughly covered in this review,highlighting both their advan-tages,such as improved sustainability,and the challenges they present,such as compatibility with the rubber matrix.Surface treatment methods,including alkali and silane treatments,are also discussed as solutions to improve fibre-matrix adhesion and mitigate these challenges.Additionally,the review highlights the potential of oil palm empty fruit bunch(EFB)fibres as a natural fibre reinforcement.The abundance of EFB fibres and their alignment with sustainable practices make them promising substitutes for conventional fillers,contributing to valuable knowledge and supporting the broader move towards renewable reinforcement to improve sustain-ability without compromising the key properties of rubber composites.
基金Supported by the Scientific Research Project of Shanghai Municipal Health Commission(No.202140416)the Clinical Research Boosting Program of the Ninth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine(No.JYLJ202117).
文摘AIM:To investigate the refractive and the histological changes in guinea pig eyes after posterior scleral reinforcement with scleral allografts.METHODS:Four-week-old guinea pigs were implanted with scleral allografts,and the changes of refraction,corneal curvature and axis length were monitored for 51d.The effects of methylprednisolone(MPS)on refraction parameters were also evaluated.And the microstructure and ultra-microstructure of eyes were observed on the 9d and 51d after operation.Repeated-measures analysis of variance and one-way analysis of variance were used.RESULTS:The refraction outcome of the implanted eye decreased after operation,and the refraction change of the 3 mm scleral allografts group was significantly different with control group(P=0.005)and the sham surgical group(P=0.004).After the application of MPS solution,the reduction of refraction outcome was statistically suppressed(P=0.008).The inflammatory encapsulation appeared 9d after surgery.On 51d after operation,the loose implanted materials were absorbed,while the adherent implanted materials with MPS group were still tightly attached to the recipient’s eyeball.CONCLUSION:After implantation of scleral allografts,the refraction of guinea pig eyes fluctuated from a decrease to an increase.The outcome of the scleral allografts is affected by implantation methods and the inflammatory response.Stability of the material can be improved by MPS.
基金funded by Hung Yen University of Technology and Education under grand number UTEHY.L.2025.62.
文摘Unmanned Aerial Vehicles(UAVs)have become integral components in smart city infrastructures,supporting applications such as emergency response,surveillance,and data collection.However,the high mobility and dynamic topology of Flying Ad Hoc Networks(FANETs)present significant challenges for maintaining reliable,low-latency communication.Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable.To overcome these limitations,this paper proposes an improved routing protocol based on reinforcement learning.This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware.The proposed method optimizes the selection of relay nodes by using an adaptive reward function that takes into account energy consumption,delay,and link quality.Additionally,a Kalman filter is integrated to predict UAV mobility,improving the stability of communication links under dynamic network conditions.Simulation experiments were conducted using realistic scenarios,varying the number of UAVs to assess scalability.An analysis was conducted on key performance metrics,including the packet delivery ratio,end-to-end delay,and total energy consumption.The results demonstrate that the proposed approach significantly improves the packet delivery ratio by 12%–15%and reduces delay by up to 25.5%when compared to conventional GEO and QGEO protocols.However,this improvement comes at the cost of higher energy consumption due to additional computations and control overhead.Despite this trade-off,the proposed solution ensures reliable and efficient communication,making it well-suited for large-scale UAV networks operating in complex urban environments.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R909),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The exponential growth of Internet ofThings(IoT)devices has created unprecedented challenges in data processing and resource management for time-critical applications.Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems,while pure edge computing faces resource constraints that limit processing capabilities.This paper addresses these challenges by proposing a novel Deep Reinforcement Learning(DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments.Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency.The framework introduces three key innovations:(1)a DRL-based dynamic priority assignmentmechanism that learns fromsystem behavior,(2)a hybrid concurrency control protocol combining local edge validation with global cloud coordination,and(3)an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures.Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements:40%latency reduction,25%throughput increase,85%resource utilization(compared to 60%for heuristicmethods),40%reduction in energy consumption(300 vs.500 J per task),and 50%improvement in scalability factor(1.8 vs.1.2 for EDF)compared to state-of-the-art heuristic and meta-heuristic approaches.These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
基金co-supported by the National Natural Science Foundation of China(Nos.92371201 and 52192633)the Natural Science Foundation of Shaanxi Province of China(No.2022JC-03)the Aeronautical Science Foundation of China(No.ASFC-20220019070002)。
文摘In multiple Unmanned Aerial Vehicles(UAV)systems,achieving efficient navigation is essential for executing complex tasks and enhancing autonomy.Traditional navigation methods depend on predefined control strategies and trajectory planning and often perform poorly in complex environments.To improve the UAV-environment interaction efficiency,this study proposes a multi-UAV integrated navigation algorithm based on Deep Reinforcement Learning(DRL).This algorithm integrates the Inertial Navigation System(INS),Global Navigation Satellite System(GNSS),and Visual Navigation System(VNS)for comprehensive information fusion.Specifically,an improved multi-UAV integrated navigation algorithm called Information Fusion with MultiAgent Deep Deterministic Policy Gradient(IF-MADDPG)was developed.This algorithm enables UAVs to learn collaboratively and optimize their flight trajectories in real time.Through simulations and experiments,test scenarios in GNSS-denied environments were constructed to evaluate the effectiveness of the algorithm.The experimental results demonstrate that the IF-MADDPG algorithm significantly enhances the collaborative navigation capabilities of multiple UAVs in formation maintenance and GNSS-denied environments.Additionally,it has advantages in terms of mission completion time.This study provides a novel approach for efficient collaboration in multi-UAV systems,which significantly improves the robustness and adaptability of navigation systems.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62372110 and the Fujian Provincial Natural Science Foundation under Grants 2023J02008 and 2024H0009.
Abstract: The rapid advancement of Industry 4.0 has revolutionized manufacturing, shifting production from centralized control to decentralized, intelligent systems. Smart factories are now expected to achieve high adaptability and resource efficiency, particularly in mass customization scenarios where production schedules must accommodate dynamic and personalized demands. To address the challenges of dynamic task allocation, uncertainty, and real-time decision-making, this paper proposes Pathfinder, a deep reinforcement learning-based scheduling framework. Pathfinder models scheduling data through three key matrices: execution time (the time a job requires to run), completion time (the actual time at which a job finishes), and efficiency (the performance of executing a single job). By leveraging neural networks, Pathfinder extracts essential features from these matrices, enabling intelligent decision-making in dynamic production environments. Unlike traditional approaches with fixed scheduling rules, Pathfinder dynamically selects from ten diverse scheduling rules, optimizing decisions based on real-time environmental conditions. To further enhance scheduling efficiency, a specialized reward function is designed to support dynamic task allocation and real-time adjustment, helping Pathfinder continuously refine its scheduling strategy, improve machine utilization, and minimize job completion times. Through reinforcement learning, Pathfinder adapts to evolving production demands, ensuring robust performance in real-world applications. Experimental results demonstrate that Pathfinder outperforms traditional scheduling approaches, offering improved coordination and efficiency in smart factories. By integrating deep reinforcement learning, adaptable scheduling strategies, and an innovative reward function, Pathfinder provides an effective solution to the growing challenges of multi-robot job scheduling in mass customization environments.
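The rule-selection pattern can be sketched as follows; the three dispatching rules, the job field names, and the lateness-based reward are illustrative assumptions standing in for Pathfinder's ten rules and its specialized reward function:

```python
# Three classic dispatching rules standing in for Pathfinder's ten;
# the job fields ("exec_time", "due", "arrival") are assumptions.
RULES = {
    "SPT":  lambda jobs: min(jobs, key=lambda j: j["exec_time"]),  # shortest processing time
    "EDD":  lambda jobs: min(jobs, key=lambda j: j["due"]),        # earliest due date
    "FIFO": lambda jobs: min(jobs, key=lambda j: j["arrival"]),    # first in, first out
}

def schedule_step(jobs, rule_name, clock):
    """Apply the rule the agent picked; return the job and a shaped reward."""
    job = RULES[rule_name](jobs)
    jobs.remove(job)
    finish = clock + job["exec_time"]
    reward = -max(0, finish - job["due"])   # assumed shaping: penalize lateness
    return job, finish, reward
```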
Funding: Supported in part by the Research Start-Up Funds of South-Central Minzu University under Grants YZZ23002, YZY23001, and YZZ18006; in part by the Hubei Provincial Natural Science Foundation of China under Grants 2024AFB842 and 2023AFB202; in part by the Knowledge Innovation Program of Wuhan Basic Research under Grant 2023010201010151; in part by the Spring Sunshine Program of the Ministry of Education of the People's Republic of China under Grant HZKY20220331; in part by the Funds for Academic Innovation Teams and Research Platform of South-Central Minzu University under Grants XT224003 and PTZ24001; and in part by the Career Development Fund (CDF) of the Agency for Science, Technology and Research (A*STAR) under Grant C233312007.
Abstract: The knapsack problem is a classical combinatorial optimization problem widely encountered in areas such as logistics, resource allocation, and portfolio optimization. Traditional methods, including dynamic programming (DP) and greedy algorithms, are effective on small instances but often struggle with scalability and efficiency as the problem size grows. DP has pseudo-polynomial time complexity (exponential in the input's bit length) and can become computationally prohibitive for large instances, while greedy algorithms run faster but may not yield optimal results, especially when the problem involves complex constraints or large numbers of items. This paper introduces a novel reinforcement learning (RL) approach that enhances the state representation within the learning environment: item weights and volumes are expressed as ratios relative to the knapsack's capacity, and item values are normalized to their percentage of the total value across all items. This state modification yields a 5% accuracy improvement over state-of-the-art RL-based algorithms while significantly reducing execution time, and the RL-based method runs over 9000 times faster than DP, making it highly scalable for larger instances. Furthermore, incorporating Noisy layers into the neural network architecture enhances the agent's exploration capabilities, providing an additional accuracy boost of 0.2%–0.5%. The results demonstrate that the approach not only outperforms existing RL techniques, such as the Transformer model, in accuracy, but also offers a substantial improvement over DP in computational efficiency. This combination of enhanced accuracy and speed presents a promising solution for large-scale optimization problems in real-world applications, where both precision and time are critical factors.
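A minimal sketch of the described normalization, assuming a per-item feature layout (the exact array shape used by the paper is not specified here):

```python
import numpy as np

def encode_state(weights, volumes, values, cap_w, cap_v):
    weights = np.asarray(weights, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    values  = np.asarray(values, dtype=float)
    return np.stack([
        weights / cap_w,        # weight as a ratio of knapsack capacity
        volumes / cap_v,        # volume as a ratio of knapsack capacity
        values / values.sum(),  # value as a share of total item value
    ], axis=1)                  # one row of normalized features per item

# Example: three items in a knapsack of weight capacity 10 and volume 6.
state = encode_state([3, 5, 2], [1, 4, 2], [60, 100, 40], cap_w=10, cap_v=6)
```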
Funding: Co-supported by the National Natural Science Foundation of China (No. 91852115).
Abstract: The high maneuverability of modern fighters in close air combat imposes significant cognitive demands on pilots, making rapid, accurate decision-making challenging. While reinforcement learning (RL) has shown promise in this domain, existing methods often lack strategic depth and generalization in complex, high-dimensional environments. To address these limitations, this paper proposes an optimized self-play method built on advances in fighter modeling, neural network design, and the algorithmic framework. Unlike traditional 3-DOF models, this study employs a six-degree-of-freedom (6-DOF) F-16 fighter model based on open-source aerodynamic data, featuring airborne equipment and a realistic visual simulation platform. To capture temporal dynamics, Long Short-Term Memory (LSTM) layers are integrated into the neural network, complemented by delayed input stacking. The RL environment incorporates expert strategies, curiosity-driven rewards, and curriculum learning to improve adaptability and strategic decision-making. Experimental results demonstrate that the proposed approach achieves a winning rate exceeding 90% against classical single-agent methods. In human-agent confrontation experiments conducted on enhanced 3D visual platforms, the agent attained an average winning rate of over 75%, and its maneuver trajectories closely align with human pilot strategies, showcasing its potential in decision-making and pilot training applications. This study highlights the effectiveness of integrating advanced modeling and self-play techniques in developing robust air combat decision-making systems.
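The following PyTorch sketch shows one way to combine LSTM layers with delayed input stacking; the layer sizes, the stacking depth, and the interface are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    """LSTM over observations that already include stacked delayed frames."""
    def __init__(self, obs_dim, act_dim, stack=4, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim * stack, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim * stack)
        out, state = self.lstm(obs_seq, state)
        return self.head(out), state

def stack_delayed(history, stack=4):
    # Concatenate the last `stack` observations (assumes len(history) >= stack).
    return torch.cat(list(history[-stack:]), dim=-1)
```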
Funding: Supported by the National Key Research and Development Program of China (2023YFF0906100), the National Natural Science Foundation of China (52408008), and the Key Research and Development Program of Jiangsu Province (BE2022833).
Abstract: Current damage detection methods based on model updating and sensitivity Jacobian matrices show a low convergence ratio and low computational efficiency in online calculation. This paper constructs a real-time automated damage detection method by developing a theory-assisted adaptive multi-agent twin delayed deep deterministic (TA2-MATD3) policy gradient algorithm. First, the theoretical framework of reinforcement-learning-driven damage detection is established. To address the disadvantages of the traditional multi-agent twin delayed deep deterministic (MATD3) method, a theory-assisted mechanism and an adaptive experience replay mechanism are introduced. A historical residential house built in 1889 is then taken as a case study, using its 12-month structural health monitoring data, and TA2-MATD3 is compared with existing damage detection methods in terms of convergence ratio, online computing efficiency, and damage detection accuracy. The results show that the computational efficiency of TA2-MATD3 is approximately 117–160 times that of the traditional methods, and the convergence ratio of damage detection is approximately 97% on the training set and 86.2%–91.9% on the test set. In addition, the main visible damage found in the field survey was identified by TA2-MATD3. These results indicate that the proposed method significantly improves online computing efficiency and damage detection accuracy, providing a novel perspective on using reinforcement learning for damage detection in online structural health monitoring.
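The adaptive experience playback mechanism is not detailed here; the sketch below shows a common TD-error-proportional replay scheme that it may resemble, offered purely as an illustrative assumption:

```python
import numpy as np

class AdaptiveReplay:
    """Proportional prioritized replay keyed on recent TD errors."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.buf, self.prios = [], []
        self.capacity, self.alpha = capacity, alpha

    def add(self, transition, td_error):
        if len(self.buf) >= self.capacity:       # drop the oldest entry
            self.buf.pop(0)
            self.prios.pop(0)
        self.buf.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        # Transitions with larger TD errors are replayed more often.
        p = np.array(self.prios) / sum(self.prios)
        idx = np.random.choice(len(self.buf), batch_size, p=p)
        return [self.buf[i] for i in idx]
```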
Funding: The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number RGP2/337/46. The research team also thanks the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Nama'a program, project code NU/GP/SERC/13/352-4.
Abstract: Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes, but the rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation, yet the widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework that combines Bayesian game theory (BGT) with double deep Q-learning (DDQL). The framework integrates BGT to model the strategic interaction between attackers and defenders under uncertainty, adapting dynamically to threat levels and resource availability. DDQL is incorporated to optimize decision-making and learn effective defense policies at the edge. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a detection accuracy of up to 98% while maintaining low resource consumption, demonstrating the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
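As background on the DDQL component, the sketch below computes the standard double-Q learning target, in which the online network selects the greedy next action and the target network evaluates it; the network shapes and variable names are assumptions:

```python
import torch

def double_q_target(online_net, target_net, rewards, next_states, dones,
                    gamma=0.99):
    with torch.no_grad():
        # Online network selects the greedy next action...
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # ...while the target network evaluates it, curbing overestimation.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```

Decoupling action selection from action evaluation in this way is what distinguishes double deep Q-learning from plain DQN and helps it learn more stable defense policies.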