Unmanned Aerial Vehicles (UAVs) have become integral components in smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of relay nodes by using an adaptive reward function that takes into account energy consumption, delay, and link quality. Additionally, a Kalman filter is integrated to predict UAV mobility, improving the stability of communication links under dynamic network conditions. Simulation experiments were conducted using realistic scenarios, varying the number of UAVs to assess scalability. An analysis was conducted on key performance metrics, including the packet delivery ratio, end-to-end delay, and total energy consumption. The results demonstrate that the proposed approach significantly improves the packet delivery ratio by 12%–15% and reduces delay by up to 25.5% when compared to conventional GEO and QGEO protocols. However, this improvement comes at the cost of higher energy consumption due to additional computations and control overhead. Despite this trade-off, the proposed solution ensures reliable and efficient communication, making it well-suited for large-scale UAV networks operating in complex urban environments.
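The Q-learning relay selection described above can be sketched as follows. The reward weights, learning rate, and discount factor below are illustrative assumptions, since the abstract does not specify the actual coefficients of the adaptive reward function.

```python
# Hypothetical hyperparameters -- the paper's actual values are not given.
ALPHA, GAMMA = 0.1, 0.9            # learning rate, discount factor
W_LINK, W_DELAY, W_ENERGY = 0.5, 0.3, 0.2  # assumed reward weights

def reward(link_quality, delay, energy):
    """Adaptive reward: favor good links, penalize delay and energy use."""
    return W_LINK * link_quality - W_DELAY * delay - W_ENERGY * energy

def q_update(q, state, action, r, next_state, next_actions):
    """One-step Q-learning update for choosing the next relay node."""
    best_next = max((q.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)
    return q[(state, action)]
```

A relay is then picked greedily over `q[(state, candidate)]` among neighbors within radio range; the Kalman-filter mobility prediction would feed into the state encoding, which is omitted here.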
The increasing occurrence of corrosion-related damage in steel pipelines has led to the growing use of composite-based repair techniques as an efficient alternative to traditional replacement methods. Computer modeling and structural analysis were performed for the repair reinforcement of a steel pipeline with a composite bandage. A preliminary analysis of possible contact interaction schemes was implemented based on the theory of cylindrical shells, taking into account transverse shear deformations. The finite element method was used for a detailed study of the stress state of the composite bandage and the reinforced section of the pipeline. The limit state of the reinforced section was assessed based on the von Mises criterion for steel and the Tsai-Wu criterion for composites. The effectiveness of the repair was demonstrated on a pipeline whose wall thickness had decreased by 20% as a result of corrosion damage. At a nominal pressure of P = 6 MPa, the maximum normal stress in the weakened area reached 381 MPa. The installation of a composite bandage reduced this stress to 312 MPa, making the repaired section virtually as strong as the undamaged pipeline. Due to the linearity of the problem, the results obtained can be easily used to find critical internal pressure values.
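The two limit-state criteria named above have standard closed forms; a minimal plane-stress sketch follows. The composite strength values used in the test are hypothetical placeholders (the abstract reports only the 381 MPa and 312 MPa steel stresses, not material strengths).

```python
import math

def von_mises_plane_stress(s1, s2, tau):
    """Equivalent von Mises stress for a plane-stress state (MPa)."""
    return math.sqrt(s1**2 - s1 * s2 + s2**2 + 3 * tau**2)

def tsai_wu_index(s1, s2, tau, Xt, Xc, Yt, Yc, S):
    """Standard 2D Tsai-Wu failure index for an orthotropic lamina.
    Xt/Xc, Yt/Yc: tensile/compressive strengths along/across fibers;
    S: in-plane shear strength. Failure is predicted when the index >= 1."""
    F1 = 1 / Xt - 1 / Xc
    F2 = 1 / Yt - 1 / Yc
    F11 = 1 / (Xt * Xc)
    F22 = 1 / (Yt * Yc)
    F66 = 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)   # common interaction-term estimate
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * tau**2 + 2 * F12 * s1 * s2)
```

With a steel yield stress of, say, 355 MPa (an assumption), the repaired stress of 312 MPa satisfies the von Mises check while the unrepaired 381 MPa does not.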
In this paper, a hierarchical reinforcement learning (HRL) based real-time formation control approach is proposed for heterogeneous aerial-ground agents (HAGAs). Initially, to address the issue of imprecise modeling of HAGAs, a unified heterogeneous chained system model is constructed using the hand-position method. Subsequently, a hierarchical framework is designed: (1) multi-agent collaborative interactions and individual dynamic rules are decoupled through hierarchical resolution, which enables controller design to be independent of direct reliance on neighborhood collaborative errors; (2) by adopting a dual-layer framework that separates collaborative topology management from individual control strategies, seamless switching between multiple task scenarios can be achieved simply by reconstructing the collaborative topology of the first layer. Moreover, to overcome the issue of non-asymptotic stability of tracking errors caused by the discount factor in traditional optimal control, a cost function based on the derivative of the tracking error is introduced. This not only addresses the error issue caused by the discount factor but also effectively resolves the problem of the unboundedness of the quadratic cost function. Finally, the efficacy of the proposed algorithm is substantiated through simulation experiments.
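One plausible form of the derivative-based cost described above, written as an illustrative sketch rather than the paper's exact functional:

```latex
% Illustrative: penalizing the tracking-error derivative \dot{e} instead of e
% removes the need for a discount factor e^{-\gamma(\tau-t)} to keep J bounded,
% since \dot{e} -> 0 as the error settles even when e has a constant offset.
J(e, u) = \int_{t}^{\infty} \Bigl( \dot{e}^{\top}(\tau)\, Q\, \dot{e}(\tau)
          + u^{\top}(\tau)\, R\, u(\tau) \Bigr)\, \mathrm{d}\tau,
\qquad Q \succ 0, \quad R \succ 0.
```

Here $Q$ and $R$ are the usual state- and control-weighting matrices; the chained-system state encoding and the HRL value-function approximation are omitted.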
Humanoid robots hold significant promise for social interaction and emotional companionship. However, their effectiveness hinges on the ability to convey nuanced and authentic emotions. Here, we present a universal humanoid robot head with a facial kinematics model. Using a reinforcement learning framework guided by symmetry assessment, emotion decoupling, and MLLM authenticity evaluation, our system autonomously learns to generate adaptive facial expressions through dynamic landmark adjustments. By transferring the simulation training results to real-world environments, the robot can perform natural and expressive facial expressions. Another novel feature is the independent regulation of emotion intensity and expression magnitude across emotional categories, which significantly enhances the ability to achieve culturally adaptive and socially resonant robotic expressions. This research advances adaptive humanoid interaction, offering an easier and more efficient pathway toward culturally resonant and psychologically plausible robotic expressions.
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize the modularity. Corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function. A soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes. Two experience replay buffers are employed to speed up the training process. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of partitioning training.
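The modularity objective maximized by the partitioning MDP is the standard Newman measure; a minimal stand-alone computation (not the paper's DQN solver) for an undirected, unweighted network:

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of a partition.
    edges: list of (u, v) pairs; community: dict mapping node -> group label."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Fraction of edges that fall inside a community...
    intra = sum(1 for u, v in edges if community[u] == community[v]) / m
    # ...minus the fraction expected under degree-preserving random rewiring.
    deg_sum = defaultdict(int)
    for node, d in deg.items():
        deg_sum[community[node]] += d
    expected = sum((s / (2 * m)) ** 2 for s in deg_sum.values())
    return intra - expected
```

Splitting two triangles joined by a single bridge edge into their natural halves yields Q = 6/7 - 0.5 ≈ 0.357, while a single-community partition yields Q = 0.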
A shaking table test was performed to investigate the different responses of piles with and without cement-soil reinforcement, considering both inertial and kinematic interactions. A comparison of the dynamic shear stress–strain hysteresis curves of soil profiles on the pile side, with and without cement-soil reinforced piles, indicates that cement-soil reinforced piles not only bear greater shear stress but also develop smaller strains under cyclic shear stress. Furthermore, the cement-soil on the pile side not only shares part of the shear stress and modifies the bending moment distribution but also significantly enhances the resistance of the pile-side soil, reducing the lateral displacement of the superstructure. Cement-soil reinforcement reduced shear strains, inhibited sand liquefaction, and reduced superstructure displacements by 27%–47% (instantaneous) and 40%–65% (permanent). The proportion of horizontal load sharing between the cement-soil reinforcement and the saturated sand is considered, along with the change pattern of the subgrade reaction after sand liquefaction. An equivalent subgrade reaction calculation method is proposed, which accounts for the horizontal load-sharing ratios of soils with two different strengths. The test results indicate that the pile stress and displacement estimated using the equivalent subgrade reaction are in good agreement with the observed results.
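The equivalent subgrade reaction that accounts for two soil strengths via their horizontal load-sharing ratios can be read as a share-weighted combination. This is an interpretation of the abstract, not the paper's exact formula, and the stiffness values in the usage are hypothetical.

```python
def equivalent_subgrade_reaction(k_cement, k_sand, share_cement):
    """Equivalent subgrade reaction (same units as the inputs, e.g. kN/m^3)
    as a convex combination of the cement-soil and saturated-sand reactions,
    weighted by the cement-soil's horizontal load-sharing ratio (0..1).
    A simplified sketch of the paper's method, not its exact expression."""
    assert 0.0 <= share_cement <= 1.0
    return share_cement * k_cement + (1.0 - share_cement) * k_sand
```

After liquefaction the sand reaction drops, so the equivalent value shifts toward the cement-soil contribution as its load share grows.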
With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation and baseline comparisons. The results reveal that the proposed method has better convergence and stability. Compared with MADDPG, TD3, and DSAC, our algorithm achieves more robust performance across key metrics.
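A dynamic-temperature reward can be sketched as an annealed trade-off between throughput and cost terms. The schedule, endpoints, and penalty form below are assumptions for illustration, not the paper's exact design.

```python
def temperature(step, total_steps, t_start=1.0, t_end=0.1):
    """Exponentially anneal the temperature over training (assumed schedule).
    High temperature early on softens penalties and encourages exploration."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return t_start * (t_end / t_start) ** frac

def scheduling_reward(throughput, energy, delay, temp):
    """Temperature-weighted trade-off: as temp anneals toward t_end, the
    energy and delay penalties tighten, sharpening the scheduling policy."""
    return throughput - (energy + delay) / temp
```

With this sketch, the same (energy, delay) pair costs ten times more reward at the end of training than at the start, which is one way to realize an "adaptive trade-off during training".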
Wi-Fi technology has evolved significantly since its introduction in 1997, advancing to Wi-Fi 6 as the latest standard, with Wi-Fi 7 currently under development. Despite these advancements, integrating machine learning into Wi-Fi networks remains challenging, especially in decentralized environments with multiple access points (mAPs). This paper is a short review that summarizes the potential applications of federated reinforcement learning (FRL) across eight key areas of Wi-Fi functionality, including channel access, link adaptation, beamforming, multi-user transmissions, channel bonding, multi-link operation, spatial reuse, and multi-basic service set (multi-BSS) coordination. FRL is highlighted as a promising framework for enabling decentralized training and decision-making while preserving data privacy. To illustrate its role in practice, we present a case study on link activation in a multi-link operation (MLO) environment with multiple APs. Through theoretical discussion and simulation results, the study demonstrates how FRL can improve performance and reliability, paving the way for more adaptive and collaborative Wi-Fi networks in the era of Wi-Fi 7 and beyond.
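The decentralized training that FRL relies on typically aggregates local models by federated averaging. A minimal FedAvg step over per-AP parameter vectors is sketched below; this is a generic illustration, not a scheme from the review itself.

```python
def fed_avg(client_weights, client_samples):
    """Federated averaging across access points: each AP's parameter vector
    is weighted by its local sample count, so busier APs count for more.
    client_weights: list of equal-length parameter lists (one per AP)."""
    total = sum(client_samples)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_samples)) / total
        for i in range(dim)
    ]
```

Each AP trains its local RL policy on private traffic, sends only the parameters to the aggregator, and receives the averaged model back, which is how data privacy is preserved.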
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives from distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
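A minimal sketch of the dynamic multi-objective scalarization: the objective names come from the abstract, but a simple heuristic stands in for the RBFN-based weight update, so all numbers below are illustrative.

```python
# Objective order (assumed): delay, energy, load balance, privacy entropy.
def scalarize(rewards, weights):
    """Weighted scalarization of normalized per-objective rewards."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * r for w, r in zip(weights, rewards))

def update_weights(weights, improvements, lr=0.1):
    """Heuristic stand-in for the RBFN-driven weight update: shift weight
    toward the objectives that improved least, then renormalize to sum to 1."""
    raw = [max(w + lr * (max(improvements) - imp), 1e-6)
           for w, imp in zip(weights, improvements)]
    s = sum(raw)
    return [r / s for r in raw]
```

Objectives lagging behind the others receive more weight on the next update, which is one simple way to steer the agents toward a balanced Pareto front.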
Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to adversarial examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity. The accuracy and timeliness of the IDS are crucial to network availability and reliability. This paper analyzes ARL applications in NIDS, reviewing state-of-the-art (SoTA) methodologies, open issues, and future research prospects. This includes Reinforcement Machine Learning (RML)-based NIDS, which enables an agent to interact with the environment to achieve a goal, and Deep Reinforcement Learning (DRL)-based NIDS, which can solve complex decision-making problems. Additionally, this survey addresses adversarial circumstances in cybersecurity and their importance for ARL and NIDS. Architectural design, RL algorithms, feature representation, and training methodologies are examined in ARL-NIDS studies. This comprehensive study evaluates ARL for intelligent NIDS research, benefiting cybersecurity researchers, practitioners, and policymakers, and promotes research and innovation in cybersecurity defense.
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of the multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
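The Dueling-DQN aggregation underlying AD-Dueling DQN is standard: the network splits into a state-value head V(s) and an advantage head A(s, a), recombined with a mean-subtraction for identifiability. A minimal sketch (the attention and replay components are omitted):

```python
def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage makes the V/A decomposition unique."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

The action ranking depends only on the advantages, so the value head can learn "how good is this terrain cell" independently of "which move is best here".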
While reinforcement learning-based underwater acoustic adaptive modulation shows promise for enabling environment-adaptive communication, as supported by extensive simulation-based research, its practical performance remains underexplored in field investigations. To evaluate the practical applicability of this emerging technique in adverse shallow-sea channels, a field experiment was conducted using three communication modes: orthogonal frequency division multiplexing (OFDM), M-ary frequency-shift keying (MFSK), and direct sequence spread spectrum (DSSS) for reinforcement learning-driven adaptive modulation. Specifically, a Q-learning method is used to select the optimal modulation mode according to the channel quality, quantified by signal-to-noise ratio, multipath spread length, and Doppler frequency offset. Experimental results demonstrate that the reinforcement learning-based adaptive modulation scheme outperformed fixed-threshold detection in terms of total throughput and average bit error rate, surpassing conventional adaptive modulation strategies.
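The Q-learning mode selection over the three channel-quality metrics can be sketched as follows; the quantization bin edges are illustrative assumptions, not the experiment's actual thresholds.

```python
import random

MODES = ["OFDM", "MFSK", "DSSS"]

def channel_state(snr_db, multipath_ms, doppler_hz):
    """Quantize SNR, multipath spread, and Doppler offset into a discrete
    state tuple (bin edges are illustrative, not the field thresholds)."""
    snr_bin = 0 if snr_db < 5 else (1 if snr_db < 15 else 2)
    mp_bin = 0 if multipath_ms < 5 else 1
    dop_bin = 0 if doppler_hz < 2 else 1
    return (snr_bin, mp_bin, dop_bin)

def select_mode(q_table, state, epsilon=0.0):
    """Epsilon-greedy choice of the modulation mode for the current state."""
    if random.random() < epsilon:
        return random.choice(MODES)
    return max(MODES, key=lambda m: q_table.get((state, m), 0.0))
```

After each transmission, the measured throughput or bit error rate would update `q_table[(state, mode)]` via the usual Q-learning rule, replacing the fixed-threshold lookup.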
As joint operations have become a key trend in modern military development, unmanned aerial vehicles (UAVs) play an increasingly important role in enhancing the intelligence and responsiveness of combat systems. However, the heterogeneity of aircraft, partial observability, and dynamic uncertainty in operational airspace pose significant challenges to autonomous collision avoidance using traditional methods. To address these issues, this paper proposes an adaptive collision avoidance approach for UAVs based on deep reinforcement learning. First, a unified uncertainty model incorporating dynamic wind fields is constructed to capture the complexity of joint operational environments. Then, to effectively handle the heterogeneity between manned and unmanned aircraft and the limitations of dynamic observations, a sector-based partial observation mechanism is designed. A Dynamic Threat Prioritization Assessment algorithm is also proposed to evaluate potential collision threats along multiple dimensions, including time to closest approach, minimum separation distance, and aircraft type. Furthermore, a Hierarchical Prioritized Experience Replay (HPER) mechanism is introduced, which classifies experience samples into high, medium, and low priority levels to preferentially sample critical experiences, thereby improving learning efficiency and accelerating policy convergence. Simulation results show that the proposed HPER-D3QN algorithm outperforms existing methods in terms of learning speed, environmental adaptability, and robustness, significantly enhancing collision avoidance performance and convergence rate. Finally, transfer experiments on a high-fidelity battlefield airspace simulation platform validate the proposed method's deployment potential and practical applicability in complex, real-world joint operational scenarios.
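The three-tier HPER idea can be sketched as follows. The TD-error thresholds and sampling mix are assumptions for illustration, since the abstract does not give the paper's values.

```python
import random

class HierarchicalReplay:
    """Three-tier replay sketch: transitions are routed into high/medium/low
    priority buffers by TD-error magnitude, and minibatches oversample the
    high tier. Thresholds and mix ratios are illustrative assumptions."""

    def __init__(self, hi_thresh=1.0, lo_thresh=0.1, mix=(0.5, 0.3, 0.2)):
        self.buffers = {"high": [], "medium": [], "low": []}
        self.hi_thresh, self.lo_thresh, self.mix = hi_thresh, lo_thresh, mix

    def add(self, transition, td_error):
        """Route a transition to a tier by its TD-error magnitude."""
        if abs(td_error) >= self.hi_thresh:
            tier = "high"
        elif abs(td_error) >= self.lo_thresh:
            tier = "medium"
        else:
            tier = "low"
        self.buffers[tier].append(transition)

    def sample(self, batch_size, rng=random):
        """Draw a minibatch with a fixed tier mix, oversampling critical
        (high-TD-error) experiences when they are available."""
        batch = []
        for tier, frac in zip(("high", "medium", "low"), self.mix):
            buf = self.buffers[tier]
            k = min(len(buf), max(1, int(batch_size * frac)))
            if buf:
                batch.extend(rng.sample(buf, k))
        return batch
```

Near-collision transitions land in the high tier and are replayed far more often than routine cruise steps, which is the mechanism the abstract credits for faster policy convergence.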
Unmanned Aerial Vehicles (UAVs) play a prominent role in various fields, and autonomous navigation is a crucial component of UAV intelligence. Deep Reinforcement Learning (DRL) has expanded the research avenues for addressing challenges in autonomous navigation. Nonetheless, challenges persist, including getting stuck in local optima, consuming excessive computation during action space exploration, and neglecting deterministic experience. This paper proposes a noise-driven enhancement strategy. In accordance with the overall learning phases, a global noise control method is designed, while a differentiated local noise control method is developed by analyzing the exploration demands of four typical situations encountered by a UAV during navigation. Both methods are integrated into a dual-model noise controller to regulate action space exploration. Furthermore, dual noise experience replay buffers are designed to optimize the rational utilization of both deterministic and noisy experience. For uncertain environments, building on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm with a Long Short-Term Memory (LSTM) network and Prioritized Experience Replay (PER), a Noise-Driven Enhancement Priority Memory TD3 (NDE-PMTD3) algorithm is developed. We established a simulation environment to compare different algorithms, and their performance is analyzed in various scenarios. The training results indicate that the proposed algorithm accelerates convergence and enhances convergence stability. In test experiments, the proposed algorithm successfully and efficiently performs autonomous navigation tasks in diverse environments, demonstrating superior generalization.
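The global and local noise control methods can be sketched as two cooperating schedules. The four situation labels, multipliers, and all numeric values below are illustrative assumptions, not the paper's settings.

```python
def global_noise_std(step, total_steps, start=0.3, end=0.05):
    """Global control sketch: linearly anneal the exploration-noise standard
    deviation across the overall learning phases (values are assumptions)."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return start + (end - start) * frac

def local_noise_std(base_std, situation):
    """Differentiated local control: scale the global noise by the current
    navigation situation. The four labels and multipliers are illustrative."""
    scale = {"open_space": 1.0, "near_obstacle": 0.5,
             "near_goal": 0.25, "stuck": 1.5}[situation]
    return base_std * scale
```

The local scale modulates the globally annealed baseline, so exploration is boosted when the UAV is stuck in a local optimum and suppressed during precise maneuvers near obstacles or the goal.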
Underwater images frequently suffer from chromatic distortion, blurred details, and low contrast, posing significant challenges for enhancement. This paper introduces AquaTree, a novel underwater image enhancement (UIE) method that reformulates the task as a Markov Decision Process (MDP) through the integration of Monte Carlo Tree Search (MCTS) and deep reinforcement learning (DRL). The framework employs an action space of 25 enhancement operators, strategically grouped for basic attribute adjustment, color component balance and correction, and deblurring. Exploration within MCTS is guided by a dual-branch convolutional network, enabling intelligent sequential operator selection. Our core contributions include: (1) a multimodal state representation combining CIELab color histograms with deep perceptual features, (2) a dual-objective reward mechanism optimizing chromatic fidelity and perceptual consistency, and (3) an alternating training strategy co-optimizing enhancement sequences and network parameters. We further propose two inference schemes: an MCTS-based approach prioritizing accuracy at higher computational cost, and an efficient network policy enabling real-time processing with minimal quality loss. Comprehensive evaluations on the UIEB dataset, together with color correction and haze removal comparisons on the U45 dataset, demonstrate AquaTree's superiority, significantly outperforming nine state-of-the-art methods across five established underwater image quality metrics.
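Operator selection in MCTS is typically driven by a UCB1/UCT score; a minimal sketch follows, with the dual-branch network prior omitted and the operator names hypothetical.

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.414):
    """UCB1 score balancing exploitation (mean value) and exploration.
    Unvisited operators score infinity so each child is tried at least once."""
    if visits == 0:
        return float("inf")
    return total_value / visits + c * math.sqrt(
        math.log(parent_visits) / visits)

def select_operator(stats, parent_visits):
    """stats: operator name -> (total_value, visits).
    Returns the enhancement operator with the best UCT score."""
    return max(stats, key=lambda op: uct_score(*stats[op], parent_visits))
```

In the full system, the chosen operator is applied to the image, the dual-objective reward backs up along the tree path, and the network is trained from the resulting visit statistics.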
Understanding the reinforcement effect of the newly developed prestressed reinforcement components (PRCs), a system composed of prestressed steel bars (PSBs), protective sleeves, lateral pressure plates (LPPs), and anchoring elements, is technically significant for the rational design of prestressed subgrade. A three-dimensional finite element model was established and verified based on a novel static model test, and was used to systematically analyze the influence of prestress levels and reinforcement modes on the reinforcement effect of the subgrade. The results show that the PRCs provide additional confining pressure to the subgrade through the diffusion effect of the prestress, which can therefore effectively improve the service performance of the subgrade. Compared to unreinforced conventional subgrades, the settlements of prestress-reinforced subgrades are reduced. The settlement attenuation rate (Rs) near the LPPs is larger than that at the subgrade center, and increasing the prestress positively contributes to the stability of the subgrade structure. In the multi-row reinforcement mode, the reinforcement effect of PRCs can extend from the reinforced area to the unreinforced area. In addition, as the horizontal distance from the LPPs increases, the additional confining pressure converted by the PSBs and LPPs gradually diminishes when spreading to the core load-bearing area of the subgrade, resulting in a decrease in the Rs. Under the single-row reinforcement mode, PRCs can be strategically arranged according to the local areas where subgrade defects readily occur or have been observed, to obtain the desired reinforcement effect. Moreover, excessive prestress should not be applied near the subgrade shoulder line, to avoid shear failure of the subgrade shoulder. PRCs can be flexibly used for preventing and treating various subgrade defects of newly constructed or existing railway lines, achieving targeted and classified prevention and effectively improving the bearing performance and deformation resistance of the subgrade. The research results are instructive for further elucidating the prestress reinforcement effect of PRCs on railway subgrades.
Dear Editor, This letter introduces a novel approach to address the bearings-only target motion analysis (BO-TMA) problem by incorporating deep reinforcement learning (DRL) techniques. Conventional methods often exhibit biases and struggle to achieve accurate results, especially when confronted with high levels of noise. In this letter, we formulate the BO-TMA problem as a Markov decision process (MDP) and solve it within a DRL framework. Simulation results demonstrate that the proposed DRL-based estimator achieves reduced bias and lower errors compared to existing estimators.
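For intuition, the geometric core of bearings-only TMA is intersecting bearing lines taken from different observer positions; the sketch below handles the noiseless, stationary-target case that the letter's DRL estimator generalizes to noisy, moving targets.

```python
import math

def bearing(observer, target):
    """Bearing (radians, measured from the x-axis) from observer to target."""
    dx, dy = target[0] - observer[0], target[1] - observer[1]
    return math.atan2(dy, dx)

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing rays from observer positions p1 and p2.
    Solves p1 + t1*d1 = p2 + t2*d2 for the target position."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-12:
        return None  # parallel bearings: the target is unobservable
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * d2[1] - ry * d2[0]) / cross
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

With bearing noise and a moving target this closed-form fix becomes biased, which is exactly the regime where the letter reports its DRL formulation helps.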
Lunar core samples are the key materials for accurately assessing and developing lunar resources. However, the difficulty of maintaining borehole stability in the lunar coring process limits the depth of lunar coring. Here, a strategy is proposed that uses a reinforcement liquid that undergoes a phase transition spontaneously in a vacuum environment to reinforce the borehole. Based on this strategy, a reinforcement liquid suitable for a wide temperature range and a high-vacuum environment was developed. A feasibility study on reinforcing the borehole with the reinforcement liquid was carried out, and it was found that the cohesion of the simulated lunar soil can be increased from 2 to 800 kPa after using the reinforcement liquid. Further, a series of coring experiments was conducted using a self-developed high-vacuum (vacuum degree of 5 Pa) and low-temperature (between −30 and 50 °C) simulation platform. It is confirmed that the high-boiling-point reinforcement liquid pre-placed in the drill pipe can be released spontaneously during the drilling process and finally complete the reinforcement of the borehole. The reinforcement effect on the borehole is better when the solute concentration is between 0.15 and 0.25 g/mL.
Carbon fiber reinforced polymer (CFRP) is an advanced material widely used in bridge structures, demonstrating a promising application prospect. CFRP possesses excellent mechanical properties, construction advantages, and durability benefits. Its application in bridge reinforcement can significantly enhance the overall performance of the reinforced bridge, thereby improving durability and extending the service life of the bridge. Therefore, it is necessary to further explore how CFRP can be effectively applied in bridge reinforcement projects, to improve the quality of such projects and ensure the safety of bridges during operation.
Funding: funded by Hung Yen University of Technology and Education under grant number UTEHY.L.2025.62.
文摘Unmanned Aerial Vehicles(UAVs)have become integral components in smart city infrastructures,supporting applications such as emergency response,surveillance,and data collection.However,the high mobility and dynamic topology of Flying Ad Hoc Networks(FANETs)present significant challenges for maintaining reliable,low-latency communication.Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable.To overcome these limitations,this paper proposes an improved routing protocol based on reinforcement learning.This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware.The proposed method optimizes the selection of relay nodes by using an adaptive reward function that takes into account energy consumption,delay,and link quality.Additionally,a Kalman filter is integrated to predict UAV mobility,improving the stability of communication links under dynamic network conditions.Simulation experiments were conducted using realistic scenarios,varying the number of UAVs to assess scalability.An analysis was conducted on key performance metrics,including the packet delivery ratio,end-to-end delay,and total energy consumption.The results demonstrate that the proposed approach significantly improves the packet delivery ratio by 12%–15%and reduces delay by up to 25.5%when compared to conventional GEO and QGEO protocols.However,this improvement comes at the cost of higher energy consumption due to additional computations and control overhead.Despite this trade-off,the proposed solution ensures reliable and efficient communication,making it well-suited for large-scale UAV networks operating in complex urban environments.
Abstract: The increasing occurrence of corrosion-related damage in steel pipelines has led to the growing use of composite-based repair techniques as an efficient alternative to traditional replacement. Computer modeling and structural analysis were performed for the repair reinforcement of a steel pipeline with a composite bandage. A preliminary analysis of possible contact-interaction schemes was carried out based on the theory of cylindrical shells, taking transverse shear deformations into account. The finite element method was used for a detailed study of the stress state of the composite bandage and the reinforced section of the pipeline. The limit state of the reinforced section was assessed using the von Mises criterion for steel and the Tsai-Wu criterion for the composite. The effectiveness of the repair was demonstrated on a pipeline whose wall thickness had decreased by 20% as a result of corrosion damage. At a nominal pressure of P = 6 MPa, the maximum normal stress in the weakened area reached 381 MPa. Installing a composite bandage reduced this stress to 312 MPa, making the repaired section virtually as strong as the undamaged pipeline. Owing to the linearity of the problem, the results can readily be used to find critical internal pressure values.
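For orientation, the membrane stresses underlying such a limit-state check can be estimated from thin-walled cylinder theory before any FEM refinement. The sketch below is illustrative only: the 720 mm diameter and 10 mm nominal wall are hypothetical values, not the paper's geometry, so the numbers do not reproduce the reported 381 MPa finite element result.

```python
import math

def pipe_stresses(p, diameter, wall):
    """Thin-wall cylinder membrane stresses under internal pressure:
    hoop = p*D/(2t), axial = p*D/(4t). Stress units follow p's units
    when diameter and wall share a length unit."""
    hoop = p * diameter / (2.0 * wall)
    axial = p * diameter / (4.0 * wall)
    return hoop, axial

def von_mises_biaxial(s1, s2):
    """von Mises equivalent stress for a biaxial membrane stress state."""
    return math.sqrt(s1 * s1 - s1 * s2 + s2 * s2)

p = 6.0                                      # nominal pressure from the abstract, MPa
hoop, axial = pipe_stresses(p, 720.0, 8.0)   # wall thinned 20% from a 10 mm nominal
equiv = von_mises_biaxial(hoop, axial)
```

A repair design would compare `equiv` against the steel's allowable stress and then size the composite bandage to bring the local peak back below it.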
Funding: supported by the National Natural Science Foundation of China (Grant Nos. T2421001, 61922053, and 62403298), the Natural Science Foundation of Shanghai (Grant No. 25ZR1401119), the China Postdoctoral Science Foundation (Grant No. 2024M751933), and the Shanghai Post-doctoral Excellence Program (Grant No. 2023316).
Abstract: In this paper, a hierarchical reinforcement learning (HRL) based real-time formation control approach is proposed for heterogeneous aerial-ground agents (HAGAs). First, to address the imprecise modeling of HAGAs, a unified heterogeneous chained system model is constructed using the hand-position method. A hierarchical framework is then designed: (1) multi-agent collaborative interactions and individual dynamic rules are decoupled through hierarchical resolution, so that controller design does not rely directly on neighborhood collaborative errors; and (2) a dual-layer framework separating collaborative topology management from individual control strategies allows seamless switching between task scenarios simply by reconstructing the collaborative topology of the first layer. Moreover, to overcome the non-asymptotic stability of tracking errors caused by the discount factor in traditional optimal control, a cost function based on the derivative of the tracking error is introduced; this both removes the error induced by the discount factor and resolves the unboundedness of the quadratic cost function. Finally, the efficacy of the proposed algorithm is substantiated through simulation experiments.
Funding: supported by the National Natural Science Foundation of China (Grant No. 52405041), the Major Program of the Zhejiang Provincial Natural Science Foundation of China (Grant No. LD25E050001), and the Key R&D Program of Zhejiang Province (Grant No. 2025C01186).
Abstract: Humanoid robots hold significant promise for social interaction and emotional companionship, but their effectiveness hinges on the ability to convey nuanced, authentic emotions. Here, we present a universal humanoid robot head with a facial kinematics model. Using a reinforcement learning framework guided by symmetry assessment, emotion decoupling, and MLLM authenticity evaluation, the system autonomously learns to generate adaptive facial expressions through dynamic landmark adjustments. By transferring the simulation training results to real-world environments, the robot performs natural and expressive facial motions. Another novel feature is the independent regulation of emotion intensity and expression magnitude across emotional categories, which significantly enhances the ability to achieve culturally adaptive and socially resonant robotic expressions. This research advances adaptive humanoid interaction, offering an easier and more efficient pathway toward culturally resonant and psychologically plausible robotic expressions.
Funding: funded by the Beijing Engineering Research Center of Electric Rail Transportation.
Abstract: Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model that maximizes modularity, with the key partitioning constraints on parallel restoration taken into account. Second, based on the partitioning objective and constraints, the reward function of the MDP model adopts a relative-deviation normalization scheme to reduce mutual interference between the reward and penalty terms, and a soft bonus-scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. The deep Q-network method is then applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method generates a high-modularity partitioning result that meets all key partitioning constraints, improving the parallelism and reliability of the restoration process. Simulation results also show that an appropriate discount factor is crucial for both the convergence speed and the stability of partitioning training.
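Modularity, the objective the MDP maximizes, can be computed directly from a candidate partition. The sketch below implements Newman's definition on a toy graph; the DQN solver itself is not reproduced, and the example graph is an assumption.

```python
from collections import defaultdict

def modularity(n, edges, labels):
    """Newman modularity Q = (fraction of intra-community edges)
    - (expected fraction under the degree-preserving null model)."""
    m = len(edges)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    intra = sum(1 for u, v in edges if labels[u] == labels[v]) / m
    deg_sum = defaultdict(int)
    for i in range(n):
        deg_sum[labels[i]] += deg[i]
    expected = sum((d / (2.0 * m)) ** 2 for d in deg_sum.values())
    return intra - expected

# Two triangles joined by one bridge edge: the natural two-way split scores high.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(6, edges, [0, 0, 0, 1, 1, 1])   # split at the bridge
poor = modularity(6, edges, [0, 1, 0, 1, 0, 1])   # arbitrary split
```

A partitioning agent would receive a reward that grows with this quantity, subject to the restoration constraints.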
Funding: Project 52078129 supported by the National Natural Science Foundation of China; Project MTF2023009 supported by the Open Project of the Key Laboratory of Transport Industry of Comprehensive Transportation Theory (Nanjing Modern Multimodal Transportation Laboratory), China; Project 2242024K40037 supported by the Fundamental Research Funds for the Central Universities, China.
Abstract: A shaking table test was performed to investigate the responses of piles with and without cement-soil reinforcement, considering both inertial and kinematic interactions. A comparison of the dynamic shear stress–strain hysteresis curves of the pile-side soil profiles indicates that cement-soil reinforced piles not only sustain greater shear stress but also undergo smaller strains under cyclic shear. Furthermore, the cement-soil on the pile side not only carries part of the shear stress and modifies the bending-moment distribution but also significantly enhances the resistance of the pile-side soil, reducing the lateral displacement of the superstructure. Cement-soil reinforcement reduced shear strains, inhibited sand liquefaction, and reduced superstructure displacements by 27%–47% (instantaneous) and 40%–65% (permanent). The proportion of horizontal load shared between the cement-soil reinforcement and the saturated sand is considered, along with the change pattern of the subgrade reaction after sand liquefaction. An equivalent subgrade-reaction calculation method is proposed that accounts for the horizontal load-sharing ratios of soils with two different strengths. The test results indicate that pile stress and displacement estimated using the equivalent subgrade reaction agree well with the observed results.
Abstract: With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce correlation between the twin Q-value estimates, and a dynamic-temperature reward that enables adaptive trade-offs during training. Ablation studies and baseline comparisons on a multi-user, dual-link simulation platform show that the proposed method converges faster and more stably, achieving more robust performance across key metrics than MADDPG, TD3, and DSAC.
Funding: funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, grant number RG-2-611-42 (A.O.A.).
Abstract: Wi-Fi technology has evolved significantly since its introduction in 1997, advancing to Wi-Fi 6 as the latest standard, with Wi-Fi 7 currently under development. Despite these advancements, integrating machine learning into Wi-Fi networks remains challenging, especially in decentralized environments with multiple access points (mAPs). This short review summarizes the potential applications of federated reinforcement learning (FRL) across eight key areas of Wi-Fi functionality: channel access, link adaptation, beamforming, multi-user transmissions, channel bonding, multi-link operation, spatial reuse, and multi-basic service set (multi-BSS) coordination. FRL is highlighted as a promising framework for decentralized training and decision-making that preserves data privacy. To illustrate its role in practice, we present a case study on link activation in a multi-link operation (MLO) environment with multiple APs. Through theoretical discussion and simulation results, the study demonstrates how FRL can improve performance and reliability, paving the way for more adaptive and collaborative Wi-Fi networks in the era of Wi-Fi 7 and beyond.
Funding: supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147 and 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), achieving lower delay and energy consumption. However, the limited storage capacity and energy budget of RSUs make it challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment, so sound service-caching and computation-offloading strategies are crucial. To address this, this paper proposes a joint service-caching scheme for cloud-edge collaborative IoV computation offloading. Modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. A dynamic adaptive multi-objective deep reinforcement learning algorithm is also proposed: each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives from distinct reward functions and dynamically updates the objective weights by learning the value relationships between objectives with Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared with existing algorithms, it reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
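At the heart of each agent is the Double DQN bootstrap target, in which the online network selects the greedy next action and the target network evaluates it. A minimal scalar sketch follows; the multi-objective RBFN weighting is not shown, and the Q-values are made-up numbers.

```python
def ddqn_target(q_online_next, q_target_next, reward, done, gamma=0.99):
    """Double DQN bootstrap target: decoupling action selection (online net)
    from action evaluation (target net) curbs Q-value overestimation."""
    if done:
        return reward
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

# Online net prefers action 1; the target net's value for that action is used.
y = ddqn_target(q_online_next=[1.0, 2.0], q_target_next=[0.8, 1.5],
                reward=1.0, done=False)
```

Each DDQN agent in the proposed scheme would compute such a target per objective from its own reward function before the weighted combination.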
Abstract: Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to competing examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity; the accuracy and timeliness of the IDS are crucial to the network's availability and reliability. This paper analyzes ARL applications in NIDS, reviewing state-of-the-art (SoTA) methodology, open issues, and future research prospects. This includes Reinforcement Machine Learning (RML)-based NIDS, in which an agent interacts with the environment to achieve a goal, and Deep Reinforcement Learning (DRL)-based NIDS, which can solve complex decision-making problems. The survey also addresses adversarial circumstances in cybersecurity and their importance for ARL and NIDS. Architectural design, RL algorithms, feature representation, and training methodologies are examined. This comprehensive study evaluates ARL for intelligent NIDS research, benefiting cybersecurity researchers, practitioners, and policymakers, and promotes research and innovation in cybersecurity defense.
Abstract: Energy consumption is currently one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown, complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function centered on energy efficiency is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling low-energy trajectory generation in 3D space from the source. The multi-head attention mechanism allows the model to focus dynamically on energy-critical state features, such as slope gradients and obstacle density, significantly improving its ability to recognize and avoid energy-intensive paths. The prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The algorithm is validated through simulation experiments in multiple off-road scenarios. Results show that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments, with faster convergence and greater training stability than baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrain. This study offers an efficient, scalable intelligent control strategy for energy-conscious autonomous navigation systems.
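The dueling part of the architecture splits the Q-estimate into a state value and per-action advantages, recombined with mean subtraction for identifiability. A minimal sketch with made-up numbers:

```python
def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage makes the V/A decomposition identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

q_values = dueling_q(value=2.0, advantages=[1.0, -1.0, 0.0])
```

In AD-Dueling DQN the attention module would produce the state encoding behind `value` and `advantages`; here they are given directly for illustration.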
Funding: funding from the National Key Research and Development Program of China (No. 2018YFE0110000), the National Natural Science Foundation of China (Nos. 11274259 and 11574258), and the Science and Technology Commission Foundation of Shanghai (21DZ1205500) in support of the present research.
Abstract: While reinforcement learning-based underwater acoustic adaptive modulation shows promise for environment-adaptive communication, as supported by extensive simulation-based research, its practical performance remains underexplored in field investigations. To evaluate the practical applicability of this emerging technique in adverse shallow-sea channels, a field experiment was conducted using three communication modes for reinforcement learning-driven adaptive modulation: orthogonal frequency division multiplexing (OFDM), M-ary frequency-shift keying (MFSK), and direct sequence spread spectrum (DSSS). Specifically, a Q-learning method selects the optimal modulation mode according to channel quality, quantified by signal-to-noise ratio, multipath spread length, and Doppler frequency offset. Experimental results demonstrate that the reinforcement learning-based adaptive modulation scheme outperformed fixed-threshold selection in total throughput and average bit error rate, surpassing conventional adaptive modulation strategies.
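The mode-selection policy can be sketched as tabular Q-learning over a coarsely discretized channel state. The bin edges, reward values, and learning rate below are illustrative assumptions, not the experiment's settings.

```python
import random

MODES = ["OFDM", "MFSK", "DSSS"]

def channel_state(snr_db, multipath_ms, doppler_hz):
    """Discretize the three channel-quality measurements into a coarse state.
    Bin edges are illustrative."""
    return (snr_db > 10.0, multipath_ms > 5.0, doppler_hz > 2.0)

def choose_mode(q, state, eps=0.1):
    """Epsilon-greedy modulation choice for the current channel state."""
    if random.random() < eps:
        return random.choice(MODES)
    return max(MODES, key=lambda m: q.get((state, m), 0.0))

def update(q, state, mode, reward, alpha=0.2):
    """Incremental value update from the observed throughput/BER reward."""
    key = (state, mode)
    old = q.get(key, 0.0)
    q[key] = old + alpha * (reward - old)

random.seed(1)
q = {}
good_channel = channel_state(15.0, 2.0, 1.0)
for _ in range(100):
    mode = choose_mode(q, good_channel)
    r = {"OFDM": 1.0, "MFSK": 0.6, "DSSS": 0.3}[mode]  # toy rewards
    update(q, good_channel, mode, r)
```

In a benign channel the learner settles on the high-throughput mode; a noisier state would instead accumulate value for the robust DSSS mode.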
Funding: supported by the National Key Research and Development Program of China (No. 2022YFB4300902).
Abstract: As joint operations have become a key trend in modern military development, unmanned aerial vehicles (UAVs) play an increasingly important role in enhancing the intelligence and responsiveness of combat systems. However, the heterogeneity of aircraft, partial observability, and dynamic uncertainty in operational airspace pose significant challenges to autonomous collision avoidance using traditional methods. To address these issues, this paper proposes an adaptive collision avoidance approach for UAVs based on deep reinforcement learning. First, a unified uncertainty model incorporating dynamic wind fields is constructed to capture the complexity of joint operational environments. Then, to handle the heterogeneity between manned and unmanned aircraft and the limitations of dynamic observations, a sector-based partial observation mechanism is designed. A Dynamic Threat Prioritization Assessment algorithm is also proposed to evaluate potential collision threats along multiple dimensions, including time to closest approach, minimum separation distance, and aircraft type. Furthermore, a Hierarchical Prioritized Experience Replay (HPER) mechanism is introduced that classifies experience samples into high, medium, and low priority levels and preferentially samples critical experiences, improving learning efficiency and accelerating policy convergence. Simulation results show that the proposed HPER-D3QN algorithm outperforms existing methods in learning speed, environmental adaptability, and robustness, significantly enhancing collision avoidance performance and convergence rate. Finally, transfer experiments on a high-fidelity battlefield airspace simulation platform validate the method's deployment potential and practical applicability in complex, real-world joint operational scenarios.
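The HPER idea, binning transitions by priority and sampling the tiers at fixed ratios, can be sketched as follows. The TD-error thresholds and the 60/30/10 split are illustrative assumptions, not the paper's values.

```python
import random

class HierarchicalReplay:
    """Three-tier replay buffer: transitions are binned by TD-error magnitude
    and each batch draws from the tiers at fixed ratios."""

    def __init__(self, ratios=(0.6, 0.3, 0.1)):
        self.tiers = {"high": [], "medium": [], "low": []}
        self.ratios = dict(zip(("high", "medium", "low"), ratios))

    def add(self, transition, td_error):
        e = abs(td_error)
        tier = "high" if e > 1.0 else ("medium" if e > 0.3 else "low")
        self.tiers[tier].append(transition)

    def sample(self, batch_size):
        batch = []
        for tier, ratio in self.ratios.items():
            pool = self.tiers[tier]
            k = min(len(pool), round(batch_size * ratio))
            batch.extend(random.sample(pool, k))
        return batch

random.seed(0)
buf = HierarchicalReplay()
for i in range(100):
    buf.add(("s", "a", "r", "s_next"), td_error=i / 50)  # errors 0.00 .. 1.98
batch = buf.sample(10)
```

Critical (high-TD-error) experiences dominate each batch, which is the mechanism credited with faster policy convergence.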
Funding: the Collaborative Innovation Project of Shanghai, China, is acknowledged for its financial support.
Abstract: Unmanned Aerial Vehicles (UAVs) play a prominent role in various fields, and autonomous navigation is a crucial component of UAV intelligence. Deep Reinforcement Learning (DRL) has expanded the research avenues for autonomous navigation, yet challenges persist, including getting stuck in local optima, excessive computation during action-space exploration, and neglect of deterministic experience. This paper proposes a noise-driven enhancement strategy. In accordance with the overall learning phases, a global noise-control method is designed, and a differentiated local noise-control method is developed by analyzing the exploration demands of four typical situations encountered by a UAV during navigation. Both methods are integrated into a dual model for noise control that regulates action-space exploration. Furthermore, noise dual experience replay buffers are designed to make rational use of both deterministic and noisy experience. For uncertain environments, building on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm with a Long Short-Term Memory (LSTM) network and Prioritized Experience Replay (PER), a Noise-Driven Enhancement Priority Memory TD3 (NDE-PMTD3) is developed. We established a simulation environment to compare algorithms and analyzed their performance in various scenarios. The training results indicate that the proposed algorithm accelerates convergence and enhances convergence stability. In test experiments, it efficiently performs autonomous navigation tasks in diverse environments, demonstrating superior generalization.
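A global noise-control schedule of the kind described can be sketched as Gaussian action noise whose scale decays over training. The initial and final scales and the linear schedule are illustrative assumptions, not the paper's settings.

```python
import random

def noisy_action(policy_action, step, total_steps,
                 sigma0=0.3, sigma_min=0.02, low=-1.0, high=1.0):
    """Add Gaussian exploration noise whose scale decays linearly with
    training progress, then clip to the valid action range."""
    frac = min(step / total_steps, 1.0)
    sigma = sigma0 + (sigma_min - sigma0) * frac
    action = policy_action + random.gauss(0.0, sigma)
    return max(low, min(high, action)), sigma

random.seed(0)
_, sigma_early = noisy_action(0.5, step=0, total_steps=1000)
a_late, sigma_late = noisy_action(0.5, step=1000, total_steps=1000)
```

The paper's local noise control would additionally modulate the scale per situation (e.g., near obstacles versus open space); only the global schedule is shown here.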
Funding: supported by the Hubei Provincial Technology Innovation Special Project and the Natural Science Foundation of Hubei Province under Grants 2023BEB024 and 2024AFC066, respectively.
Abstract: Underwater images frequently suffer from chromatic distortion, blurred details, and low contrast, posing significant challenges for enhancement. This paper introduces AquaTree, a novel underwater image enhancement (UIE) method that reformulates the task as a Markov Decision Process (MDP) through the integration of Monte Carlo Tree Search (MCTS) and deep reinforcement learning (DRL). The framework employs an action space of 25 enhancement operators, strategically grouped for basic attribute adjustment, color component balance, correction, and deblurring. Exploration within MCTS is guided by a dual-branch convolutional network, enabling intelligent sequential operator selection. The core contributions are: (1) a multimodal state representation combining CIELab color histograms with deep perceptual features; (2) a dual-objective reward mechanism optimizing chromatic fidelity and perceptual consistency; and (3) an alternating training strategy that co-optimizes enhancement sequences and network parameters. Two inference schemes are further proposed: an MCTS-based approach that prioritizes accuracy at higher computational cost, and an efficient network policy that enables real-time processing with minimal quality loss. Comprehensive evaluations on the UIEB dataset, together with color correction and haze removal comparisons on the U45 dataset, demonstrate AquaTree's superiority, significantly outperforming nine state-of-the-art methods across five established underwater image quality metrics.
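Operator selection inside the search tree typically follows the UCT rule, trading off an operator's mean reward against how rarely it has been tried. A minimal sketch follows; the child statistics are made-up numbers, and the paper's network-guided prior is not shown.

```python
import math

def uct_select(parent_visits, children, c=1.4):
    """Pick the child index maximizing mean value + exploration bonus (UCT).
    Unvisited children are tried first."""
    def score(child):
        if child["n"] == 0:
            return float("inf")
        return child["w"] / child["n"] + c * math.sqrt(
            math.log(parent_visits) / child["n"])
    return max(range(len(children)), key=lambda i: score(children[i]))

# Three enhancement operators: strong, unvisited, weak (n = visits, w = total value).
children = [{"n": 10, "w": 6.0}, {"n": 0, "w": 0.0}, {"n": 10, "w": 2.0}]
first_pick = uct_select(20, children)     # the unvisited operator is tried first
children[1] = {"n": 10, "w": 9.0}
later_pick = uct_select(30, children)     # with equal visits, best mean value wins
```

In AquaTree the dual-branch network would bias this exploration, and each simulated sequence of operators would be scored by the dual-objective reward.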
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 51978672 and 52308335), the Natural Science Foundation of Hunan Province (Grant No. 2023JJ41054), and the Natural Science Research Project of Anhui Educational Committee (Grant No. 2023AH051170).
Abstract: Understanding the reinforcement effect of the newly developed prestressed reinforcement components (PRCs), a system composed of prestressed steel bars (PSBs), protective sleeves, lateral pressure plates (LPPs), and anchoring elements, is technically significant for the rational design of prestressed subgrade. A three-dimensional finite element model was established, verified against a novel static model test, and used to systematically analyze the influence of prestress levels and reinforcement modes on the reinforcement effect. The results show that PRCs provide additional confining pressure to the subgrade through the diffusion effect of the prestress, which effectively improves the service performance of the subgrade. Compared with unreinforced conventional subgrades, the settlements of prestress-reinforced subgrades are reduced. The settlement attenuation rate (Rs) near the LPPs is larger than at the subgrade center, and increasing the prestress contributes positively to the stability of the subgrade structure. In the multi-row reinforcement mode, the reinforcement effect of PRCs extends from the reinforced area into the unreinforced area. In addition, as the horizontal distance from the LPPs increases, the additional confining pressure converted by the PSBs and LPPs gradually diminishes when spreading to the core load-bearing area of the subgrade, decreasing the Rs. Under the single-row reinforcement mode, PRCs can be arranged strategically in local areas where subgrade defects readily occur or have been observed, to obtain the desired reinforcement effect. Moreover, excessive prestress should not be applied near the subgrade shoulder line, to avoid shear failure of the subgrade shoulder. PRCs can be used flexibly for preventing and treating various subgrade defects of newly constructed or existing railway lines, achieving targeted and classified prevention and effectively improving the bearing performance and deformation resistance of the subgrade. These results help to further elucidate the prestress reinforcement effect of PRCs on railway subgrades.
Funding: supported by the Zhejiang Provincial Natural Science Foundation of China (LZ23F030006), the National Natural Science Foundation of China (62173299, U23B2060), the Joint Fund of the Ministry of Education for Pre-Research of Equipment (8091B022147, 8091B032234, 8091B042220), and the Fundamental Research Funds for Xi'an Jiaotong University (xtr072022001).
Abstract: Dear Editor, this letter introduces a novel approach to the bearings-only target motion analysis (BO-TMA) problem by incorporating deep reinforcement learning (DRL) techniques. Conventional methods often exhibit biases and struggle to achieve accurate results, especially under high levels of noise. In this letter, we formulate the BO-TMA problem as a Markov decision process (MDP) and solve it within a DRL framework. Simulation results demonstrate that the proposed DRL-based estimator achieves reduced bias and lower errors compared with existing estimators.
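The measurement model underlying BO-TMA is a noisy bearing from observer to target. A minimal sketch, using the clockwise-from-north convention (an assumption here, as the letter does not state one):

```python
import math

def bearing(observer, target, noise=0.0):
    """Bearing from observer to target in radians, measured clockwise
    from north (the +y axis): the BO-TMA measurement equation."""
    dx = target[0] - observer[0]
    dy = target[1] - observer[1]
    return math.atan2(dx, dy) + noise

b_ne = bearing((0.0, 0.0), (1.0, 1.0))   # target to the north-east
b_e = bearing((0.0, 0.0), (5.0, 0.0))    # target due east
```

A DRL estimator would treat a sequence of such noisy bearings, together with own-ship motion, as the MDP observation and output estimates of the target state.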
Funding: National Natural Science Foundation of China (Nos. U2013603, 51827901, and 52403383); Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2019ZT08G315); Institute of New Energy and Low-Carbon Technology (Sichuan University); State Key Laboratory of Coal Mine Disaster Dynamics and Control, Chongqing University.
Abstract: Lunar core samples are key materials for accurately assessing and developing lunar resources. However, the difficulty of maintaining borehole stability during lunar coring limits the achievable depth. Here, a strategy is proposed that uses a reinforcement fluid, which undergoes a spontaneous phase transition in a vacuum environment, to reinforce the borehole. Based on this strategy, a reinforcement liquid suitable for a wide temperature range and a high-vacuum environment was developed. A feasibility study shows that this liquid can increase the cohesion of simulated lunar soil from 2 to 800 kPa. Further, a series of coring experiments was conducted on a self-developed high-vacuum (5 Pa) and low-temperature (between -30 and 50 °C) simulation platform. It is confirmed that the high-boiling-point reinforcement liquid pre-placed in the drill pipe can be released spontaneously during drilling and complete the reinforcement of the borehole. The reinforcement effect is best when the solute concentration is between 0.15 and 0.25 g/mL.
Abstract: Carbon fiber reinforced polymer (CFRP) is an advanced material widely used in bridge structures, with promising application prospects. CFRP possesses excellent mechanical properties, construction advantages, and durability benefits. Its application in bridge reinforcement can significantly enhance the overall performance of the reinforced bridge, improving durability and extending service life. It is therefore necessary to further explore how CFRP can be applied effectively in bridge reinforcement projects, to improve project quality and ensure the safety of bridges in operation.