Journal Articles
79,866 articles found
1. Rule-Guidance Reinforcement Learning for Lane Change Decision-making: A Risk Assessment Approach (Cited: 1)
Authors: Lu Xiong, Zhuoren Li, Danyang Zhong, Puhang Xu, Chen Tang. Chinese Journal of Mechanical Engineering, 2025(2): 344-359.
To solve the problems of poor security guarantees and insufficient training efficiency in conventional reinforcement learning methods for decision-making, this study proposes a hybrid framework that combines deep reinforcement learning with rule-based decision-making methods. A risk assessment model for lane-change maneuvers considering uncertain predictions of surrounding vehicles is established as a safety filter to improve learning efficiency while correcting dangerous actions for safety enhancement. On this basis, a Risk-fused DDQN is constructed utilizing the model-based risk assessment and supervision mechanism. The proposed reinforcement learning algorithm sets up a separate experience buffer for dangerous trials and punishes such actions, which is shown to improve the sampling efficiency and training outcomes. Compared with conventional DDQN methods, the proposed algorithm improves the convergence value of cumulated reward by 7.6% and 2.2% in the two constructed scenarios in the simulation study, and reduces the number of training episodes by 52.2% and 66.8% respectively. The success rate of lane change is improved by 57.3% while the time headway is increased by at least 16.5% in real vehicle tests, which confirms the higher training efficiency, scenario adaptability, and security of the proposed Risk-fused DDQN.
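The safety filter and separate danger buffer described above can be sketched as follows. This is an illustrative assumption of the mechanism, not the paper's implementation: the gap threshold, penalty value, and buffer API are all made up for the example.

```python
import random
from collections import deque

def risk_filter(action, gap_m, min_gap_m=10.0):
    """Hypothetical safety filter: veto a lane change when the gap to the
    nearest surrounding vehicle falls below a risk threshold."""
    if action == "change_lane" and gap_m < min_gap_m:
        return "keep_lane", True      # corrected action, flagged dangerous
    return action, False

class DualReplayBuffer:
    """Separate buffers for safe and dangerous transitions; filtered
    (dangerous) actions receive an extra punishment, echoing the paper's
    supervision idea (penalty value is an assumption)."""
    def __init__(self, capacity=10000, danger_penalty=-5.0):
        self.safe = deque(maxlen=capacity)
        self.danger = deque(maxlen=capacity)
        self.danger_penalty = danger_penalty

    def store(self, s, a, r, s_next, dangerous):
        if dangerous:
            # punish the vetoed action so the agent learns to avoid it
            self.danger.append((s, a, r + self.danger_penalty, s_next))
        else:
            self.safe.append((s, a, r, s_next))

    def sample(self, k, danger_ratio=0.25):
        # draw a fixed fraction of each minibatch from the danger buffer
        kd = min(int(k * danger_ratio), len(self.danger))
        ks = min(k - kd, len(self.safe))
        return random.sample(list(self.safe), ks) + \
               random.sample(list(self.danger), kd)
```

A DDQN update loop would then sample mixed minibatches from this buffer instead of a single replay memory.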
Keywords: Autonomous driving; Reinforcement learning; Decision-making; Risk assessment; Safety filter
2. A Survey of Cooperative Multi-agent Reinforcement Learning for Multi-task Scenarios (Cited: 1)
Authors: Jiajun CHAI, Zijie ZHAO, Yuanheng ZHU, Dongbin ZHAO. Artificial Intelligence Science and Engineering, 2025(2): 98-121.
Cooperative multi-agent reinforcement learning (MARL) is a key technology for enabling cooperation in complex multi-agent systems. It has achieved remarkable progress in areas such as gaming, autonomous driving, and multi-robot control. Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope. In multi-task scenarios, cooperative MARL algorithms need to address three types of multi-task problems: reward-related multi-task, arising from different reward functions; multi-domain multi-task, caused by differences in state and action spaces and state transition functions; and scalability-related multi-task, resulting from dynamic variation in the number of agents. Most existing studies focus on scalability-related multi-task problems. However, with the increasing integration between large language models (LLMs) and multi-agent systems, a growing number of LLM-based multi-agent systems have emerged, enabling more complex multi-task cooperation. This paper provides a comprehensive review of the latest advances in this field. By combining multi-task reinforcement learning with cooperative MARL, we categorize and analyze the three major types of multi-task problems under multi-agent settings, offering finer-grained classifications and summarizing key insights for each. In addition, we summarize commonly used benchmarks and discuss future research directions, which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
Keywords: Multi-task; Multi-agent reinforcement learning; Large language models
3. Optimized reinforcement of granite residual soil using a cement and alkaline solution: A coupling effect (Cited: 1)
Authors: Bingxiang Yuan, Jingkang Liang, Baifa Zhang, Weijie Chen, Xianlun Huang, Qingyu Huang, Yun Li, Peng Yuan. Journal of Rock Mechanics and Geotechnical Engineering, 2025(1): 509-523.
Granite residual soil (GRS) is a type of weathered soil that can disintegrate upon contact with water, potentially causing geological hazards. In this study, cement, an alkaline solution, and glass fiber were used to reinforce GRS. The effects of cement content and the SiO2/Na2O ratio of the alkaline solution on the static and dynamic strengths of GRS were discussed. Microscopically, the reinforcement mechanism and coupling effect were examined using X-ray diffraction (XRD), micro-computed tomography (micro-CT), and scanning electron microscopy (SEM). The results indicated that the addition of 2% cement and an alkaline solution with an SiO2/Na2O ratio of 0.5 led to the densest matrix, lowest porosity, and highest static compressive strength, which was 4994 kPa with a dynamic impact resistance of 75.4 kN after adding glass fiber. The compressive strength and dynamic impact resistance resulted from the coupling effect of cement hydration, a pozzolanic reaction of clay minerals in the GRS, and the alkali activation of clay minerals. Excessive cement addition or an excessively high SiO2/Na2O ratio in the alkaline solution can have negative effects, such as the destruction of C-(A)-S-H gels by the alkaline solution and the hindering of N-A-S-H gel production. This can damage the matrix of reinforced GRS, leading to a decrease in both static and dynamic strengths. This study suggests that further research is required to gain a more precise understanding of the effects of this mixture in terms of reducing the carbon footprint and optimizing its properties. The findings indicate that cement and alkaline solution are appropriate for GRS and that the reinforced GRS can be used for high-strength foundation and embankment construction. The study also provides an analysis of strategies for mitigating and managing GRS slope failures, as well as enhancing roadbed performance.
Keywords: Granite residual soil (GRS); Reinforcement; Coupling effect; Alkali activation; Mechanical properties
4. An extended discontinuous deformation analysis for simulation of grouting reinforcement in a water-rich fractured rock tunnel (Cited: 1)
Authors: Jingyao Gao, Siyu Peng, Guangqi Chen, Hongyun Fan. Journal of Rock Mechanics and Geotechnical Engineering, 2025(1): 168-186.
Grouting has been the most effective approach to mitigate water inrush disasters in underground engineering due to its ability to plug groundwater and enhance rock strength. Nevertheless, there is a lack of potent numerical tools for assessing grouting effectiveness in water-rich fractured strata. In this study, the hydro-mechanical coupled discontinuous deformation analysis (HM-DDA) is extended for the first time to simulate the grouting process in a water-rich discrete fracture network (DFN), including slurry migration, fracture dilation, water plugging in a seepage field, and joint reinforcement after coagulation. To validate the capabilities of the developed method, several numerical examples are conducted incorporating Newtonian fluid and Bingham slurry; the simulation results closely align with the analytical solutions. Additionally, a set of compression tests is conducted on fresh and grouted rock specimens to verify the reinforcement method and calibrate rational properties of reinforced joints. An engineering-scale model based on a real water inrush case of the Yonglian tunnel in a water-rich fractured zone has been established, and it demonstrates the effectiveness of grouting reinforcement in mitigating water inrush disasters. The results indicate that increased grouting pressure greatly affects the regulation of water outflow from the tunnel face and the prevention of rock detachment at the face after excavation.
Keywords: Discontinuous deformation analysis (DDA); Water-rich fractured rock tunnel; Grouting reinforcement; Water inrush disaster
5. Graph-based multi-agent reinforcement learning for collaborative search and tracking of multiple UAVs (Cited: 2)
Authors: Bocheng ZHAO, Mingying HUO, Zheng LI, Wenyu FENG, Ze YU, Naiming QI, Shaohai WANG. Chinese Journal of Aeronautics, 2025(3): 109-123.
This paper investigates the challenges associated with Unmanned Aerial Vehicle (UAV) collaborative search and target tracking in dynamic and unknown environments characterized by a limited field of view. The primary objective is to explore the unknown environment to locate and track targets effectively. To address this problem, we propose a novel Multi-Agent Reinforcement Learning (MARL) method based on a Graph Neural Network (GNN). Firstly, a method is introduced for encoding continuous-space multi-UAV problem data into spatial graphs that establish essential relationships among agents, obstacles, and targets. Secondly, a Graph AttenTion network (GAT) model is presented, which focuses exclusively on adjacent nodes, learns attention weights adaptively, and allows agents to better process information in dynamic environments. Reward functions are specifically designed to tackle exploration challenges in environments with sparse rewards. By introducing a framework that integrates centralized training and distributed execution, the advancement of models is facilitated. Simulation results show that the proposed method outperforms an existing MARL method in search rate and tracking performance with fewer collisions. The experiments show that the proposed method can be extended to applications with a larger number of agents, which provides a potential solution to the challenging problem of multi-UAV autonomous tracking in dynamic unknown environments.
Keywords: Unmanned aerial vehicle (UAV); Multi-agent reinforcement learning (MARL); Graph attention network (GAT); Tracking; Dynamic and unknown environment
6. Deep reinforcement learning based integrated evasion and impact hierarchical intelligent policy of exo-atmospheric vehicles (Cited: 1)
Authors: Leliang REN, Weilin GUO, Yong XIAN, Zhenyu LIU, Daqiao ZHANG, Shaopeng LI. Chinese Journal of Aeronautics, 2025(1): 409-426.
Exo-atmospheric vehicles are constrained by limited maneuverability, which leads to a contradiction between evasive maneuvering and precision strike. To address the problem of Integrated Evasion and Impact (IEI) decision-making under multi-constraint conditions, a hierarchical intelligent decision-making method based on Deep Reinforcement Learning (DRL) was proposed. First, an intelligent decision-making framework of “DRL evasion decision” + “impact prediction guidance decision” was established: it takes the impact point deviation correction ability as the constraint and the maximum miss distance as the objective, and effectively solves the problem of poor decision-making performance caused by the large IEI decision space. Second, to solve the sparse reward problem faced by evasion decision-making, a hierarchical decision-making method consisting of a maneuver timing decision and a maneuver duration decision was proposed, and the corresponding Markov Decision Process (MDP) was designed. A detailed simulation experiment was designed to analyze the advantages and computational complexity of the proposed method. Simulation results show that the proposed model has good performance and low computational resource requirements. The minimum miss distance is 21.3 m under the condition of guaranteeing impact point accuracy, and the single decision-making time is 4.086 ms on an STM32F407 single-chip microcomputer, which has engineering application value.
Keywords: Exo-atmospheric vehicle; Integrated evasion and impact; Deep reinforcement learning; Hierarchical intelligent policy; Single-chip microcomputer; Miss distance
7. Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (Cited: 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. Journal of Systems Engineering and Electronics, 2025(1): 37-47.
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which have brought many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks. Under limited satellite resources, it guarantees that services with high-level assurance demands are prioritized, while considering the load-balancing performance of the satellite network for services with low-level assurance demands, ensuring full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
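The general shape of RL-based QoS routing can be illustrated with tabular Q-learning on a toy topology. Everything here is an assumption for illustration: the four-node graph, the per-link delay/load values, and the reward weights are invented, and the paper's auxiliary path search and multi-class service handling are not modeled.

```python
import random

# Toy LEO topology: node -> {neighbor: (delay_ms, load)} (hypothetical values)
TOPO = {
    "A": {"B": (10, 0.2), "C": (25, 0.1)},
    "B": {"D": (10, 0.8)},
    "C": {"D": (12, 0.1)},
    "D": {},
}

def reward(delay, load, w_delay=1.0, w_load=50.0):
    """QoS-aware reward: penalize both link delay and link load so the
    learned route trades latency against congestion (weights assumed)."""
    return -(w_delay * delay + w_load * load)

def q_route(src, dst, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Epsilon-greedy Q-learning over next-hop choices, then greedy
    extraction of the learned route."""
    q = {(n, nb): 0.0 for n in TOPO for nb in TOPO[n]}
    rng = random.Random(0)
    for _ in range(episodes):
        node = src
        while node != dst and TOPO[node]:
            nbrs = list(TOPO[node])
            nb = rng.choice(nbrs) if rng.random() < eps else \
                 max(nbrs, key=lambda x: q[(node, x)])
            d, l = TOPO[node][nb]
            nxt = 0.0 if nb == dst else max(
                (q[(nb, x)] for x in TOPO[nb]), default=0.0)
            q[(node, nb)] += alpha * (reward(d, l) + gamma * nxt - q[(node, nb)])
            node = nb
    path, node = [src], src          # greedy path extraction
    while node != dst:
        node = max(TOPO[node], key=lambda x: q[(node, x)])
        path.append(node)
    return path
```

With these weights the agent learns to avoid the shorter but heavily loaded B link in favor of the lightly loaded C link, which is the load-balancing behavior the abstract describes.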
Keywords: Low Earth orbit (LEO) satellite network; Reinforcement learning; Multi-quality of service (QoS); Routing algorithm
8. Intelligent path planning for small modular reactors based on improved reinforcement learning
Authors: DONG Yun-Feng, ZHOU Wei-Zheng, WANG Zhe-Zheng, ZHANG Xiao. Journal of Sichuan University (Natural Science Edition) (PKU Core), 2025(4): 1006-1014.
Small modular reactors (SMRs) are at the research forefront of nuclear reactor technology. Nowadays, the advancement of intelligent control technologies paves a new way for the design and construction of unmanned SMRs. The autonomous control process of an SMR can be divided into three stages: state diagnosis, autonomous decision-making, and coordinated control. In this paper, the autonomous state recognition and task planning of unmanned SMRs are investigated. An operating condition recognition method based on a knowledge base of SMR operation is proposed using artificial neural network (ANN) technology, which constructs a basis for the state judgment of intelligent reactor control path planning. An improved reinforcement learning path planning algorithm is utilized to implement path transfer decision-making; this algorithm performs condition transitions with minimal cost under specified modes. In summary, full-range intelligent decision-planning technology for SMR control paths is realized, providing a theoretical basis for the design and construction of unmanned SMRs in the future.
Keywords: Small modular reactor; Operating condition recognition; Path planning; Reinforcement learning
9. Chromogenic Reactions of Starch and Dextrin and Comparative Study of Thin-Layer Chromatography of Oligosaccharides in 35 Batches of Jiulongteng Honey
Authors: Beiqiao YIN, Qi HUANG, Yanyan CHEN, Shenggao YIN, Zhiqiang ZHU, Hanbai LIANG, Hao HUANG. Medicinal Plant, 2025(4): 24-28.
[Objectives] To explore methods for identifying pure honey. [Methods] Using 35 batches of Jiulongteng honey sourced from various production areas in Guangxi as the research subjects, this study investigated the chromogenic reactions of starch and dextrin, as well as a comparative study of thin-layer chromatography of the oligosaccharides present in Jiulongteng honey. [Results] None of the 35 batches of Jiulongteng honey samples exhibited blue (indicating starch), green, or reddish-brown (indicating dextrin) coloration, suggesting that no adulterants such as artificially added starch, dextrin, or sugar were present in these samples. Furthermore, none of the 35 batches displayed additional spots below the corresponding positions of the control, indicating that the sugar composition was consistent with the oligosaccharide profile of natural honey. No components inconsistent with the oligosaccharide profile of natural honey were detected. Therefore, it can be concluded that the Jiulongteng honey samples in this experiment were pure and free from adulteration with starch, dextrin, or other sugar substances. [Conclusions] The method employed in this experiment is straightforward and quick to implement, effectively preventing adulterated honey from entering the market. It enhances the efficiency of quality control for Jiulongteng honey and promotes the healthy development of the Jiulongteng honey industry.
Keywords: Jiulongteng honey; Chromogenic reaction; Thin-layer chromatography; Starch; Dextrin
10. Borehole reinforcement based on polymer materials induced by liquid-gas phase transition in simulating lunar coring
Authors: Dingqiang Mo, Tao Liu, Zhiyu Zhao, Liangyu Zhu, Dongsheng Yang, Yifan Wu, Cheng Lan, Wenchuan Jiang, Heping Xie. International Journal of Mining Science and Technology, 2025(3): 383-398.
Lunar core samples are the key materials for accurately assessing and developing lunar resources. However, the difficulty of maintaining borehole stability during lunar coring limits the depth of lunar coring. Here, a strategy is proposed that uses a reinforcement fluid which undergoes a phase transition spontaneously in a vacuum environment to reinforce the borehole. Based on this strategy, a reinforcement liquid suitable for a wide temperature range and a high-vacuum environment was developed. A feasibility study on reinforcing the borehole with the reinforcement liquid was carried out, and it was found that the cohesion of the simulated lunar soil can be increased from 2 to 800 kPa after using the reinforcement liquid. Further, a series of coring experiments was conducted using a self-developed high-vacuum (vacuum degree of 5 Pa) and low-temperature (between -30 and 50 °C) simulation platform. It is confirmed that the high-boiling-point reinforcement liquid pre-placed in the drill pipe can be released spontaneously during the drilling process and finally complete the reinforcement of the borehole. The reinforcement effect on the borehole is better when the solute concentration is between 0.15 and 0.25 g/mL.
Keywords: Lunar coring; Reinforcement fluid; Borehole reinforcement; Drill bit cooling
11. Application of Carbon Fiber Reinforced Polymer in Bridge Reinforcement
Author: Yuwei Zhang. Journal of Architectural Research and Development, 2025(3): 76-80.
Carbon fiber reinforced polymer (CFRP) is an advanced material widely used in bridge structures, demonstrating a promising application prospect. CFRP possesses excellent mechanical properties, construction advantages, and durability benefits. Its application in bridge reinforcement can significantly enhance the overall performance of the reinforced bridge, thereby improving durability and extending the service life of the bridge. Therefore, it is necessary to further explore how CFRP can be effectively applied in bridge reinforcement projects to improve the quality of such projects and ensure the safety of bridges during operation.
Keywords: Carbon fiber reinforced polymer; Earthquake resistance; Bridge reinforcement design
12. Reinforcement Learning in Mechatronic Systems: A Case Study on DC Motor Control
Authors: Alexander Nüßgen, Alexander Lerch, René Degen, Marcus Irmer, Martin de Fries, Fabian Richter, Cecilia Boström, Margot Ruschitzka. Circuits and Systems, 2025(1): 1-24.
The integration of artificial intelligence into the development and production of mechatronic products offers a substantial opportunity to enhance efficiency, adaptability, and system performance. This paper examines the utilization of reinforcement learning as a control strategy, with a particular focus on its deployment in pivotal stages of the product development lifecycle, specifically between system architecture and system integration and verification. A controller based on reinforcement learning was developed and evaluated in comparison to traditional proportional-integral controllers in dynamic and fault-prone environments. The results illustrate the superior adaptability, stability, and optimization potential of the reinforcement learning approach, particularly in addressing dynamic disturbances and ensuring robust performance. The study illustrates how reinforcement learning can facilitate the transition from conceptual design to implementation by automating optimization processes, enabling interface automation, and enhancing system-level testing. Based on the aforementioned findings, this paper presents future directions for research, which include the integration of domain-specific knowledge into the reinforcement learning process and the validation of this process in real-world environments. The results underscore the potential of artificial intelligence-driven methodologies to revolutionize the design and deployment of intelligent mechatronic systems.
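The proportional-integral baseline mentioned in the abstract can be sketched as a discrete PI speed loop around a first-order DC motor model. The motor gain, time constant, and PI gains below are illustrative assumptions, not values from the paper.

```python
def simulate_pi(kp=2.0, ki=5.0, setpoint=100.0, steps=5000, dt=0.001):
    """Discrete PI speed control of a first-order DC motor model.
    Plant (assumed): tau * domega/dt = -omega + K * u, integrated with
    forward Euler; the PI law drives omega toward the setpoint."""
    K, tau = 1.0, 0.1          # motor gain and time constant (assumed)
    omega, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - omega
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        omega += dt * (-omega + K * u) / tau
    return omega
```

The integral term removes the steady-state error, so the speed settles at the setpoint; an RL controller as studied in the paper would instead learn the control mapping from interaction, trading this transparency for adaptability to disturbances and faults.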
Keywords: Artificial intelligence in product development; Mechatronic systems; Reinforcement learning for control; System integration and verification; Adaptive optimization processes; Knowledge-based engineering
13. StM: a benchmark for evaluating generalization in reinforcement learning
Authors: YUAN Kaizhao, ZHANG Rui, PAN Yansong, YI Qi, PENG Shaohui, GUO Jiaming, HE Wenkai, HU Xing. High Technology Letters, 2025(2): 118-130.
The challenge of enhancing the generalization capacity of reinforcement learning (RL) agents remains a formidable obstacle. Existing RL methods, despite achieving superhuman performance on certain benchmarks, often struggle with this aspect. A potential reason is that the benchmarks used for training and evaluation may not adequately offer a diverse set of transferable tasks. Although recent studies have developed benchmarking environments to address this shortcoming, they typically fall short in providing tasks that both ensure a solid foundation for generalization and exhibit significant variability. To overcome these limitations, this work introduces the concept that ‘objects are composed of more fundamental components’ in environment design, as implemented in the proposed environment called summon the magic (StM). This environment generates tasks where objects are derived from extensible and shareable basic components, facilitating strategy reuse and enhancing generalization. Furthermore, two new metrics, the adaptation sensitivity range (ASR) and the parameter correlation coefficient (PCC), are proposed to better capture and evaluate the generalization process of RL agents. Experimental results show that increasing the number of basic components of an object reduces the proximal policy optimization (PPO) agent’s training-testing gap by 60.9% (in episode reward), significantly alleviating overfitting. Additionally, linear variations in other environmental factors, such as the training monster set proportion and the total number of basic components, uniformly decrease the gap by at least 32.1%. These results highlight StM’s effectiveness in benchmarking and probing the generalization capabilities of RL algorithms.
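The training-testing gap figures quoted above can be made concrete with a small sketch. The exact formula StM uses is not given in the abstract; the relative-difference form below is an assumption for illustration.

```python
def generalization_gap(train_rewards, test_rewards):
    """Training-testing gap in mean episode reward, a common overfitting
    proxy (relative-difference formulation assumed, not taken from StM)."""
    mu_train = sum(train_rewards) / len(train_rewards)
    mu_test = sum(test_rewards) / len(test_rewards)
    return (mu_train - mu_test) / abs(mu_train)

def gap_reduction(gap_before, gap_after):
    """Percentage reduction of the gap, the form of the reported 60.9%."""
    return 100.0 * (gap_before - gap_after) / gap_before
```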
Keywords: Reinforcement learning (RL); Generalization; Benchmark; Environment
14. Refractive status and histological changes after posterior scleral reinforcement in guinea pig
Authors: Yu-Yan Huang, Li-Yang Zhou, Guo-Fu Chen, Duo Peng, Miao-Zhen Pan, Ji-Bo Zhou, Jia Qu. International Journal of Ophthalmology (English edition), 2025(3): 375-382.
AIM: To investigate the refractive and histological changes in guinea pig eyes after posterior scleral reinforcement with scleral allografts. METHODS: Four-week-old guinea pigs were implanted with scleral allografts, and changes in refraction, corneal curvature, and axial length were monitored for 51 days. The effects of methylprednisolone (MPS) on refraction parameters were also evaluated, and the microstructure and ultra-microstructure of the eyes were observed on days 9 and 51 after the operation. Repeated-measures analysis of variance and one-way analysis of variance were used. RESULTS: The refraction of the implanted eye decreased after the operation, and the refraction change of the 3 mm scleral allograft group was significantly different from the control group (P=0.005) and the sham surgical group (P=0.004). After the application of MPS solution, the reduction in refraction was statistically suppressed (P=0.008). Inflammatory encapsulation appeared 9 days after surgery. On day 51 after the operation, the loosely implanted materials were absorbed, while the adherent implanted materials in the MPS group were still tightly attached to the recipient's eyeball. CONCLUSION: After implantation of scleral allografts, the refraction of guinea pig eyes fluctuated from a decrease to an increase. The outcome of the scleral allografts is affected by the implantation method and the inflammatory response. The stability of the material can be improved by MPS.
Keywords: Posterior scleral reinforcement; Methylprednisolone; Inflammation; Myopia; Guinea pig
15. Reinforcement learning-enabled swarm intelligence method for computation task offloading in Internet-of-Things blockchain
Authors: Zhuo Chen, Jiahuan Yi, Yang Zhou, Wei Luo. Digital Communications and Networks, 2025(3): 912-924.
Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet-of-Things (IoT), owing to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it mostly needs to perform the necessary mathematical calculations to prevent malicious attacks, which results in stricter requirements for computation resources on the participating devices. Offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, can effectively address the limitations of computation and energy resources in IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Leveraging the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms (e.g., Proof-of-Work); it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud to maximize the revenue of devices. Experimental results show that RLSIOA can obtain higher-quality offloading decision-making schemes at lower latency costs compared with representative benchmark algorithms.
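The offload-fraction decision can be illustrated with a toy revenue model. The throughput-driven reward, the CPU rates, and all prices below are invented for the sketch, and a plain grid search stands in for the paper's RL-guided swarm optimizer.

```python
def revenue(frac_offload, cpu_local=2e9, cpu_edge=1e10, task_cycles=1e11,
            reward_rate=0.01, edge_price=0.002, energy_price=0.5):
    """Hypothetical revenue of a PoW-style consensus round: reward is
    proportional to effective throughput (cycles per second across the
    slower of the two partitions), minus edge rental and local energy
    costs. All constants are illustrative assumptions."""
    local_cycles = (1 - frac_offload) * task_cycles
    edge_cycles = frac_offload * task_cycles
    latency = max(local_cycles / cpu_local, edge_cycles / cpu_edge)
    gain = reward_rate * task_cycles / latency   # throughput-driven reward
    cost = edge_price * edge_cycles / 1e9 + energy_price * local_cycles / 1e9
    return gain - cost

def best_fraction(step=0.05):
    """Grid search over offload fractions, standing in for RLSIOA."""
    fracs = [i * step for i in range(int(1 / step) + 1)]
    return max(fracs, key=revenue)
```

Under these numbers the optimum sits near the point where local and edge finish times balance, which is the kind of trade-off the algorithm searches for.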
Keywords: Blockchain; Task offloading; Swarm intelligence; Reinforcement learning
16. Utility-Driven Edge Caching Optimization with Deep Reinforcement Learning under Uncertain Content Popularity
Authors: Mingoo Kwon, Kyeongmin Kim, Minseok Song. Computers, Materials & Continua, 2025(10): 519-537.
Efficient edge caching is essential for maximizing utility in video streaming systems, especially under constraints such as limited storage capacity and dynamically fluctuating content popularity. Utility, defined as the benefit obtained per unit of cache bandwidth usage, degrades when static or greedy caching strategies fail to adapt to changing demand patterns. To address this, we propose a deep reinforcement learning (DRL)-based caching framework built upon the proximal policy optimization (PPO) algorithm. Our approach formulates edge caching as a sequential decision-making problem and introduces a reward model that balances cache hit performance and utility by prioritizing high-demand, high-quality content while penalizing degraded-quality delivery. We construct a realistic synthetic dataset that captures both temporal variations and shifting content popularity to validate our model. Experimental results demonstrate that our proposed method improves utility by up to 135.9% and achieves an average improvement of 22.6% compared with traditional greedy algorithms and long short-term memory (LSTM)-based prediction models. Moreover, our method consistently performs well across a variety of utility functions, workload distributions, and storage limitations, underscoring its adaptability and robustness in dynamic video caching environments.
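The reward model described above can be sketched as a shaping function. The weights and functional form are assumptions made for the example, not the paper's actual reward.

```python
def caching_reward(hit, popularity, quality, max_quality,
                   w_hit=1.0, w_util=0.5, degrade_penalty=0.3):
    """Reward-shaping sketch in the spirit of the paper: reward cache
    hits on popular content, add a utility term for delivered quality,
    and penalize serving degraded quality (weights are assumptions)."""
    r = w_hit * popularity if hit else 0.0
    r += w_util * (quality / max_quality)            # utility term
    if quality < max_quality:
        # penalty grows with how far quality falls below the maximum
        r -= degrade_penalty * (max_quality - quality) / max_quality
    return r
```

A PPO agent trained against such a reward learns cache placements that favor popular, full-quality deliveries over misses or degraded streams.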
Keywords: Edge caching; Video-on-demand; Reinforcement learning; Utility optimization
17. Enhanced deep reinforcement learning for integrated navigation in multi-UAV systems
Authors: Zhengyang CAO, Gang CHEN. Chinese Journal of Aeronautics, 2025(8): 119-138.
In multiple Unmanned Aerial Vehicle (UAV) systems, achieving efficient navigation is essential for executing complex tasks and enhancing autonomy. Traditional navigation methods depend on predefined control strategies and trajectory planning and often perform poorly in complex environments. To improve UAV-environment interaction efficiency, this study proposes a multi-UAV integrated navigation algorithm based on Deep Reinforcement Learning (DRL). This algorithm integrates the Inertial Navigation System (INS), Global Navigation Satellite System (GNSS), and Visual Navigation System (VNS) for comprehensive information fusion. Specifically, an improved multi-UAV integrated navigation algorithm called Information Fusion with Multi-Agent Deep Deterministic Policy Gradient (IF-MADDPG) was developed. This algorithm enables UAVs to learn collaboratively and optimize their flight trajectories in real time. Through simulations and experiments, test scenarios in GNSS-denied environments were constructed to evaluate the effectiveness of the algorithm. The experimental results demonstrate that the IF-MADDPG algorithm significantly enhances the collaborative navigation capabilities of multiple UAVs in formation maintenance and GNSS-denied environments. Additionally, it has advantages in terms of mission completion time. This study provides a novel approach for efficient collaboration in multi-UAV systems, which significantly improves the robustness and adaptability of navigation systems.
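Fusing INS, GNSS, and VNS estimates can be illustrated with the classical inverse-variance weighting baseline. This is a textbook fusion rule, not the IF-MADDPG scheme itself; the source names and variances below are assumptions.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of scalar position estimates from
    several navigation sources. `estimates` maps a source name to a
    (position, variance) pair; more precise sources get larger weight."""
    weights = {s: 1.0 / var for s, (pos, var) in estimates.items()}
    total = sum(weights.values())
    return sum(w * estimates[s][0] for s, w in weights.items()) / total

# In a GNSS-denied segment, one can drop the GNSS entry (or inflate its
# variance) so the fused solution leans on INS and VNS instead.
```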
Keywords: multi-UAV system; reinforcement learning; integrated navigation; MADDPG; information fusion
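The abstract does not specify the INS/GNSS/VNS fusion rule. A common classical baseline that a learned fusion policy would be compared against is inverse-variance weighting, sketched here as a hypothetical illustration (the sensor readings and variances are placeholders):

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of (estimate, variance) pairs.

    More certain sensors (smaller variance) receive larger weights.
    A learned fusion approach such as IF-MADDPG would effectively
    replace these fixed statistical weights with state-dependent ones.
    """
    total = sum(1.0 / var for _, var in estimates)
    return sum((1.0 / var) * est for est, var in estimates) / total

# Example: a drifting INS estimate (12.0, high variance) fused with a
# trusted GNSS fix (10.0, low variance) lands close to the GNSS reading.
fused = fuse_estimates([(10.0, 0.5), (12.0, 2.0)])
```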
Dynamic Decoupling-Driven Cooperative Pursuit for Multi-UAV Systems:A Multi-Agent Reinforcement Learning Policy Optimization Approach
18
Authors: Lei Lei, Chengfu Wu, Huaimin Chen. Computers, Materials &amp; Continua, 2025, No. 10: 1339-1363 (25 pages)
This paper proposes a Multi-Agent Attention Proximal Policy Optimization (MA2PPO) algorithm to address problems such as credit assignment, low collaboration efficiency, and weak strategy generalization in cooperative pursuit tasks for multiple unmanned aerial vehicles (UAVs). Traditional algorithms often fail to identify the critical cooperative relationships in such tasks, leading to low capture efficiency and a significant decline in performance as the scale expands. To tackle these issues, MA2PPO builds on the proximal policy optimization (PPO) algorithm, adopts the centralized training with decentralized execution (CTDE) framework, and introduces a dynamic decoupling mechanism: a multi-head attention (MHA) mechanism shared by the critics during centralized training solves the credit assignment problem. This method enables the pursuers to identify highly correlated interactions with their teammates, eliminate irrelevant and weakly relevant interactions, and decompose large-scale cooperation problems into decoupled sub-problems, thereby enhancing collaborative efficiency and policy stability among multiple agents. Furthermore, a reward function combining a formation reward with a distance reward is devised to help the pursuers encircle the escapee, incentivizing UAVs to develop sophisticated cooperative pursuit strategies. Experimental results demonstrate the effectiveness of the proposed algorithm in achieving multi-UAV cooperative pursuit and inducing diverse cooperative pursuit behaviors among UAVs. Moreover, scalability experiments show that the algorithm is suitable for large-scale multi-UAV systems.
Keywords: multi-agent reinforcement learning; multi-UAV systems; pursuit-evasion games
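The dynamic decoupling described above rests on attention over teammate features: teammates whose interactions matter get high weight, the rest are effectively ignored. A single scaled dot-product attention head can illustrate the idea (a pure-Python sketch of the standard mechanism, not the paper's multi-head implementation):

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention weights over teammate feature vectors.

    A teammate whose key aligns with the query receives a large weight
    (strong, relevant interaction); misaligned teammates are pushed
    toward near-zero weight, which is the "decoupling" effect.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # subtract max for a stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With one head per critic shared across agents, these weights tell each pursuer which teammates its value estimate should actually condition on.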
Decision-making and confrontation in close-range air combat based on reinforcement learning
19
Authors: Mengchao YANG, Shengzhe SHAN, Weiwei ZHANG. Chinese Journal of Aeronautics, 2025, No. 9: 401-420 (20 pages)
The high maneuverability of modern fighters in close air combat imposes significant cognitive demands on pilots, making rapid, accurate decision-making challenging. While reinforcement learning (RL) has shown promise in this domain, existing methods often lack strategic depth and generalization in complex, high-dimensional environments. To address these limitations, this paper proposes an optimized self-play method enhanced by advancements in fighter modeling, neural network design, and algorithmic frameworks. Unlike traditional 3-DOF models, this study employs a six-degree-of-freedom (6-DOF) F-16 fighter model based on open-source aerodynamic data, featuring airborne equipment and a realistic visual simulation platform. To capture temporal dynamics, Long Short-Term Memory (LSTM) layers are integrated into the neural network, complemented by delayed input stacking. The RL environment incorporates expert strategies, curiosity-driven rewards, and curriculum learning to improve adaptability and strategic decision-making. Experimental results demonstrate that the proposed approach achieves a winning rate exceeding 90% against classical single-agent methods. Additionally, through enhanced 3D visual platforms, we conducted human-agent confrontation experiments, where the agent attained an average winning rate of over 75%. The agent's maneuver trajectories closely align with human pilot strategies, showcasing its potential in decision-making and pilot training applications. This study highlights the effectiveness of integrating advanced modeling and self-play techniques in developing robust air combat decision-making systems.
Keywords: air combat; decision making; flight simulation; reinforcement learning; self-play
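The "delayed input stacking" mentioned alongside the LSTM layers can be pictured as a fixed-length frame buffer that feeds the last few observations to the policy together. The class name and layout below are assumptions, a minimal sketch rather than the paper's architecture:

```python
from collections import deque

class DelayedObservationStack:
    """Keeps the most recent n observations and returns them flattened,
    giving the policy network short-term temporal context in addition
    to whatever the LSTM carries in its hidden state."""

    def __init__(self, n_frames: int, obs_dim: int):
        # Start with zero-filled frames so the stack has a fixed size
        # from the first step of an episode.
        self.frames = deque(([0.0] * obs_dim for _ in range(n_frames)),
                            maxlen=n_frames)

    def push(self, obs):
        """Append the newest observation (oldest drops off) and return
        the flattened stack, newest frame last."""
        self.frames.append(list(obs))
        return [x for frame in self.frames for x in frame]
```

Each environment step would call `push` with the raw observation and pass the returned vector to the network.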
DRL-IQA:Deep Reinforcement Learning for Opinion-Unaware Blind Image Quality Assessment
20
Authors: Ying Zefeng, Pan Da, Shi Ping. China Communications, 2025, No. 6: 237-254 (18 pages)
Most blind image quality assessment (BIQA) methods require a large amount of time to collect human opinion scores as training labels, which limits their usability in practice. Thus, we present an opinion-unaware BIQA method based on deep reinforcement learning, trained without subjective scores, named DRL-IQA. Inspired by the human visual perception process, our model is formulated as a quality-reinforced agent consisting of a dynamic distortion generation part and a quality perception part. By treating the image distortion degradation process as a sequential decision-making process, the dynamic distortion generation part can develop a strategy to add as many different distortions as possible to an image, enriching the distortion space to alleviate overfitting. A reward function calculated from the quality degradation after adding a distortion is utilized to continuously optimize the strategy. Furthermore, the quality perception part can extract rich quality features from the quality degradation process without using subjective scores, and accurately predict the state values that represent the image quality. Experimental results reveal that our method achieves competitive quality prediction performance compared to other state-of-the-art BIQA methods.
Keywords: blind image quality assessment; deep reinforcement learning; opinion-unaware
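The "reward calculated from quality degradation after adding distortion" can be illustrated with a toy distortion step; the Gaussian-noise distortion and MSE/PSNR-style quality proxy here are stand-ins for the paper's actual distortion bank and quality measure:

```python
import math
import random

def mse(ref, img):
    """Mean squared error between a reference and a distorted signal."""
    return sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)

def quality(ref, img, peak=1.0):
    """PSNR-style proxy: higher means closer to the reference."""
    err = mse(ref, img)
    return float("inf") if err == 0 else 10 * math.log10(peak ** 2 / err)

def distortion_step(ref, img, strength, rng):
    """Apply one distortion action (additive Gaussian noise) and return
    the distorted image plus the reward: the quality drop it caused."""
    q_before = quality(ref, img)
    distorted = [x + rng.gauss(0.0, strength) for x in img]
    return distorted, q_before - quality(ref, distorted)
```

An agent maximizing this per-step reward is driven to pick whichever distortion degrades the image most, which is how the generation part explores a rich distortion space.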