Journal Articles
2 articles found
Distributed optimization based on improved push-sum framework for optimization problem with multiple local constraints and its application in smart grid (Cited by: 2)
1
Authors: Qian XU, Chutian YU, Xiang YUAN, Mengli WEI, Hongzhe LIU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, No. 9, pp. 1253-1260 (8 pages).
In this paper, the optimization problem subject to N nonidentical closed convex set constraints is studied. The aim is to design a corresponding distributed optimization algorithm over a fixed unbalanced graph to solve the considered problem. To this end, the push-sum framework is improved and a distributed optimization algorithm is newly designed; a strict convergence analysis is given under the assumption that the involved graph is strongly connected. Finally, simulation results support the good performance of the proposed algorithm.
Keywords: Distributed optimization; Nonidentical constraints; Improved push-sum framework
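For context on the framework the paper builds on, the following is a minimal sketch of the *standard* push-sum averaging protocol (not the paper's improved constrained variant): each node maintains a value and a weight, mixes both through a column-stochastic matrix built from the directed graph, and the value/weight ratio converges to the global average even on unbalanced graphs, provided the graph is strongly connected. All names and the toy graph here are illustrative.

```python
import numpy as np

def push_sum_average(values, adjacency, iters=200):
    """Standard push-sum averaging on a directed graph.

    Each node splits its (value, weight) mass equally among itself
    and its out-neighbours; the ratio value/weight at every node
    converges to the average of `values`, even when the graph is
    unbalanced, as long as it is strongly connected.
    adjacency[i][j] == 1 means there is an edge j -> i.
    """
    n = len(values)
    x = np.array(values, dtype=float)   # running sums
    w = np.ones(n)                      # push-sum weights
    # Column-stochastic mixing matrix: node j divides its mass
    # among its out-neighbours plus a self-loop.
    out_deg = adjacency.sum(axis=0) + 1
    P = (adjacency + np.eye(n)) / out_deg
    for _ in range(iters):
        x = P @ x
        w = P @ w
    return x / w  # each entry approaches mean(values)

# Directed ring on 3 nodes (unbalanced): 0 -> 1 -> 2 -> 0
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
est = push_sum_average([3.0, 6.0, 9.0], A)  # every entry near 6.0
```

The column-stochastic (rather than doubly stochastic) mixing matrix is exactly what lets push-sum handle unbalanced directed graphs; the auxiliary weights `w` correct for the uneven mass flow.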
Probabilistic Automata-Based Method for Enhancing Performance of Deep Reinforcement Learning Systems
2
Authors: Min Yang, Guanjun Liu, Ziyuan Zhou, Jiacun Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 11, pp. 2327-2339 (13 pages).
Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management. However, due to the model's inherent uncertainty, rigorous validation is requisite for its application in real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL poses a challenge for testing model behavior. We propose a novel performance improvement framework based on probabilistic automata, which aims to proactively identify and correct critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modifications. First, a probabilistic automaton is constructed from the historical trajectories of the DRL system by abstracting states to generate probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) method is used to identify the key PDMU-action pairs that have the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under each key PDMU, we search for the new action that has the greatest impact on favorable results. Finally, the key PDMU, undesirable action, and new action are encapsulated as monitors that guide the DRL system toward more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three actual job scheduling scenarios confirmed the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
Keywords: Deep reinforcement learning (DRL); Performance improvement framework; Probabilistic automata; Real-time monitoring; Key probabilistic decision-making unit (PDMU)-action pair
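The trajectory-abstraction step the abstract describes can be illustrated with a toy sketch: transition counts and per-pair failure rates are estimated from (abstract state, action) trajectories, and the pair most associated with adverse outcomes is flagged. Note this uses a naive failure-rate ranking as a stand-in for the paper's reverse BFS over the automaton, and all names and data below are hypothetical.

```python
from collections import defaultdict

def build_automaton(trajectories):
    """Estimate a probabilistic automaton from abstracted trajectories.

    Each trajectory is ([(abstract_state, action), ...], success),
    matching the paper's premise that only the state-action sequence
    and the final result of each trajectory are needed.
    """
    trans = defaultdict(lambda: defaultdict(int))  # (s, a) -> next state -> count
    fails = defaultdict(lambda: [0, 0])            # (s, a) -> [failures, visits]
    for steps, success in trajectories:
        for i, (s, a) in enumerate(steps):
            fails[(s, a)][1] += 1
            if not success:
                fails[(s, a)][0] += 1
            if i + 1 < len(steps):
                trans[(s, a)][steps[i + 1][0]] += 1
    return trans, fails

def key_pair(fails, min_visits=2):
    """Return the (state, action) pair with the highest empirical failure rate."""
    scored = {k: f / v for k, (f, v) in fails.items() if v >= min_visits}
    return max(scored, key=scored.get)

# Toy trajectories over abstract states "s0", "s1" and actions "a", "b".
trajs = [
    ([("s0", "a"), ("s1", "a")], True),
    ([("s0", "a"), ("s1", "b")], False),
    ([("s0", "b"), ("s1", "b")], False),
    ([("s0", "a"), ("s1", "a")], True),
]
trans, fails = build_automaton(trajs)
worst = key_pair(fails)  # the pair most associated with failure
```

In the paper's framework, the flagged pair together with a better replacement action would then be wrapped into a runtime monitor that intercepts the undesirable action and substitutes the corrected one.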