Abstract: The tiered carbon-trading mechanism and the solution algorithm for the optimal dispatch model are two important factors in the optimal dispatch of a community integrated energy system (CIES), and the existing literature does not consider both comprehensively. Therefore, on the basis of the tiered carbon-trading mechanism, this paper proposes using the proximal policy optimization (PPO) algorithm to solve the low-carbon optimal dispatch problem of a CIES. The method builds a reinforcement-learning environment from the low-carbon optimal dispatch model, defines the agent's state space, action space, and reward function from the equipment's state and operating parameters, and then obtains, through offline training, an agent that can generate the optimal policy. Case-study results show that the CIES low-carbon optimal dispatch method obtained with the PPO algorithm can fully exploit the advantages of the tiered carbon-trading mechanism in reducing carbon emissions and improving energy-utilization efficiency.
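A tiered (ladder-type) carbon-trading cost of the kind this abstract refers to prices emissions above the free quota in progressively more expensive intervals. The sketch below is a common textbook form of that cost, with illustrative parameter names and values; it is not taken from the paper itself.

```python
def tiered_carbon_cost(emission: float, quota: float,
                       base_price: float = 0.25,
                       interval: float = 100.0,
                       growth: float = 0.25) -> float:
    """Ladder-type carbon cost: emissions beyond the quota are priced in
    intervals, with the unit price growing by `growth` per interval.
    A negative result means surplus allowance is sold at the base price."""
    excess = emission - quota
    if excess <= 0:
        return base_price * excess  # revenue from selling unused quota
    cost, price, remaining = 0.0, base_price, excess
    while remaining > 0:
        segment = min(remaining, interval)
        cost += price * segment
        price *= 1.0 + growth  # next interval is more expensive
        remaining -= segment
    return cost
```

In a PPO dispatch environment such a cost would typically enter the reward with a negative sign, so the agent is penalized more steeply the further it overshoots its allowance.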
Fund: supported by the Foundation of Key Laboratory of System Control and Information Processing, Ministry of Education, China (Scip20240111); the Aeronautical Science Foundation of China (Grant 2024Z071108001); and the Foundation of Key Laboratory of Traffic Information and Safety of Anhui Higher Education Institutes, Anhui Sanlian University (KLAHEI18018).
Abstract: This paper employs the proximal policy optimization (PPO) algorithm to study the risk-hedging problem of Shanghai Stock Exchange (SSE) 50ETF options. First, the action and state spaces were designed based on the characteristics of the hedging task, and a reward function was developed from the options' cost function. Second, drawing on the concept of curriculum learning, the agent was guided through a simulated-to-real learning approach for dynamic hedging tasks, reducing the learning difficulty and addressing the shortage of option data; a dynamic hedging strategy for 50ETF options was thereby constructed. Finally, numerical experiments demonstrate that the designed algorithm outperforms traditional hedging strategies in hedging effectiveness.
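The PPO update all of these papers build on is the clipped surrogate objective. A minimal per-sample sketch of that objective follows; the names (`ratio`, `advantage`, `eps`) are generic PPO terms, and the hedging-specific reward and cost functions of the paper are not reproduced here.

```python
def ppo_clip_objective(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """PPO clipped surrogate for one sample:
    L = min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped_ratio = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped_ratio * advantage)
```

Taking the minimum makes the objective pessimistic: large policy steps that would inflate the surrogate are clipped away, which is what gives PPO its stability on noisy tasks such as option hedging.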
Abstract: To smooth the fluctuation of micro-source output in the grid-connected half-bridge converter series Y-connection micro-grid (HCSY-MG) system, while keeping the sum of the DC-side voltages equal across phases and the grid-connected currents balanced among the three phases, a charge/discharge optimization control strategy for a distributed hybrid energy storage system (HESS) based on an improved proximal policy optimization (PPO) algorithm is proposed. Considering the grid-connected current of the HCSY-MG system and the characteristics of the distributed HESS, the main system variables affecting the grid-connected current and the optimal topology for connecting the HESS to the system are determined. Then, drawing on the characteristics of the series-connected system, the charge/discharge problem of the distributed HESS is converted into a Markov decision process for deep reinforcement learning. To address the difficulty of setting the entropy-loss weight in the PPO algorithm, an improved PPO algorithm is proposed that balances the agent's convergence and exploration. Finally, typical operating data from a renewable-energy generation base are used as a case study to verify the feasibility and effectiveness of the proposed control strategy.
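The entropy-weight difficulty the abstract raises is often handled by scheduling the coefficient rather than fixing it: a large weight early keeps the agent exploring, a small weight late lets it converge. The linear anneal below is one common illustrative choice, not the paper's improved PPO mechanism.

```python
def entropy_coef(step: int, total_steps: int,
                 start: float = 0.01, end: float = 0.001) -> float:
    """Linearly anneal the entropy-bonus weight from `start` to `end`
    over `total_steps` training steps, then hold at `end`."""
    frac = min(step / total_steps, 1.0)
    return start + frac * (end - start)
```

The coefficient multiplies the policy-entropy bonus in the PPO loss, so this schedule trades exploration (early) for convergence (late), which is exactly the balance the improved algorithm targets.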
Fund: the National Natural Science Foundation of China (No. 62103009).
Abstract: Bionic gait learning of quadruped robots based on reinforcement learning has become a hot research topic. The proximal policy optimization (PPO) algorithm has a low probability of learning a successful gait from scratch due to problems such as reward sparsity. To solve this problem, we propose an experience evolution proximal policy optimization (EEPPO) algorithm, which integrates PPO with prior knowledge highlighted by an evolutionary strategy. We use successfully trained samples as prior knowledge to guide the learning direction and thus increase the success probability of the learning algorithm. To verify the effectiveness of the proposed EEPPO algorithm, we conducted simulation experiments on the quadruped robot gait-learning task in PyBullet. Experimental results show that the central pattern generator based radial basis function (CPG-RBF) network and the policy network are updated simultaneously to accomplish the quadruped robot's bionic diagonal trot gait-learning task, using key information such as the robot's speed, posture, and joint states. Comparison with the traditional soft actor-critic (SAC) algorithm validates the superiority of the proposed EEPPO algorithm, which learns a more stable diagonal trot gait on flat terrain.
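The "successful samples as prior knowledge" idea can be pictured as keeping an elite buffer of the best-return episodes and reusing them to steer updates, in the spirit of evolutionary selection. The top-k selection rule below is our illustrative assumption; the actual EEPPO mechanism may combine samples differently.

```python
def select_elites(episodes, k):
    """episodes: list of (episode_return, trajectory) pairs.
    Keep the k episodes with the highest return as 'prior knowledge'
    to be replayed or mixed into subsequent policy updates."""
    return sorted(episodes, key=lambda ep: ep[0], reverse=True)[:k]
```

Under sparse rewards, even a handful of successful trajectories retained this way gives the learner a consistent direction that random exploration from scratch rarely finds.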
Abstract: To improve the map-free visual navigation capability of mobile robots and raise the navigation success rate, a visual navigation model combining a long short-term memory (LSTM) network with the proximal policy optimization (PPO) algorithm is proposed. First, the model fuses LSTM and PPO as the network model for visual navigation. Second, a reward function is designed from factors such as the robot's actions, its distance to the goal, and motion time, and is used for training. Finally, taking the RGB-D images from the robot's first-person view and the polar coordinates of the goal point as input, and the robot's continuous action values as output, the model performs map-free end-to-end visual navigation and generalizes by inference to new goals it was never trained on. Compared with earlier algorithms, the model converges faster in the simulated environment and improves the navigation success rate by 17.7% on average for trained goals and by 23.3% for new goals, showing good navigation performance.
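Feeding the goal as polar coordinates means expressing it as a (distance, bearing) pair in the robot's own frame. A sketch of that transform follows; the function and argument names are ours, not the paper's.

```python
import math

def goal_to_polar(robot_xy, robot_yaw, goal_xy):
    """Return (distance, bearing) of the goal in the robot frame.
    Bearing is relative to the robot's heading, wrapped to [-pi, pi]."""
    dx = goal_xy[0] - robot_xy[0]
    dy = goal_xy[1] - robot_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - robot_yaw
    bearing = (bearing + math.pi) % (2.0 * math.pi) - math.pi  # wrap
    return distance, bearing
```

This egocentric encoding is what lets the same policy generalize to goals never seen in training: a new goal just produces a new (distance, bearing) input in a space the network already covers.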
Fund: supported by FCT/MCTES through national funds and, when applicable, co-funded by EU funds under project UIDB/50008/2020.
Abstract: Autonomous driving systems (ADS) are at the forefront of technological innovation, promising enhanced safety, efficiency, and convenience in transportation. This study investigates the potential of end-to-end reinforcement learning (RL) architectures for ADS, focusing on a go-to-point task involving lane keeping and navigation through basic urban environments, using the proximal policy optimization (PPO) algorithm within the CARLA simulation environment. Traditional modular systems, which separate driving into perception, decision-making, and control, provide interpretability and reliability in controlled scenarios but struggle to adapt to dynamic, real-world conditions. In contrast, end-to-end systems offer a more integrated approach, potentially enhancing flexibility and decision-making cohesion. This research introduces CARLA-GymDrive, a novel framework integrating the CARLA simulator with the Gymnasium API, enabling seamless RL experimentation with both discrete and continuous action spaces. Through a two-phase training regimen, the study evaluates the efficacy of PPO in an end-to-end ADS on basic tasks such as lane keeping and waypoint navigation; a comparative analysis with modular architectures is also provided. The findings highlight the strengths of PPO in managing continuous control tasks, achieving smoother and more adaptable driving behaviors than value-based algorithms such as Deep Q-Networks. However, challenges remain in generalization and computational demands, with end-to-end systems requiring extensive training time. While the study underscores the potential of end-to-end architectures, it also identifies limitations in scalability and real-world applicability, suggesting that modular systems may currently be more feasible for practical ADS deployment. Nonetheless, the CARLA-GymDrive framework and the insights gained from PPO-based ADS contribute significantly to the field, laying a foundation for future advances in autonomous driving.
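The Gymnasium API the framework targets exposes `reset()` returning `(observation, info)` and `step()` returning `(observation, reward, terminated, truncated, info)`. The toy one-dimensional go-to-point environment below sketches that contract without CARLA; the class, reward shaping, and thresholds are all illustrative, not CARLA-GymDrive's actual implementation.

```python
class ToyGoToPoint:
    """Minimal Gymnasium-style go-to-point environment on a 1-D line.
    Action: a continuous displacement; goal: reach within 0.5 of `goal`."""

    def __init__(self, goal: float = 10.0):
        self.goal = goal
        self.pos = 0.0

    def reset(self, seed=None):
        self.pos = 0.0
        return self.pos, {}  # (observation, info)

    def step(self, action: float):
        self.pos += action
        dist = abs(self.goal - self.pos)
        terminated = dist < 0.5          # reached the goal
        truncated = False                # no time limit in this toy
        reward = -dist + (100.0 if terminated else 0.0)
        return self.pos, reward, terminated, truncated, {}
```

Wrapping a simulator behind this contract is what lets off-the-shelf PPO implementations train on it unchanged, with either discrete or continuous action spaces.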