Funding: Supported by the National Science and Technology Council, Taiwan [Grant NSTC 111-2628-E-006-005-MY3]; supported by the Ocean Affairs Council, Taiwan; sponsored in part by the Higher Education Sprout Project, Ministry of Education, to the Headquarters of University Advancement at National Cheng Kung University (NCKU).
Abstract: This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter-tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control; it offers stable training, reduced overestimation bias, and superior handling of continuous control compared to other DRL methods. During the searching stage, zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking-station localization. For the docking stage, this study proposes an innovative Image-based DDPG (I-DDPG), enhanced and trained in a Unity-MATLAB simulation environment, to achieve visual target tracking. Furthermore, integrating a DT environment enables efficient and safe policy training, reduces dependence on costly real-world tests, and improves sim-to-real transfer performance. Both simulation and real-world experiments were conducted, demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments. The results highlight the scalability and robustness of the proposed system, as evidenced by the TD3 controller achieving 25% less oscillation than the adaptive fuzzy controller when reaching the target depth, thereby demonstrating superior stability, accuracy, and potential for broader and more complex autonomous underwater tasks.
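For readers unfamiliar with the positioning-stage algorithm, the sketch below shows the core TD3 update (clipped double-Q targets, target-policy smoothing, and delayed actor updates) applied to a hypothetical depth-and-heading control setup. The state/action layout, network sizes, and hyperparameters are illustrative assumptions, not the configuration used in the study.

```python
# Minimal TD3 update sketch for synchronized depth/heading control.
# Hypothetical: state = [depth error, depth rate, heading error, yaw rate],
# action = [stern-plane command, rudder command], each scaled to [-1, 1].
import copy
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class TD3:
    def __init__(self, s_dim=4, a_dim=2, gamma=0.99, tau=0.005,
                 policy_noise=0.2, noise_clip=0.5, policy_delay=2):
        self.actor = nn.Sequential(mlp(s_dim, a_dim), nn.Tanh())
        self.q1, self.q2 = mlp(s_dim + a_dim, 1), mlp(s_dim + a_dim, 1)
        self.actor_t, self.q1_t, self.q2_t = map(copy.deepcopy,
                                                 (self.actor, self.q1, self.q2))
        self.pi_opt = torch.optim.Adam(self.actor.parameters(), lr=3e-4)
        self.q_opt = torch.optim.Adam(list(self.q1.parameters()) +
                                      list(self.q2.parameters()), lr=3e-4)
        self.gamma, self.tau = gamma, tau
        self.policy_noise, self.noise_clip = policy_noise, noise_clip
        self.policy_delay, self.step = policy_delay, 0

    def update(self, s, a, r, s2, done):
        with torch.no_grad():
            # Target-policy smoothing: clipped noise on the target action.
            noise = (torch.randn_like(a) * self.policy_noise
                     ).clamp(-self.noise_clip, self.noise_clip)
            a2 = (self.actor_t(s2) + noise).clamp(-1.0, 1.0)
            # Clipped double-Q target reduces overestimation bias.
            q_t = torch.min(self.q1_t(torch.cat([s2, a2], 1)),
                            self.q2_t(torch.cat([s2, a2], 1)))
            y = r + self.gamma * (1 - done) * q_t
        sa = torch.cat([s, a], 1)
        q_loss = ((self.q1(sa) - y) ** 2).mean() + ((self.q2(sa) - y) ** 2).mean()
        self.q_opt.zero_grad(); q_loss.backward(); self.q_opt.step()
        self.step += 1
        if self.step % self.policy_delay == 0:  # Delayed actor update.
            pi_loss = -self.q1(torch.cat([s, self.actor(s)], 1)).mean()
            self.pi_opt.zero_grad(); pi_loss.backward(); self.pi_opt.step()
            for net, tgt in ((self.actor, self.actor_t),
                             (self.q1, self.q1_t), (self.q2, self.q2_t)):
                for p, pt in zip(net.parameters(), tgt.parameters()):
                    pt.data.mul_(1 - self.tau).add_(self.tau * p.data)

# One gradient step: agent.update(s, a, r, s2, done) on replay-buffer batches;
# the reward could, e.g., penalize weighted depth and heading errors.
```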
Funding: Supported by Princess Nourah Bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R135), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The popularity of quadrotor Unmanned Aerial Vehicles (UAVs) stems from their simple propulsion systems and structural design. However, their complex and nonlinear dynamic behavior presents a significant challenge for control, necessitating sophisticated algorithms to ensure stability and accuracy in flight. Various strategies have been explored by researchers and control engineers, with learning-based methods such as reinforcement learning, deep learning, and neural networks showing promise in enhancing the robustness and adaptability of quadrotor control systems. This paper investigates a Reinforcement Learning (RL) approach for both high- and low-level quadrotor control systems, focusing on attitude stabilization and position tracking tasks. A novel reward function and actor-critic network structures are designed to stimulate high-order observable states, improving the agent's understanding of the quadrotor's dynamics and environmental constraints. To address the challenge of RL hyper-parameter tuning, a new framework is introduced that combines Simulated Annealing (SA) with a reinforcement learning algorithm, specifically Simulated Annealing-Twin Delayed Deep Deterministic Policy Gradient (SA-TD3). This approach is evaluated on path-following and stabilization tasks through comparative assessments against two commonly used control methods: Backstepping and Sliding Mode Control (SMC). While the well-trained agents exhibited unexpected behavior during real-world testing, a reduced neural network used for altitude control was successfully implemented on a Parrot Mambo mini drone. The results showcase the potential of the proposed SA-TD3 framework for real-world applications, demonstrating improved stability and precision across various test scenarios and highlighting its feasibility for practical deployment.
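One plausible shape for the SA-TD3 tuning loop is sketched below: simulated annealing proposes neighboring hyperparameter sets, each candidate is scored by training and evaluating a TD3 agent, and worse candidates are occasionally accepted under a cooling temperature. The scoring stub, search ranges, and geometric cooling schedule are placeholders for illustration, not the paper's actual procedure.

```python
# Simulated-annealing search over TD3 hyperparameters (illustrative sketch).
import math
import random

def train_and_evaluate_td3(params):
    # Stand-in scorer so the sketch runs end to end: prefers actor_lr near
    # 3e-4 and tau near 5e-3. In practice this would train a TD3 agent on the
    # quadrotor task and return its mean episode return.
    return (-abs(math.log10(params["actor_lr"] / 3e-4))
            - abs(math.log10(params["tau"] / 5e-3)))

def neighbor(params):
    # Perturb one hyperparameter at a time on a log scale, clamped to a range.
    p = dict(params)
    key = random.choice(list(p))
    p[key] = min(max(p[key] * random.uniform(0.5, 2.0), 1e-5), 1.0)
    return p

def simulated_annealing(init_params, t0=1.0, cooling=0.95, steps=50):
    current, best = dict(init_params), dict(init_params)
    f_cur = f_best = train_and_evaluate_td3(current)
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        f_cand = train_and_evaluate_td3(cand)
        # Metropolis rule: always accept improvements; accept worse candidates
        # with probability exp(delta / t), which shrinks as t cools.
        if f_cand >= f_cur or random.random() < math.exp((f_cand - f_cur) / t):
            current, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = dict(current), f_cur
        t *= cooling  # Geometric cooling schedule.
    return best, f_best

# Example starting point (assumed values, not from the paper):
# best, score = simulated_annealing(
#     {"actor_lr": 1e-3, "critic_lr": 1e-3, "tau": 0.01})
```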
Abstract: To meet the higher demands that increased bus-voltage fluctuation at the grid-connection points of large-capacity synchronous generator units places on the response capability of automatic voltage regulator (AVR) systems, an excitation voltage control method for synchronous generators based on the twin delayed deep deterministic policy gradient with Explorer network (TD3EN) algorithm is proposed. First, the excitation voltage-regulation subsystem of the synchronous generator is modeled with transfer functions. Next, the explorer, actor, and critic networks of the TD3EN algorithm are built and their parameters are set. The TD3EN algorithm then trains the agent: the explorer network explores the action space, and the actor network's parameters are updated according to the critic network so that it provides control signals to the AVR. Finally, the trained agent is connected to the AVR system to control the generator terminal voltage. Simulation results show that the proposed method improves the AVR system's ability to respond to regulation commands and to cope with voltage sags.
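The abstract does not spell out how the explorer network interacts with the actor, so the sketch below is only one guess at the idea: a learned, state-dependent exploration head perturbs the deterministic actor's output during training in place of fixed Gaussian noise. The AVR state layout, blending rule, and network sizes are assumptions for illustration.

```python
# Hypothetical action selection with a learned explorer network (sketch).
import torch
import torch.nn as nn

class ExplorerActor(nn.Module):
    """Deterministic actor plus a learned 'explorer' head. During training the
    explorer proposes a state-dependent perturbation of the greedy action; at
    deployment only the actor is used. The blending scheme is an assumption,
    not necessarily the TD3EN paper's exact mechanism."""
    def __init__(self, s_dim=3, a_dim=1, hidden=64):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, a_dim), nn.Tanh())
        self.explorer = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, a_dim), nn.Tanh())

    def act(self, state, explore=True, mix=0.3):
        a = self.actor(state)
        if explore:
            # State-dependent exploration replaces fixed Gaussian noise.
            a = ((1 - mix) * a + mix * self.explorer(state)).clamp(-1.0, 1.0)
        return a

# Hypothetical AVR state: [voltage error, its integral, its derivative];
# action: excitation voltage command, scaled to [-1, 1].
# agent = ExplorerActor()
# u = agent.act(torch.tensor([[0.05, 0.01, -0.20]]))
```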
Abstract: To address the poor learning performance and large fluctuations of the deep deterministic policy gradient (DDPG) algorithm on some tasks with large state spaces, a multi-actor deep deterministic policy gradient based on progressive k-means clustering (MDDPG-PK-Means) algorithm is proposed. During training, when an action is selected for the state at each time step, the k-means assignment result assists the actor networks' decision, and the number of k-means cluster centers is gradually increased as the training time steps accumulate. The MDDPG-PK-Means algorithm is applied to the MuJoCo simulation platform; experimental results show that, compared with DDPG and related algorithms, MDDPG-PK-Means performs better on most continuous-control tasks.
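A minimal sketch of the progressive k-means component, under the assumption that each cluster is bound to one actor network: states are routed to actors by nearest cluster center, and the number of centers grows on a fixed schedule as training proceeds. The growth schedule, state dimension, and fallback rule are illustrative, not the paper's settings.

```python
# Progressive k-means routing of states to multiple actors (sketch).
import numpy as np
from sklearn.cluster import KMeans

class ProgressiveKMeansRouter:
    def __init__(self, k_init=2, k_max=8, grow_every=10_000):
        self.k, self.k_max, self.grow_every = k_init, k_max, grow_every
        self.kmeans, self.step = None, 0

    def refit(self, states):
        # Grow the number of cluster centers as training progresses, so the
        # state-space partition (and hence actor specialization) refines.
        if self.step > 0 and self.step % self.grow_every == 0 and self.k < self.k_max:
            self.k += 1
        self.kmeans = KMeans(n_clusters=self.k, n_init=10).fit(states)

    def select_actor(self, state):
        self.step += 1
        if self.kmeans is None:
            return 0  # Fall back to a default actor before the first fit.
        return int(self.kmeans.predict(state.reshape(1, -1))[0])

# Usage: periodically call refit() on a batch of replay-buffer states, then
# route each state to actors[router.select_actor(s)] for action selection.
# router = ProgressiveKMeansRouter()
# router.refit(np.random.randn(1024, 11))  # e.g. an 11-D MuJoCo state
# idx = router.select_actor(np.random.randn(11))
```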