Journal Articles
2 articles found.
1. Review on Control Strategies for Cable-Driven Parallel Robots with Model Uncertainties
Authors: Xiang Jin, Haifeng Zhang (+1 more author), Liqing Wang, Qinchuan Li. Chinese Journal of Mechanical Engineering (CSCD), 2024, No. 6, pp. 1-17 (17 pages)
Abstract: Cable-driven parallel robots (CDPRs) use cables instead of the rigid limbs of traditional parallel robots, thus possessing a large potential workspace and easy assembly and disassembly, with applications in numerous fields. However, owing to the influence of cable flexibility and nonlinear friction, model uncertainties are difficult to eliminate from the control design. Hence, in this study, the model uncertainties of CDPRs are first analyzed based on a brief introduction to related research. Control strategies for CDPRs with model uncertainties are then reviewed. The advantages and disadvantages of several control strategies for CDPRs are discussed through traditional control strategies with kinematic and dynamic uncertainties. Compared with these traditional control strategies, deep reinforcement learning and model predictive control have received widespread attention in recent years owing to their model independence and recursive feasibility under constraint limits. A comprehensive review and brief analysis of current advances in these two control strategies for CDPRs with model uncertainties are presented, concluding with a discussion of development directions.
Keywords: Cable-driven parallel robots; Model uncertainties; Control strategy; Reinforcement learning; Model predictive control; Kinematics; Dynamics
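The abstract above highlights model predictive control's receding-horizon handling of constraint limits. As a minimal illustration only, the toy below applies a receding-horizon controller to a one-dimensional point mass under an input constraint; the dynamics, cost weights, and constants are invented stand-ins and are not taken from the reviewed paper.

```python
# Toy receding-horizon (MPC-style) controller for a 1-D unit point mass.
# All names and constants here are illustrative, not from the reviewed paper.
from itertools import product

DT = 0.1                   # integration step [s]
HORIZON = 3                # prediction horizon (steps)
U_SET = (-1.0, 0.0, 1.0)   # admissible inputs (hard input constraint)

def step(pos, vel, u):
    """One explicit-Euler step of a unit-mass double integrator."""
    return pos + DT * vel, vel + DT * u

def stage_cost(pos, vel, u):
    """Quadratic penalty on tracking error, speed, and effort."""
    return pos ** 2 + 0.1 * vel ** 2 + 0.01 * u ** 2

def mpc_action(pos, vel):
    """Enumerate every input sequence over the horizon, score the
    predicted trajectory, and return only the first input of the best
    sequence -- the receding-horizon principle."""
    best_u, best_j = None, float("inf")
    for seq in product(U_SET, repeat=HORIZON):
        p, v, j = pos, vel, 0.0
        for u in seq:
            p, v = step(p, v, u)      # predict the successor state
            j += stage_cost(p, v, u)  # cost evaluated after the move
        if j < best_j:
            best_j, best_u = j, seq[0]
    return best_u

print(mpc_action(1.0, 0.0))   # -1.0: push toward the origin
print(mpc_action(-1.0, 0.0))  # 1.0: symmetric case
```

Enumerating all |U_SET|^HORIZON sequences only works for tiny discrete input sets; practical MPC for CDPRs solves a constrained optimization at each step instead.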
2. Advanced day-ahead scheduling of HVAC demand response control using novel strategy of Q-learning, model predictive control, and input convex neural networks
Authors: Rahman Heidarykiany, Cristinel Ababei. Energy and AI, 2025, No. 2, pp. 634-646 (13 pages)
Abstract: In this paper, we present a Q-learning optimization algorithm for smart home HVAC systems. The proposed algorithm combines new convex deep neural network models with model predictive control (MPC) techniques. More specifically, new input convex long short-term memory (ICLSTM) models are employed to predict dynamic states in an MPC optimal control technique integrated within a Q-learning reinforcement learning (RL) algorithm to further improve the learned temporal behaviors of nonlinear HVAC systems. As a novel RL approach, the proposed algorithm generates day-ahead HVAC demand response (DR) signals in smart homes that optimally reduce and/or shift peak energy usage, reduce electricity costs, minimize user discomfort, and honor in a best-effort way the recommendations from the utility/aggregator, which in turn affects the overall well-being of the distribution network controlled by the aggregator. The proposed Q-learning optimization algorithm, based on epsilon-model predictive control (e-MPC), can be implemented as a control agent executed by the smart home energy management (SHEM) system that we assume exists in the smart home, which can interact with the energy provider of the distribution network, i.e., the utility/aggregator, via the smart meter. The output generated by the proposed control agent represents day-ahead local DR signals in the form of temperature setpoints for the HVAC system that are found by the optimization process to lead to desired trade-offs between electricity cost and user discomfort. The proposed algorithm can be used in smart homes with passive HVAC controllers, which solely react to end-user setpoints, to transform them into smart homes with active HVAC controllers. Such systems not only respond to the preferences of the end user but also incorporate an external control signal provided by the utility or aggregator. Simulation experiments conducted with a custom simulation tool demonstrate that the proposed optimization framework can offer significant benefits: it achieves an 87% higher success rate in optimizing setpoints in the desired range, resulting in up to 15% energy savings and zero temperature discomfort.
Keywords: Model-free reinforcement learning (MF-RL); Q-learning; Input convex long short-term memory networks (ICLSTM); Model predictive control (MPC); Control of nonlinear physical systems; Thermal comfort; HVAC energy usage control; Thermostatically controlled loads; Smart home energy management (SHEM); Load shifting; Internet of Things (IoT) applications; Smart grid; Virtual power plant (VPP); Microgrid; Deep learning (DL); Demand-side management (DSM); Demand response (DR)
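The abstract describes a layered scheme (Q-learning wrapped around MPC with ICLSTM state predictors). As a loose sketch of just the Q-learning layer, the toy below learns hour-by-hour day-ahead setpoints with epsilon-greedy exploration against an invented two-rate tariff and comfort penalty; the thermal proxy, reward shape, and every constant are assumptions, not the paper's model.

```python
# Toy tabular Q-learning agent choosing an hourly HVAC cooling setpoint.
# A hypothetical, much-simplified stand-in for the paper's e-MPC/Q-learning
# scheme: the tariff, comfort weight, and energy proxy below are invented.
import random

random.seed(0)

SETPOINTS = [20, 22, 24]                      # candidate setpoints [deg C]
PRICE = [0.1] * 16 + [0.4] * 4 + [0.1] * 4    # 24-h tariff, peak at hours 16-19
COMFORT_REF = 22                              # occupant's preferred temperature
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1             # learning rate, discount, exploration

Q = {(h, s): 0.0 for h in range(24) for s in SETPOINTS}

def reward(hour, setpoint):
    """Negative cost: cooling effort scaled by price, plus discomfort."""
    energy = abs(setpoint - 26) * PRICE[hour]      # lower setpoint -> more cooling
    discomfort = 0.15 * abs(setpoint - COMFORT_REF)
    return -(energy + discomfort)

for episode in range(5000):
    for hour in range(24):
        if random.random() < EPS:                  # epsilon-greedy exploration
            s = random.choice(SETPOINTS)
        else:
            s = max(SETPOINTS, key=lambda a: Q[(hour, a)])
        r = reward(hour, s)
        nxt = max(Q[((hour + 1) % 24, a)] for a in SETPOINTS)
        Q[(hour, s)] += ALPHA * (r + GAMMA * nxt - Q[(hour, s)])

# Greedy day-ahead schedule: setpoints drift higher during the price peak.
schedule = [max(SETPOINTS, key=lambda a: Q[(h, a)]) for h in range(24)]
print(schedule)
```

The learned schedule holds the comfort setpoint off-peak and relaxes it during the price peak, which is the load-shifting trade-off the abstract describes; the paper's actual agent predicts states with ICLSTM models inside an MPC loop rather than a lookup table.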