
An operational regulation method for solar heating system based on Deep reinforcement learning

Abstract: The performance of solar heating systems is significantly influenced by outdoor weather fluctuations and building heating loads, leading to dynamic variations that undermine the efficacy of rule-based control (RBC) strategies. Additionally, the hydraulic and thermal time-delay characteristics frequently lead to delays in control points for real-time optimization (RTO) control strategies. While model predictive control (MPC) effectively addresses these dynamic and time-delay issues in solar heating systems, its substantial computational demands limit its real-world applications. To overcome these challenges, this study proposes a model-free predictive control (MFPC) approach utilizing deep reinforcement learning (DRL). Through TRNSYS simulations, the study compares the performance and energy consumption of RBC and MFPC systems, focusing on a residential solar heating system in Lhasa, Xizang as a case study. The results demonstrate that the MFPC method reduces unmet heating demand by 31% compared to traditional RBC strategies, improves solar collection efficiency by nearly 12%, and decreases tank heat loss by 2.2%. When accounting for thermal storage effects, the optimized MFPC strategy achieves a reduction in net energy consumption of 25.6%.
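The abstract does not specify the DRL algorithm the authors use. Purely as an illustrative sketch of the model-free control idea (not the paper's method), the snippet below trains a tabular Q-learning agent on a hypothetical, heavily simplified environment: the state is a discretized tank-temperature band, the actions are pump speed levels, and the reward penalizes unmet heating demand and pumping energy. All dynamics, constants, and names here are assumptions made for the example.

```python
import random

random.seed(0)

# Hypothetical toy environment: tank temperature discretized into bands,
# pump speed as the control action. Not the dynamics used in the paper.
N_STATES = 10      # discretized tank-temperature bands (0 = coldest)
N_ACTIONS = 3      # pump off / low / high
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(state, action):
    """Assumed dynamics: a higher pump speed tends to raise the tank
    temperature band (more collected solar heat) but costs energy."""
    drift = random.choice([-1, 0])                     # heat loss / load draw
    next_state = max(0, min(N_STATES - 1, state + action + drift))
    comfort_penalty = -5.0 if next_state < 3 else 0.0  # unmet heating demand
    energy_cost = -0.5 * action                        # pumping energy
    return next_state, comfort_penalty + energy_cost

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for episode in range(500):
    s = random.randrange(N_STATES)
    for _ in range(50):
        # Epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: q[s][x])
        s2, r = step(s, a)
        # One-step temporal-difference (Q-learning) update
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

# Greedy policy per temperature band after training
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)
```

The point of the sketch is that the controller learns a state-to-action mapping directly from interaction, with no explicit plant model and no online optimization at each control step, which is what distinguishes MFPC from MPC's repeated model-based optimization.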
Source: Building Simulation, 2025, No. 8, pp. 1945-1961 (17 pages).
Funding: Supported by the National Natural Science Foundation of China (Project Nos. U23A20657, U20A20311).
