Journal Articles
3 articles found
Energy management of buildings with energy storage and solar photovoltaic: A diversity in experience approach for deep reinforcement learning agents (Cited by: 1)
1
Authors: Akhtar Hussain, Petr Musilek. Energy and AI (EI), 2024, Issue 1, pp. 1-14 (14 pages)
Deep reinforcement learning (DRL) is a suitable approach to handle uncertainty in managing the energy consumption of buildings with energy storage systems. Conventionally, DRL agents are trained by randomly selecting samples from a data set, which can result in overexposure to some data categories and little or no exposure to others. Thus, the trained model may be biased towards some data groups and underperform (provide suboptimal results) for data groups to which it was less exposed. To address this issue, a diversity-in-experience-based DRL agent training framework is proposed in this study. This approach ensures the exposure of agents to all types of data. The proposed framework is implemented in two steps. In the first step, raw data are grouped into different clusters using the K-means clustering method. The clustered data are then arranged by stacking the data of one cluster on top of another. In the second step, a selection algorithm is proposed to select data from each cluster to train the DRL agent. The frequency of selection from each cluster is proportional to the number of data points in that cluster; the approach is therefore named the proportional selection method. To analyze the performance of the proposed approach and compare the results with the conventional random selection method, two indices are proposed in this study: the flatness index and the divergence index. The model is trained using different data sets (1-year, 3-year, and 5-year) and also with the inclusion of solar photovoltaics. The simulation results confirmed the superior performance of the proposed approach in flattening the building's load curve by optimally operating the energy storage system.
Keywords: Battery energy storage; Building demand management; Deep reinforcement learning; Diversity in experience; Energy management
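The second step of the framework above (selecting training samples from each cluster in proportion to its size) can be sketched as follows. This is a minimal illustration assuming clustering has already assigned a label to each sample; the function name and interface are assumptions, not the authors' code.

```python
import numpy as np

def proportional_selection(labels, batch_size, rng=None):
    """Pick sample indices so each cluster contributes in proportion
    to its size (a sketch of proportional selection; not the paper's
    exact implementation)."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    clusters, counts = np.unique(labels, return_counts=True)
    # Per-cluster quota, proportional to the cluster's share of the data
    quotas = np.round(batch_size * counts / counts.sum()).astype(int)
    picked = []
    for c, q in zip(clusters, quotas):
        members = np.flatnonzero(labels == c)
        # Sample without replacement within the cluster
        picked.extend(rng.choice(members, size=min(q, len(members)),
                                 replace=False))
    return np.array(picked)
```

With 80 samples in one cluster and 20 in another, a batch of 10 draws roughly 8 and 2 samples respectively, so no cluster is left unseen during training.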
Decentralized coordination of distributed energy resources through local energy markets and deep reinforcement learning
2
Authors: Daniel C. May, Matthew Taylor, Petr Musilek. Energy and AI, 2024, Issue 4, pp. 459-469 (11 pages)
As the energy landscape evolves towards sustainability, the accelerating integration of distributed energy resources poses challenges to the operability and reliability of the electricity grid. One significant aspect of this issue is the notable increase in net load variability at the grid edge. Transactive energy, implemented through local energy markets, has recently garnered attention as a promising solution to address these grid challenges in the form of decentralized, indirect demand response at the community level. Model-free control approaches, such as deep reinforcement learning (DRL), show promise for the decentralized automation of participation within this context. Existing studies at the intersection of transactive energy and model-free control primarily focus on socioeconomic and self-consumption metrics, overlooking the crucial goal of reducing community-level net load variability. This study addresses this gap by training a set of deep reinforcement learning agents to automate end-user participation in an economy-driven, autonomous local energy market (ALEX). In this setting, agents do not share information and prioritize only individual bill optimization. The study unveils a clear correlation between bill reduction and reduced net load variability. The impact on net load variability is assessed over various time horizons using metrics such as ramping rate, daily and monthly load factor, as well as daily average and total peak export and import, on an open-source dataset. To examine the performance of the proposed DRL method, its agents are benchmarked against a near-optimal dynamic programming method, using a no-control scenario as the baseline. The dynamic programming benchmark reduces average daily import, export, and peak demand by 22.05%, 83.92%, and 24.09%, respectively. The RL agents demonstrate comparable or superior performance, with improvements of 21.93%, 84.46%, and 27.02% on these metrics. This demonstrates that DRL can be effectively employed for such tasks, as the agents are inherently scalable with near-optimal performance in decentralized grid management.
Keywords: Reinforcement learning; Deep reinforcement learning; Distributed energy resources; Local energy markets; Demand response; Distributed energy resource management; Transactive energy
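Among the variability metrics listed in the abstract, the load factor admits a compact definition: the ratio of average to peak load over a window, where values closer to 1 indicate a flatter profile. A minimal sketch, assuming the common mean-over-peak definition (the function name is an assumption):

```python
import numpy as np

def load_factor(net_load):
    """Load factor of a net-load profile: mean over peak of the
    absolute load. Values near 1.0 indicate a flat profile; low
    values indicate spiky, variable demand."""
    net_load = np.abs(np.asarray(net_load, dtype=float))
    return net_load.mean() / net_load.max()
```

A perfectly flat profile such as `[1, 1, 1, 1]` yields a load factor of 1.0, while a single spike like `[0, 0, 0, 4]` yields 0.25, which is why flattening the community net load raises this metric.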
Reinforcement learning-driven local transactive energy market for distributed energy resources
3
Authors: Steven Zhang, Daniel May, Mustafa Gül, Petr Musilek. Energy and AI, 2022, Issue 2, pp. 162-176 (15 pages)
Local energy markets are emerging as a tool for coordinating generation, storage, and consumption of energy from distributed resources. In combination with automation, they promise to provide an effective energy management framework that is fair and brings system-level savings. The cooperative-competitive nature of energy markets calls for multi-agent-based automation with learning energy trading agents. However, depending on the dynamics of the agent-environment interaction, this approach may yield unintended behavior of market participants. Thus, the design of market mechanisms suitable for reinforcement learning agents must take this interplay into account. This article introduces autonomous local energy exchange (ALEX) as an experimental framework that combines multi-agent learning and a double auction mechanism. Participants determine their internal price signals and make energy management decisions through market interactions, rather than relying on predetermined external price signals. The main contribution of this article is an examination of compatibility between specific market elements and independent learning agents. Effects of different market properties are evaluated through simulation experiments, and the results are used to determine a suitable market design. The results show that market truthfulness maintains demand-response functionality, while weak budget balancing provides a strong reinforcement signal for the learning agents. The resulting agent behavior is compared with two baselines: net billing and time-of-use rates. The ALEX-based pricing is more responsive to fluctuations in the community net load than time-of-use rates. The more accurate accounting of renewable energy usage reduced bills by a median of 38.8% compared to net billing, confirming the ability to better facilitate demand response.
Keywords: Transactive energy; Demand response; Distributed energy resources (DER); DER integration; Local energy market; Reinforcement learning
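The double auction at the core of ALEX can be illustrated with a simplified midpoint-clearing variant: buy bids and sell asks are sorted, matched greedily, and each matched pair trades at the midpoint of the two prices. This is a generic textbook sketch under assumed rules, not the actual ALEX mechanism or its pricing details.

```python
def clear_double_auction(bids, asks):
    """Greedy double-auction clearing sketch: match the highest
    remaining bid with the lowest remaining ask while the bid covers
    the ask, trading at the midpoint price. Illustrative only."""
    bids = sorted(bids, reverse=True)  # highest willingness-to-pay first
    asks = sorted(asks)                # lowest offer first
    trades = []
    for b, a in zip(bids, asks):
        if b >= a:
            trades.append((b + a) / 2)  # midpoint clearing price
        else:
            break  # remaining bids cannot cover remaining asks
    return trades
```

With bids of 0.30, 0.25, and 0.10 $/kWh against asks of 0.12, 0.20, and 0.28 $/kWh, the first two pairs clear and the third does not, so internal prices emerge from the order book rather than from an external tariff.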