Funding: Supported by the KIT-Publication Fund of the Karlsruhe Institute of Technology, Germany; funded by the German Research Foundation (DFG), Germany, as part of the Research Training Group 2153: "Energy Status Data - Informatics Methods for its Collection, Analysis, and Exploitation"; supported by the Helmholtz Association in the Program Energy System Design.
Abstract: Integrating renewable energy sources into the electricity grid introduces volatility and complexity, requiring advanced energy management systems. By optimizing the charging and discharging behavior of a building's battery system, reinforcement learning effectively provides flexibility, managing volatile energy demand, dynamic pricing, and photovoltaic output to maximize rewards. However, the effectiveness of reinforcement learning is often hindered by limited access to training data due to privacy concerns, unstable training processes, and challenges in generalizing to different household conditions. In this study, we propose a novel federated framework for reinforcement learning in energy management systems. By enabling local model training on private data and aggregating only model parameters on a global server, this approach not only preserves privacy but also improves model generalization and robustness under varying household conditions, while decreasing electricity costs and emissions per building. For a comprehensive benchmark, we compare standard reinforcement learning with our federated approach and include mixed integer programming and rule-based systems. Among the reinforcement learning methods, deep deterministic policy gradient performed best on the Ausgrid dataset, with federated learning reducing costs by 5.01% and emissions by 4.60%. Federated learning also improved zero-shot performance for unseen buildings, reducing costs by 5.11% and emissions by 5.55%. Thus, our findings highlight the potential of federated reinforcement learning to enhance energy management systems by balancing privacy, sustainability, and efficiency.
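The abstract describes a setup in which each building trains its reinforcement learning agent locally and only model parameters are aggregated on a global server. The sketch below illustrates one way such a server-side aggregation step could look (a FedAvg-style weighted average of per-building network weights); it is a minimal illustration under assumed names (federated_average, local_weights, n_samples), not the authors' implementation, and the local DDPG training loop and environment are omitted.

```python
import numpy as np

def federated_average(local_weights, n_samples):
    """Weighted average of per-building parameter dictionaries.

    local_weights: list of dicts {layer_name: np.ndarray}, one per building.
    n_samples:     list of local training-sample counts used as weights.
    """
    total = float(sum(n_samples))
    global_weights = {}
    for name in local_weights[0]:
        # Weight each building's contribution by its share of local data.
        global_weights[name] = sum(
            (n / total) * w[name] for n, w in zip(n_samples, local_weights)
        )
    return global_weights

if __name__ == "__main__":
    # Example round: three buildings send actor-network weights; the server
    # aggregates them and would broadcast the result back for the next round
    # of local training. Shapes and counts are illustrative only.
    rng = np.random.default_rng(0)
    buildings = [{"actor.w": rng.normal(size=(4, 2))} for _ in range(3)]
    counts = [8760, 8760, 4380]  # e.g. hours of local load/PV data per building
    agg = federated_average(buildings, counts)
    print(agg["actor.w"].shape)
```

In this scheme, raw household load, price, and photovoltaic data never leave the building; only the averaged parameters travel, which is what gives the approach its privacy-preserving character.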