Funding: This work was funded by the National Natural Science Foundation of China (Grant Nos. 72304028, 72304031, W2412161), the National Key Research and Development Program of China (Grant No. 2020YFA0608603), and the Fundamental Research Funds for the Central Universities (Grant Nos. JKF-2025045909550, FRF-TP-22-024A1).
Abstract: Electric vehicles (EVs) with managed charging and discharging schedules have the potential to reduce costs, enhance grid resilience, and facilitate the integration of renewable energy sources. However, the heterogeneity of consumer travel patterns and the variability of renewable energy generation present significant challenges to existing control strategies, often resulting in issues such as the "curse of dimensionality." This study proposes a mobility-aware deep reinforcement learning-based charging control strategy using the Deep Q-Network (DQN) algorithm to minimize charging costs and maximize photovoltaic (PV) energy utilization. Leveraging real-time electricity prices, real-world EV travel data, and actual PV generation profiles, the proposed framework achieves low charging costs, high solar energy utilization, and reduced carbon emissions, approaching the performance of an ideal offline optimization algorithm with perfect foresight and substantially outperforming baseline strategies such as random charging, Charge-As-Soon-As-Possible (CASAP), and greedy charging. Specifically, the RL-based approach reduces charging costs by 55% and lowers carbon emissions by 11.6% compared with random charging, and achieves a PV utilization rate of 95%. Furthermore, the value of information regarding the EV's travel time and the building's electricity demand is 2.4 CNY/vehicle/day and $0.7/vehicle/day, respectively, underscoring the importance of addressing uncertainty in EV charging management. These findings demonstrate the feasibility and effectiveness of reinforcement learning in optimizing EV operations within integrated vehicle-grid-building-PV systems.
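To make the control problem concrete, the sketch below uses a tabular Q-learning agent, a much simpler relative of the DQN described in the abstract, to learn a price-aware overnight charging schedule. Everything here is a hypothetical illustration, not the paper's method: the two-tier tariff, 11-level battery, 20:00 arrival, 07:00 departure, and the penalty for departing with an unfilled battery are all invented, and the state omits the mobility, PV, and building-demand signals the study actually uses.

```python
import random

HOURS = 24
SOC_LEVELS = 11            # battery state of charge, discretized 0..10
ACTIONS = (1, 0)           # 1 = charge one SOC step, 0 = idle (charge first so ties favor charging)
# Hypothetical two-tier tariff: peak price 08:00-20:00, off-peak otherwise.
PRICE = [0.3 if 8 <= h < 20 else 0.1 for h in range(HOURS)]

def step(hour, soc, action):
    """One transition: pay for energy if charging, advance the clock,
    and penalize an unfilled battery at the 07:00 departure."""
    cost = PRICE[hour] if (action == 1 and soc < SOC_LEVELS - 1) else 0.0
    soc = min(soc + action, SOC_LEVELS - 1)
    hour = (hour + 1) % HOURS
    penalty = 5.0 * (SOC_LEVELS - 1 - soc) if hour == 7 else 0.0
    return hour, soc, -(cost + penalty)

def train(episodes=5000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over (hour, soc) states."""
    rng = random.Random(seed)
    Q = {(h, s): {a: 0.0 for a in ACTIONS}
         for h in range(HOURS) for s in range(SOC_LEVELS)}
    for _ in range(episodes):
        hour, soc = 20, 2          # EV arrives home at 20:00, nearly empty
        for _ in range(11):        # control the 11 hours until 07:00
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(hour, soc)][x]))
            nh, ns, r = step(hour, soc, a)
            Q[(hour, soc)][a] += alpha * (
                r + gamma * max(Q[(nh, ns)].values()) - Q[(hour, soc)][a])
            hour, soc = nh, ns
    return Q

Q = train()

# Greedy rollout of the learned policy from the same arrival state.
hour, soc = 20, 2
for _ in range(11):
    a = max(ACTIONS, key=lambda x: Q[(hour, soc)][x])
    hour, soc, _ = step(hour, soc, a)
final_soc = soc
print(final_soc)
```

The departure penalty plays the role of the mobility constraint in the abstract: because it dominates the per-hour energy cost, the learned policy fills the battery within the off-peak window rather than leaving charge incomplete. Scaling this tabular table to continuous prices, PV output, and travel patterns is exactly what motivates replacing it with a DQN.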