Dynamic Energy Dispatch Based on Deep Reinforcement Learning in IoT-Driven Smart Isolated Microgrids
arXiv (2020)
Abstract
Microgrids (MGs) are small, local power grids that can operate independently
from the larger utility grid. Combined with the Internet of Things (IoT), a
smart MG can leverage the sensory data and machine learning techniques for
intelligent energy management. This paper focuses on deep reinforcement
learning (DRL)-based energy dispatch for IoT-driven smart isolated MGs with
diesel generators (DGs), photovoltaic (PV) panels, and a battery. A
finite-horizon Partial Observable Markov Decision Process (POMDP) model is
formulated and solved by learning from historical data to capture the
uncertainty in future electricity consumption and renewable power generation.
In order to deal with the instability problem of DRL algorithms and unique
characteristics of finite-horizon models, two novel DRL algorithms, namely,
finite-horizon deep deterministic policy gradient (FH-DDPG) and finite-horizon
recurrent deterministic policy gradient (FH-RDPG), are proposed to derive
energy dispatch policies with and without fully observable state information. A
case study using real isolated MG data is performed, where the performance of the
proposed algorithms is compared with other baseline DRL and non-DRL
algorithms. Moreover, the impact of uncertainties on MG performance is
decoupled into two levels and evaluated separately.
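The dispatch problem the abstract describes can be illustrated with a minimal finite-horizon rollout, assuming simplified battery/DG/PV dynamics. All names, constants, and the power-balance rule below are illustrative assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of finite-horizon energy dispatch in an isolated MG.
# The battery absorbs the imbalance between DG + PV supply and the load;
# capacity, efficiency, and horizon values are assumptions.

H = 24            # horizon length in hours (assumed)
E_MAX = 100.0     # battery capacity in kWh (illustrative)
ETA = 0.95        # charge/discharge efficiency (illustrative)

def step(soc, p_dg, p_pv, load):
    """One dispatch step: the battery buffers the power imbalance.

    soc   : battery state of charge in [0, E_MAX] (kWh)
    p_dg  : dispatched diesel-generator output (kW)
    p_pv  : photovoltaic generation (kW)
    load  : electricity consumption (kW)
    Returns (next_soc, unserved_power).
    """
    surplus = p_dg + p_pv - load          # >0: charge, <0: discharge
    if surplus >= 0:
        next_soc = min(E_MAX, soc + ETA * surplus)
        unserved = 0.0
    else:
        discharge = min(soc * ETA, -surplus)
        next_soc = soc - discharge / ETA
        unserved = max(0.0, -surplus - discharge)
    return next_soc, unserved

def rollout(policy, pv, load, soc0=0.5 * E_MAX):
    """Finite-horizon rollout. The policy maps (t, soc) to DG output,
    so it can be time-dependent, as finite-horizon models require."""
    soc, total_unserved = soc0, 0.0
    for t in range(H):
        p_dg = policy(t, soc)
        soc, unserved = step(soc, p_dg, pv[t], load[t])
        total_unserved += unserved
    return soc, total_unserved
```

A DRL agent such as the paper's FH-DDPG would learn the `policy` function from historical PV and load traces; in the partially observable (FH-RDPG) setting the policy would condition on a history of observations rather than the true state.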
Keywords
Stochastic processes, Energy management, Uncertainty, Batteries, Predictive models, Internet of Things, Optimal control