Reinforcement Learning of Heuristic EV Fleet Charging in a Day-Ahead Electricity Market

IEEE Transactions on Smart Grid (2015)

Abstract
This paper addresses the problem of defining a day-ahead consumption plan for charging a fleet of electric vehicles (EVs), and following this plan during operation. A challenge herein is the beforehand unknown charging flexibility of the EVs, which depends on numerous details about each EV (e.g., plug-in times, power limitations, battery size, power curve, etc.). To cope with this challenge, EV charging is controlled during operation by a heuristic scheme, and the resulting charging behavior of the EV fleet is learned by using batch-mode reinforcement learning. Based on this learned behavior, a cost-effective day-ahead consumption plan can be defined. In simulation experiments, our approach is benchmarked against a multistage stochastic programming solution, which uses an exact model of each EV's charging flexibility. Results show that our approach is able to find a day-ahead consumption plan of comparable quality to the benchmark solution, without requiring an exact day-ahead model of each EV's charging flexibility.
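
To illustrate the general idea, the following is a minimal sketch, not the authors' implementation: a laxity-based heuristic dispatches charging power to plugged-in EVs under a fleet-level setpoint, and a batch of simulated transitions is fed to scikit-learn's ExtraTreesRegressor (a regressor commonly used in batch-mode reinforcement learning, e.g., fitted Q-iteration) to learn how much energy the heuristically controlled fleet actually absorbs for a given setpoint. All EV parameters, state features, and the fleet-response formulation are invented for illustration.

```python
# Sketch: heuristic fleet charging + batch-mode learning of the fleet's response.
# Assumptions (not from the paper): 24 hourly steps, 7.4 kW chargers, random
# plug-in times and energy needs, least-laxity-first dispatch.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)

def simulate_day(setpoints, n_evs=50):
    """Simulate one day of heuristic fleet charging under hourly setpoints (kW).

    Returns a list of (state, setpoint, consumed_kWh) samples for batch learning.
    """
    arrive = rng.integers(16, 22, n_evs)          # plug-in hour
    depart = rng.integers(6, 9, n_evs) + 24       # unplug hour (next morning)
    need = rng.uniform(5.0, 25.0, n_evs)          # remaining energy demand (kWh)
    p_max = np.full(n_evs, 7.4)                   # per-EV charger limit (kW)
    samples = []
    for t, target in enumerate(setpoints):
        plugged = (arrive <= t) & (t < depart) & (need > 1e-6)
        state = np.array([t, plugged.sum(), need[plugged].sum()])
        budget = float(target)
        consumed = 0.0
        # Heuristic: serve EVs with the least remaining laxity first.
        order = np.argsort(depart - t - need / p_max)
        for i in order:
            if not plugged[i] or budget <= 0:
                continue
            p = min(p_max[i], need[i], budget)    # respect charger, demand, budget
            need[i] -= p
            budget -= p
            consumed += p                         # 1 h step, so kW == kWh here
        samples.append((state, target, consumed))
    return samples

# Collect a batch of transitions from many simulated days with random setpoints.
batch = []
for _ in range(200):
    plan = rng.uniform(0, 200, 24)
    batch += simulate_day(plan)

X = np.array([np.append(s, a) for s, a, _ in batch])   # features: (state, setpoint)
y = np.array([c for _, _, c in batch])                 # target: realized consumption
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, y)

# A day-ahead planner could now query `model` to predict the fleet's response to
# candidate consumption plans and pick a cheap one under day-ahead prices, without
# an exact per-EV flexibility model.
```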
Keywords
Schedules, Batteries, Learning (artificial intelligence), Electricity supply industry, Stochastic processes, Aerospace electronics, Benchmark testing