Research on energy-saving driving control of hydrogen fuel bus based on deep reinforcement learning in freeway ramp weaving area

ENERGY(2023)

Abstract
In the intelligent and connected traffic scenario, convenient access to microscopic vehicle states and global traffic states can help solve vehicle driving and energy management problems in complex traffic environments. This paper proposes a new energy management method for the hydrogen fuel cell bus based on the double-layer deep deterministic policy gradient (DDPG). Combined with the SUMO simulation platform, a double-layer deep reinforcement learning (D-DRL) architecture based on DDPG is designed to improve control accuracy and training speed. In the upper layer, the agent handles the effects of the complex traffic environment to control the vehicle at a reasonable speed and keep it running smoothly, reducing the energy loss caused by speed changes; compared with the SUMO IDM model, the maximum-minimum velocity difference is reduced by 21 %, and the acceleration and the rate of change of acceleration are reduced by 7.9 % and 19 %, respectively. After the lower-layer agent receives the speed output by the upper layer, it distributes power between the fuel cell and the power battery. Compared with the DP algorithm, it keeps the SOC at a higher level, the hydrogen consumption level reaches 93.25 %, and the fluctuation amplitude decreases by 42.09 %, effectively improving fuel cell durability.
Keywords
Fuel cell bus, Deep reinforcement learning, Energy management strategy, Vehicle speed control
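
To make the double-layer structure described in the abstract concrete, the sketch below shows a minimal two-layer control loop in the same spirit: an upper agent maps a traffic observation to a target speed, and a lower agent splits the resulting power demand between the fuel cell and the battery. This is not the authors' implementation; all class and function names (DDPGAgent, run_episode), the toy power-demand model, and the SOC/hydrogen coefficients are hypothetical placeholders chosen so the example runs without SUMO or an RL library.

```python
# Illustrative sketch only (not the paper's code): a two-layer control loop
# where the upper agent chooses a target speed and the lower agent chooses
# the fuel-cell share of the power demand. Names and numbers are placeholders.
import random

class DDPGAgent:
    """Stand-in for a trained DDPG actor; act() would normally query the actor network."""
    def __init__(self, action_low, action_high):
        self.action_low = action_low
        self.action_high = action_high

    def act(self, state):
        # A trained actor maps state -> action; a random draw keeps the sketch runnable.
        return random.uniform(self.action_low, self.action_high)

def run_episode(steps=100):
    speed_agent = DDPGAgent(action_low=0.0, action_high=20.0)   # upper layer: target speed, m/s
    power_agent = DDPGAgent(action_low=0.0, action_high=1.0)    # lower layer: fuel-cell power share
    soc = 0.6                                                   # battery state of charge
    h2_consumed = 0.0
    for _ in range(steps):
        traffic_state = [random.random() for _ in range(4)]     # surrogate for a SUMO observation
        target_speed = speed_agent.act(traffic_state)

        # Toy power-demand model (kW) driven by the commanded speed.
        power_demand = 0.5 * 12000 * target_speed**3 * 1e-6 + 5.0
        fc_share = power_agent.act([target_speed, soc])
        fc_power = fc_share * power_demand
        batt_power = power_demand - fc_power

        # Illustrative bookkeeping: hydrogen use grows with fuel-cell power,
        # SOC falls with battery power (coefficients are arbitrary).
        h2_consumed += 0.02 * fc_power
        soc -= 0.0005 * batt_power
    return soc, h2_consumed

if __name__ == "__main__":
    final_soc, h2 = run_episode()
    print(f"final SOC: {final_soc:.3f}, hydrogen consumed (arb. units): {h2:.2f}")
```

In the paper's setup, the random placeholder policies would be replaced by trained DDPG actors, the surrogate traffic observation by states read from the SUMO simulation, and the toy power and consumption models by the bus's fuel-cell and battery models.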