PhysQ: A Physics Informed Reinforcement Learning Framework for Building Control

arXiv (2022)

Abstract
Large-scale integration of intermittent renewable energy sources calls for substantial demand-side flexibility. Given that the built environment accounts for approximately 40% of total energy consumption in the EU, unlocking its flexibility is a key step in the energy transition. This paper focuses specifically on energy flexibility in residential buildings, leveraging their intrinsic thermal mass. Building on recent developments in data-driven control, we propose PhysQ, a physics-informed reinforcement learning framework for building control that forms a step toward bridging the gap between conventional model-based control and data-intensive control based on reinforcement learning. Through our experiments, we show that the proposed PhysQ framework can learn high-quality control policies that outperform both a business-as-usual controller and a rudimentary model predictive controller. Our experiments indicate cost savings of about 9% compared to a business-as-usual controller. Further, we show that PhysQ efficiently leverages prior physics knowledge to learn such policies from fewer training samples than conventional reinforcement learning approaches, making PhysQ a scalable alternative for use in residential buildings. Additionally, the PhysQ control policy uses building state representations that are intuitive and based on conventional building models, which leads to better interpretability of the learnt policy compared with other data-driven controllers.
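The abstract does not give implementation details, but a minimal sketch can illustrate the general idea of a physics-informed, Q-learning-style building controller. The sketch below assumes a lumped RC (resistance-capacitance) thermal model, one common "conventional building model", and uses its estimated thermal-mass temperature as an extra state feature for plain tabular Q-learning. The environment interface `env_step`, all parameter values, the action set, and the comfort penalty are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hedged illustrative sketch, not the PhysQ implementation: it only shows how a
# physics-derived quantity (the hidden thermal-mass temperature of an assumed
# RC building model) can be appended to the state of a tabular Q-learning agent.

class MassTempEstimator:
    """Tracks the unobserved thermal-mass temperature of a lumped RC model."""
    def __init__(self, R_am=5.0, C_m=40.0, dt=0.25, T_m0=20.0):
        self.R_am, self.C_m, self.dt = R_am, C_m, dt   # K/kW, kWh/K, hours
        self.T_m = T_m0                                 # mass temperature, degC

    def update(self, T_air):
        # C_m * dT_m/dt = (T_air - T_m) / R_am : heat exchange with indoor air
        self.T_m += (T_air - self.T_m) / (self.R_am * self.C_m) * self.dt
        return self.T_m


def make_state(T_air, T_m, hour, temp_bins):
    """Discretize the physics-informed state (air temp, mass temp, hour of day)."""
    return (int(np.digitize(T_air, temp_bins)),
            int(np.digitize(T_m, temp_bins)),
            hour)


def q_learning_episode(Q, estimator, env_step, temp_bins,
                       powers=(0.0, 1.0, 2.0),      # heater power levels, kW (assumed)
                       alpha=0.1, gamma=0.95, eps=0.1,
                       dt=0.25, comfort_weight=1.0, T_set=20.0, steps=96):
    """One day (96 quarter-hours) of tabular Q-learning on the augmented state.

    env_step(power) -> (T_air, price, hour) is a placeholder for the real
    building simulator or measurement interface.
    """
    T_air, price, hour = env_step(0.0)
    s = make_state(T_air, estimator.update(T_air), hour, temp_bins)
    for _ in range(steps):
        # epsilon-greedy choice among discrete heater power levels
        a = (np.random.randint(len(powers)) if np.random.rand() < eps
             else int(np.argmax(Q[s])))
        T_air, price, hour = env_step(powers[a])
        s_next = make_state(T_air, estimator.update(T_air), hour, temp_bins)
        # reward: negative energy cost minus a quadratic under-heating penalty
        reward = -price * powers[a] * dt - comfort_weight * max(0.0, T_set - T_air) ** 2
        # standard off-policy temporal-difference (Q-learning) update
        Q[s + (a,)] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s + (a,)])
        s = s_next
    return Q
```

For a concrete value function, `Q` can be a NumPy array of zeros with shape `(len(temp_bins)+1, len(temp_bins)+1, 24, len(powers))`. In this sketch the physics prior enters only through the estimated mass temperature, which is one plausible reading of "state representations based on conventional building models"; the paper itself should be consulted for how PhysQ actually injects physics knowledge.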
Keywords
reinforcement learning, control, physics