Deep Explainable Relational Reinforcement Learning: A Neuro-Symbolic Approach

Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Part IV (2023)

Abstract
Despite its successes, Deep Reinforcement Learning (DRL) yields non-interpretable policies. Moreover, since DRL does not exploit symbolic relational representations, it has difficulty coping with structural changes in its environment (such as an increase in the number of objects). Meanwhile, Relational Reinforcement Learning inherits relational representations from symbolic planning to learn reusable policies. However, it has so far been unable to scale up and exploit the power of deep neural networks. We propose Deep Explainable Relational Reinforcement Learning (DERRL), a framework that exploits the best of both the neural and symbolic worlds. Through a neuro-symbolic approach, DERRL combines relational representations and constraints from symbolic planning with deep learning to extract interpretable policies. These policies take the form of logical rules that explain why each decision (or action) is taken. Through several experiments, in setups such as the Countdown Game, Blocks World, Gridworld, Traffic, and Minigrid, we show that the policies learned by DERRL adapt to varying configurations and environmental changes.
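The abstract states that DERRL's policies take the form of logical rules over relational representations. As a purely illustrative sketch (not taken from the paper), the toy snippet below shows what grounding one such relational rule might look like in a Blocks World setting; the rule `move(X, Y) :- clear(X), clear(Y), X != Y`, the predicates, and the state encoding are all assumptions made for illustration, not DERRL's actual learned rules.

```python
# Illustrative only: grounding a relational rule policy in a toy Blocks World.
# The rule, predicates, and state encoding are assumptions, not DERRL's output.

def clear(block, on):
    # A block is clear if no other block is stacked on top of it.
    return all(below != block for below in on.values())

def applicable_moves(blocks, on):
    """Ground the rule: move(X, Y) :- clear(X), clear(Y), X != Y."""
    moves = []
    for x in blocks:
        for y in blocks:
            if x != y and clear(x, on) and clear(y, on):
                moves.append(("move", x, y))
    return moves

# State: on["a"] == "b" means block a is stacked on block b; c is on the table.
blocks = ["a", "b", "c"]
on = {"a": "b"}
print(applicable_moves(blocks, on))
```

Because the rule is stated over variables X and Y rather than concrete blocks, the same policy applies unchanged when blocks are added or removed, which is the kind of structural generalization the abstract highlights.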
Keywords
Neuro-Symbolic AI, Relational Reinforcement Learning, Deep Reinforcement Learning, Explainability