Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents
CoRR (2024)
Abstract
Goal misalignment, reward sparsity, and difficult credit assignment are only a
few of the many issues that make it difficult for deep reinforcement learning
(RL) agents to learn optimal policies. Unfortunately, the black-box nature of
deep neural networks impedes the inclusion of domain experts for inspecting the
model and revising suboptimal policies. To this end, we introduce *Successive
Concept Bottleneck Agents* (SCoBots), which integrate consecutive concept
bottleneck (CB) layers. In contrast to current CB models, SCoBots represent
concepts not only as properties of individual objects, but also as relations
between objects, which is crucial for many RL tasks. Our experimental results
provide evidence not only of SCoBots' competitive performance, but also of
their potential for domain experts to understand and regularize their behavior.
Among other things, SCoBots enabled us to identify a previously unknown
misalignment problem in the iconic video game Pong, and to resolve it. Overall,
SCoBots thus result in more human-aligned RL agents. Our code is available at
https://github.com/k4ntz/SCoBots.
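
To make the core idea concrete, the following is a minimal sketch of a policy with a relational concept bottleneck, assuming per-object 2-D positions as object properties and pairwise distances as the relational concepts. The class names (`RelationalConceptBottleneck`, `ConceptBottleneckPolicy`), the choice of distances as concepts, and all shapes are illustrative assumptions and are not taken from the SCoBots codebase.

```python
# Hypothetical sketch: object properties (2-D positions) are first mapped to
# interpretable relational concepts (pairwise distances), which a small,
# inspectable linear head then maps to action logits.
import torch
import torch.nn as nn


class RelationalConceptBottleneck(nn.Module):
    """Turns per-object properties into relational concepts (here: distances)."""

    def __init__(self, n_objects: int):
        super().__init__()
        # Index pairs (i, j) with i < j; each pair yields one distance concept.
        self.pairs = [(i, j) for i in range(n_objects)
                      for j in range(i + 1, n_objects)]

    def forward(self, obj_props: torch.Tensor) -> torch.Tensor:
        # obj_props: (batch, n_objects, 2), e.g. x/y positions per object.
        dists = [torch.norm(obj_props[:, i] - obj_props[:, j], dim=-1)
                 for i, j in self.pairs]
        return torch.stack(dists, dim=-1)  # (batch, n_pairs) concept vector


class ConceptBottleneckPolicy(nn.Module):
    """Concepts feed a small head whose weights a domain expert can inspect."""

    def __init__(self, n_objects: int, n_actions: int):
        super().__init__()
        self.bottleneck = RelationalConceptBottleneck(n_objects)
        n_concepts = n_objects * (n_objects - 1) // 2
        self.head = nn.Linear(n_concepts, n_actions)

    def forward(self, obj_props: torch.Tensor) -> torch.Tensor:
        concepts = self.bottleneck(obj_props)  # human-readable quantities
        return self.head(concepts)             # action logits


if __name__ == "__main__":
    policy = ConceptBottleneckPolicy(n_objects=3, n_actions=6)
    props = torch.randn(4, 3, 2)               # batch of 4 states, 3 objects
    print(policy(props).shape)                  # torch.Size([4, 6])
```

Because the bottleneck outputs named quantities rather than opaque activations, an expert could, for instance, mask or reweight an individual concept to regularize the policy, which is the kind of intervention the abstract describes for Pong.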