Explaining Learned Reward Functions with Counterfactual Trajectories
CoRR (2024)
Abstract
Learning rewards from human behaviour or feedback is a promising approach to
aligning AI systems with human values but fails to consistently extract correct
reward functions. Interpretability tools could enable users to understand and
evaluate possible flaws in learned reward functions. We propose Counterfactual
Trajectory Explanations (CTEs) to interpret reward functions in reinforcement
learning by contrasting an original with a counterfactual partial trajectory
and the rewards they each receive. We derive six quality criteria for CTEs and
propose a novel Monte-Carlo-based algorithm for generating CTEs that optimises
these quality criteria. Finally, we measure how informative the generated
explanations are to a proxy-human model by training it on CTEs. CTEs are
demonstrably informative for the proxy-human model, increasing the similarity
between its predictions and the reward function on unseen trajectories.
Further, it learns to accurately judge differences in rewards between
trajectories and generalises to out-of-distribution examples. Although CTEs do
not lead to a perfect understanding of the reward, our method, and more
generally the adaptation of XAI methods, offer a fruitful approach for
interpreting learned reward functions.
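The abstract does not spell out the generation algorithm or the six quality criteria, so the following Python sketch is purely illustrative. It assumes a toy `ChainEnv` with `set_state`/`step` methods and a stand-in `reward_fn`, none of which come from the paper, and it scores candidate counterfactuals with a simple reward-contrast-minus-deviation heuristic in place of the paper's six criteria. The core idea it captures is the one stated above: branch off an original partial trajectory, sample Monte-Carlo continuations, and return the counterfactual whose learned reward contrasts most informatively with the original.

```python
import random

class ChainEnv:
    """Tiny 1-D chain world, used only to make the sketch executable."""
    actions = (-1, +1)

    def __init__(self, n=10):
        self.n, self.pos = n, 0

    def set_state(self, state):
        self.pos = state

    def step(self, action):
        self.pos = max(0, min(self.n - 1, self.pos + action))
        return self.pos, self.pos == self.n - 1  # (next state, done)

def reward_fn(state, action):
    """Stand-in for a learned reward model: prefers moving right."""
    return state / 10.0 + (0.1 if action == +1 else 0.0)

def generate_cte(env, reward_fn, original, branch_t, n_rollouts=100, horizon=8):
    """Toy Monte-Carlo sketch of CTE generation (not the paper's exact
    algorithm): keep the prefix of `original` up to `branch_t`, sample
    random continuations from there, and keep the counterfactual that
    best trades off reward contrast against deviation length.

    `original` is a list of (state, action) pairs; the scoring weights
    and the env interface are illustrative assumptions.
    """
    orig_reward = sum(reward_fn(s, a) for s, a in original)
    best, best_r, best_score = None, 0.0, float("-inf")
    for _ in range(n_rollouts):
        cf = original[:branch_t]                 # shared prefix
        state = original[branch_t][0]
        env.set_state(state)                     # reset rollout to branch point
        for _ in range(horizon):
            action = random.choice(env.actions)  # uniform MC sampling
            cf.append((state, action))
            state, done = env.step(action)
            if done:
                break
        cf_reward = sum(reward_fn(s, a) for s, a in cf)
        # Stand-in for the paper's six quality criteria: prefer a large
        # reward contrast, penalise long deviations from the original.
        score = abs(cf_reward - orig_reward) - 0.1 * (len(cf) - branch_t)
        if score > best_score:
            best, best_r, best_score = cf, cf_reward, score
    return original, best, orig_reward, best_r

env = ChainEnv()
original = [(s, +1) for s in range(6)]  # partial trajectory moving right
orig, cf, r_orig, r_cf = generate_cte(env, reward_fn, original, branch_t=2)
print(f"original reward {r_orig:.2f} vs counterfactual reward {r_cf:.2f}")
```

A CTE in this sketch is the returned pair of trajectories together with their rewards, mirroring the contrastive structure described in the abstract; a user (or the proxy-human model) learns about the reward function from the difference between the two.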