Right for the Wrong Scientific Reasons: Revising Deep Networks by Interacting with their Explanations

arXiv (2020)

Abstract
Deep neural networks have shown excellent performance in many real-world applications such as plant phenotyping. Unfortunately, they may exhibit "Clever Hans"-like behaviour, exploiting confounding factors within datasets to achieve high prediction rates. Rather than discarding the trained models or the dataset, we show that interactions between the learning system and the human user can correct the model. Specifically, we revise the model's decision process by adding annotated masks during the learning loop and penalizing decisions made for the wrong reasons. In this way the machine's decision strategies can be improved to focus on relevant features, without a considerable drop in predictive performance.
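The abstract does not spell out the penalty term, but the described correction resembles an input-gradient ("right for the right reasons") regularizer that suppresses reliance on human-annotated confounding regions. Below is a minimal PyTorch-style sketch under that assumption; the function name `right_reasons_loss`, the weight `reg_strength`, and the use of input gradients as the explanation are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def right_reasons_loss(model, x, y, mask, reg_strength=10.0):
    """Cross-entropy plus a penalty on input gradients inside the annotated mask.

    x    : input batch, shape (B, C, H, W)
    y    : integer class labels, shape (B,)
    mask : binary tensor (same shape as x), 1 where a human marked the input
           as confounding, i.e. where the model should NOT ground its decision
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Explanation proxy: gradients of the summed log-probabilities w.r.t. the input.
    log_probs = F.log_softmax(logits, dim=1)
    grads = torch.autograd.grad(log_probs.sum(), x, create_graph=True)[0]

    # Penalize explanation mass that falls inside the annotated (irrelevant) region,
    # so the model is pushed to be "right for the right reasons".
    wrong_reason_penalty = (mask * grads).pow(2).sum()

    return ce + reg_strength * wrong_reason_penalty
```

In a training loop, this loss would simply replace the plain cross-entropy term whenever mask annotations are available for a batch; batches without annotations can fall back to standard training.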
Keywords
revising deep networks, explanations, wrong scientific reasons