XOC: Explainable Observer-Classifier for Explainable Binary Decisions

arXiv: Learning (2019)

Cited by 23 | Views 53
Abstract
When deep neural networks optimize highly complex functions, it is not always obvious how they reach their final decisions. Providing explanations makes this decision process more transparent and improves a user's trust in the machine, as explanations help develop a better understanding of the rationale behind the network's predictions. Here, we present an explainable observer-classifier framework that exposes the steps taken through the model's decision-making process. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions, which together reveal a decision tree as a thought process. In addition, our model hierarchically clusters the data and gives each binary decision a semantic meaning. The sequence of binary decisions learned by our model imitates human-annotated attributes. On six benchmark datasets of increasing size and granularity, our model outperforms the decision-tree baseline and generates easy-to-understand binary decision sequences that explain the network's predictions.
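To make the core idea concrete, below is a minimal sketch (not the authors' code) of classification by iterative binary sub-decisions: instead of one softmax over all labels, the model emits a left/right bit at each node of a binary tree over the classes, and the recorded bit sequence doubles as the explanation trace. The names `BinaryDecider`, `decide`, and `classify` are hypothetical, and the random decision stands in for a learned network so the sketch stays self-contained.

```python
import random


class BinaryDecider:
    """Stand-in for a learned network mapping (features, tree node) -> {0, 1}.

    In the paper's framework a trained model would score the input here;
    this sketch picks randomly so the example is runnable on its own.
    """

    def decide(self, features, node_id):
        return random.randint(0, 1)


def classify(features, decider, num_classes):
    """Walk an implicit binary tree over `num_classes` leaves,
    recording each binary sub-decision as an explanation trace."""
    lo, hi = 0, num_classes   # candidate label range [lo, hi)
    trace = []                # sequence of (node, bit) sub-decisions
    node_id = 0
    while hi - lo > 1:
        bit = decider.decide(features, node_id)
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if bit else (lo, mid)  # keep chosen half
        trace.append((node_id, bit))
        node_id = 2 * node_id + 1 + bit           # move to chosen child
    return lo, trace          # predicted label and its decision sequence


if __name__ == "__main__":
    label, trace = classify(features=None, decider=BinaryDecider(), num_classes=8)
    print("predicted label:", label)
    print("binary decision trace:", trace)
```

Each entry in `trace` is one sub-decision; in the actual framework these bits are learned end-to-end and align with semantic, attribute-like splits rather than a fixed label ordering as assumed here.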