BEEF: Balanced English Explanations of Forecasts

IEEE Transactions on Computational Social Systems (2019)

Cited 20 | Views 76
Abstract
Understanding why different machine learning classifiers make specific predictions is difficult, mainly because the inner workings of the underlying algorithms are not amenable to the direct extraction of succinct explanations. In this paper, we address the problem of automatically extracting balanced explanations from predictions generated by any classifier: explanations that include not only why the prediction might be correct but also why it could be wrong. Our framework, called Balanced English Explanations of Forecasts, can generate such explanations in natural language. After showing that the problem of generating explanations is NP-complete, we focus on the development of a heuristic algorithm, empirically showing that it produces high-quality results both in terms of objective measures—with statistically significant effects shown for several parameter variations—and subjective evaluations based on a survey completed by 100 anonymous participants recruited via Amazon Mechanical Turk.
Keywords
Natural languages, Computational geometry, Data models, Predictive models, Computer science, Machine learning, Prediction algorithms