Towards explaining anomalies

Pattern Recognition (2020)

Abstract
• We enhance the prediction of anomalies (as given by a kernel one-class SVM) by explaining them in terms of input features.
• The method is based on a reformulation of the one-class SVM as a neural network, the structure of which is better suited to the task of explanation.
• Explanations are obtained via a deep Taylor decomposition, which propagates the prediction backward through the neural network towards the input features.
• Application of our method to image data highlights pixel-level anomalies that can be missed by a simple visual inspection.

Detecting anomalies in data is a common machine learning task with numerous applications in the sciences and industry. In practice, it is not always sufficient to reach high detection accuracy; one would also like to understand why a given data point has been predicted to be anomalous. We propose a principled approach for one-class SVMs (OC-SVMs) that draws on the novel insight that these models can be rewritten as distance/pooling neural networks. This 'neuralization' step lets us apply deep Taylor decomposition (DTD), a methodology that leverages the model structure to quickly and reliably explain decisions in terms of input features. The proposed method (called 'OC-DTD') is applicable to a number of common distance-based kernel functions, and it outperforms baselines such as sensitivity analysis, distance to the nearest neighbor, and edge detection.
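To make the 'neuralization' idea concrete, the sketch below (our illustration, not the authors' reference implementation) rewrites a Gaussian-kernel OC-SVM score as a distance layer followed by a soft min-pooling, and then redistributes the score onto the input features in a deep-Taylor-like fashion: each support vector receives a share of the score through the pooling weights, and that share is split over features in proportion to the squared per-feature distances. The function names `neuralized_score` and `relevance`, and the free soft-min temperature `beta` (which the paper's derivation ties to the kernel width), are assumptions of this sketch.

```python
import numpy as np

def neuralized_score(x, support_vectors, alphas, gamma, beta=10.0):
    """Anomaly score as a soft min-pooling over effective distances.

    For a Gaussian kernel k(x, u) = exp(-gamma * ||x - u||^2), the kernel
    expansion sum_j alpha_j * k(x, u_j) can be rewritten in terms of the
    effective distances d_j = ||x - u_j||^2 - log(alpha_j) / gamma.
    """
    sq_dists = np.sum((x - support_vectors) ** 2, axis=1)   # distance layer
    d = sq_dists - np.log(alphas) / gamma
    m = d.min()
    # Numerically stable soft-min: approaches min_j d_j as beta -> infinity.
    return m - np.log(np.sum(np.exp(-beta * (d - m)))) / beta

def relevance(x, support_vectors, alphas, gamma, beta=10.0):
    """Redistribute the anomaly score onto input features (DTD-style)."""
    sq_diffs = (x - support_vectors) ** 2                   # (n_sv, n_features)
    sq_dists = sq_diffs.sum(axis=1)
    d = sq_dists - np.log(alphas) / gamma
    p = np.exp(-beta * (d - d.min()))
    p /= p.sum()                                            # pooling weights over support vectors
    # Per-feature shares; each row sums to one, so relevance is conserved.
    shares = sq_diffs / np.maximum(sq_dists[:, None], 1e-12)
    score = neuralized_score(x, support_vectors, alphas, gamma, beta)
    return score * (p[:, None] * shares).sum(axis=0)

# Usage: explain why a displaced point is anomalous w.r.t. toy support vectors.
rng = np.random.default_rng(0)
sv = rng.normal(size=(20, 2))                               # support vectors near the origin
a = np.full(20, 1.0 / 20)                                   # uniform dual coefficients
x = np.array([4.0, 0.0])                                    # outlier displaced along feature 0
print(relevance(x, sv, a, gamma=1.0))                       # feature 0 carries most relevance
```

Because the per-support-vector shares sum to one and the pooling weights sum to one, the feature relevances add up to the anomaly score; this conservation property is what deep Taylor decomposition aims for when propagating the prediction backward.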
Keywords
Outlier detection, Explainable machine learning, Deep Taylor decomposition, Kernel machines, Unsupervised learning