Out-of-Distribution Detection through Relative Activation-Deactivation Abstractions

2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE), 2021

Cited by 3 | Viewed 6
Abstract
A deep learning model necessarily misclassifies an out-of-distribution input, i.e., an input that belongs to none of the categories the model was trained on. Out-of-distribution detection is therefore an important practical task for ensuring the safety and reliability of deep-learning-based systems. In this paper, we present the notion of relative activation and deactivation to interpret the inference behavior of a deep learning model. We then propose a relative activation-deactivation abstraction approach to characterize the model's decision logic. The resulting abstractions exhibit tight intra-class aggregation within each trained category and clear inter-class separation between different trained categories. Building on this abstraction approach, we further propose an out-of-distribution detection algorithm whose underlying principle is that the relative activation-deactivation abstraction computed for an out-of-distribution input lies far from the abstraction of the category the model predicts. Our detection algorithm requires neither crafted perturbations of the input data nor hyperparameter tuning of the deep learning model with out-of-distribution data. We evaluate the algorithm on 8 benchmark datasets commonly used in the literature. The experimental results show that it achieves better and more stable performance than the state-of-the-art white-box abstraction-based detection algorithms, producing significantly more true positive and fewer false positive alerts for out-of-distribution detection.
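The abstract does not spell out the algorithm's details, so the following is only a minimal NumPy sketch of the stated principle: an out-of-distribution input's activation-deactivation pattern lies far from the abstraction of the predicted category. The per-neuron median thresholds, the mean-pattern class abstractions, the L2 distance, and the cutoff `tau` are all assumptions made for illustration; the paper's actual abstractions are built differently (via clustering, per the keywords below).

```python
import numpy as np

def binarize(acts, thresholds):
    """Mark each neuron as relatively activated (1) or deactivated (0)
    by comparing its activation to a per-neuron reference threshold
    (here, assumed to be the median activation over the training data)."""
    return (acts > thresholds).astype(np.float32)

def fit_class_abstractions(train_acts, train_labels, num_classes):
    """Build one abstraction per trained category, assumed here to be the
    mean activation-deactivation pattern of that category's training inputs."""
    thresholds = np.median(train_acts, axis=0)  # per-neuron reference
    patterns = binarize(train_acts, thresholds)
    abstractions = np.stack([
        patterns[train_labels == c].mean(axis=0) for c in range(num_classes)
    ])
    return thresholds, abstractions

def is_out_of_distribution(acts, predicted_class, thresholds, abstractions, tau):
    """Flag the input as OOD when its pattern is far (L2 distance > tau)
    from the abstraction of the category the model predicts."""
    pattern = binarize(acts[None, :], thresholds)[0]
    dist = np.linalg.norm(pattern - abstractions[predicted_class])
    return dist > tau

# Hypothetical usage: 100 training inputs, 16 hidden neurons, 3 classes.
rng = np.random.default_rng(0)
train_acts = rng.normal(size=(100, 16))
train_labels = rng.integers(0, 3, size=100)
thresholds, abstractions = fit_class_abstractions(train_acts, train_labels, 3)
print(is_out_of_distribution(rng.normal(size=16), predicted_class=1,
                             thresholds=thresholds, abstractions=abstractions,
                             tau=2.0))
```

Note that this sketch matches the abstract's claim of needing no input perturbations and no tuning on out-of-distribution data: `tau` would be calibrated on in-distribution training data alone.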
Keywords
out-of-distribution detection, deep learning, relative activation, relative deactivation, clustering