Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
arXiv (2023)
Abstract
The out-of-distribution (OOD) problem generally arises when neural networks
encounter data that significantly deviates from the training data distribution,
i.e., in-distribution (InD). In this paper, we study the OOD problem from a
neuron activation view. We first formulate neuron activation states by
considering both the neuron output and its influence on model decisions. Then,
to characterize the relationship between neurons and OOD issues, we introduce
the neuron activation coverage (NAC) – a simple measure for neuron
behaviors under InD data. Leveraging our NAC, we show that 1) InD and OOD
inputs can be largely separated based on neuron behavior, which
significantly eases the OOD detection problem and outperforms 21 previous
methods across three benchmarks (CIFAR-10, CIFAR-100, and ImageNet-1K);
2) a positive correlation between NAC and model generalization ability
consistently holds across architectures and datasets, which enables a
NAC-based criterion for evaluating model robustness. Compared to prevalent
InD validation criteria, we show that NAC can not only select more robust
models, but also correlates more strongly with OOD test performance.
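The coverage idea sketched in the abstract can be illustrated with a toy example: record which activation levels each neuron visits under InD data, then score a new input by how much of its neuron behavior falls inside that recorded range. This is only a hedged sketch of the general coverage concept; the function names, the binning scheme, and the gradient-based proxy for a neuron's "influence on model decisions" are all assumptions, not the paper's actual NAC formulation.

```python
import numpy as np

def activation_states(outputs, gradients):
    # Toy stand-in for a neuron activation state: combine the neuron
    # output with its influence on the decision, here approximated by
    # the gradient magnitude. (Assumption; the abstract gives no formula.)
    return np.abs(outputs * gradients)

def build_coverage(ind_states, n_bins=10, upper=1.0):
    # Per-neuron histogram of InD activation states: marks which
    # activation-level bins each neuron visits under in-distribution data.
    n_neurons = ind_states.shape[1]
    coverage = np.zeros((n_neurons, n_bins), dtype=bool)
    bins = np.clip((ind_states / upper * n_bins).astype(int), 0, n_bins - 1)
    for j in range(n_neurons):
        coverage[j, np.unique(bins[:, j])] = True
    return coverage

def coverage_score(sample_states, coverage, n_bins=10, upper=1.0):
    # Fraction of this sample's neuron states that land in bins already
    # covered by InD data; a low score suggests the input is OOD.
    bins = np.clip((sample_states / upper * n_bins).astype(int), 0, n_bins - 1)
    hit = coverage[np.arange(len(sample_states)), bins]
    return float(hit.mean())
```

Under this sketch, an input whose neuron states stay inside the InD-visited bins scores near 1, while an input driving neurons into unvisited activation regions scores near 0, which is the separation property the paper exploits for detection.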