Reimagining Anomalies: What If Anomalies Were Normal?
CoRR (2024)
Abstract
Deep learning-based methods have achieved a breakthrough in image anomaly
detection, but their complexity introduces a considerable challenge to
understanding why an instance is predicted to be anomalous. We introduce a
novel explanation method that generates multiple counterfactual examples for
each anomaly, capturing diverse concepts of anomalousness. A counterfactual
example is a modification of the anomaly that is perceived as normal by the
anomaly detector. The method provides a high-level semantic explanation of the
mechanism that triggered the anomaly detector, allowing users to explore
"what-if scenarios." Qualitative and quantitative analyses across various image
datasets show that the method, applied to state-of-the-art anomaly detectors,
achieves high-quality semantic explanations.
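The core idea of a counterfactual example, as described in the abstract, can be illustrated with a minimal sketch: given an anomaly score function, perturb the anomalous input to lower its score while a proximity penalty keeps the counterfactual close to the original. The toy detector, the penalty weight `lam`, and all function names below are illustrative assumptions, not the paper's actual method or API.

```python
import numpy as np

# Toy anomaly detector: score = squared distance to the mean of "normal" data.
# Everything here (mu, score, counterfactual) is illustrative, not the paper's API.
rng = np.random.default_rng(0)
normal_data = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
mu = normal_data.mean(axis=0)

def score(x):
    """Anomaly score: larger means more anomalous."""
    return float(np.sum((x - mu) ** 2))

def counterfactual(x_anom, lam=0.1, lr=0.05, steps=200):
    """Gradient descent on score(x) + lam * ||x - x_anom||^2,
    trading off normality against closeness to the original anomaly."""
    x = x_anom.astype(float).copy()
    for _ in range(steps):
        # Gradient of the score term plus the proximity penalty.
        grad = 2.0 * (x - mu) + 2.0 * lam * (x - x_anom)
        x -= lr * grad
    return x

x_anom = np.array([4.0, -4.0])  # a clearly anomalous point
x_cf = counterfactual(x_anom)
# The counterfactual should score as more normal than the original anomaly.
assert score(x_cf) < score(x_anom)
```

Varying `lam` (or the starting point of the optimization) yields different counterfactuals for the same anomaly, which is the spirit of generating multiple counterfactuals that capture diverse concepts of anomalousness.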