Feature CAM: Interpretable AI in Image Classification
arXiv (2024)
Abstract
Deep Neural Networks are often called black boxes because of their complex,
deep architectures and the opacity of their inner layers. This lack of
transparency limits trust in Artificial Intelligence for critical and
high-precision fields such as security, finance, health, and manufacturing.
Considerable work has focused on interpretable models intended to deliver
meaningful insight into the decision-making and behavior of neural networks.
In our research, we compare state-of-the-art Activation-Based Methods (ABM)
for interpreting the predictions of CNN models, specifically in image
classification. We then extend this comparison to eight CNN-based
architectures to examine differences in visualization and, consequently, in
interpretability. We introduce Feature CAM, a novel technique that combines
perturbation and activation approaches to create fine-grained,
class-discriminative visualizations. In our experiments, the resulting
saliency maps proved 3-4 times more human-interpretable than the
state-of-the-art in ABM, while preserving machine interpretability, measured
as the average confidence score in classification.
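To make the activation-based baseline concrete, below is a minimal sketch of
Grad-CAM, one of the ABM techniques such comparisons typically include; it is
not the paper's Feature CAM method. The choice of a pretrained torchvision
ResNet-18 and of layer4 as the hooked convolutional block are illustrative
assumptions, not details from the paper.

```python
# Minimal Grad-CAM sketch (an activation-based baseline, NOT Feature CAM).
# Assumes a pretrained torchvision ResNet-18; model and layer are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradient of the class score w.r.t. those feature maps.
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block, whose activations drive the saliency map.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Return a class-discriminative saliency map for an input batch x (N,3,H,W)."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1)  # explain the predicted class
    score = logits.gather(1, class_idx.view(-1, 1)).sum()
    model.zero_grad()
    score.backward()
    # Weight each activation channel by its spatially pooled gradient.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)    # (N,C,1,1)
    cam = F.relu((weights * activations["feat"]).sum(dim=1))      # (N,h,w)
    # Upsample to input resolution and normalize to [0,1] for overlaying.
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.amin()) / (cam.amax() - cam.amin() + 1e-8)
    return cam.squeeze(1)

x = torch.randn(1, 3, 224, 224)  # placeholder input; use a real image in practice
saliency = grad_cam(x)
```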