ExplainIt!: A Tool for Computing Robust Attributions of DNNs

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (2022)

Abstract
Responsible integration of deep neural networks into the design of trustworthy systems requires the ability to explain the decisions made by these models. Explainability and transparency are critical for system analysis, certification, and human-machine teaming. We have recently demonstrated that neural stochastic differential equations (SDEs) provide an explanation-friendly DNN architecture. In this paper, we present ExplainIt, an online tool for explaining AI decisions that uses neural SDEs to produce visually sharper and more robust attributions than traditional residual neural networks. Our tool shows that injecting noise into every layer of a residual network often yields less noisy and less fragile integrated-gradient attributions. The discrete neural SDE model is trained on the ImageNet dataset with a million images, and the demonstration produces robust attributions both on images from the ImageNet validation set and on a variety of images in the wild. Our online tool is publicly hosted for educational purposes.
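The two ingredients the abstract describes can be illustrated with a short sketch: a residual block with noise injected at every layer (a discrete Euler-Maruyama step of a neural SDE) and an integrated-gradients attribution computed through that network. This is a minimal, hedged example, not the authors' implementation; the class NoisyResidualBlock and the parameters dt and sigma are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoisyResidualBlock(nn.Module):
    """One discrete step of a neural SDE (illustrative, not the paper's code):
    x_{t+1} = x_t + f(x_t) * dt + sigma * sqrt(dt) * N(0, I)."""
    def __init__(self, dim, dt=0.1, sigma=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.dt, self.sigma = dt, sigma

    def forward(self, x):
        drift = self.f(x) * self.dt
        diffusion = self.sigma * (self.dt ** 0.5) * torch.randn_like(x)
        return x + drift + diffusion

def integrated_gradients(model, x, baseline, target, steps=50):
    """Riemann-sum approximation of integrated gradients along the
    straight-line path from `baseline` to `x` for class `target`."""
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)  # (steps, 1)
    path = baseline + alphas * (x - baseline)              # (steps, dim)
    path.requires_grad_(True)
    logits = model(path)                                   # (steps, classes)
    grads = torch.autograd.grad(logits[:, target].sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)

# Toy usage: attribute the class-0 prediction for a random input.
dim, classes = 8, 3
model = nn.Sequential(NoisyResidualBlock(dim), nn.Linear(dim, classes))
x, baseline = torch.randn(dim), torch.zeros(dim)
attribution = integrated_gradients(model, x, baseline, target=0)
print(attribution)
```

Because the forward pass is stochastic, the attribution can additionally be averaged over several noise samples, one intuition for why noise injection tends to smooth the resulting attribution maps.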
Keywords
AI Ethics, Trust, Fairness: Explainability and Interpretability