Leveraging Class Similarity to Improve Deep Neural Network Robustness.

arXiv: Computer Vision and Pattern Recognition (2018)

Cited by 23 | Views: 10
Abstract
Traditionally, artificial neural networks (ANNs) are trained by minimizing the cross-entropy between a provided ground-truth delta distribution (encoded as a one-hot vector) and the ANN's predictive softmax distribution. It seems unreasonable, however, to penalize a network equally for every misclassification: confusing the class Automobile with the class Truck should be penalized less than confusing the class Automobile with the class Donkey. To avoid such representation issues and learn cleaner classification boundaries in the network, this paper presents a variation of the cross-entropy loss that depends not only on the sample's class but also on a data-driven prior distribution across the classes, encoded in matrix form. We explore learning the class similarity with a data-driven method and then show that, by training with our modified similarity-driven loss, we obtain slightly better generalization performance over multiple architectures and datasets, as well as improved performance in noisy testing scenarios.
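
The abstract describes replacing the one-hot target with a row of a data-driven class-similarity matrix. A minimal sketch of that idea is shown below; it is not the authors' released code, and the function name `similarity_cross_entropy`, the matrix `S`, and the smoothing values are illustrative assumptions. The only structural requirement is that `S` be row-stochastic, so that `S = I` recovers the standard one-hot cross-entropy.

```python
# Sketch (not the paper's official implementation) of a similarity-driven
# cross-entropy loss: the one-hot target for class i is replaced by row i
# of a row-normalized class-similarity matrix S.
import torch
import torch.nn.functional as F

def similarity_cross_entropy(logits: torch.Tensor,
                             targets: torch.Tensor,
                             S: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against soft targets drawn from a class-similarity prior.

    logits:  (batch, num_classes) raw network outputs
    targets: (batch,) integer ground-truth labels
    S:       (num_classes, num_classes) row-stochastic similarity matrix;
             S = identity recovers the usual one-hot cross-entropy
    """
    log_probs = F.log_softmax(logits, dim=-1)   # predictive log-distribution
    soft_targets = S[targets]                   # one prior row per sample
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Hypothetical usage: in practice S could be estimated from confusion
# statistics of a pretrained model, one data-driven option consistent
# with the abstract; the 0.91 / 0.01 values below are placeholders.
num_classes = 10
S = torch.full((num_classes, num_classes), 0.01)
S.fill_diagonal_(0.91)                          # rows sum to 1.0
logits = torch.randn(4, num_classes)
targets = torch.tensor([0, 3, 3, 7])
loss = similarity_cross_entropy(logits, targets, S)
```

Under this formulation, confusing Automobile with Truck incurs a smaller loss than confusing Automobile with Donkey whenever the similarity prior assigns more mass between the first pair.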