To What Extent Do Different Neural Networks Learn the Same Representation: A Neuron Activation Subspace Match Approach

Neural Information Processing Systems (2018)

Abstract
Studying the learned representations is important for understanding deep neural networks. In this paper, we investigate the similarity of representations learned by two networks with identical architecture but trained from different initializations. Instead of resorting to heuristic methods, we develop a rigorous theory based on the neuron activation subspace match model. The theory gives a complete characterization of the structure of neuron activation subspace matches, where the core concepts are the maximum match and simple matches, which describe the overall and the finest similarity, respectively, between sets of neurons in two networks. We also propose efficient algorithms to find the maximum match and simple matches. Finally, an experimental study using our algorithms suggests that, somewhat surprisingly, representations learned by the same convolutional layers of two networks are not as similar as prevalently expected.
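The abstract's central notion, a neuron activation subspace match, can be illustrated with a minimal linear-algebra sketch. This is not the paper's algorithm; it only shows the underlying criterion, assuming each neuron is represented by its activation vector over a fixed set of inputs (one column per neuron), so that two neuron sets "match" when their activation vectors span the same subspace. The function name `subspaces_match` and the tolerance handling are illustrative choices.

```python
import numpy as np

def subspaces_match(X, Y, tol=1e-8):
    """Return True if the columns of X and Y span the same subspace.

    X, Y: (n_inputs, n_neurons) matrices whose columns are the
    activation vectors of two neuron sets (possibly from different
    networks) evaluated on the same n_inputs examples.
    """
    def orth(A):
        # Rank-revealing orthonormal basis for the column space via SVD.
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        r = int((s > tol * s.max()).sum()) if s.size else 0
        return U[:, :r]

    Ux, Uy = orth(X), orth(Y)
    if Ux.shape[1] != Uy.shape[1]:
        # Different dimensions: the subspaces cannot coincide.
        return False
    # Equal subspaces iff projecting one basis onto the other
    # leaves no residual.
    resid = Ux - Uy @ (Uy.T @ Ux)
    return np.linalg.norm(resid) < tol * max(1.0, np.linalg.norm(Ux))
```

For example, two neuron sets related by an invertible linear recombination span the same subspace and would match under this criterion, whereas a neuron whose activation vector lies outside the other set's span would not.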