Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination
CoRR (2023)
Abstract
Over the past decade, deep learning has proven to be a highly effective tool
for learning meaningful features from raw data. However, it remains an open
question how deep networks perform hierarchical feature learning across layers.
In this work, we attempt to unveil this mystery by investigating the structures
of intermediate features. Motivated by our empirical findings that linear
layers mimic the roles of deep layers in nonlinear networks for feature
learning, we explore how deep linear networks transform input data into output
by investigating the output (i.e., features) of each layer after training in
the context of multi-class classification problems. Toward this goal, we first
define metrics to measure within-class compression and between-class
discrimination of intermediate features, respectively. Through theoretical
analysis of these two metrics, we show that the evolution of features follows a
simple and quantitative pattern from shallow to deep layers when the input data
is nearly orthogonal and the network weights are minimum-norm, balanced, and
approximate low-rank: Each layer of the linear network progressively compresses
within-class features at a geometric rate and discriminates between-class
features at a linear rate with respect to the number of layers that data have
passed through. To the best of our knowledge, this is the first quantitative
characterization of feature evolution in hierarchical representations of deep
linear networks. Empirically, our extensive experiments not only validate our
theoretical results numerically but also reveal a similar pattern in deep
nonlinear networks which aligns well with recent empirical studies. Moreover,
we demonstrate the practical implications of our results in transfer learning.
Our code is available at .
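The abstract's two metrics are not defined here, but their intent can be illustrated with a simple, hypothetical sketch: measure within-class compression as the fraction of total feature variance that lies inside classes, and between-class discrimination as the angular separation of class means. These definitions are assumptions for illustration, not necessarily the paper's exact formulations.

```python
import numpy as np

def within_class_compression(features, labels):
    """Fraction of total variance that is within-class.

    Smaller values indicate more compressed (collapsed) class clusters.
    Illustrative definition, not necessarily the paper's exact metric.
    """
    global_mean = features.mean(axis=0)
    total = ((features - global_mean) ** 2).sum()
    within = 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        within += ((cls - cls.mean(axis=0)) ** 2).sum()
    return within / total

def between_class_discrimination(features, labels):
    """One minus the mean pairwise cosine similarity of class means.

    Higher values indicate more separated (discriminated) classes.
    """
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in np.unique(labels)])
    means = means / np.linalg.norm(means, axis=1, keepdims=True)
    sims = means @ means.T
    k = len(means)
    off_diag = (sims.sum() - k) / (k * (k - 1))
    return 1.0 - off_diag
```

Applied to the features produced by each layer of a trained network, these two quantities would let one trace the layerwise trend the abstract describes: compression shrinking geometrically and discrimination growing roughly linearly with depth.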
Keywords
deep representation learning, compression, layerwise feature