Implicit Bias of Gradient Descent on Linear Convolutional Networks

Advances in Neural Information Processing Systems 31 (NIPS 2018)

Citations: 390 | Views: 105
Abstract
We show that gradient descent on full-width linear convolutional networks of depth L converges to a linear predictor related to the ℓ_{2/L} bridge penalty in the frequency domain. This is in contrast to fully connected linear networks, where, regardless of depth, gradient descent converges to the ℓ_2 maximum-margin solution.
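The contrast stated in the abstract can be illustrated numerically. The sketch below is not the authors' code: the depth (L = 2), data dimension, toy dataset, step size, and iteration count are all illustrative assumptions. It trains a depth-2 full-width linear circular-convolutional network and a directly parameterized ("fully connected") linear model with gradient descent on the exponential loss over a small linearly separable dataset, then compares the DFT magnitudes of the two resulting predictors.

```python
# Illustrative sketch only (not the authors' code): depth L = 2, the dataset,
# step size, and iteration count are arbitrary choices for demonstration.
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 8  # dimension and sample count (n < d, so many separators exist)

# Linearly separable toy data, labeled by a predictor that is sparse in frequency.
t = np.arange(d)
beta_star = np.cos(2 * np.pi * t / d) + 0.5 * np.cos(2 * np.pi * 3 * t / d)
X = rng.standard_normal((n, d))
y = np.sign(X @ beta_star)

def exp_loss_grad(beta):
    """Gradient of sum_i exp(-y_i <x_i, beta>) with respect to beta."""
    margins = y * (X @ beta)
    return -(np.exp(-margins) * y) @ X

def circ_conv(a, b):
    """Circular convolution via the DFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):
    """Circular cross-correlation; backpropagates a gradient through circ_conv."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# Depth-2 full-width convolutional parameterization: predictor = w1 (*) w2.
w1 = 0.1 * rng.standard_normal(d)
w2 = 0.1 * rng.standard_normal(d)
lr, steps = 1e-2, 20000
for _ in range(steps):
    g = exp_loss_grad(circ_conv(w1, w2))         # gradient w.r.t. the end-to-end predictor
    g1, g2 = circ_corr(w2, g), circ_corr(w1, g)  # chain rule through the convolution
    w1 -= lr * g1
    w2 -= lr * g2

# Direct ("fully connected") parameterization of the same linear predictor.
beta_fc = 0.1 * rng.standard_normal(d)
for _ in range(steps):
    beta_fc -= lr * exp_loss_grad(beta_fc)

beta_conv = circ_conv(w1, w2)
for name, b in [("conv net", beta_conv), ("direct  ", beta_fc)]:
    spec = np.abs(np.fft.fft(b / np.linalg.norm(b)))
    print(name, np.round(spec, 2))
# With L = 2 the bridge penalty is l_{2/L} = l_1 on the DFT coefficients, so the
# convolutional predictor's spectrum should look noticeably sparser than the
# direct model's, which approaches the l_2 maximum-margin direction.
```

Since the paper's result concerns the limiting direction of gradient descent, a finite run only approximates it; the qualitative gap in spectral sparsity between the two parameterizations is the point of the comparison.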
Keywords
gradient descent,frequency domain,implicit bias