On the Learnability of Random Deep Networks

SODA '20: ACM-SIAM Symposium on Discrete Algorithms, Salt Lake City, Utah, January 2020

Abstract
In this paper we study the learnability of random deep networks both theoretically and experimentally. On the theoretical front, assuming the statistical query model, we show that the learnability of random deep networks with sign activation drops exponentially with their depth; under plausible conjectures, our results extend to ReLU and sigmoid activations. The core of the argument is that even for highly correlated inputs, the outputs of deep random networks are near-orthogonal. On the experimental side, we find that the learnability of random networks drops sharply with depth even under state-of-the-art training methods.
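
As a minimal illustration of the near-orthogonality argument, the following Python sketch (not the authors' code; the width, depth, and 0.95 input correlation are arbitrary assumed values) tracks the cosine similarity between the hidden representations of two highly correlated inputs as they pass through a random network with sign activation. The similarity decays toward zero with depth, matching the abstract's claim.

# Sketch only: random sign network decorrelates even highly correlated inputs.
import numpy as np

rng = np.random.default_rng(0)
width, depth, dim = 2000, 10, 2000   # assumed sizes, not from the paper

# Two unit-norm inputs with cosine similarity 0.95.
x = rng.standard_normal(dim)
x /= np.linalg.norm(x)
noise = rng.standard_normal(dim)
noise -= (noise @ x) * x             # keep only the component orthogonal to x
noise /= np.linalg.norm(noise)
rho = 0.95
y = rho * x + np.sqrt(1 - rho**2) * noise

h1, h2 = x, y
for layer in range(depth):
    # Fresh random Gaussian weights at each layer, scaled by 1/sqrt(fan-in).
    W = rng.standard_normal((width, h1.shape[0])) / np.sqrt(h1.shape[0])
    h1, h2 = np.sign(W @ h1), np.sign(W @ h2)
    cos = (h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))
    print(f"depth {layer + 1}: cosine similarity = {cos:.4f}")

The decay is predicted by the arcsine law for signs of jointly Gaussian variables: one sign layer maps input correlation rho to roughly (2/pi) arcsin(rho) < rho, so iterating over layers drives the correlation toward zero.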