When Does Gradient Descent With Logistic Loss Find Interpolating Two-Layer Networks?

Journal of Machine Learning Research (2021)

Abstract
We study the training of finite-width two-layer smoothed ReLU networks for binary classification using the logistic loss. We show that gradient descent drives the training loss to zero if the initial loss is small enough. When the data satisfies certain cluster and separation conditions and the network is wide enough, we show that one step of gradient descent reduces the loss sufficiently that the first result applies.
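As a concrete illustration of the setting in the abstract, below is a minimal sketch of full-batch gradient descent with the logistic loss on a finite-width two-layer network whose hidden units use a smoothed ReLU. The specific smoothing (softplus), the fixed ±1/√m output weights, and the toy clustered data are illustrative assumptions, not the paper's exact construction or conditions.

```python
import numpy as np

def softplus(z):
    # Numerically stable softplus, standing in for a smoothed ReLU.
    return np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z)))

def sigmoid(z):
    # Stable logistic sigmoid (also the derivative of softplus).
    return np.exp(-np.logaddexp(0.0, -z))

def forward(W, a, X):
    # Two-layer network f(x) = sum_j a_j * phi(w_j . x).
    return softplus(X @ W.T) @ a

def logistic_loss(W, a, X, y):
    # Mean logistic loss: (1/n) sum_i log(1 + exp(-y_i f(x_i))).
    return np.mean(np.logaddexp(0.0, -y * forward(W, a, X)))

def gd_step(W, a, X, y, lr):
    # One gradient-descent step on the hidden-layer weights W; the output
    # weights a are held fixed, as in many two-layer analyses.
    pre = X @ W.T                          # (n, m) pre-activations
    margins = y * (softplus(pre) @ a)      # (n,) margins y_i * f(x_i)
    coef = -sigmoid(-margins) * y          # dL_i / df(x_i) for logistic loss
    grad_W = ((coef[:, None] * sigmoid(pre)) * a[None, :]).T @ X / len(y)
    return W - lr * grad_W

rng = np.random.default_rng(0)
n, d, m = 200, 10, 512                     # samples, input dim, network width
# Toy data echoing the cluster/separation conditions: one well-separated
# Gaussian cluster per class label.
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * 2.0 + rng.normal(scale=0.3, size=(n, d))

W = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

for _ in range(300):
    W = gd_step(W, a, X, y, lr=0.5)
print(f"training loss after GD: {logistic_loss(W, a, X, y):.6f}")
```

On data like this the training loss decreases steadily toward zero, which is the behavior the paper's two results combine to guarantee: an early step drives the loss low enough, and from there gradient descent drives it to zero.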
Keywords
optimization guarantees, neural networks, interpolating methods, binary classification, deep learning, clustered class-conditional distributions