Characterizing Implicit Bias in Terms of Optimization Geometry

INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80 (2018)

Cited by 392 | Views 133

Abstract
We study the implicit bias of generic optimization methods, such as mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by an algorithm can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step-size and momentum.
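As a concrete illustration of the phenomenon the abstract describes, the sketch below (assumed for this page, not code from the paper) shows the simplest case: on an underdetermined least-squares problem, plain gradient descent initialized at zero converges to the minimum Euclidean-norm global minimum, because the iterates never leave the row space of the design matrix. The matrix `A`, vector `b`, step size, and iteration count are arbitrary demo choices.

```python
import numpy as np

# Underdetermined linear regression: 2 equations, 5 unknowns,
# so the loss 0.5 * ||A w - b||^2 has infinitely many global minima.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
b = rng.standard_normal(2)

# Gradient descent from the zero initialization: every update is a
# combination of rows of A, so w stays in the row space of A throughout.
w = np.zeros(5)
step = 0.01
for _ in range(50_000):
    w -= step * A.T @ (A @ w - b)  # gradient of 0.5 * ||A w - b||^2

# The minimum-norm interpolating solution, computed via the pseudoinverse.
w_min_norm = np.linalg.pinv(A) @ b
print(np.allclose(w, w_min_norm, atol=1e-6))
```

The paper's point is that this characterization depends on the optimization geometry: swapping the Euclidean updates for mirror descent with a different potential, or steepest descent in a different norm, steers the iterates toward a different global minimum.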
Keywords
implicit bias,optimization,geometry