Improved Adam Optimizer for Deep Neural Networks

2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)

Cited by 622 | Views 71
Abstract
Adaptive optimization algorithms, such as Adam and RMSprop, have achieved better optimization performance than stochastic gradient descent (SGD) in some scenarios. However, recent studies show that they often lead to worse generalization performance than SGD, especially for training deep neural networks (DNNs). In this work, we identify the reasons that Adam generalizes worse than SGD, and develop a variant of Adam to eliminate the generalization gap. The proposed method, normalized direction-preserving Adam (ND-Adam), enables more precise control of the direction and step size for updating weight vectors, leading to significantly improved generalization performance. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also hope to shed light on why certain optimization algorithms generalize better than others.
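
The sketch below illustrates, in Python/NumPy, the kind of normalized direction-preserving update the abstract describes: keep each hidden unit's incoming weight vector on the unit sphere, project the gradient onto the tangent space of that sphere, maintain Adam-style moment estimates for the projected gradient, and renormalize after each step. This is a minimal sketch based only on the high-level description above; the exact ND-Adam update in the paper may differ in detail, and names such as nd_adam_step are hypothetical, not the authors' code.

import numpy as np

def nd_adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One update of a single unit-norm weight vector w given its gradient g.

    w, g, m, v are 1-D arrays of the same length; t is the 1-based step count.
    Returns the updated (w, m, v).
    """
    # Remove the radial component of the gradient so the update changes only
    # the direction of w, not its norm.
    g_tan = g - np.dot(g, w) * w

    # Adam-style first/second moment estimates on the projected gradient.
    m = beta1 * m + (1 - beta1) * g_tan
    v = beta2 * v + (1 - beta2) * g_tan * g_tan
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Take the adaptive step, then renormalize so w stays on the unit sphere,
    # giving explicit control over the effective step size (the angle moved).
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    w = w / np.linalg.norm(w)
    return w, m, v

# Usage: one (m, v) pair is maintained per weight vector, e.g. per hidden unit.
w = np.random.randn(128); w /= np.linalg.norm(w)
m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 101):
    g = np.random.randn(128)  # placeholder gradient for illustration
    w, m, v = nd_adam_step(w, g, m, v, t)

Restricting the adaptive update to the direction of each weight vector is what distinguishes this scheme from plain Adam, which rescales every coordinate independently and can drift the weight norms in ways that hurt generalization.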
Keywords
improved adam optimizer,deep neural networks,adaptive optimization algorithms,stochastic gradient descent,SGD,normalized direction-preserving Adam,ND-Adam,improved generalization performance,softmax logits