A Learning Rate Method for Full-Batch Gradient Descent

Asadi Soodabeh, Vogel Manfred

Műszaki Tudományos Közlemények (2020)

Cited by 2
Abstract
In this paper, we present a learning rate method for gradient descent that uses only first-order information and requires no manual tuning of the learning rate. We applied the method to a linear neural network built from scratch, trained with full-batch gradient descent, in which the gradients are computed over the whole dataset for each parameter update. We tested the method on a moderately sized housing dataset and compared the result with that of the Adam optimizer applied to a sequential neural network model from Keras. The comparison shows that our method finds the minimum in far fewer epochs than Adam.
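The training setup the abstract describes, one gradient computed over the entire dataset per parameter update, with an automatically chosen step size, can be sketched as follows. The abstract does not state the paper's actual step-size rule, so this illustration substitutes a Barzilai-Borwein step, a standard tuning-free first-order choice that is not necessarily the authors' method; the function name, least-squares loss, initial step size, and synthetic data are likewise assumptions for illustration.

```python
import numpy as np

def full_batch_gd(X, y, epochs=100):
    """Full-batch gradient descent on a linear least-squares model.

    The learning rate is recomputed each epoch with a Barzilai-Borwein
    (BB2) step, used here only as a stand-in for the paper's
    unspecified first-order rule.
    """
    n, d = X.shape
    w = np.zeros(d)
    eta = 1e-3                      # initial step size; assumed, not from the paper
    w_prev, g_prev = None, None
    for _ in range(epochs):
        # Gradient of the mean squared error over the *whole* dataset
        g = X.T @ (X @ w - y) / n
        if g_prev is not None:
            s = w - w_prev          # change in parameters
            yk = g - g_prev         # change in gradients
            denom = yk @ yk
            if denom > 0:
                eta = abs(s @ yk) / denom   # BB2 step size, no manual tuning
        w_prev, g_prev = w.copy(), g.copy()
        w -= eta * g                # one parameter update per full pass
    return w

# Toy usage on synthetic regression data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.arange(1.0, 6.0)
y = X @ true_w + 0.01 * rng.normal(size=200)
print(full_batch_gd(X, y))
```

Because every update uses the exact full-dataset gradient, consecutive gradients are noise-free, which is what makes curvature-estimating step rules like the one sketched above well behaved in this regime.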