Block Coordinate Descent for Deep Learning: Unified Convergence Guarantees

arXiv: Optimization and Control (2018)

Abstract
Training deep neural networks (DNNs) efficiently is a challenge due to the associated highly nonconvex optimization. Recently, the efficiency of block coordinate descent (BCD) type methods has been empirically demonstrated for DNN training. The main idea of BCD is to decompose the highly composite and nonconvex DNN training problem into several almost separable, simple subproblems. However, the convergence properties of these methods have not been thoroughly studied. In this paper, we establish unified global convergence guarantees of BCD-type methods for a wide range of DNN training models, including but not limited to multilayer perceptrons (MLPs), convolutional neural networks (CNNs), and residual networks (ResNets). This paper nontrivially extends the existing convergence results of nonconvex BCD from the smooth case to the nonsmooth case. Our convergence analysis is built upon the powerful Kurdyka-Łojasiewicz (KL) framework, but some new techniques are introduced, including the establishment of the KL property of the objective functions of many commonly used DNNs, where the loss function can be the squared, hinge, or logistic loss, and the activation function can be the rectified linear unit (ReLU), sigmoid, or linear link function. The efficiency of the BCD method is also demonstrated by a series of exploratory numerical experiments.
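To make the splitting idea concrete, below is a minimal sketch of BCD with variable splitting for a two-layer network with squared loss and ReLU activation. The function name bcd_two_layer, the quadratic penalty gamma, the ridge regularizer lam, and the specific block updates are illustrative assumptions for exposition only, not the paper's exact algorithm or its convergence setting: the W2 and auxiliary-variable blocks admit closed-form updates, while the nonsmooth W1 block is handled here with a simple subgradient step.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def bcd_two_layer(X, Y, hidden=64, gamma=1.0, lam=1e-3,
                  lr_w1=1e-3, iters=200, seed=0):
    """Illustrative BCD with variable splitting (not the paper's exact method).

    Minimizes  ||Y - W2 V||^2 + gamma ||V - relu(W1 X)||^2
               + lam (||W1||^2 + ||W2||^2)
    by cycling over the blocks W2, V, W1. The W2 and V updates have closed
    forms (ridge regression / linear solve); the W1 block uses one
    subgradient step because relu is nonsmooth.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    c = Y.shape[0]
    W1 = rng.standard_normal((hidden, d)) * 0.1
    W2 = rng.standard_normal((c, hidden)) * 0.1
    V = relu(W1 @ X)  # auxiliary variable standing in for the hidden-layer output

    for _ in range(iters):
        # W2 block: ridge regression given V (closed form)
        W2 = Y @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(hidden))

        # V block: objective is quadratic in V, solve the normal equations
        A = W2.T @ W2 + gamma * np.eye(hidden)
        V = np.linalg.solve(A, W2.T @ Y + gamma * relu(W1 @ X))

        # W1 block: one subgradient step on the nonsmooth coupling term
        Z = W1 @ X
        R = relu(Z) - V  # residual of the splitting constraint
        grad_W1 = gamma * ((R * (Z > 0)) @ X.T) + lam * W1
        W1 -= lr_w1 * grad_W1

    return W1, W2

# Toy usage on random data (hypothetical, for illustration only).
X = np.random.default_rng(1).standard_normal((10, 200))
Y = np.sin(X[:3])  # arbitrary 3-dimensional targets
W1, W2 = bcd_two_layer(X, Y)
mse = np.mean((Y - W2 @ relu(W1 @ X)) ** 2)
print(f"training MSE: {mse:.4f}")
```

Each block update decreases the same penalized objective, which is the "almost separable simple subproblems" structure the abstract refers to; deeper networks and other losses or activations would add one auxiliary block per layer in the same pattern.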