TJU-DNN: A trajectory-unified framework for training deep neural networks and its applications

Neurocomputing (2023)

Abstract
Deep neural networks are mainly trained with gradient descent (GD) methods, which are, however, very sensitive to initialization and hyperparameters. In this paper, an enhanced gradient descent method guided by a trajectory-based method for training deep neural networks, termed the Trajectory-Unified Framework (TJU) method, is presented. From a theoretical viewpoint, the robustness of the TJU-based method is supported by an analytical basis presented in the paper. From a computational viewpoint, a TJU methodology combining a block-diagonal pseudo-transient-continuation method with gradient descent, termed the TJU-GD method, is developed to obtain high-quality training results. Furthermore, to address imbalanced classification, a TJU-Focal-GD method is developed and evaluated. Numerical experiments with the proposed TJU-GD on various public datasets show that it achieves substantial improvements over baseline methods. The proposed TJU-Focal-GD also possesses several advantages over other methods on a class of imbalanced datasets from the in-house power line inspection dataset (PLID).
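The "Focal" in TJU-Focal-GD refers to the focal loss commonly used for imbalanced classification (Lin et al., 2017), which down-weights well-classified examples so that training concentrates on hard, minority-class samples. As an illustration only (the exact loss and parameters used in the paper are not given in this abstract), a minimal sketch of the standard binary focal loss:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Standard binary focal loss (Lin et al., 2017).

    p     : predicted probability of the positive class, in (0, 1)
    y     : true label, 0 or 1
    gamma : focusing parameter; gamma = 0 recovers weighted cross-entropy
    alpha : class-balancing weight for the positive class
    """
    # p_t is the model's probability for the true class
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor shrinks the loss of easy examples
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

For a confidently correct prediction (p_t close to 1), the modulating factor (1 - p_t)^gamma makes the loss nearly vanish, so gradients are dominated by hard examples; the `gamma` and `alpha` values above are the common defaults, not values reported by the paper.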
Keywords
Gradient descent, Pseudo-transient continuation, Nonlinear system, Quotient gradient system, Trajectory-unified methodology, PLID