On a natural homotopy between linear and nonlinear single-layer networks

IEEE Transactions on Neural Networks (1996)

Abstract
In this paper we formulate a homotopy approach for solving for the weights of a network by smoothly transforming a linear single-layer network into a nonlinear perceptron network. While other researchers have reported potentially useful numerical results based on heuristics related to this approach, the work presented here provides the first rigorous exposition of the deformation process. Results include a complete description of how the weights relate to the data space, a proof of the global convergence and validity of the method, and a rigorous formulation of the generalized orthogonality theorem that provides a geometric perspective on the solution process. This geometric interpretation clarifies the conditions under which local minima and infinite weights arise in network optimization procedures, as well as the similarities and differences between optimizing the weights of a nonlinear network and those of a linear network. The results provide a strong theoretical foundation for quantifying performance bounds on finite neural networks and for constructing globally convergent optimization approaches on finite data sets.
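The abstract describes the method rigorously but gives no code. Below is a minimal numerical sketch of one plausible reading of the deformation: the activation is blended between the identity (linear network) and a sigmoid (perceptron), and the weights are tracked along the path by warm-starting each step from the previous solution. All names (`blended_activation`, `train_by_homotopy`), the sigmoid choice, and the mean-squared-error tracking objective are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def blended_activation(u, lam):
    # lam = 0: identity (linear network); lam = 1: sigmoid (perceptron).
    return (1.0 - lam) * u + lam * sigmoid(u)

def blended_activation_grad(u, lam):
    # Derivative of the blended activation with respect to u.
    s = sigmoid(u)
    return (1.0 - lam) + lam * s * (1.0 - s)

def train_by_homotopy(X, y, n_steps=20, inner_iters=500, lr=0.5):
    """Deform the exactly solvable linear problem (lam = 0) into the
    nonlinear one (lam = 1), warm-starting each step at the previous
    weights so the solution path is followed continuously.
    Hypothetical sketch; not the paper's algorithm."""
    n = X.shape[0]
    # Closed-form least-squares weights anchor the path at lam = 0.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    for lam in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        # Re-solve the deformed problem by gradient descent on MSE,
        # starting from the previous step's solution.
        for _ in range(inner_iters):
            u = X @ w
            residual = blended_activation(u, lam) - y
            grad = X.T @ (residual * blended_activation_grad(u, lam)) / n
            w -= lr * grad
    return w

# Toy usage: recover weights for sigmoidal targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = sigmoid(X @ np.array([1.5, -2.0, 0.5]))
w = train_by_homotopy(X, y)
```

The warm-start at each continuation step is the point of the construction: the linear endpoint has a unique closed-form solution, and following it as the nonlinearity is switched on is what the paper's convergence analysis makes rigorous.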
Keywords
nonlinear single-layer network,network optimization procedure,linear network,finite neural network,natural homotopy,data space,convergent optimization approach,linear single layer network,nonlinear perceptron network,nonlinear network,finite data set,deformation process,neural network,kalman filters,linear systems,neural networks,neural nets,time series analysis,pattern analysis,pattern recognition,local minima