Insights into randomized algorithms for neural networks: Practical issues and common pitfalls.

Information Sciences (2017)

Cited 157 | Views 77
Abstract
Random Vector Functional-link (RVFL) networks, a class of learner models, can be regarded as feed-forward neural networks built with a specific randomized algorithm: the input weights and biases are randomly assigned and fixed during the training phase, while the output weights are evaluated analytically by the least-squares method. In this paper, we provide some insights into RVFL networks and highlight some practical issues and common pitfalls associated with RVFL-based modelling techniques. Inspired by the folklore that “all high-dimensional random vectors are almost always nearly orthogonal to each other”, we establish a theoretical result on the infeasibility of RVFL networks for universal approximation when an RVFL network is built incrementally, with random selection of the input weights and biases from a fixed scope and constructive evaluation of its output weights. This work also addresses the significance of the scope setting of the random weights and biases with respect to modelling performance. Two numerical examples are employed to illustrate our findings, which theoretically and empirically reveal some facts and limits of this class of randomized learning algorithms.
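Not part of the original abstract: the training scheme it describes, random and fixed input weights with output weights obtained in closed form by least squares, can be sketched in a few lines. The function names, the tanh activation, the toy target, and the uniform sampling range are illustrative assumptions, not the paper's exact experimental setup; the `scope` parameter stands in for the sampling range whose setting the abstract flags as significant.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, n_hidden=100, scope=1.0):
    """Sketch of RVFL training: random input weights/biases are drawn
    once from [-scope, scope] and frozen; only the output weights are
    fitted, analytically, by least squares. Illustrative only."""
    d = X.shape[1]
    W = rng.uniform(-scope, scope, size=(d, n_hidden))  # fixed after the draw
    b = rng.uniform(-scope, scope, size=n_hidden)
    H = np.tanh(X @ W + b)          # random feature map (assumed activation)
    D = np.hstack([H, X])           # RVFL direct links: raw inputs concatenated
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)  # analytic output weights
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([np.tanh(X @ W + b), X]) @ beta

# Toy regression target (assumption): approximate sin on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = rvfl_fit(X, y, n_hidden=100, scope=2.0)
err = np.max(np.abs(rvfl_predict(X, W, b, beta) - y))

# The "folklore" the abstract quotes, checked numerically: two independent
# high-dimensional Gaussian vectors are nearly orthogonal (cosine near 0).
u = rng.normal(size=1000)
v = rng.normal(size=1000)
cos_uv = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
```

Note that because only `beta` is learned, training reduces to one linear least-squares solve; the fit quality then hinges entirely on how the frozen random features were drawn, which is the scope-setting issue the paper examines.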
Keywords
Randomized algorithms,Neural networks,Incremental learning,Function approximation