Two-layer neural network on infinite-dimensional data: global optimization guarantee in the mean-field regime

NeurIPS 2022 (2023)

Abstract
The analysis of neural network optimization in the mean-field regime is important as the setting allows for feature learning. The existing theory has been developed mainly for neural networks in finite dimensions, i.e. each neuron has a finite-dimensional parameter. However, the setting of infinite-dimensional input naturally arises in machine learning problems such as nonparametric functional data analysis and graph classification. In this paper, we develop a new mean-field analysis of a two-layer neural network in an infinite-dimensional parameter space. We first give a generalization error bound, which shows that the regularized empirical risk minimizer properly generalizes when the data size is sufficiently large, despite the neurons being infinite-dimensional. Next, we present two gradient-based optimization algorithms for infinite-dimensional mean-field networks, by extending the recently developed particle optimization framework to the infinite-dimensional setting. We show that the proposed algorithms converge to the (regularized) global optimal solution, and moreover, their rates of convergence are of polynomial order in the online setting and exponential order in the finite sample setting, respectively. To the best of our knowledge, this is the first quantitative global optimization guarantee of a neural network on infinite-dimensional input and in the presence of feature learning.
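The setting the abstract describes can be sketched numerically: a two-layer mean-field network is an average over M neuron "particles", and an infinite-dimensional input is approximated by its first D coefficients in an orthonormal basis. The sketch below is an illustrative assumption throughout, not the paper's algorithm: the synthetic data, the 1/k coefficient decay, the target functional, and the temperature `temp` are all made up for demonstration, and the Gaussian noise on the particles merely echoes the Langevin-type entropic regularization that mean-field particle methods use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Functional inputs are represented by their first D coefficients in an
# orthonormal basis -- a finite stand-in for the infinite-dimensional setting.
D, M, N = 20, 200, 100   # basis truncation, neurons (particles), samples

# Synthetic "smooth" functional data: coefficients decay like 1/k (illustrative).
decay = 1.0 / np.arange(1, D + 1)
X = rng.normal(size=(N, D)) * decay
y = np.tanh(X @ decay)            # hypothetical target functional

# Each neuron is a particle whose parameter lives in the same coefficient space.
W = rng.normal(size=(M, D))       # first-layer parameters, one row per particle
a = rng.normal(size=M) / M        # second-layer weights

def loss(W, a):
    """Regularized-free empirical risk of the particle ensemble."""
    pred = np.tanh(X @ W.T) @ a
    return 0.5 * np.mean((pred - y) ** 2)

lr, lam, temp = 0.05, 1e-3, 1e-4  # step size, ridge penalty, noise temperature
init_loss = loss(W, a)
for _ in range(300):
    H = np.tanh(X @ W.T)                  # (N, M) hidden activations
    resid = H @ a - y                     # (N,) prediction residuals
    grad_a = H.T @ resid / N + lam * a
    grad_W = ((1 - H**2) * a * resid[:, None]).T @ X / N + lam * W
    a -= lr * grad_a
    # Injected Gaussian noise stands in for the entropic term of
    # mean-field Langevin-type particle dynamics.
    W -= lr * grad_W + np.sqrt(2 * lr * temp) * rng.normal(size=W.shape)

final_loss = loss(W, a)
```

Under this toy setup the noisy gradient steps drive the empirical risk down from its initialization; the basis truncation D is what would grow in the genuinely infinite-dimensional analysis.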
Keywords
mean-field regime, neural network, optimization, functional data analysis