Limitations of Lazy Training of Two-layers Neural Networks

Behrooz Ghorbani
Song Mei
Theodor Misiakiewicz
Andrea Montanari

NeurIPS, pp. 9108-9118, 2019.


Abstract:

We study the supervised learning problem under either of the following two models: (1) Feature vectors ${\boldsymbol x}_i$ are $d$-dimensional Gaussians and responses are $y_i = f_*({\boldsymbol x}_i)$ for $f_*$ an unknown quadratic function; (2) Feature vectors ${\boldsymbol x}_i$ are distributed as a mixture of two $d$-dimensional centered Gaussians, and $y_i$'s are the corresponding class labels. We use two-layers neural networks with quadratic activations, and compare three different learning regimes: the random features (RF) regime in which we only train the second-layer weights; the neural tangent (NT) regime in which we train a linearization of the neural network around its initialization; the fully trained neural network (NN) regime in which we train all the weights in the network. We prove that, even for the simple quadratic model of point (1), there is a potentially unbounded gap between the prediction risk achieved in these three training regimes, when the number of neurons is smaller than the ambient dimension. When the number of neurons is larger than the number of dimensions, the problem is significantly easier and both NT and NN learning achieve zero risk.
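
To make the three regimes concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation): it generates data from model (1), builds the two-layers quadratic network $f({\boldsymbol x}) = \sum_j a_j \langle {\boldsymbol w}_j, {\boldsymbol x}\rangle^2$, and fits it in the RF, NT, and NN regimes. All dimensions, hyperparameters, and function names are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): two-layers network with
# quadratic activations, f(x) = sum_j a_j <w_j, x>^2, trained in the three
# regimes compared in the paper. Dimensions and hyperparameters are
# illustrative; N < d probes the regime where the risk gap appears.
import torch

d, N, n = 50, 20, 1000            # ambient dimension, neurons, samples
torch.manual_seed(0)

# Model (1): Gaussian features, quadratic target y = x^T B x
X = torch.randn(n, d)
B = torch.randn(d, d) / d
y = ((X @ B) * X).sum(dim=1)

W0 = torch.randn(N, d) / d ** 0.5  # first-layer initialization (shared)
a0 = torch.randn(N) / N            # second-layer initialization

def forward(W, a):
    # f(x) = sum_j a_j <w_j, x>^2  (quadratic activation)
    return ((X @ W.T) ** 2) @ a

def train(params, predict, steps=2000, lr=0.01):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((predict() - y) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

# RF regime: first layer frozen at W0, only second-layer weights trained
a_rf = torch.zeros(N, requires_grad=True)
rf = train([a_rf], lambda: forward(W0, a_rf))

# NT regime: first-order Taylor expansion of f around (W0, a0)
def nt_predict(dW, da):
    pre = X @ W0.T                                    # <w0_j, x_i>
    f0 = (pre ** 2) @ a0                              # network at init
    lin_a = (pre ** 2) @ da                           # df/da direction
    lin_W = (2 * a0 * pre * (X @ dW.T)).sum(dim=1)    # df/dW direction
    return f0 + lin_a + lin_W

dW = torch.zeros(N, d, requires_grad=True)
da = torch.zeros(N, requires_grad=True)
nt = train([dW, da], lambda: nt_predict(dW, da))

# NN regime: all weights trained jointly
W = W0.clone().requires_grad_(True)
a = a0.clone().requires_grad_(True)
nn = train([W, a], lambda: forward(W, a))

print(f"empirical risk  RF: {rf:.4f}  NT: {nt:.4f}  NN: {nn:.4f}")
```

Note that the NT predictor is linear in its trainable parameters (dW, da), which is what makes "lazy" training amenable to kernel-style analysis. The paper's unbounded-gap statement concerns prediction (population) risk in the high-dimensional limit; a small empirical run like this can only hint at it.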

Links:
arXiv: https://arxiv.org/abs/1906.08899
NeurIPS Proceedings: http://papers.nips.cc/paper/9111-limitations-of-lazy-training-of-two-layers-neural-network
DBLP: https://dblp.org/rec/conf/nips/GhorbaniMMM19