Learning to Learn without Gradient Descent by Gradient Descent

ICML, pp. 748-756, 2017.

Abstract:

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
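Sketch:

A minimal sketch (in PyTorch; not the authors' code) of the idea the abstract describes: an LSTM "optimizer" observes query/value pairs from a black-box function and proposes the next query point, and is meta-trained by ordinary gradient descent on randomly sampled synthetic functions. Here cheap random quadratics stand in for the paper's GP-sampled training functions, and the meta-loss is the sum of observed function values over the horizon (one of the losses the paper considers). All module names and hyper-parameters below are illustrative assumptions.

import torch
import torch.nn as nn

DIM, HIDDEN, HORIZON = 2, 32, 20

class RNNOptimizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = nn.LSTMCell(DIM + 1, HIDDEN)  # input: last query and its observed value
        self.head = nn.Linear(HIDDEN, DIM)        # output: the next query point

    def rollout(self, f, horizon=HORIZON):
        # Unroll the learned optimizer on a black-box function f.
        h = torch.zeros(1, HIDDEN)
        c = torch.zeros(1, HIDDEN)
        x = torch.zeros(1, DIM)                   # first query at the origin
        loss = 0.0
        for _ in range(horizon):
            y = f(x)                              # observe f(x)
            h, c = self.cell(torch.cat([x, y], dim=1), (h, c))
            x = self.head(h)                      # propose the next query
            loss = loss + y.sum()                 # meta-loss: sum of observed values
        return loss

def sample_synthetic_function():
    # Random quadratic standing in for the paper's GP-sampled training functions.
    center = torch.randn(1, DIM)
    return lambda x: ((x - center) ** 2).sum(dim=1, keepdim=True)

opt_net = RNNOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
for step in range(1000):                          # meta-training by gradient descent
    meta_opt.zero_grad()
    loss = opt_net.rollout(sample_synthetic_function())
    loss.backward()                               # backprop through the unrolled trajectory
    meta_opt.step()

Note that gradients are needed only during meta-training, which is why the training functions are chosen to be differentiable; at test time the trained network consumes only function evaluations, so it can be applied to derivative-free black-box objectives.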
