Function-space Parameterization of Neural Networks for Sequential Learning
ICLR 2024
Abstract
Sequential learning paradigms pose challenges for gradient-based deep
learning due to difficulties incorporating new data and retaining prior
knowledge. While Gaussian processes elegantly tackle these problems, they
struggle with scalability and handling rich inputs, such as images. To address
these issues, we introduce a technique that converts neural networks from weight space to function space through a dual parameterization. Our
parameterization offers: (i) a way to scale function-space methods to large
data sets via sparsification, (ii) retention of prior knowledge when access to
past data is limited, and (iii) a mechanism to incorporate new data without
retraining. Our experiments demonstrate that we can retain knowledge in
continual learning and incorporate new data efficiently. We further show its
strengths in uncertainty quantification and guiding exploration in model-based
RL. Further information and code are available on the project website.
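To make the weight-space-to-function-space idea concrete, below is a minimal sketch (not the authors' implementation, which is linked from the project website): it treats a trained scalar-output MLP as a Gaussian process whose mean function is the network itself and whose covariance is the empirical neural tangent kernel of the linearized model, then incorporates a new data point by GP conditioning rather than gradient retraining. The Gaussian-likelihood assumption and all names here (`init_mlp`, `ntk`, `prior_prec`) are illustrative; the paper's dual parameterization additionally handles general likelihoods and scales via sparsification with inducing points.

```python
# Minimal sketch, assuming a trained scalar-output MLP and a Gaussian
# likelihood. Linearize the network, use the empirical NTK as a GP kernel,
# and fold in new data by conditioning in function space (no retraining).
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(1, 32, 1)):
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def mlp(params, x):
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).squeeze(-1)

def flat_jacobian(params, X):
    # Jacobian of the scalar output w.r.t. all weights, flattened per example.
    def f_single(p, x):
        return mlp(p, x[None])[0]
    jac = jax.vmap(lambda x: jax.grad(f_single)(params, x))(X)
    leaves = jax.tree_util.tree_leaves(jac)
    return jnp.concatenate([l.reshape(X.shape[0], -1) for l in leaves], axis=1)

def ntk(params, X1, X2, prior_prec=1.0):
    # Empirical NTK of the linearized network, scaled by the prior precision.
    return flat_jacobian(params, X1) @ flat_jacobian(params, X2).T / prior_prec

key = jax.random.PRNGKey(0)
params = init_mlp(key)                    # stands in for a *trained* network
X = jnp.linspace(-2.0, 2.0, 20)[:, None]  # past training inputs
y = jnp.sin(3 * X).squeeze(-1)
noise = 0.1

# Function-space posterior mean of the linearized model at test points:
# GP regression on the residuals y - f(X), with the network as mean function.
Xs = jnp.linspace(-3.0, 3.0, 50)[:, None]
Kxx = ntk(params, X, X) + noise * jnp.eye(X.shape[0])
alpha = jnp.linalg.solve(Kxx, y - mlp(params, X))  # representer (dual) weights
mean = mlp(params, Xs) + ntk(params, Xs, X) @ alpha

# New data arrives: re-condition in function space instead of retraining.
X_new = jnp.concatenate([X, jnp.array([[2.5]])])
y_new = jnp.concatenate([y, jnp.array([0.3])])
Kxx2 = ntk(params, X_new, X_new) + noise * jnp.eye(X_new.shape[0])
alpha2 = jnp.linalg.solve(Kxx2, y_new - mlp(params, X_new))
mean2 = mlp(params, Xs) + ntk(params, Xs, X_new) @ alpha2
print(mean.shape, mean2.shape)
```

Note the design point this illustrates: once the model is expressed in function space, updating on new data is a linear-algebra operation on kernel matrices, which is what enables incorporating data without retraining; the exact NTK solve shown here is cubic in the data size, which is why the paper sparsifies with inducing points.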
Keywords
Neural Networks, Bayesian deep learning, deep learning, Gaussian processes, sequential learning