Universal Neural Functionals
CoRR (2024)
Abstract
A challenging problem in many modern machine learning tasks is to process
weight-space features, i.e., to transform or extract information from the
weights and gradients of a neural network. Recent works have developed
promising weight-space models that are equivariant to the permutation
symmetries of simple feedforward networks. However, they are not applicable to
general architectures, since the permutation symmetries of a weight space can
be complicated by recurrence or residual connections. This work proposes an
algorithm that automatically constructs permutation equivariant models, which
we refer to as universal neural functionals (UNFs), for any weight space. Among
other applications, we demonstrate how UNFs can be substituted into existing
learned optimizer designs, and find promising improvements over prior methods
when optimizing small image classifiers and language models. Our results
suggest that learned optimizers can benefit from considering the (symmetry)
structure of the weight space they optimize. We open-source our library for
constructing UNFs at
https://github.com/AllanYangZhou/universal_neural_functional.
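The permutation symmetry mentioned above can be illustrated with a minimal NumPy sketch (a hypothetical toy example, independent of the authors' UNF library): permuting the hidden units of a two-layer MLP, i.e. mapping the weights (W1, b1, W2) to (P W1, P b1, W2 Pᵀ) for a permutation matrix P, leaves the computed function unchanged. Equivariant weight-space models must respect exactly this kind of symmetry.

```python
import numpy as np

# Toy two-layer MLP: f(x) = W2 @ relu(W1 @ x + b1) + b2.
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 3, 5, 2
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)
x = rng.normal(size=d_in)

def mlp(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Random permutation matrix acting on the hidden dimension.
P = np.eye(d_hidden)[rng.permutation(d_hidden)]

y = mlp(W1, b1, W2, b2, x)
y_perm = mlp(P @ W1, P @ b1, W2 @ P.T, b2, x)

# Permuting hidden units (and un-permuting in the next layer)
# does not change the network's output.
assert np.allclose(y, y_perm)
```

In deeper or more complex architectures (with recurrence or residual connections, as the abstract notes), the admissible permutations couple across layers, which is why constructing equivariant models for arbitrary weight spaces is nontrivial.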