Flexible infinite-width graph convolutional networks and the importance of representation learning
CoRR (2024)
Abstract
A common theoretical approach to understanding neural networks is to take an infinite-width limit, at which point the outputs become Gaussian process (GP) distributed. This is known as a neural network Gaussian process (NNGP). However, the NNGP kernel is fixed, and tunable only through a small number of hyperparameters, eliminating any possibility of representation learning. This contrasts with finite-width NNs, which are often believed to perform well precisely because they are able to learn representations. Thus, in simplifying NNs to make them theoretically tractable, NNGPs may eliminate precisely what makes them work well (representation learning). This motivated us to ask whether representation learning is necessary in a range of graph classification tasks. We develop a precise tool for answering this question, the graph convolutional deep kernel machine. This is very similar to an NNGP, in that it is an infinite-width limit and uses kernels, but it comes with a "knob" to control the amount of representation learning. We found that representation learning is necessary (in the sense that it gives dramatic performance improvements) in graph classification tasks and heterophilous node classification tasks, but not in homophilous node classification tasks.
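The claim that the NNGP kernel is fixed up to a handful of hyperparameters can be made concrete. The sketch below is our own illustration, not code from the paper: it uses a fully connected ReLU network rather than a graph convolutional one, and the function name and default values are assumptions. It computes the standard NNGP kernel recursion via the Cho and Saul arc-cosine closed form; note that nothing in it adapts to labels, so once the depth and the two variances are chosen, the kernel is completely determined.

```python
import numpy as np

def nngp_relu_kernel(X, depth=3, sigma_w2=1.0, sigma_b2=0.1):
    """NNGP kernel of an infinitely wide fully connected ReLU network.
    Hypothetical illustration: `depth`, `sigma_w2` (weight variance) and
    `sigma_b2` (bias variance) are the only tunable quantities; the kernel
    is otherwise fixed, with no representation learning anywhere."""
    n, d = X.shape
    # Input-layer kernel: scaled Gram matrix of the inputs plus bias variance.
    K = sigma_b2 + sigma_w2 * (X @ X.T) / d
    for _ in range(depth):
        std = np.sqrt(np.diag(K))
        outer = np.outer(std, std)
        cos_t = np.clip(K / outer, -1.0, 1.0)  # correlations, clipped for safety
        theta = np.arccos(cos_t)
        # Closed form for E[relu(u) relu(v)] with (u, v) jointly Gaussian
        # under covariance K (the degree-1 arc-cosine kernel).
        exp_phi = outer * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
        K = sigma_b2 + sigma_w2 * exp_phi
    return K

# The resulting K can be plugged directly into GP regression or
# classification; the data X only ever enters through fixed inner products.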
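For intuition about the "knob", the deep kernel machine literature this paper builds on optimises per-layer Gram matrices $G_\ell$ under an objective of roughly the following form. This is a hedged sketch based on earlier DKM work; the notation and the exact objective for the graph convolutional variant are our assumptions, not taken from this abstract.

```latex
\mathcal{L}(G_1, \dots, G_L)
  = \log p\!\left(Y \mid G_L\right)
  - \sum_{\ell=1}^{L} \nu_\ell \,
    D_{\mathrm{KL}}\!\left(
      \mathcal{N}(0, G_\ell) \,\middle\|\, \mathcal{N}\!\bigl(0, K(G_{\ell-1})\bigr)
    \right)
```

Here $K(\cdot)$ denotes the kernel map of the corresponding NNGP layer. As every $\nu_\ell \to \infty$, the KL terms force $G_\ell = K(G_{\ell-1})$ and the fixed NNGP kernel is recovered; at finite $\nu_\ell$ the Gram matrices can move away from the prior kernel to fit the data, which is the representation learning the knob controls.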