You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
arXiv (2022)
Abstract
Recent works have impressively demonstrated that there exists a subnetwork in
randomly initialized convolutional neural networks (CNNs) that can match the
performance of the fully trained dense networks at initialization, without any
optimization of the weights of the network (i.e., untrained networks). However,
the presence of such untrained subnetworks in graph neural networks (GNNs)
remains mysterious. In this paper, we carry out a first-of-its-kind
exploration of discovering matching untrained GNNs. With sparsity as the core
tool, we can find untrained sparse subnetworks at initialization
that match the performance of fully trained dense GNNs. Besides
this already encouraging finding of comparable performance, we show that the
found untrained subnetworks can substantially mitigate the GNN over-smoothing
problem, hence becoming a powerful tool to enable deeper GNNs without bells and
whistles. We also observe that such sparse untrained subnetworks have appealing
performance in out-of-distribution detection and robustness to input
perturbations. We evaluate our method across widely used GNN architectures on
various popular datasets including the Open Graph Benchmark (OGB).
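The abstract does not spell out the search procedure, but the standard recipe for finding matching subnetworks without weight training is score-based mask search over frozen, randomly initialized weights (in the style of edge-popup, Ramanujan et al. 2020). The following is a minimal PyTorch sketch of that idea applied to a GCN layer; `GetSubnet`, `MaskedGCNLayer`, the toy graph, and the 50% sparsity level are illustrative assumptions, not the authors' implementation.

```python
# Sketch: find an untrained subnetwork in a GCN by learning per-weight
# scores while the weights themselves stay frozen at initialization.
# Names and hyperparameters below are hypothetical, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GetSubnet(torch.autograd.Function):
    """Binary mask keeping the top (1 - sparsity) fraction of weights
    by score magnitude; gradients pass straight through to the scores."""

    @staticmethod
    def forward(ctx, scores, sparsity):
        mask = torch.zeros_like(scores)
        k = int((1.0 - sparsity) * scores.numel())
        _, idx = scores.view(-1).abs().topk(k)
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradient w.r.t. scores is unchanged.
        return grad_output, None


class MaskedGCNLayer(nn.Module):
    """GCN layer whose weights are never trained; only the per-weight
    scores (and hence the mask) are optimized."""

    def __init__(self, in_dim, out_dim, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim),
                                   requires_grad=False)  # frozen at init
        nn.init.kaiming_uniform_(self.weight)
        self.scores = nn.Parameter(1e-2 * torch.randn(in_dim, out_dim))
        self.sparsity = sparsity

    def forward(self, x, adj_norm):
        mask = GetSubnet.apply(self.scores, self.sparsity)
        # Standard GCN propagation with the masked random weights.
        return adj_norm @ (x @ (self.weight * mask))


# Toy usage: two-layer GCN on a random graph; only scores are trained.
torch.manual_seed(0)
n, d, c = 6, 8, 3
adj = (torch.rand(n, n) < 0.3).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()   # symmetrize + self-loops
deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]

x, y = torch.randn(n, d), torch.randint(0, c, (n,))
layer1, layer2 = MaskedGCNLayer(d, 16), MaskedGCNLayer(16, c)
optimizer = torch.optim.Adam([layer1.scores, layer2.scores], lr=1e-2)

for _ in range(100):
    logits = layer2(F.relu(layer1(x, adj_norm)), adj_norm)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design point this sketch illustrates: the network the paper evaluates is "untrained" in the sense that its weights never receive a gradient update; all learning happens in the discrete mask that selects a sparse subnetwork from the random initialization.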
Keywords
better graph neural networks, neural networks, training weights