Homomorphism Counts for Graph Neural Networks: All About That Basis
CoRR (2024)
Abstract
Graph neural networks are architectures for learning invariant functions over
graphs. A large body of work has investigated the properties of graph neural
networks and identified several limitations, particularly pertaining to their
expressive power. Their inability to count certain patterns (e.g., cycles) in a
graph lies at the heart of such limitations, since many functions to be learned
rely on the ability to count such patterns. Two prominent paradigms aim to
address this limitation by enriching the graph features with subgraph or
homomorphism pattern counts. In this work, we show that both of these
approaches are sub-optimal in a certain sense and argue for a more fine-grained
approach, which incorporates the homomorphism counts of all structures in the
"basis" of the target pattern. This yields strictly more expressive
architectures without incurring any additional overhead in terms of
computational complexity compared to existing approaches. We prove a series of
theoretical results on node-level and graph-level motif parameters and
empirically validate them on standard benchmark datasets.
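The following is a minimal, brute-force sketch (not the paper's implementation) of the two notions the abstract contrasts: a homomorphism count of a pattern into a graph, and how a pattern's "basis" links subgraph counts to homomorphism counts. The function and variable names (hom_count, graph_adj) are illustrative assumptions; the triangle example works because the spasm (basis) of the triangle contains only the triangle itself.

```python
from itertools import product

def hom_count(pattern_edges, graph_adj):
    """Count homomorphisms from the pattern into the graph by brute force.

    pattern_edges: list of (u, v) pairs over pattern vertices 0..k-1
    graph_adj: dict mapping each graph vertex to the set of its neighbours
    """
    k = 1 + max(max(u, v) for u, v in pattern_edges)
    vertices = list(graph_adj)
    count = 0
    for image in product(vertices, repeat=k):        # every map V(F) -> V(G)
        # an edge-preserving map is a homomorphism
        if all(image[v] in graph_adj[image[u]] for u, v in pattern_edges):
            count += 1
    return count

# Toy graph: a triangle on {0, 1, 2} with a pendant vertex 3 (undirected,
# adjacency stored symmetrically).
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

triangle = [(0, 1), (1, 2), (2, 0)]
hom_tri = hom_count(triangle, G)
# Since the basis of the triangle is just the triangle, the subgraph count is
# hom(K3, G) / |Aut(K3)| = hom(K3, G) / 6.
print(hom_tri, hom_tri // 6)   # -> 6 homomorphisms, 1 triangle subgraph
```

For patterns such as longer cycles the basis contains several smaller homomorphic images, and the subgraph count becomes a fixed linear combination of their homomorphism counts, which is the fine-grained feature set the paper advocates.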