Topology-Informed Graph Transformer
CoRR (2024)
Abstract
Transformers have revolutionized performance in Natural Language Processing
and Vision, paving the way for their integration with Graph Neural Networks
(GNNs). One key challenge in enhancing graph transformers is strengthening the
discriminative power of distinguishing isomorphisms of graphs, which plays a
crucial role in boosting their predictive performances. To address this
challenge, we introduce 'Topology-Informed Graph Transformer (TIGT)', a novel
transformer enhancing both discriminative power in detecting graph isomorphisms
and the overall performance of Graph Transformers. TIGT consists of four
components: a topological positional embedding layer using non-isomorphic
universal covers based on cyclic subgraphs to ensure unique graph
representations; a dual-path message-passing layer to explicitly encode
topological characteristics throughout the encoder layers; a global attention
mechanism; and a graph information layer to recalibrate channel-wise graph
features for better feature representation. TIGT outperforms previous Graph
Transformers in classifying synthetic datasets aimed at distinguishing
isomorphism classes of graphs. Additionally, mathematical analysis and
empirical evaluations highlight our model's competitive edge over
state-of-the-art Graph Transformers across various benchmark datasets.
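The dual-path idea in the abstract can be illustrated with a minimal conceptual sketch: one path aggregates node features over the ordinary adjacency, while a second path aggregates over a cycle-based adjacency that encodes topological structure. The shapes, the all-pairs cycle adjacency, the weight matrices, and the additive combine rule below are illustrative assumptions, not the TIGT authors' implementation.

```python
import numpy as np

def dual_path_layer(X, A, A_cyc, W_adj, W_cyc):
    """Sketch of a dual-path message-passing layer (assumed form).

    X:     (n, d) node features
    A:     (n, n) standard adjacency matrix
    A_cyc: (n, n) hypothetical cycle-based adjacency (nodes linked if
           they share a cyclic subgraph)
    W_*:   (d, d) learnable weight matrices (random here)
    """
    h_adj = A @ X @ W_adj      # path 1: standard neighborhood aggregation
    h_cyc = A_cyc @ X @ W_cyc  # path 2: topological (cycle) aggregation
    return np.maximum(h_adj + h_cyc, 0.0)  # combine paths, ReLU nonlinearity

# Toy example: a 4-node cycle graph, where all nodes lie on one cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_cyc = np.ones((4, 4)) - np.eye(4)  # every pair shares the cycle (assumption)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
W_adj = rng.standard_normal((8, 8))
W_cyc = rng.standard_normal((8, 8))

H = dual_path_layer(X, A, A_cyc, W_adj, W_cyc)
print(H.shape)  # (4, 8)
```

In the full model, such a layer would sit alongside the global attention mechanism inside each encoder block; this sketch only shows how two aggregation paths over different adjacencies can be fused per layer.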