SGHormer: An Energy-Saving Graph Transformer Driven by Spikes
arXiv (2024)
Abstract
Graph Transformers (GTs), with their powerful representation learning ability, have achieved great success across a wide range of graph tasks. However, the outstanding performance of GTs comes at the cost of high energy consumption and computational overhead. The complex architecture and the quadratic complexity of attention calculation in the vanilla Transformer seriously hinder its scalability to large-scale graph data. Although existing methods have made strides in simplifying the combination of blocks or the attention-learning paradigm to improve the efficiency of GTs, energy-saving solutions originating from biologically plausible structures are rarely considered when constructing GT frameworks. To this end, we propose a new spiking-based graph transformer (SGHormer). It turns full-precision embeddings into sparse, binarized spikes to reduce memory and computational costs. The spiking graph self-attention and spiking rectify blocks in SGHormer explicitly capture global structural information and recover the expressive power of the spiking embeddings, respectively. In experiments, SGHormer achieves performance comparable to other full-precision GTs at extremely low computational energy cost. The results show that SGHormer makes remarkable progress in the field of low-energy GTs.
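The abstract does not give implementation details, but the core idea of computing self-attention from binarized spike embeddings can be illustrated with a minimal sketch. The rate-coding scheme, the threshold, the time-step count T, and all layer names below are illustrative assumptions rather than the authors' actual SGHormer blocks.

```python
# Minimal sketch (not the authors' code): full-precision node embeddings are
# binarized into spikes over T time steps, and attention is computed from the
# binary spike tensors, so the dominant matrix products involve only {0, 1}.
import torch
import torch.nn as nn


def to_spikes(x: torch.Tensor, T: int = 4) -> torch.Tensor:
    """Rate-code a float tensor into T binary spike maps (assumed coding scheme)."""
    rates = torch.sigmoid(x)                            # firing probabilities in (0, 1)
    return (torch.rand(T, *x.shape) < rates).float()    # [T, N, D] binary spikes


class SpikingSelfAttentionSketch(nn.Module):
    """Illustrative spike-driven self-attention over node embeddings [N, D]."""

    def __init__(self, dim: int, T: int = 4):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.T = T
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = to_spikes(self.q(x), self.T)                 # [T, N, D] binary
        k = to_spikes(self.k(x), self.T)
        v = to_spikes(self.v(x), self.T)
        # Spike-spike products: scores come from binary tensors, so the
        # multiplications reduce to masked additions in principle.
        attn = (q @ k.transpose(-2, -1)) * self.scale    # [T, N, N]
        out = attn @ v                                   # [T, N, D]
        return out.mean(dim=0)                           # average over time steps


if __name__ == "__main__":
    nodes = torch.randn(8, 16)                           # 8 nodes, 16-dim embeddings
    print(SpikingSelfAttentionSketch(16)(nodes).shape)   # torch.Size([8, 16])
```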