Prompt Learning on Temporal Interaction Graphs
CoRR (2024)
Abstract
Temporal Interaction Graphs (TIGs) are widely utilized to represent
real-world systems. To facilitate representation learning on TIGs, researchers
have proposed a series of TIG models. However, these models are still facing
two tough gaps between the pre-training and downstream predictions in their
“pre-train, predict” training paradigm. First, the temporal discrepancy
between the pre-training and inference data severely undermines the models'
applicability in distant future predictions on the dynamically evolving data.
Second, the semantic divergence between pretext and downstream tasks hinders
their practical applications, as they struggle to align with their learning and
prediction capabilities across application scenarios.
Recently, the “pre-train, prompt” paradigm has emerged as a lightweight
mechanism for model generalization. Applying this paradigm is a potential
solution to solve the aforementioned challenges. However, the adaptation of
this paradigm to TIGs is not straightforward. The application of prompting in
static graph contexts falls short in temporal settings due to a lack of
consideration for time-sensitive dynamics and a deficiency in expressive power.
To address this issue, we introduce Temporal Interaction Graph Prompting
(TIGPrompt), a versatile framework that seamlessly integrates with TIG models,
bridging both the temporal and semantic gaps. In detail, we propose a temporal
prompt generator that offers temporally-aware prompts for different tasks. These
prompts stand out for their minimalistic design, relying solely on tuning
the prompt generator with very little supervision data. To cater to varying
computational resource demands, we further propose an extended “pre-train,
prompt-based fine-tune” paradigm, offering greater flexibility. Through
extensive experiments, TIGPrompt demonstrates SOTA performance and
remarkable efficiency advantages.
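
To make the idea of a temporal prompt generator concrete, the sketch below shows one plausible way such a module could work: a small, trainable network maps interaction timestamps to prompt vectors and fuses them with embeddings from a frozen, pre-trained TIG backbone, so only the generator is tuned. This is a hedged illustration, not the paper's actual implementation; the class name `TemporalPromptGenerator`, the sinusoidal time encoding, and the additive fusion are all assumptions.

```python
import torch
import torch.nn as nn

class TemporalPromptGenerator(nn.Module):
    """Hypothetical sketch: map an interaction timestamp to a prompt vector
    and fuse it with the node embedding from a frozen pre-trained TIG encoder."""

    def __init__(self, embed_dim: int, time_dim: int = 32):
        super().__init__()
        # Learnable frequencies for a sinusoidal time encoding (an assumption,
        # mirroring common TIG time encoders).
        self.time_freq = nn.Parameter(torch.randn(time_dim))
        # Small projection turning the time encoding into a prompt vector.
        self.prompt_proj = nn.Linear(time_dim, embed_dim)

    def forward(self, node_embed: torch.Tensor, timestamps: torch.Tensor) -> torch.Tensor:
        # timestamps: [batch], node_embed: [batch, embed_dim]
        time_enc = torch.cos(timestamps.unsqueeze(-1) * self.time_freq)  # [batch, time_dim]
        prompt = self.prompt_proj(time_enc)                              # [batch, embed_dim]
        # Additive fusion with the frozen backbone's embedding (an assumption;
        # the paper may compose prompts differently).
        return node_embed + prompt


# Usage sketch: only the prompt generator is tuned; the pre-trained TIG
# backbone stays frozen, matching the "pre-train, prompt" idea.
backbone_dim = 128
gen = TemporalPromptGenerator(embed_dim=backbone_dim)
frozen_embed = torch.randn(4, backbone_dim)   # stand-in for frozen backbone output
ts = torch.tensor([10.0, 25.0, 40.0, 80.0])   # interaction timestamps
prompted = gen(frozen_embed, ts)              # fed to a downstream prediction head
```

Under this framing, the extended “pre-train, prompt-based fine-tune” paradigm would simply unfreeze part of the backbone alongside the generator when more compute and labels are available.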