On the Use of Aggregation Functions for Semi-Supervised Network Embedding

IJCNN (2023)

Abstract
Network embedding methods map nodes into vector representations, aiming to preserve important properties of the relationships between nodes through similarities in a latent vector-space model. Among network embedding methods, Graph Neural Networks (GNNs) based on aggregation functions have received significant attention. In general, the embedding of a node is generated recursively by aggregating the embeddings of its neighboring nodes. Aggregation is a crucial step in these methods, and different aggregation functions have been proposed, from simple averaging and max-pooling operations to complex functions based on attention mechanisms. However, we note that there is a lack of studies comparing aggregation functions, especially in more practical real-world scenarios involving semi-supervised tasks. This paper introduces a methodology to evaluate different aggregation functions for semi-supervised learning through a model selection strategy guided by a statistical significance analysis framework. We show that Transformer-based aggregation functions are competitive in semi-supervised scenarios and obtain relevant results in different domains. Furthermore, we also discuss scenarios where "less is more", mainly when there are constraints on the availability of computational resources.
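The aggregation step described in the abstract can be illustrated with a minimal sketch. The toy graph, the embedding matrix, and the `aggregate` helper below are all hypothetical, not from the paper; the sketch only shows how one GNN layer would combine neighbor embeddings with two of the simple aggregation functions the abstract mentions (averaging and max pooling):

```python
import numpy as np

# Hypothetical toy graph: 4 nodes with 3-dimensional embeddings.
H = np.array([
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 0.0],
    [2.0, 2.0, 2.0],
    [1.0, 1.0, 0.0],
])
# Adjacency as a neighbor list (node -> list of neighbor indices).
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

def aggregate(H, neighbors, fn):
    """One aggregation step: combine each node's neighbor embeddings with fn.

    In a full GNN this would be applied recursively, layer by layer,
    typically followed by a learned transformation and a nonlinearity.
    """
    return np.stack([fn(H[idx], axis=0) for idx in neighbors.values()])

mean_agg = aggregate(H, neighbors, np.mean)  # simple averaging
max_agg = aggregate(H, neighbors, np.max)    # max pooling
```

Attention-based aggregators (including the Transformer-based functions evaluated in the paper) replace the fixed `fn` with learned, data-dependent weights over the neighbors, at a higher computational cost — which is where the "less is more" trade-off discussed in the abstract arises.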
Keywords
Network Embedding, Graph Neural Networks, Semi-Supervised Learning, Aggregation Functions