Position-Aware Active Learning for Multi-Modal Entity Alignment

Baogui Xu, Yafei Lu, Bing Su, Xiaoran Yan

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

Abstract
Multi-Modal Entity Alignment (MMEA) aims to identify equivalent entities across different knowledge graphs by utilizing auxiliary modalities such as images. While MMEA has made significant progress, prevailing methods still rely heavily on abundant annotated entity pairs. Active learning seeks to alleviate the labeling burden or enhance model efficiency within a fixed labeling budget through careful sample selection. However, active learning for entity alignment in multimodal scenarios remains unexplored. In our view, it is crucial that data selected from different modalities complement each other without redundancy or overlap; otherwise, the obtained data may waste the labeling budget. To achieve this goal, we propose a novel acquisition function that leverages the ability of Graph Neural Networks (GNNs) to aggregate information over multiple hops, prioritizing data distant from the selections made for other modalities. Moreover, existing approaches augment the training data by selecting entity pairs whose similarities in other modalities exceed a predefined threshold, but this strategy fails to fully exploit the available similarity information among entities. Performance can be further improved by integrating the similarity matrices from different modalities. Consequently, our method achieves considerable improvements over existing active learning methods for entity alignment, as demonstrated by the experiments.
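The abstract describes two mechanisms: an acquisition function that favors candidates far from the entities already selected for other modalities, and a pseudo-labeling augmentation step that fuses per-modality similarity matrices. The sketch below is only an illustrative numpy rendering of these two ideas, not the paper's actual formulation: the embedding-distance proxy for multi-hop "position", the fusion weights, the mutual-nearest-neighbor check, and the threshold value are all assumptions made for demonstration.

```python
import numpy as np

def acquisition_scores(cand_emb, selected_other_modalities, uncertainty):
    """Score candidates: higher when the model is uncertain AND the candidate
    lies far from entities already selected for other modalities (an
    embedding-distance stand-in for the paper's multi-hop GNN position)."""
    scores = np.asarray(uncertainty, dtype=float).copy()
    for sel in selected_other_modalities:
        if len(sel) == 0:
            continue
        # distance of each candidate to its nearest already-selected entity
        d = np.linalg.norm(cand_emb[:, None, :] - sel[None, :, :], axis=-1)
        scores += d.min(axis=1)  # reward being far from other modalities' picks
    return scores

def fused_augmentation_pairs(sim_mats, weights, threshold=0.8):
    """Combine per-modality similarity matrices with fixed weights and keep
    mutually-nearest pairs whose fused similarity clears a threshold."""
    fused = sum(w * s for w, s in zip(weights, sim_mats))
    pairs = []
    for i in range(fused.shape[0]):
        j = int(fused[i].argmax())
        if int(fused[:, j].argmax()) == i and fused[i, j] >= threshold:
            pairs.append((i, j, float(fused[i, j])))  # pseudo-labeled pair
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = rng.normal(size=(5, 4))          # candidate entity embeddings
    picked_for_image = rng.normal(size=(2, 4))    # selections from another modality
    uncertainty = rng.uniform(size=5)
    print(acquisition_scores(candidates, [picked_for_image], uncertainty))
    sims = [rng.uniform(size=(5, 5)) for _ in range(2)]
    print(fused_augmentation_pairs(sims, weights=[0.6, 0.4], threshold=0.7))
```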
Key words
Active learning, multi-modal knowledge graph, entity alignment, graph neural network