Multi-Channel Graph Neural Network for Entity Alignment

Annual Meeting of the Association for Computational Linguistics (ACL), 2019.

Keywords: exclusive entity, network model, KG attention, multi-channel graph neural network, Knowledge Graphs, and 14 more
Weibo:
We propose a novel Multi-channel Graph Neural Network model, MuGNN, which learns alignment-oriented knowledge graph embeddings for entity alignment

Abstract:

Entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments. In this paper, we propose a novel Multi-channel Graph Neural Network model (MuGNN) to learn alignment-oriented knowledge graph (KG) embeddings by robustly encoding two KGs via multiple channels. Each channel encodes KGs via differen...

Introduction
  • Knowledge Graphs (KGs) store world knowledge in the form of directed graphs, where nodes denote entities and edges denote their relations
  • Since the concept was proposed, many KGs have been constructed (e.g., YAGO (Rebele et al., 2016)) to provide structural knowledge for different applications and languages.
  • These KGs usually contain complementary content, attracting researchers to integrate them into a unified KG, which would benefit many knowledge-driven tasks, such as information extraction (Cao et al., 2018a) and recommendation (Wang et al., 2018a).
Highlights
  • Knowledge Graphs (KGs) store world knowledge in the form of directed graphs, where nodes denote entities and edges denote their relations
  • We propose to perform knowledge graph inference and alignment jointly, so that the heterogeneity of knowledge graphs is explicitly reconciled through completion by rule inference and transfer, and through pruning via cross-KG attention
  • We further investigate the key components of MuGNN and analyze how the knowledge inference and transfer mechanisms contribute to KG alignment
  • We propose a novel Multi-channel Graph Neural Network model, MuGNN, which learns alignment-oriented knowledge graph embeddings for entity alignment
  • It alleviates the negative impacts caused by structural heterogeneity and limited seed alignments
  • Extensive experiments on five publicly available datasets and further analysis demonstrate the effectiveness of our method
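The completion step highlighted above, transferring rules mined in one KG to the other via seed alignments, can be sketched roughly as follows. This is a minimal illustration assuming 2-atom Horn rules and a relation-alignment dictionary derived from the seeds; the function names and rule format are hypothetical, not the paper's exact data structures:

```python
def transfer_rule(rule, rel_align):
    """Map a Horn rule mined in KG1 onto KG2's relation vocabulary.

    rule: (body_relations, head_relation), e.g. (["born_in", "capital_of"], "nationality")
    rel_align: dict from KG1 relation names to KG2 relation names
               (derived from seed alignments; illustrative assumption).
    Returns the transferred rule, or None if any relation lacks a counterpart.
    """
    body, head = rule
    if head not in rel_align or any(r not in rel_align for r in body):
        return None  # rule cannot be transferred without aligned relations
    return ([rel_align[r] for r in body], rel_align[head])


def apply_rule(triples, rule):
    """Infer new triples wherever the 2-atom rule body is grounded:
    (x, r1, y) and (y, r2, z) together imply (x, head, z)."""
    (r1, r2), head = rule
    by_rel = {}
    for h, r, t in triples:
        by_rel.setdefault(r, []).append((h, t))
    inferred = set()
    for x, y in by_rel.get(r1, []):
        for y2, z in by_rel.get(r2, []):
            if y == y2:
                inferred.add((x, head, z))
    return inferred - set(triples)  # keep only genuinely new triples
```

For example, a rule "born_in ∧ capital_of ⇒ nationality" transferred into a second KG can complete that KG with triples its curators never added, which is exactly the heterogeneity-reducing effect the highlight describes.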
Methods
  • MTransE (Chen et al., 2017) trains independent embeddings for each knowledge graph with TransE, and pulls the entity pairs in seed alignments toward similar embeddings by minimizing their Euclidean distances.
  • JAPE (Sun et al., 2017) learns representations of entities and relations from different KGs in a unified embedding space.
  • AlignEA (Sun et al., 2018) swaps aligned entities in triples to calibrate the embeddings of the two KGs in a unified embedding space.
  • AlignEA is the current non-iterative state-of-the-art model
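MTransE's two objectives above can be condensed into a small sketch: a TransE score for triples within each KG, plus a calibration term over seed alignments. This is an illustrative NumPy rendering under assumed embedding dictionaries, not the authors' implementation:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility (Bordes et al., 2013): a triple (h, r, t) is
    plausible when t is close to h + r, i.e. when ||h + r - t|| is small."""
    return np.linalg.norm(h + r - t)

def alignment_loss(emb1, emb2, seeds):
    """MTransE-style calibration: sum of Euclidean distances between the
    embeddings of seed-aligned entity pairs; training minimizes this so
    aligned entities end up close in the shared space."""
    return sum(np.linalg.norm(emb1[e1] - emb2[e2]) for e1, e2 in seeds)
```

In training, both terms are minimized jointly: the TransE term keeps each KG's structure intact, while the alignment term stitches the two embedding spaces together through the seeds.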
Results
  • The reason is that the numbers of ground-rule triples for the French and English datasets are limited (Table 2): less than 1% of the oracle triples.
  • Although such strategies can be seen as a general enhancement for most alignment approaches (Sun et al., 2018), the authors focus on improving alignment performance without any external information and in a non-iterative way
Conclusion
  • The authors propose a novel Multi-channel Graph Neural Network model, MuGNN, which learns alignment-oriented KG embeddings for entity alignment.
  • MuGNN explicitly completes the KGs and prunes exclusive entities by using different relation weighting schemes, KG self-attention and cross-KG attention, showing robust graph encoding capability.
  • The authors are interested in introducing textual information about entities for alignment by considering word ambiguity (Cao et al., 2017b) and through cross-KG entity proximity (Cao et al., 2015)
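The pruning-by-attention idea in the conclusion, down-weighting edges whose relations have no counterpart in the other KG, can be illustrated with a small sketch. The cosine-similarity-plus-softmax form used here is an assumption for illustration; the paper's actual attention functions differ in detail:

```python
import numpy as np

def relation_weights(rel_emb, counterpart_rel_emb):
    """Weight each relation of one KG by its best cosine similarity to any
    relation of the counterpart KG, normalized with a softmax. Relations
    without a good counterpart (exclusive structure) get low weight, which
    softly prunes the edges they label."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    sims = np.array([max(cos(r, c) for c in counterpart_rel_emb) for r in rel_emb])
    w = np.exp(sims)
    return w / w.sum()
```

The soft weighting matters: a hard cut would discard entities that merely look exclusive under noisy embeddings, while attention lets the encoder keep them at reduced influence.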
Tables
  • Table1: Statistics of DBP15K and DWY100k
  • Table2: Statistics of KG inference and transfer
  • Table3: Overall performance
  • Table4: Examples of groundings of transferred rules
Related work
  • Merging different KGs into a unified one has attracted much attention since it benefits many knowledge-driven applications, such as information extraction (Cao et al., 2017a, 2018b), question answering (Zhang et al., 2015), and recommendation (Cao et al., 2019). Early approaches for entity alignment leverage various features to overcome the heterogeneity between KGs, such as machine translation and external lexicons (Suchanek et al., 2011; Wang et al., 2013). Following the success of KG representation learning, recent work embeds entities from different KGs into a low-dimensional vector space with the help of seed alignments (Chen et al., 2017). However, the limited seeds and structural differences have a strong negative impact on the quality of KG embeddings, leading to poor alignment performance. JAPE (Sun et al., 2017) and KDCoE (Chen et al., 2018) introduce attribute or description information to improve entity embeddings, while IPTransE (Zhu et al., 2017) and BootEA (Sun et al., 2018) iteratively enlarge the seed set by selecting predicted alignments with high confidence.

    Clearly, the above strategies can be seen as a general enhancement for most alignment approaches (Sun et al., 2018); we therefore focus on improving alignment performance without any external information and in a non-iterative way. Inspired by Wang et al. (2018b), which utilizes a Graph Convolutional Network (GCN) (Kipf and Welling, 2017) to encode entire KGs, we aim to reconcile the heterogeneity between KGs through completion and pruning, and to learn alignment-oriented KG embeddings by modeling structural features from different perspectives via multi-channel GNNs.
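The GCN encoder mentioned above (Kipf and Welling, 2017) propagates entity features over the KG's adjacency matrix; one layer can be sketched as follows. This is the generic GCN propagation rule, not MuGNN's multi-channel variant:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^{-1/2} A_hat D^{-1/2} H W),
    where A_hat = A + I adds self-loops and D is A_hat's diagonal degree
    matrix. Each entity's new feature mixes its neighbors' features."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU non-linearity
```

Stacking a few such layers lets alignment signal from seed entities spread to their multi-hop neighbors, which is why GCN-style encoders help under limited seeds.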
Funding
  • NExT++ research is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its IRC@SG Funding Initiative
Reference
  • Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
  • Yixin Cao, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018a. Neural collective entity linking. In COLING, pages 675–686.
  • Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, and Tiansi Dong. 2018b. Joint representation learning of cross-lingual words and entities via attentive distant supervision. In EMNLP.
  • Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017a. Bridge text and knowledge by learning multi-prototype entity mention embedding. In ACL.
  • Yixin Cao, Juanzi Li, Xiaofei Guo, Shuanhu Bai, Heng Ji, and Jie Tang. 2015. Name list only? Target entity disambiguation in short texts. In EMNLP.
  • Yixin Cao, Jiaxin Shi, Juanzi Li, Zhiyuan Liu, and Chengjiang Li. 2017b. On modeling sense relatedness in multi-prototype word embedding. In IJCNLP.
  • Yixin Cao, Xiang Wang, Xiangnan He, Tat-Seng Chua, et al. 2019. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. arXiv preprint arXiv:1902.06236.
  • Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In IJCAI.
  • Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In IJCAI.
  • Thomas Rebele, Fabian Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, and Gerhard Weikum. 2016. YAGO: A multilingual knowledge base from Wikipedia, WordNet, and GeoNames. In ISWC.
  • Fabian M. Suchanek, Serge Abiteboul, and Pierre Senellart. 2011. PARIS: Probabilistic alignment of relations, instances, and schema. Proceedings of the VLDB Endowment.
  • Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute-preserving embedding. In ISWC.
  • Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In IJCAI.
  • Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In ICLR.
  • Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, and Tat-Seng Chua. 2018a. Explainable reasoning over knowledge graphs for recommendation. arXiv preprint arXiv:1811.04540.
  • Zhichun Wang, Juanzi Li, and Jie Tang. 2013. Boosting cross-lingual knowledge linking via concept annotation. In IJCAI.
  • Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018b. Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP.
  • Mengdi Zhang, Tao Huang, Yixin Cao, and Lei Hou. 2015. Target detection and knowledge learning for domain restricted question answering. In NLPCC.
  • Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In IJCAI.
  • John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research.
  • Luis Galarraga, Christina Teflioudi, Katja Hose, and Fabian M. Suchanek. 2015. Fast rule mining in ontological knowledge bases with AMIE+. The VLDB Journal.
  • Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2016. Jointly embedding knowledge graphs and logical rules. In ACL.
  • Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In ICLR.