Enhancing RDF Verbalization with Descriptive and Relational Knowledge

ACM Transactions on Asian and Low-Resource Language Information Processing (2023)

Abstract
RDF verbalization, which aims to generate natural language descriptions of a knowledge base, has received increasing interest. Transformer-based sequence-to-sequence models can achieve strong performance when equipped with pre-trained language models such as BART and T5. However, despite the general gains introduced by pre-training, performance on this task is still limited by the small scale of the training datasets. To address this problem, we propose two orthogonal strategies to enhance the representation learning of RDF triples, introducing two types of knowledge: descriptive knowledge and relational knowledge. Descriptive knowledge captures the semantic information of an element's own definition, while relational knowledge captures the semantic information learned from its structural context. We further combine the two types of knowledge to enhance representation learning. Experimental results on the WebNLG and SemEval-2010 datasets show that both types of knowledge improve model performance, and that their combination yields further improvements in most cases, providing new state-of-the-art results.
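For illustration, below is a minimal sketch of the baseline setup the abstract describes: RDF triples are linearized into a source string and fed to a pre-trained seq2seq model, with descriptive knowledge appended as short textual definitions. The tag format, the `definitions` lookup, and the choice of checkpoint are hypothetical assumptions for illustration, not the paper's exact mechanism.

```python
# Sketch: verbalizing RDF triples with a pre-trained seq2seq model,
# optionally enriching the input with descriptive knowledge (textual
# definitions). The <S>/<P>/<O>/<DEF> linearization tags and the
# `definitions` dict are illustrative assumptions, not the paper's method.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def linearize(triples):
    # Flatten (subject, predicate, object) triples into a tagged string.
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

def verbalize(triples, definitions=None):
    source = linearize(triples)
    if definitions:
        # Descriptive knowledge: append definitions of entities/relations so
        # the encoder also sees their self-contained semantic information.
        defs = " ".join(f"<DEF> {e}: {d}" for e, d in definitions.items())
        source = f"{source} {defs}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

triples = [("Alan_Turing", "birthPlace", "London")]
definitions = {"Alan_Turing": "British mathematician and computer scientist"}
print(verbalize(triples, definitions))
```

Relational knowledge, by contrast, would come from the structural context of the triples themselves (e.g., neighboring triples sharing an entity) rather than from appended definitions; how the paper injects it into the encoder is not specified in the abstract.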
Keywords
RDF Verbalization, Descriptive Knowledge, Relational Knowledge, Pretrained Language Models