Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction

Conference on Empirical Methods in Natural Language Processing (2020)

Abstract
The goal of open information extraction (OIE) is to extract facts from natural language text and to represent them as structured triples of the form (subject, relation, object). For example, given the sentence "Beethoven composed the Ode to Joy.", we are expected to extract the triple (Beethoven, composed, the Ode to Joy). In this work, we systematically compare different neural network architectures and training approaches, and improve the performance of the currently best models on the OIE16 benchmark (Stanovsky and Dagan, 2016) by 0.421 F1 score and 0.420 AUC-PR, respectively, in our experiments (i.e., by more than 200% in both cases). Furthermore, we show that appropriate problem and loss formulations often affect the performance more than the network architecture.
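To make the triple representation concrete, here is a minimal sketch (not from the paper) of the output format the abstract describes, using a hypothetical `Triple` container for the abstract's own example sentence:

```python
from dataclasses import dataclass

# Hypothetical container for the (subject, relation, object) facts that
# an OIE system is expected to produce; the class name and fields are
# illustrative assumptions, not the paper's actual data structures.
@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    object: str

# The abstract's example: the sentence below should yield this triple.
sentence = "Beethoven composed the Ode to Joy."
expected = Triple(subject="Beethoven",
                  relation="composed",
                  object="the Ode to Joy")
print(expected)
```

A real OIE system would of course predict the triple spans from the sentence rather than hard-code them; this sketch only fixes the target representation.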