Multigranularity Pruning Model for Subject Recognition Task under Knowledge Base Question Answering When General Models Fail

International Journal of Intelligent Systems (2023)

Abstract
In general knowledge base question answering (KBQA) models, subject recognition (SR) is usually a precondition for finding an answer, and a common approach is to employ a general named entity recognition (NER) model such as BERT-CRF to recognize the subject. However, previous research has usually ignored the difference between a NER task and an SR task, and a wrong entity recognized by the NER model will certainly lead to a wrong answer in the KBQA task, which is one bottleneck for KBQA performance. In this paper, a multigranularity pruning model (MGPM) is proposed to answer a question when general models fail to recognize a subject. In MGPM, the set of all possible subjects in the knowledge base (KB) is pruned successively by four multigranularity pruning submodels based on relation constraints (domain and tuple), string similarity, and semantic similarity. Experimental results show that our model is compatible with various KBQA models for both single-relation and complex question answering. The integrated MGPM model (with the BERT-CRF model) achieves an SR accuracy of 94.4% on the SimpleQuestions dataset, 68.6% on the WebQuestionsSP dataset, and 63.7% on the WebQuestions dataset, outperforming the original model by margins of 3.6%, 8.6%, and 5.3%, respectively.
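The successive-pruning idea described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function names, the use of `difflib` for string similarity, and the early-stopping rule are all assumptions, and the relation-constraint and semantic-similarity stages are only stubbed at a coarse level.

```python
# Hypothetical sketch of MGPM-style successive pruning of candidate subjects.
# Each stage shrinks the candidate set; later (finer-grained) stages run only
# when earlier stages leave more than one candidate.
from difflib import SequenceMatcher

def prune_by_relation(candidates, kb_relations, question_relation):
    # Relation-constraint stage: keep subjects whose KB relations
    # include the relation predicted for the question.
    return {s for s in candidates
            if question_relation in kb_relations.get(s, set())}

def prune_by_string_similarity(candidates, question, threshold=0.5):
    # String-similarity stage: keep subjects whose surface form
    # resembles the question text.
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return {s for s in candidates if sim(s, question) >= threshold}

def mgpm_prune(candidates, kb_relations, question, question_relation):
    # Apply the stages successively; stop early once at most one
    # candidate subject remains.
    stage1 = prune_by_relation(candidates, kb_relations, question_relation)
    if len(stage1) <= 1:
        return stage1
    return prune_by_string_similarity(stage1, question)
```

In this toy setting, a candidate like "Obama, Japan" would be discarded for the question "where was barack obama born" because it lacks the predicted `place_of_birth` relation, leaving "Barack Obama" as the recovered subject.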