Spider4SPARQL: A Complex Benchmark for Evaluating Knowledge Graph Question Answering Systems
CoRR (2023)
Abstract
With the recent spike in the number and availability of Large Language Models
(LLMs), it has become increasingly important to provide large and realistic
benchmarks for evaluating Knowledge Graph Question Answering (KGQA) systems. So
far, the majority of benchmarks have relied on pattern-based SPARQL query generation
approaches. The subsequent natural language (NL) question generation is
conducted through crowdsourcing or other automated methods, such as rule-based
paraphrasing or NL question templates. Although some of these datasets are of
considerable size, their pitfall lies in their pattern-based generation
approaches, which do not always generalize well to the vague and linguistically
diverse questions asked by humans in real-world contexts. In this paper, we
introduce Spider4SPARQL, a new SPARQL benchmark dataset featuring 9,693 previously existing, manually generated NL questions and 4,721 unique, novel SPARQL queries of varying complexity. In addition to the NL/SPARQL
pairs, we also provide their corresponding 166 knowledge graphs and ontologies,
which cover 138 different domains. Our complex benchmark enables novel ways of
evaluating the strengths and weaknesses of modern KGQA systems. We evaluate Spider4SPARQL with state-of-the-art KGQA systems as well as LLMs, which achieve only up to 45% execution accuracy, demonstrating that it is a
challenging benchmark for future research.
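
The 45% figure refers to execution accuracy: a predicted SPARQL query counts as correct if, when run against the corresponding knowledge graph, it returns the same result set as the gold query. As a minimal, hypothetical sketch of such a check (using rdflib in Python; the file name, example question, and queries are illustrative placeholders, not taken from the benchmark):

# Minimal sketch of an execution-accuracy check for one NL/SPARQL pair.
# Assumptions: rdflib is installed, the knowledge graph is available as a local
# RDF file, and the class IRI below is a placeholder, not from Spider4SPARQL.
from rdflib import Graph

def execution_match(kg_path: str, gold_query: str, predicted_query: str) -> bool:
    """Return True if both queries yield identical result sets on the KG."""
    g = Graph()
    g.parse(kg_path)  # serialization format is inferred from the file extension
    gold_rows = {tuple(row) for row in g.query(gold_query)}
    pred_rows = {tuple(row) for row in g.query(predicted_query)}
    return gold_rows == pred_rows

# Illustrative NL question: "How many singers are there?"
gold = "SELECT (COUNT(?s) AS ?n) WHERE { ?s a <http://example.org/Singer> . }"
pred = "SELECT (COUNT(DISTINCT ?s) AS ?n) WHERE { ?s a <http://example.org/Singer> . }"
# execution_match("concert_singer.ttl", gold, pred)  # True iff result sets match

Benchmark-level execution accuracy would then be the fraction of test pairs for which such a check succeeds.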
Keywords
Benchmark for Question Answering over Knowledge Graphs, Language Models, Performance Evaluation