It was the training data pruning too!

arXiv: Learning (2018)

Abstract
We study the current best model (KDG) for question answering on tabular data evaluated over the WikiTableQuestions dataset. Previous ablation studies performed against this model attributed the model's performance to certain aspects of its architecture. In this paper, we find that the model's performance also crucially depends on a certain pruning of the data used to train the model. Disabling the pruning step drops the accuracy of the model from 43.3% to 36.3%. The large impact on the performance of the KDG model suggests that the pruning may be a useful pre-processing step in training other semantic parsers as well.
Keywords
training data