Compositional Generalization via Semantic Tagging.

EMNLP(2021)

Abstract
Although neural sequence-to-sequence models have been successfully applied to semantic parsing, they struggle to perform well on query-based data splits that require \emph{compositional generalization}, the ability to systematically generalize to unseen compositions of seen components. Motivated by the compositionality explicitly built into traditional statistical semantic parsing, we propose a new decoding framework that preserves the expressivity and generality of sequence-to-sequence models while featuring explicit lexicon-style alignments and disentangled information processing. Specifically, we decompose decoding into two phases: an input utterance is first tagged with semantic symbols representing the meanings of its individual words, and then a sequence-to-sequence model predicts the final meaning representation conditioned on both the utterance and the predicted tag sequence. Experimental results on three semantic parsing datasets with query-based splits show that the proposed approach consistently improves compositional generalization of sequence-to-sequence models across different model architectures, domains, and semantic formalisms.
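The two-phase decoding described above can be sketched as follows. This is a minimal illustration, not the paper's actual system: the lexicon, the `O` null tag, and the word|tag interleaving used to condition the second phase are all assumptions made for the example, and the seq2seq model is stubbed out with a trivial function.

```python
def tag_utterance(tokens, lexicon):
    """Phase 1: tag each word with a semantic symbol representing its
    meaning; words with no lexical meaning get a null tag "O"
    (tag inventory is a hypothetical choice for this sketch)."""
    return [lexicon.get(tok, "O") for tok in tokens]

def decode(tokens, tags, seq2seq):
    """Phase 2: predict the meaning representation conditioned on both
    the utterance and the predicted tag sequence."""
    # One simple conditioning scheme (an assumption for illustration):
    # interleave word/tag pairs in the encoder input.
    encoder_input = [f"{w}|{t}" for w, t in zip(tokens, tags)]
    return seq2seq(encoder_input)

# Toy run with a hypothetical lexicon and a trivial stand-in "model".
lexicon = {"flights": "flight", "boston": "city", "denver": "city"}
tokens = "flights from boston to denver".split()
tags = tag_utterance(tokens, lexicon)
mr = decode(tokens, tags, seq2seq=lambda x: " ".join(x))
```

Keeping the tagging step separate gives the model explicit lexicon-style alignments, while the seq2seq phase retains the flexibility to compose the tagged meanings into the final representation.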
Keywords
generalization, tagging