SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU
arXiv (2023)
Abstract
Task-oriented dialogue (ToD) systems help users execute well-defined tasks
across a variety of domains (e.g., flight booking or food
ordering), with their Natural Language Understanding (NLU) components being
dedicated to the analysis of user utterances, predicting users' intents
(Intent Detection, ID) and extracting values for informational slots
(Value Extraction, VE). In most domains, labelled NLU data is
scarce, making sample-efficient learning, enabled by effective transfer
paradigms, paramount. In this work, we introduce SQATIN, a new framework for
dialogue NLU based on (i) instruction tuning and (ii) a question-answering-based
formulation of the ID and VE tasks. In evaluations on established NLU
benchmarks, SQATIN sets a new state of the art in dialogue NLU, substantially
surpassing the performance of current models based on standard fine-tuning
objectives in both in-domain training and cross-domain transfer. SQATIN yields
particularly large performance gains in cross-domain transfer, because our
QA-based instruction tuning leverages similarities between natural language
descriptions of classes (i.e., slots and intents) across domains.
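
To make the QA-based formulation concrete, below is a minimal sketch of how ID and VE can be cast as question answering over a user utterance with an instruction-tuned model. The prompt templates, candidate intents, slot questions, and the choice of Flan-T5 as backbone are illustrative assumptions for this sketch, not the authors' exact templates or model.

```python
# Sketch: intent detection (ID) as a yes/no question per candidate intent,
# and value extraction (VE) as one question per informational slot.
# Assumptions: Flan-T5 backbone and these prompt wordings are hypothetical.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def answer(question: str, utterance: str) -> str:
    """Run one QA-style query against the instruction-tuned model."""
    prompt = f"question: {question} context: {utterance}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

utterance = "Book me a flight from London to Paris on Friday."

# Intent detection: one yes/no question per natural language intent description.
for intent in ["book a flight", "order food"]:
    question = f"Is the user trying to {intent}?"
    print(f"ID  [{intent}] -> {answer(question, utterance)}")

# Value extraction: one question per slot, answered with the slot value.
for slot, question in [
    ("departure city", "What is the departure city?"),
    ("destination city", "What is the destination city?"),
]:
    print(f"VE  [{slot}] -> {answer(question, utterance)}")
```

Because both tasks are expressed through natural language descriptions of intents and slots rather than fixed label indices, the same prompts transfer across domains whose class descriptions are similar, which is the mechanism the abstract credits for the cross-domain gains.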