Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering

CoRR (2023)

Abstract
We train a language model (LM) to robustly answer multistep questions by generating and answering sub-questions. We propose Chain-of-Questions, a framework that trains a model to generate sub-questions and sub-answers one at a time by leveraging human-annotated question decomposition meaning representation (QDMR). The key technical challenge is that QDMR contains only sub-questions, not the answers to those sub-questions, so we treat sub-answers as latent variables and optimize them using a novel dynamic mixture of Hard-EM and MAPO. Chain-of-Questions greatly outperforms strong neuro-symbolic methods by 9.0 F1 on the DROP contrast set, and outperforms GPT-3.5 by 24.3 F1 on the HotpotQA adversarial set, demonstrating the effectiveness and robustness of our framework.
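The abstract's core idea, treating unobserved sub-answers as latent variables optimized with Hard-EM, can be illustrated with a minimal sketch. The paper does not publish this exact procedure in the abstract, so the `score` function below is a hypothetical stand-in for the LM's log-likelihood; in Hard-EM, the E-step selects the single best-scoring latent sub-answer and the M-step trains on it as if it were gold.

```python
def score(question, sub_answer, model_weights):
    # Toy stand-in for an LM log-likelihood (hypothetical, not the paper's model):
    # favor sub-answers that share tokens with the question.
    q_tokens = set(question.lower().split())
    overlap = len(q_tokens & set(sub_answer.lower().split()))
    return model_weights.get(sub_answer, 0.0) + overlap

def hard_em_select(question, candidates, model_weights):
    """Hard-EM E-step: pick the single highest-scoring latent sub-answer."""
    return max(candidates, key=lambda a: score(question, a, model_weights))

# Usage: choose among candidate latent sub-answers for one decomposition step.
candidates = ["1945", "the war ended in 1945", "Paris"]
best = hard_em_select("When did the war end?", candidates, {})
print(best)  # -> "the war ended in 1945"
```

The paper's dynamic mixture additionally interpolates this hard selection with MAPO-style policy optimization over a memory of high-reward sub-answer sequences, which is beyond the scope of this sketch.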
Keywords
chain-of-questions answering, latent answers, robust multistep