Center-retained fine-tuning for conversational question ranking through unsupervised center identification


Given a conversation context, conversational question ranking (CQR) aims to select a proper question from a candidate pool to clarify users' ambiguous information needs. Most state-of-the-art CQR methods are pre-training based, seeking to design various fine-tuning tasks by randomly manipulating utterances in the context. However, existing fine-tuning methods will occasionally remove the center utterances (i.e., the referent utterances in the context for the clarification questions), which might have a counterproductive effect, as the center utterances have high semantic coherence with the clarification questions. In this work, we introduce an unsupervised center-aware ranking (UCAR) framework, which first identifies the center utterance and then conducts fine-tuning while retaining the center. To identify the center, we devise a multi-perspective center identification (MCD) module and optimize it in an unsupervised manner. Experimental results on two benchmark datasets with 20k test samples show that seven state-of-the-art baselines improve by 1% to 5% in terms of MRR when equipped with UCAR. The results demonstrate that UCAR indeed improves performance by preserving semantic coherence in conversations during fine-tuning. In addition, the centers identified by our proposed unsupervised MCD module are of high quality, as validated by human evaluation. Moreover, an ancillary advantage of UCAR is its good interpretability, obtained by referring to the center utterance in the conversation context.
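The core idea of center-retained fine-tuning can be illustrated with a minimal sketch. Assuming the center utterance has already been identified (e.g., by an MCD-style module), a fine-tuning corruption step that randomly drops utterances would simply exclude the center from the candidates for removal; the function name and parameters below are illustrative, not the authors' actual implementation:

```python
import random


def center_retained_corruption(utterances, center_idx, drop_ratio=0.3, seed=None):
    """Randomly drop utterances from a conversation context to build a
    fine-tuning example, but never drop the center utterance (the
    referent utterance for the clarification question).

    This is a hypothetical sketch of the idea described in the abstract,
    not the paper's actual procedure.
    """
    rng = random.Random(seed)
    # Only non-center utterances are eligible for removal.
    candidates = [i for i in range(len(utterances)) if i != center_idx]
    n_drop = min(int(len(utterances) * drop_ratio), len(candidates))
    dropped = set(rng.sample(candidates, n_drop))
    return [u for i, u in enumerate(utterances) if i not in dropped]


context = [
    "Hi, I need some help.",
    "I'm looking for a new laptop.",   # center utterance
    "Sure, happy to help.",
    "Something lightweight, please.",
]
kept = center_retained_corruption(context, center_idx=1, drop_ratio=0.5, seed=0)
assert context[1] in kept  # the center utterance always survives
```

In contrast, a naive random-manipulation task would sample from all utterances, occasionally deleting the very utterance the clarification question refers to, which is the failure mode UCAR is designed to avoid.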
Keywords: Conversational question ranking, Asking clarification questions, Conversational information seeking, Centering theory