Supervised Contrastive Learned Deep Model for Question Continuation Evaluation

IEEE Transactions on Human-Machine Systems (2023)

Abstract
Question continuation evaluation (QCE) is a branch task of dialogue act prediction (DAP) in natural language processing that aims to predict whether each question in a dialogue is worth following up under a specific context. QCE is important for communication, education, and even entertainment. Regrettably, QCE has long been treated merely as an auxiliary task for conversational machine reading comprehension. QCE involves more information and more relationships than the original DAP task, making it more complex; moreover, the classification scheme of QCE inherently makes samples easy to confuse. In this article, a transformer long short-term memory (LSTM)-based supervised contrastive learned model for QCE is proposed to assign QCE labels automatically. The model is mainly constructed from transformer encoder blocks and LSTM modules, and supervised contrastive learning (SCL) is innovatively introduced into the training process. The model is effective at extracting both information within corpora and relationships among corpora, and SCL alleviates the confusion among samples. Experiments are conducted on the only applicable dataset, Question Answering in Context (QuAC). The model is shown to perform well and to be robust to missing data: its accuracy is 2.3% higher and its macro-F1 score 12.2% higher than the QuAC baselines, and performance decreases by only approximately 2.3% when just 10% of the data remain.
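The abstract does not reproduce the loss function, but the supervised contrastive learning it refers to is typically the SupCon objective of Khosla et al. (2020): each anchor is pulled toward all same-label samples in the batch and pushed away from the rest. A minimal NumPy sketch follows; the function name, the assumption of L2-normalised embeddings, and the temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Sketch of the SupCon loss (Khosla et al., 2020).

    features: (N, D) array of L2-normalised embeddings.
    labels:   (N,)  array of integer class labels.
    Returns the mean loss over anchors that have at least one positive.
    """
    features = np.asarray(features, dtype=np.float64)
    labels = np.asarray(labels)
    n = features.shape[0]

    # Pairwise cosine similarities scaled by the temperature.
    sim = features @ features.T / temperature

    # Numerical stability: subtract each row's max before exponentiating.
    sim -= sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim)

    # The denominator sums over all samples except the anchor itself.
    not_self = ~np.eye(n, dtype=bool)
    log_denom = np.log((exp_sim * not_self).sum(axis=1))

    # Positives: same label as the anchor, excluding the anchor itself.
    pos_mask = (labels[:, None] == labels[None, :]) & not_self

    losses = []
    for i in range(n):
        pos = np.flatnonzero(pos_mask[i])
        if pos.size == 0:
            continue  # an anchor with no positives contributes nothing
        losses.append(-(sim[i, pos] - log_denom[i]).mean())
    return float(np.mean(losses))
```

The loss is lower when same-label embeddings cluster together, which is how SCL would separate the easily confused QCE classes the abstract mentions.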
Keywords
Dialogue act prediction (DAP), question continuation evaluation (QCE), supervised contrastive learning (SCL)