Prompting Implicit Discourse Relation Annotation
CoRR (2024)
Abstract
Pre-trained large language models such as ChatGPT achieve outstanding
performance on various reasoning tasks without supervised training and have
been found to outperform crowdsourcing workers. Nonetheless, ChatGPT's
performance on the task of implicit discourse relation classification,
prompted with a standard multiple-choice question, remains far from
satisfactory and considerably inferior to state-of-the-art supervised
approaches. This work investigates several proven prompting techniques to
improve ChatGPT's recognition of discourse relations. In particular, we
experimented with breaking down the classification task, which involves
numerous abstract labels, into smaller subtasks. However, experimental
results show that inference accuracy hardly changes even with sophisticated
prompt engineering, suggesting that implicit discourse relation
classification is not yet resolvable in zero-shot or few-shot settings.
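The baseline setup the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration of a zero-shot multiple-choice prompt for implicit discourse relation classification; the label set follows the PDTB top-level senses, but the exact prompt wording and the function name are assumptions for illustration, not the paper's actual prompt.

```python
# Hypothetical sketch of a zero-shot multiple-choice prompt for implicit
# discourse relation classification. The four labels are the PDTB top-level
# senses; the wording below is an illustrative assumption, not the prompt
# used in the paper.
PDTB_TOP_LEVEL = ["Temporal", "Contingency", "Comparison", "Expansion"]

def build_mcq_prompt(arg1: str, arg2: str, labels=PDTB_TOP_LEVEL) -> str:
    """Format two discourse arguments as a multiple-choice question."""
    # Enumerate options as A., B., C., ...
    options = "\n".join(
        f"{chr(65 + i)}. {label}" for i, label in enumerate(labels)
    )
    return (
        "What is the implicit discourse relation between the two arguments?\n"
        f"Argument 1: {arg1}\n"
        f"Argument 2: {arg2}\n"
        f"Options:\n{options}\n"
        "Answer with a single letter."
    )

prompt = build_mcq_prompt(
    "The company's profits fell sharply.",
    "It had invested heavily in a failed product line.",
)
print(prompt)
```

The subtask decomposition the abstract mentions would replace this single question with a sequence of narrower ones (e.g. first choosing a top-level sense, then a finer-grained one), each formatted in the same multiple-choice style.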