Exploring Automated Distractor Generation for Math Multiple-choice Questions via Large Language Models
arXiv (2024)
Abstract
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of
education since they are easy to administer and grade, and they are a reliable
format for assessments and practice. One of the most important aspects of MCQs is the
distractors, i.e., incorrect options that are designed to target common errors
or misconceptions among real students. To date, crafting high-quality
distractors has largely remained a labor- and time-intensive process for
teachers and learning content designers, which limits scalability. In this
work, we study the task of automated distractor generation in the domain of
math MCQs and explore a wide variety of large language model (LLM)-based
approaches, from in-context learning to fine-tuning. We conduct extensive
experiments using a real-world math MCQ dataset and find that although LLMs can
generate some mathematically valid distractors, they are less adept at
anticipating common errors or misconceptions among real students.
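The in-context learning approach mentioned above can be sketched as assembling a few-shot prompt from worked examples of questions, correct answers, and error-targeting distractors. The example item, distractors, and prompt wording below are hypothetical illustrations, not the paper's actual prompts or dataset.

```python
# Hypothetical sketch of few-shot prompt construction for LLM-based
# distractor generation. The example question and distractors are
# illustrative only; they do not come from the paper's dataset.

FEW_SHOT_EXAMPLES = [
    {
        "question": "What is 3/4 + 1/4?",
        "answer": "1",
        # Each distractor is meant to reflect a plausible student error,
        # e.g. adding numerators and denominators separately.
        "distractors": ["4/8", "3/16", "2/4"],
    },
]

def build_distractor_prompt(question: str, answer: str, k: int = 3) -> str:
    """Assemble a few-shot prompt asking an LLM for k plausible distractors."""
    parts = [
        "Generate incorrect answer options that reflect common student "
        "errors or misconceptions.\n"
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Question: {ex['question']}")
        parts.append(f"Correct answer: {ex['answer']}")
        parts.append("Distractors: " + "; ".join(ex["distractors"]) + "\n")
    parts.append(f"Question: {question}")
    parts.append(f"Correct answer: {answer}")
    parts.append(f"Distractors ({k}):")
    return "\n".join(parts)

prompt = build_distractor_prompt("What is 1/2 + 1/3?", "5/6")
print(prompt)
```

The prompt would then be sent to an LLM; the completion is parsed into candidate distractors. Fine-tuning replaces this prompt assembly with supervised training on (question, answer, distractor) triples.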