Improving the performance of automatic short answer grading using transfer learning and augmentation

ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE (2023)

Abstract
The task of grading answers ranging from one phrase to one paragraph using computational techniques is known as Automated Short Answer Grading (ASAG). Many ASAG systems have been developed as an outcome of active research in this field, yet their performance remains limited by the scarcity of training data and the lack of data in many domains. This study builds an effective system for grading short answers in the programming domain by leveraging pre-trained language models and text augmentation. We fine-tuned three sentence transformer models on the SPRAG corpus with five different augmentation techniques, viz. Random Deletion, Synonym Replacement, Random Swap, Backtranslation, and NLPAug. The SPRAG corpus contains student responses involving keywords and special symbols. We experimented with four different data sizes of augmented data to determine the impact of training data on the fine-tuned sentence transformer model. This paper provides an exhaustive analysis of fine-tuning pre-trained sentence transformer models with varying sizes of data by applying text augmentation techniques. We found that applying random swap and synonym replacement together during fine-tuning gives a significant improvement, with a 4.91% increase in accuracy and a 3.36% increase in the F1-score. All the trained models are publicly available.
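As a rough illustration of the best-performing setup described above (random swap plus synonym replacement applied before fine-tuning a sentence transformer), the sketch below combines two word-level augmenters from the nlpaug library with the sentence-transformers training API. The model checkpoint, the pairwise cosine-similarity objective, the hyperparameters, and the toy (student answer, reference answer, grade) data format are illustrative assumptions, not the authors' exact SPRAG configuration.

```python
# Minimal sketch: augment answer pairs with random swap + synonym replacement,
# then fine-tune a pre-trained sentence transformer on the augmented pairs.
# Assumptions (not from the paper): pairwise (student answer, reference answer,
# normalized grade) data, a generic MiniLM checkpoint, cosine-similarity loss.
import nlpaug.augmenter.word as naw
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Two of the augmentation techniques discussed in the abstract.
swap_aug = naw.RandomWordAug(action="swap")      # random swap
syn_aug = naw.SynonymAug(aug_src="wordnet")      # synonym replacement (needs NLTK WordNet)

def augment_pair(student, reference, grade):
    """Return the original pair plus augmented copies of the student answer."""
    examples = [InputExample(texts=[student, reference], label=grade)]
    for aug in (swap_aug, syn_aug):
        augmented = aug.augment(student)
        # Recent nlpaug versions return a list of variants; take the first one.
        if isinstance(augmented, list):
            augmented = augmented[0]
        examples.append(InputExample(texts=[augmented, reference], label=grade))
    return examples

# Toy data standing in for SPRAG-style graded student responses.
raw_data = [
    ("a stack follows last in first out order", "A stack is a LIFO data structure.", 1.0),
    ("a queue removes the newest element first", "A queue is a FIFO data structure.", 0.0),
]

train_examples = []
for student, reference, grade in raw_data:
    train_examples.extend(augment_pair(student, reference, grade))

# Fine-tune a pre-trained sentence transformer on the augmented pairs.
model = SentenceTransformer("all-MiniLM-L6-v2")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```

The abstract does not specify whether augmentation is applied to the student answer, the reference answer, or both, nor which training objective is used; the pairwise cosine-similarity setup above is just one common way to fine-tune sentence transformers for grading-style similarity tasks.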
Keywords
automatic short answer, transfer learning, augmentation