An Error-Analysis Study from an EFL Writing Context: Human and Automated Essay Scoring Approaches

Technology, Knowledge and Learning (2022)

Abstract
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can address some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches’ performances were analyzed quantitatively using Corder’s (1974) error analysis approach to categorize the writing errors in a corpus of 197 essays written by English as a foreign language (EFL) learners. Pearson correlation coefficients and paired-sample t-tests were used to analyze and compare the errors detected by the two approaches. The results showed a moderate correlation between human raters and AES in both the total scores and the number of errors detected. They also indicated that AES detected significantly more errors overall than the human raters, and that the latter tended to give students higher scores. The findings encourage a more open attitude towards AES systems to support EFL writing teachers in assessing students’ work.
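The comparison described above rests on two standard statistics: a Pearson correlation between the two approaches' per-essay error counts and a paired-sample t-test on the same pairs. The following is a minimal Python sketch of that analysis, assuming hypothetical error-count arrays (the data, rates, and random seed below are illustrative placeholders, not the study's data or code):

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

# Hypothetical per-essay error counts for a 197-essay corpus,
# standing in for counts produced by human raters and by AES.
rng = np.random.default_rng(0)
human_errors = rng.poisson(lam=8, size=197)
aes_errors = human_errors + rng.poisson(lam=3, size=197)

# Pearson correlation between the two approaches' error counts.
r, r_p = pearsonr(human_errors, aes_errors)
print(f"Pearson r = {r:.2f} (p = {r_p:.3g})")

# Paired-sample t-test: is the mean per-essay difference in
# detected errors (AES minus human) significantly nonzero?
t, t_p = ttest_rel(aes_errors, human_errors)
print(f"paired t = {t:.2f} (p = {t_p:.3g})")
```

The pairing matters: because both approaches score the same essays, `ttest_rel` (rather than an independent-samples test) is the appropriate way to compare their mean error counts.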
Keywords
EFL, Writing, Correlation, Feedback, Automated essay scoring (AES), Human raters