A Study of the Use of the e‐rater® Scoring Engine for the Analytical Writing Measure of the GRE® revised General Test

ETS Research Report Series (2014)

Abstract
In this research, we investigated the feasibility of implementing the e-rater® scoring engine as a check score in place of all-human scoring for the Graduate Record Examinations® (GRE®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a check score in operational practice. We proceeded with the investigation in four phases. In phase I, for both argument and issue prompts, we investigated the consistency of human scoring across individual prompts, as well as across two groups of prompts organized into sets. The sets were composed of prompts with separate focused questions (i.e., variants) that must be addressed by the writer in responding to the topic of the prompt. Prompts were also organized into variant groups (i.e., grouped for scoring purposes by similar variants). Results showed adequate human scoring quality for model building and evaluation. In phase II, we investigated eight e-rater model variations each for argument and issue essays, including prompt-specific, variant-specific, variant-group-specific, and generic models, both with and without content features, at the rating level, the task score level, and the writing score level. Results showed the generic model was a viable alternative to the prompt-specific, variant-specific, and variant-group-specific models, with and without the content features. In phase III, we evaluated the e-rater models on a recently tested group from the spring of 2012 (March 18 to June 18, 2012), following the introduction of scoring benchmarks. Results confirmed the feasibility of using a generic model at the rating, task score, and writing score levels, demonstrating reliable cross-task correlations as well as divergent and convergent validity. In phase IV, we purposely introduced a bias to simulate the effects of training the model on a potentially less able group of test takers in the spring of 2012. Results showed that using the check-score model increased the need for adjudications by 5% to 8%, yet the introduced bias actually increased agreement with all-human scoring at the analytical writing score level.
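
To make the check-score idea concrete, the sketch below illustrates the kind of adjudication logic the abstract describes: each response receives a human rating and an e-rater check score, close scores are combined, and discrepant responses are routed to a second human rater. The half-point scale, the one-point discrepancy threshold, and the averaging rules are assumptions chosen for illustration only, not the operational ETS scoring parameters reported in the study.

from typing import Optional

# Minimal sketch of a check-score workflow, assuming a 0-6 half-point scale,
# a one-point discrepancy threshold, and simple averaging. These parameters
# are illustrative assumptions, not the operational GRE scoring rules.

DISCREPANCY_THRESHOLD = 1.0  # assumed adjudication trigger


def task_score(human: float, erater: float,
               second_human: Optional[float] = None) -> float:
    """Return a task score using e-rater as a check score.

    If the human and e-rater scores agree within the threshold, average them.
    Otherwise the response is adjudicated: a second human rating is required,
    and the two human scores are averaged instead.
    """
    if abs(human - erater) <= DISCREPANCY_THRESHOLD:
        return (human + erater) / 2.0
    if second_human is None:
        raise ValueError("Discrepant scores: second human rating required.")
    return (human + second_human) / 2.0


if __name__ == "__main__":
    print(task_score(4.0, 4.5))                    # close agreement -> 4.25
    print(task_score(4.0, 2.0, second_human=3.5))  # adjudicated     -> 3.75

Under this kind of rule, widening the gap between human and e-rater scores (as in the deliberately biased phase IV models) raises the adjudication rate, which is consistent with the 5% to 8% increase the study reports.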
Keywords
automated essay scoring