
A Study of the Use of the e-rater Scoring Engine for the Analytical Writing Measure of the GRE revised General Test (rGRE)

Author(s):
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April
Publication Year:
2014
Report Number:
RR-14-24
Source:
ETS Research Report
Document Type:
Report
Page Count:
66
Subject/Key Words:
Automated Essay Scoring (AES), e-rater, Revised GRE, Analytical Writing Assessment, Scoring Models, Check Scoring, Human Computer Agreement, General Test (GRE), Higher Education

Abstract

In phase IV of the study, we deliberately introduced a bias to simulate the effect of training the model on a potentially less able group of test takers in the spring of 2012. Results showed that use of the check-score model increased the need for adjudications by between 5% and 8%; nevertheless, the introduced bias actually improved the agreement between the check-score model's scores and all-human scoring at the analytical writing score level.
