Evaluation of the e-rater Scoring Engine for the GRE Issue and Argument Prompts

Author(s):
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent
Publication Year:
2014
Source:
Wendler, Cathy; Bridgeman, Brent (eds.) with assistance from Chelsea Ezzo. The Research Foundation for the GRE revised General Test: A Compendium of Studies. Princeton, NJ: Educational Testing Service, 2014, pp. 4.5.1-4.5.5
Document Type:
Chapter
Page Count:
5
Subject/Key Words:
Graduate Record Examination (GRE), Revised GRE, Test Design, Test Revision, Automated Essay Scoring (AES), Human Scoring, e-rater, Scoring Models, Analytical Writing, Essay Prompts

Abstract

With a check score model, the e-rater score is compared to the score assigned by a single human rater. If there is no discrepancy, the human score stands. If the scores are discrepant, a second human reader reads the essay and the scores of the first and second human raters are averaged (and if the first and second human raters disagree by more than a point, an additional human score is obtained). In this system, the essay score is always based solely on evaluations by human raters. The check score model is currently used for scoring GRE Analytical Writing measure essays.
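The check score model is essentially a small routing rule, which the following minimal Python sketch illustrates. The discrepancy threshold between the e-rater and human scores, the callbacks get_second_read and get_adjudicated_score, and the rule for resolving a third read are illustrative assumptions; the abstract specifies none of these details.

```python
from statistics import mean


def check_score(human_score: float,
                erater_score: float,
                get_second_read,
                get_adjudicated_score,
                discrepancy_threshold: float = 1.0) -> float:
    """Sketch of check-score adjudication as described in the abstract.

    `discrepancy_threshold` and both callbacks are assumptions for
    illustration; the abstract does not give the exact threshold or
    say how an additional read resolves a human-human disagreement.
    """
    # e-rater serves only as a check: if it agrees with the human
    # rater, the single human score stands and is the final score.
    if abs(human_score - erater_score) <= discrepancy_threshold:
        return human_score

    # Scores are discrepant: a second human reads the essay.
    second_score = get_second_read()
    if abs(human_score - second_score) <= 1.0:
        # First and second human scores are averaged.
        return mean([human_score, second_score])

    # Humans disagree by more than a point: an additional human score
    # is obtained (how it is combined is assumed, not stated in the text).
    return get_adjudicated_score()
```

Note that in every branch the returned value comes from human raters only; the e-rater score merely decides whether extra human reads are triggered, which is what makes this a "check" model.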
