
Stumping E-Rater: Challenging the Validity of Automated Essay Scoring

Powers, Donald E.; Burstein, Jill; Chodorow, Martin; Fowles, Mary E.; Kukich, Karen
Publication Year:
Report Number: GREB-98-08bP, RR-01-03
Document Type: ETS Research Report
Page Count:
Subject/Key Words: Graduate Record Examinations Board, Writing Evaluation, Electronic Essay Rater (E-rater), Graduate Record Examinations (GRE), Validity, Scoring, Automated Scoring, Automation, Essay Tests, Test Scoring Machines, Automated Scoring and Natural Language Processing


For this study, writing experts were invited to "challenge" e-rater -- an automated essay scorer that relies on natural language processing techniques -- by composing essays in response to Graduate Record Examinations (GRE) Writing Assessment prompts with the intention of undermining its scoring capability. Specifically, using detailed information about e-rater's approach to essay scoring, writers tried to "trick" the computer-based system into assigning scores that were higher or lower than deserved. E-rater's automated scores on these "problem essays" were compared with scores given by two trained human readers, and the difference between the scores served as the standard for judging the extent to which e-rater was fooled. Challengers varied in their success at writing problematic essays: expert writers were more successful in tricking e-rater into assigning scores that were too high than in duping it into awarding scores that were too low. The study provides information on ways in which e-rater, and perhaps other automated essay scoring systems, may fail to provide accurate evaluations if used as the sole method of scoring in high-stakes assessments. The results suggest possible avenues for improving automated scoring methods.
