Stumping E-Rater: Challenging the Validity of Automated Essay Scoring

Author(s):
Powers, Donald E.; Burstein, Jill C.; Chodorow, Martin; Fowles, Mary E.; Kukich, Karen
Publication Year:
2001
Report Number:
RR-01-03
GREB-98-08bP
Source:
Document Type:
Subject/Key Words:
Writing assessment, validity, automated scoring, essay scoring, e-rater

Abstract

For this study, various writing experts were invited to "challenge" e-rater® -- an automated essay scorer that relies on natural language processing techniques -- by composing essays in response to Graduate Record Examinations (GRE) Writing Assessment prompts with the intention of undermining its scoring capability. Specifically, using detailed information about e-rater's approach to essay scoring, writers tried to "trick" the computer-based system into assigning scores that were higher or lower than deserved. E-rater's automated scores on these "problem essays" were compared with scores given by two trained human readers, and the difference between the scores constituted the standard for judging the extent to which e-rater was fooled. Challengers were differentially successful in writing problematic essays: expert writers were more successful in tricking e-rater into assigning scores that were too high than in duping it into awarding scores that were too low. The study provides information on ways in which e-rater, and perhaps other automated essay scoring systems, may fail to provide accurate evaluations if used as the sole method of scoring in high-stakes assessments. The results suggest possible avenues for improving automated scoring methods.
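The evaluation criterion described above -- the gap between e-rater's score and the scores of two trained human readers -- can be sketched roughly as follows. The two-reader average, the 6-point scale values, and the one-point flagging threshold are illustrative assumptions for this sketch, not the report's actual procedure.

```python
from dataclasses import dataclass

@dataclass
class EssayScores:
    """Hypothetical score record for one challenge essay (6-point scale assumed)."""
    essay_id: str
    e_rater: float    # automated score
    reader_1: float   # first trained human reader
    reader_2: float   # second trained human reader

    def human_score(self) -> float:
        """Benchmark: simple average of the two human readers (an assumption)."""
        return (self.reader_1 + self.reader_2) / 2

    def discrepancy(self) -> float:
        """Signed difference: positive means e-rater scored too high."""
        return self.e_rater - self.human_score()


def flag_fooled(essays, threshold=1.0):
    """Split out essays where the automated score diverges from the human
    benchmark by at least `threshold` points, in either direction."""
    too_high = [e for e in essays if e.discrepancy() >= threshold]
    too_low = [e for e in essays if e.discrepancy() <= -threshold]
    return too_high, too_low


if __name__ == "__main__":
    sample = [
        EssayScores("challenge-01", e_rater=6.0, reader_1=2.0, reader_2=3.0),
        EssayScores("challenge-02", e_rater=3.0, reader_1=5.0, reader_2=5.0),
        EssayScores("challenge-03", e_rater=4.0, reader_1=4.0, reader_2=4.0),
    ]
    high, low = flag_fooled(sample)
    print("Scored too high:", [e.essay_id for e in high])
    print("Scored too low:", [e.essay_id for e in low])
```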
