
Performance of a Generic Approach in Automated Essay Scoring

Author(s):
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine S.
Publication Year:
2014
Source:
Wendler, Cathy; Bridgeman, Brent (eds.) with assistance from Chelsea Ezzo. The Research Foundation for the GRE revised General Test: A Compendium of Studies. Princeton, NJ: Educational Testing Service, 2014, pp. 4.4.1-4.4.3
Document Type:
Chapter
Page Count:
3
Subject/Key Words:
Graduate Record Examination (GRE), Revised GRE, Test Design, Test Revision, Automated Essay Scoring (AES), Validity, Human Scoring, e-rater, Scoring Models

Abstract

Provides the results of a study that examined different scoring models that could be used with e-rater. Some automated essay scoring engines rely heavily on content features that are unique to each essay prompt, but because e-rater emphasizes form over content, it can score many topics using the same standards. Such a generic scoring approach allows the same scoring model to be used across prompts, and new prompts can be introduced without requiring any changes to the scoring engine. This study examined the functioning of a generic scoring model and compared it to approaches that were more dependent on the content in particular prompts. In terms of average scores and correlations with scores assigned by human raters, the scores from the generic approach were comparable to scores from the much more time- and labor-intensive prompt-specific approach.
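To make the contrast concrete, the sketch below illustrates the general idea of a generic scoring model (one set of feature weights shared across all prompts) versus prompt-specific models (weights re-estimated separately for each prompt), each evaluated by its correlation with human scores. This is a minimal illustration under synthetic assumptions: the feature names, weights, and data are invented for demonstration and do not represent e-rater's actual features, model form, or results.

```python
# Illustrative sketch only: generic vs. prompt-specific linear scoring models.
# All features, weights, and data are synthetic assumptions, not e-rater's.
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    """Least-squares weights (with intercept) mapping essay features to human scores."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def predict(w, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return X1 @ w

def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

# Synthetic data: 3 prompts, 200 essays each, 4 form-oriented features
# (hypothetical stand-ins for traits like organization, usage, mechanics).
n_prompts, n_essays, n_feats = 3, 200, 4
shared_w = np.array([1.0, 0.8, 0.6, 0.4])   # shared "form" signal across prompts
data = []
for p in range(n_prompts):
    X = rng.normal(size=(n_essays, n_feats))
    # Human scores driven mostly by form features, plus noise and a small
    # prompt-specific shift, so both modeling approaches are plausible.
    y = X @ shared_w + 0.2 * p + rng.normal(scale=0.5, size=n_essays)
    data.append((X, y))

# Generic approach: pool essays from all prompts, fit one model, apply it everywhere.
X_all = np.vstack([X for X, _ in data])
y_all = np.concatenate([y for _, y in data])
w_generic = fit_linear(X_all, y_all)

# Prompt-specific approach: refit the model for each prompt, then compare
# both approaches by their correlation with human scores on that prompt.
for p, (X, y) in enumerate(data):
    w_specific = fit_linear(X, y)
    r_gen = correlation(predict(w_generic, X), y)
    r_spec = correlation(predict(w_specific, X), y)
    print(f"prompt {p}: generic r={r_gen:.3f}, prompt-specific r={r_spec:.3f}")
```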
