Beyond Essay Length: Evaluating e-rater's Performance on TOEFL Essays
- Author(s):
- Chodorow, Martin; Burstein, Jill
- Publication Year:
- 2004
- Report Number:
- RR-04-04
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 46
- Subject/Key Words:
- Essay Scoring, Essay Tests, Electronic Essay Rater (E-rater), Test of English as a Foreign Language (TOEFL), Writing Assessment, Automated Essay Scoring (AES), Automated Scoring and Natural Language Processing
Abstract
This study examines the relation between essay length and holistic scores assigned to Test of English as a Foreign Language (TOEFL) essays by e-rater, the automated essay scoring system developed by ETS. Results show that an early version of the system, e-rater99, accounted for little variance in human reader scores beyond that which could be predicted by essay length. A later version of the system, e-rater01, performs significantly better than its predecessor and is less dependent on length due to its greater reliance on measures of topical content and of complexity and diversity of vocabulary. Essay length was also examined as a possible explanation for differences in scores among examinees with native languages of Spanish, Arabic, and Japanese. Human readers and e-rater01 show the same pattern of differences for these groups, even when effects of length are controlled.
DOI:
- http://dx.doi.org/10.1002/j.2333-8504.2004.tb01931.x