e-rater® Scoring Engine
Evaluates students’ writing proficiency with automatic scoring and feedback
The e-rater engine is under continual development, with the aim of extending its ability to model important and challenging aspects of writing proficiency. Ongoing research seeks to enhance the engine's capabilities so that it can identify and evaluate the structure of an argument in an essay, as well as assess the creative use of language in student and test-taker writing.
The features used for e-rater scoring are the result of nearly two decades of Natural Language Processing research at ETS, and each feature may be composed of independent sub-features. Work has also been done to establish a vertically linked scale of K–12 writing scores across grades based on the e-rater engine, known as the Developmental Writing Scale.
The e-rater engine assigns a total score to an essay as a weighted combination of these features. The feature weights can be tailored to a specific prompt, or set in a "generic" fashion so that the same e-rater model can be used to score responses to a variety of prompts.
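To make the prompt-specific versus generic distinction concrete, here is a minimal sketch of a weighted-feature scoring model. The feature names, weights, and 1–6 score scale are invented for illustration and are not the e-rater engine's actual internals.

```python
# Hypothetical weighted-feature scoring sketch. Feature names, weights,
# and the 1-6 reporting scale are invented for illustration; they are
# not the e-rater engine's actual features or values.

GENERIC_WEIGHTS = {
    "grammar": 0.25,
    "usage": 0.15,
    "mechanics": 0.15,
    "organization": 0.25,
    "vocabulary": 0.20,
}

def score_essay(features, weights=GENERIC_WEIGHTS, scale=(1, 6)):
    """Combine standardized feature values (0-1) into a score on `scale`."""
    raw = sum(w * features.get(name, 0.0) for name, w in weights.items())
    lo, hi = scale
    # Map the weighted sum onto the reporting scale and clip to its ends.
    return max(lo, min(hi, round(lo + raw * (hi - lo))))

# A generic model scores any prompt with the same weight set; a
# prompt-specific model would instead pass weights fit to one prompt.
essay_features = {"grammar": 0.8, "usage": 0.6, "mechanics": 0.9,
                  "organization": 0.7, "vocabulary": 0.6}
print(score_essay(essay_features))
```

Swapping in a different `weights` dictionary is all that distinguishes a prompt-specific model from the generic one in this sketch; the feature extraction itself stays the same.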
For tasks that are appropriate for the e-rater engine (essay-length writing tasks scored for writing quality rather than for the correctness of claims made in the response), agreement with human raters can be very strong. As Attali, Bridgeman and Trapani reported in 2010 in Automated Essay Scoring with e-rater v2.0 (PDF), the e-rater engine's agreement with a human rater on the TOEFL® Independent and GRE® Issue tasks was higher than the agreement between two independent human raters.
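Human–machine agreement in this literature is commonly summarized with statistics such as exact agreement or quadratic weighted kappa. The sketch below is a generic implementation of quadratic weighted kappa, not ETS code, and the rating lists are made-up numbers for illustration.

```python
# Generic quadratic weighted kappa between two raters' integer scores.
# The example ratings below are invented, not data from any ETS study.
from collections import Counter

def quadratic_weighted_kappa(a, b, min_rating, max_rating):
    """Quadratic weighted kappa between two equal-length rating lists."""
    n = max_rating - min_rating + 1
    obs = Counter(zip(a, b))        # observed rating-pair counts
    hist_a, hist_b = Counter(a), Counter(b)
    total = len(a)
    num = den = 0.0
    for i in range(min_rating, max_rating + 1):
        for j in range(min_rating, max_rating + 1):
            w = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic disagreement weight
            num += w * obs.get((i, j), 0) / total
            # Chance agreement from the raters' marginal distributions.
            den += w * (hist_a.get(i, 0) / total) * (hist_b.get(j, 0) / total)
    return 1.0 - num / den

human = [4, 3, 5, 2, 4, 3]
machine = [4, 3, 4, 2, 5, 3]
print(round(quadratic_weighted_kappa(human, machine, 1, 6), 3))
```

Values near 1.0 indicate near-perfect agreement; comparing human–machine kappa against human–human kappa is the kind of comparison the 2010 study reports.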