What Is the e-rater® Engine?
The e-rater engine is an artificial intelligence engine based on Natural Language Processing (NLP), a field of computer science and linguistics that applies computational methods to analyze characteristics of a text. NLP methods support burgeoning application areas such as machine translation, speech recognition and information retrieval.
Who Uses It and Why?
Companies and institutions use this patented technology to power their custom applications.
The e-rater engine is used within the Criterion® Online Writing Evaluation Service. Students use the e-rater engine's feedback to evaluate their essay-writing skills and to identify areas that need improvement. Teachers use the Criterion service to help students develop their writing skills independently while receiving automated, constructive feedback. The engine is also used in other low-stakes practice products, including TOEFL® Practice Online and GRE® ScoreItNow!™.
In high-stakes settings, the engine is used in conjunction with human ratings for both the Issue and Argument prompts of the GRE test's Analytical Writing section and the TOEFL iBT® test's Independent and Integrated Writing prompts. ETS research has shown that combining automated and human essay scoring improves score reliability and offers measurement benefits.
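One common way to combine automated and human scores is to average them when they agree closely and route the essay to a second human rater when they diverge. The sketch below illustrates that general idea; the threshold, the averaging rule, and the function name are assumptions for illustration, not ETS's documented operational procedure.

```python
# Illustrative sketch only: the discrepancy threshold and adjudication rule
# are hypothetical, not ETS's actual scoring procedure.

def combine_scores(human, machine, max_discrepancy=1.0):
    """Average the human and machine scores when they agree within a
    threshold; otherwise flag the essay for human adjudication."""
    if abs(human - machine) <= max_discrepancy:
        return (human + machine) / 2, False  # combined score, no adjudication
    return None, True  # scores diverge; a second human rating is needed

print(combine_scores(4.0, 4.5))  # close agreement: average the two scores
print(combine_scores(5.0, 3.0))  # large discrepancy: adjudicate
```

In practice, the adjudicated cases would be resolved by an additional human rater, with the automated score acting as a quality-control check on the human one.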
For more information regarding the use of the e-rater engine, read E-rater as a Quality Control on Human Scores.
How Does It Grade Essays?
The e-rater engine provides a holistic score for an essay that has been entered into the computer electronically. It also provides real-time diagnostic feedback about grammar, usage, mechanics, style, and organization and development. This feedback is based on NLP research specifically tailored to the analysis of student responses and is detailed in ETS's research publications.
How Does It Compare to Human Raters?
The e-rater engine uses NLP to identify features relevant to writing proficiency in training essays and to model their relationship to human scores. The resulting scoring model, which assigns a weight to each observed feature, is stored in a database and can then be used to score new essays according to the same formula. The e-rater engine cannot read, so it cannot evaluate essays the way human raters do. However, the features used in e-rater scoring have been developed to be as substantively meaningful as the state of the art in NLP allows. They have also been shown to demonstrate strong reliability — often greater reliability than human raters themselves.
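The scoring step described above — applying stored, learned weights to features extracted from a new essay — can be sketched as a simple weighted sum. The feature names, weights, intercept, and score scale below are all hypothetical placeholders, not e-rater's actual model.

```python
# Illustrative sketch only: the features, weights, and 1-6 scale are
# assumptions for demonstration, not the e-rater engine's real model.

def score_essay(features, weights, intercept=0.0):
    """Apply a stored linear scoring model to an essay's extracted features."""
    raw = intercept + sum(weights[name] * value for name, value in features.items())
    # Clamp the result to the holistic score scale (assumed here to be 1-6).
    return max(1.0, min(6.0, raw))

# Hypothetical weights learned from human-scored training essays.
weights = {
    "grammar_errors_per_100_words": -0.8,
    "avg_word_length": 0.5,
    "discourse_units": 0.3,
}

# Hypothetical features extracted from one new essay.
essay_features = {
    "grammar_errors_per_100_words": 1.2,
    "avg_word_length": 4.6,
    "discourse_units": 6.0,
}

print(round(score_essay(essay_features, weights, intercept=1.0), 2))  # -> 4.14
```

Because the weights are fixed once the model is built, every new essay is scored by the same formula — which is what makes the engine's scores highly consistent across essays.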
Learn more about how it works.
Contact us to learn how the e-rater automated scoring engine can meet the needs of your company or institution.