ETS scores standardized tests using both human raters and automated scoring.
Human scoring relies on trained raters to evaluate responses that require human judgment rather than machine processing. Because test scores can affect a student's future learning and opportunities, such as placement, licensure or professional advancement, accurate scoring is critical. At ETS, test scorers are carefully selected and go through rigorous training to ensure the accuracy of their work.
There are two types of automated scoring used at ETS:
- machine scoring of multiple-choice test questions
- automated scoring of open-ended responses, such as short written answers, essays and recorded speech
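At its core, machine scoring of multiple-choice questions compares each response against an answer key and tallies the matches. The sketch below illustrates that idea; the question IDs and answer letters are invented for illustration and are not drawn from any ETS test.

```python
# Minimal sketch of multiple-choice machine scoring: compare each
# response to an answer key and count the matches. All IDs and
# answers here are hypothetical examples.

def score_multiple_choice(responses: dict, answer_key: dict) -> int:
    """Return the number of responses that match the answer key."""
    return sum(
        1
        for question_id, correct_answer in answer_key.items()
        if responses.get(question_id) == correct_answer
    )

answer_key = {"q1": "B", "q2": "D", "q3": "A"}
responses = {"q1": "B", "q2": "C", "q3": "A"}  # one incorrect answer

print(score_multiple_choice(responses, answer_key))  # prints 2
```

Unanswered questions simply fail to match the key, so they contribute nothing to the raw score in this sketch.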
Through more than a decade of extensive research in Natural Language Processing (NLP), we have developed a number of automated scoring technologies, including:
- the e-rater® engine
- the c-rater™ system
- the m-rater engine
- the SpeechRater℠ engine
- the TextEvaluator® tool
We are innovators in the field of automated scoring and have incorporated these technologies into many of our testing programs, products and services, including the GRE® General Test, the TOEFL iBT® test and the Criterion® Online Writing Evaluation service. For more information, download our Automated Scoring Technologies Brochure.
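In broad terms, NLP-based scoring engines of this kind extract measurable features from a response and combine them into a score, with the combination calibrated against human ratings. The toy sketch below shows only that general idea; the features and weights are invented for illustration and do not represent the actual models used by the e-rater engine or any other ETS system.

```python
# Toy illustration of feature-based automated scoring of written
# responses: extract a few surface features and combine them with
# weights. Real engines use far richer linguistic features and fit
# their weights to large sets of human scores; everything below is
# a hypothetical simplification.

def extract_features(essay: str) -> dict:
    """Compute a few simple surface features of an essay."""
    words = essay.split()
    n = max(len(words), 1)  # guard against division by zero
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / n,
        "type_token_ratio": len(set(w.lower() for w in words)) / n,
    }

# Hypothetical weights; a real engine would estimate these by
# regressing features against scores assigned by human raters.
WEIGHTS = {"word_count": 0.01, "avg_word_length": 0.5, "type_token_ratio": 2.0}

def predict_score(essay: str) -> float:
    """Combine the features into a single score via a weighted sum."""
    features = extract_features(essay)
    return sum(WEIGHTS[name] * value for name, value in features.items())

print(round(predict_score("Automated scoring estimates quality from features."), 2))
```

Calibrating such a model against human ratings, and monitoring its agreement with human scorers over time, is what makes this kind of approach defensible in operational testing.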
A Note About the Use of Standardized Test Scores
A standardized test score measures a test-taker's knowledge of a subject or mastery of a set of skills and can serve as a basis for comparison, but only when used properly. R&D Connections, a series of publications created by ETS Research & Development, can help you understand the role of scores in standardized testing and how they should be used.
- What We Do: Test Scoring
- e-rater Scoring Engine
- Automated Scoring Technologies Brochure
- A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes