Proven Automated Scoring Capabilities

There is a long history of research on automated scoring of constructed response (CR) and performance tasks. In recent years, the technology has begun to have a widespread impact on large-scale assessment, with its introduction into the GRE® test, the TOEFL® test and many state assessments. Automated scoring promises to drive fundamental improvements in the speed, cost and scalability of performance assessment, but a knowledgeable partner is necessary to get the most out of the technology. ETS has been at the forefront of research into new techniques in automated scoring of performance tasks and can bring this knowledge base to bear in the real-time scoring of PARCC's and Smarter Balanced's large-scale assessments.

Based on more than 15 years of ETS-supported research and development, we currently offer a suite of automated scoring applications that can consistently score items, including the:

  • e-rater® engine, which evaluates the quality of essays written on the computer
  • c-rater™ engine, which detects the presence of particular expected concepts in a student response
  • m-rater engine, which scores CR mathematics items for which the response is a number, an equation or mathematical expression, or a graph
  • SpeechRater℠ engine, which scores spoken responses for pronunciation, fluency, vocabulary usage and prosodic features

ETS has a broad research and development agenda to continually improve its automated scoring technologies, drawing on new findings and techniques in fields such as natural language processing (NLP), machine learning and educational measurement. This agenda includes both long-term research goals to improve the depth with which important constructs can be measured and near-term development goals for improving and enhancing existing capabilities.

Development Goals for Improving and Enhancing Existing Capabilities

Improvements to the e-rater engine that are currently under development include special scoring and feedback for English-language learners, improved grammatical error analysis features and the incorporation of context-sensitive spelling error identification.
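Context-sensitive spelling errors involve words that are valid in isolation (e.g., "their" vs. "there"), so detecting them requires looking at surrounding words. The sketch below is a toy illustration of the idea only; the confusion sets and context cues are hand-written assumptions standing in for the statistical models a production engine would use.

```python
# Toy context-sensitive spelling check: each confusable word is valid on its
# own, so we compare it against hand-written context cues for its rivals.
CONFUSION_SETS = [{"their", "there", "they're"}, {"its", "it's"}]

# Hypothetical context cues: following words that suggest a rival spelling.
CUES = {
    "their": {"own", "books", "house"},    # possessive contexts
    "there": {"is", "are", "was", "were"}, # existential contexts
}

def flag_confusable(tokens):
    """Return (index, word, suggestion) triples for suspicious tokens."""
    flags = []
    for i, word in enumerate(tokens):
        w = word.lower()
        for confusion_set in CONFUSION_SETS:
            if w in confusion_set:
                nxt = tokens[i + 1].lower() if i + 1 < len(tokens) else ""
                for rival in confusion_set - {w}:
                    if nxt in CUES.get(rival, set()):
                        flags.append((i, word, rival))
    return flags

print(flag_confusable("I left there books at home".split()))
# → [(2, 'there', 'their')]
```

A real engine would replace the hand-written cues with language-model probabilities over large corpora, but the control flow is the same: only context distinguishes a correctly spelled word from a misused one.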

One immediate focus of c-rater development is an update to its core NLP components to improve, for example, its ability to identify correct content despite the presence of spelling and grammar errors. Another area of interest is the creation of a method to identify references to points found in the stimulus, but not incorporated in the conceptual rubric.
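Identifying correct content despite surface errors can be pictured as fuzzy matching between rubric concepts and response tokens. The sketch below uses Python's `difflib` as a stand-in for the full NLP pipeline a real concept-scoring engine would apply; the concept terms and similarity cutoff are illustrative assumptions, not c-rater internals.

```python
# Toy concept detection that tolerates misspellings: a rubric concept counts
# as present if each of its terms has a close-enough match in the response.
import difflib

def concept_present(response, concept_terms, cutoff=0.8):
    """True if every term of the concept is (fuzzily) found in the response."""
    tokens = response.lower().split()
    for term in concept_terms:
        if not difflib.get_close_matches(term, tokens, n=1, cutoff=cutoff):
            return False
    return True

# Hypothetical rubric concept: the response should say the metal expands
# when heated. Both key terms are misspelled, yet the concept is detected.
print(concept_present("the metal expannds when heeted", ["expands", "heated"]))
# → True
```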

Enhancements to the m-rater engine will include upgrades to the interface components into which equations and graphs can be entered. Besides general updates designed to improve interface usability and extend the range of mathematics that can be assessed, we will modify the graph editor so that users can set the viewing window and enter labels for the axes. (These new features will be scorable.) We also will add support for m-rater advisories, which will flag cases in which a response is syntactically ill-formed, perhaps due to a typographical error by the student, and may need to be routed for special processing.
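The advisory mechanism described above can be pictured as a pre-scoring parse check: if the student's expression does not parse, it is flagged for special handling rather than scored as wrong. This is a minimal sketch using Python's `ast` module as a stand-in for a real mathematical-expression parser; it is not m-rater's actual implementation.

```python
# Sketch of an advisory check: verify the response is syntactically well
# formed before scoring; if not, return a routing message instead of a score.
import ast

def advisory(expression):
    """Return None if the expression parses, else a routing message."""
    try:
        ast.parse(expression, mode="eval")
        return None
    except SyntaxError as err:
        return f"ill-formed response (possible typo): {err.msg}"

print(advisory("3*x + 2"))    # well formed: no advisory
print(advisory("3*x ++* 2"))  # flagged and routed for special processing
```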

Finally, ETS's immediate development goals for the SpeechRater engine include generalizing the engine so that it can score a range of different item types and extending it to provide formative feedback as well as summative scores.

Beyond these near-term development goals for our capabilities, ETS remains deeply involved in fundamental research intended to improve not only our own capabilities but the state of the art in NLP and speech analysis for assessment purposes. One strand of this research, which is particularly important given the emphasis that the CCSS place on students' ability to demonstrate critical thinking in productive language, is next-generation content scoring. Current methods of scoring student writing, to the extent they address written content at all, rely on meaningful units no larger than single words.
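The limitation of single-word units can be made concrete with a toy example (the responses and the cosine-similarity scorer below are illustrative assumptions, not any ETS model): two responses built from the identical words can make opposite claims, and a word-count comparison cannot distinguish them.

```python
# Toy bag-of-words content scorer: cosine similarity over word counts.
# Because the two responses below use the same words in a different order,
# the scorer rates them identical despite their reversed meaning.
import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

key     = "the predator population limits the prey population"
flipped = "the prey population limits the predator population"
print(cosine(key, flipped))  # → 1.0, despite the reversed meaning
```

Next-generation content scoring aims to move beyond this by modeling larger meaningful units, so that word order and relational structure contribute to the score.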

Numerous publications authored or co-authored by ETS staff discuss automated scoring and natural language processing in greater detail.



Technology-enhanced Assessments

ETS works closely with states and the consortia to bring technological innovations to K–12 assessment.

Resources on Consortia

The K–12 Center at ETS offers a variety of resources on the assessment consortia, including summaries of their designs and future plans, videos and presentations.


NAEP and ETS

ETS has assisted the NAEP program in introducing numerous psychometric and assessment design innovations over the years.
