Automated Scoring of CBAL Mathematics Tasks With m-rater

Fife, James H.
Publication Year:
Report Number:
Document Type: ETS Research Memorandum
Page Count:
Subject/Key Words:
m-rater; Alchemist; Scoring Models; KeyBuilder; Conditionally Scored Solution Steps; Mathematics Tasks; Human-Scored Responses; Automated Scoring; Automated Scoring and Natural Language Processing; Cognitively Based Assessment of, for, and as Learning (CBAL)


For the past several years, ETS has been engaged in a research project known as Cognitively Based Assessment of, for, and as Learning (CBAL). The goal of this project is to develop a research-based assessment system that provides both accountability testing and formative testing in an environment that is a worthwhile learning experience in and of itself. An important feature of the assessments in this system is that they are computer-delivered, with as many of the tasks as possible scored automatically. For the automated scoring of mathematics items, ETS has developed the m-rater scoring engine. In the present report, I discuss the m-rater–related automated scoring work done in CBAL Mathematics in 2009, in which scoring models were written for 16 tasks. These models were written in Alchemist, a software tool originally developed for writing c-rater™ scoring models (c-rater is ETS's engine for scoring short text responses for content). In 2009 the c-rater support team completed a collection of enhancements to Alchemist, known as KeyBuilder, that enables the user to write m-rater scoring models. Also in 2009, I reviewed the literature to determine to what extent problem solutions expressed as a structured sequence of equations can be automatically evaluated.
