Automated Scoring of CBAL Mathematics Tasks With m-rater
- Author(s):
- Fife, James H.
- Publication Year:
- 2011
- Report Number:
- RM-11-12
- Source:
- ETS Research Memorandum
- Document Type:
- Report
- Page Count:
- 18
- Subject/Key Words:
- m-rater, Mathematics Tasks, Human-Scored Responses, Scoring Models, Automated Scoring, Solution Steps, Cognitively Based Assessment of, for, and as Learning (CBAL), KeyBuilder, Conditionally Scored, Alchemist, Automated Scoring and Natural Language Processing
Abstract
For the past several years, ETS has been engaged in a research project known as Cognitively Based Assessments of, for, and as Learning (CBAL). The goal of this project is to develop a research-based assessment system that provides accountability testing and formative testing in an environment that is a worthwhile learning experience in and of itself. An important feature of the assessments in this system is that they are computer-delivered, with as many of the tasks as possible scored automatically. For the automated scoring of mathematics items, ETS has developed the m-rater scoring engine. In the present report, I discuss the m-rater–related automated scoring work done in CBAL Mathematics in 2009. Scoring models were written for 16 tasks. These models were written in Alchemist, a software tool originally developed for writing c-rater™ scoring models (c-rater is ETS's engine for scoring short text responses for content). In 2009, the c-rater support team completed a collection of enhancements to Alchemist, known as KeyBuilder, that enables users to write m-rater scoring models. I also reviewed the literature to determine the extent to which problem solutions expressed as a structured sequence of equations can be evaluated automatically.
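To make the last point concrete, below is a minimal sketch of one way a structured sequence of equations might be evaluated automatically: parse each step and check that it preserves the solution set of the original equation. This is not m-rater and is not described in the report; the input format, function names, and single-variable restriction are all assumptions made for illustration, using the SymPy library.

```python
# Illustrative sketch only (not ETS's m-rater): score a solution given
# as a sequence of "lhs = rhs" steps by checking that every step has
# the same real solution set as the first equation.
from sympy import Eq, S, solveset, symbols
from sympy.parsing.sympy_parser import parse_expr

x = symbols("x")  # assume single-variable tasks for this sketch

def parse_equation(text: str) -> Eq:
    """Parse a 'lhs = rhs' string into a SymPy equation."""
    lhs, rhs = text.split("=")
    return Eq(parse_expr(lhs), parse_expr(rhs))

def steps_preserve_solutions(steps: list[str]) -> bool:
    """True if every step has the same real solution set as step 1.

    This catches algebra errors that change the answer, but not
    redundant or circular steps, so it is only one ingredient of a
    full scoring model.
    """
    solution_sets = [solveset(parse_equation(s), x, domain=S.Reals)
                     for s in steps]
    return all(sols == solution_sets[0] for sols in solution_sets[1:])

# A correct two-step solution of 2x + 3 = 7 ...
print(steps_preserve_solutions(["2*x + 3 = 7", "2*x = 4", "x = 2"]))   # True
# ... and one with an algebra error in the middle step.
print(steps_preserve_solutions(["2*x + 3 = 7", "2*x = 10", "x = 5"]))  # False
```

A production scoring model would also have to handle multivariable work, malformed input, and partial credit for partially correct step sequences, all of which this sketch ignores.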
- http://www.ets.org/Media/Research/pdf/RM-11-12.pdf