The m-rater Engine: Introduction to the Automated Scoring of Mathematics Items

Author(s): Fife, James H.
Publication Year: 2017
Report Number: RM-17-02
Source: ETS Research Memorandum
Document Type: Report
Page Count: 46
Subject/Key Words: m-rater, Automated Scoring and Natural Language Processing, Scoring Rubrics, Test Items, Cognitively Based Assessment of, for, and as Learning (CBAL), Graph Items, Mathematics Assessment, Constructed-Response Items, Equations (Mathematics)

Abstract

This report provides an introduction to the m-rater engine, ETS's automated scoring engine for computer-delivered constructed-response mathematics items in which the response is a number, an equation (or mathematical expression), or a graph. The introduction is intended to acquaint the reader with the types of items that m-rater can score, the requirements for authoring these items on screen, the methods m-rater uses to score them, and the features the items must possess to be scored reliably. M-rater can score three types of responses (numeric responses, equations, and graphs), and each type is considered separately. Each type of response has certain technical requirements that must be satisfied and certain considerations for scoring; these requirements are discussed in detail in the Numeric Response Items, Equation Items, and Graph Items sections. Also discussed are generating-examples items: items with many correct solutions, for which the student must produce one correct solution. The final chapter explains how to convert scoring rubrics into concepts and scoring rules, a necessary step in building m-rater scoring models.
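To give a flavor of what automated equation scoring involves, the sketch below tests whether a student's equation is mathematically equivalent to a key equation. It is a hypothetical illustration built on the sympy library, not m-rater's actual method; the function name equations_equivalent and the equivalence criterion (the two equations, written as "left side minus right side," differ only by a nonzero constant factor) are assumptions made for this example.

from sympy import simplify
from sympy.parsing.sympy_parser import parse_expr

def equations_equivalent(student: str, key: str) -> bool:
    # Parse each side of "lhs = rhs" into a symbolic expression.
    s_lhs, s_rhs = (parse_expr(side) for side in student.split("="))
    k_lhs, k_rhs = (parse_expr(side) for side in key.split("="))
    # Treat each equation as "lhs - rhs = 0". The equations define the
    # same relation if the two left-hand sides differ only by a nonzero
    # constant factor.
    ratio = simplify((s_lhs - s_rhs) / (k_lhs - k_rhs))
    return ratio.is_constant() and ratio != 0

print(equations_equivalent("y = 2*x + 1", "2*y - 4*x - 2 = 0"))  # True
print(equations_equivalent("y = 2*x + 1", "y = x + 1"))          # False

A production scoring model must go beyond such a check, for example enforcing a rubric's required form for the response or applying tolerances to numeric coefficients; those considerations are the subject of the report's Equation Items section.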
