ETS's m-rater™ scoring engine scores open-ended mathematical responses, such as those that take the form of mathematical expressions, equations, or graphs. Dating from the late 1990s, the m-rater scoring engine is one of the first automated scoring capabilities ETS developed. The scores it generates demonstrate very strong agreement with human ratings.
The m-rater scoring engine evaluates the correctness of a mathematical expression by determining symbolically, using a computer algebra system, whether the expression is equivalent to the correct response. This enables the engine to identify expressions equivalent to the key regardless of the form in which they are written, and to assign credit as appropriate. For instance, partial credit may be assigned if a linear equation was supposed to be provided in slope-intercept form but was instead provided in a different, equivalent form. Scoring mathematical responses by string matching or text-based patterns is far more limited and error-prone than establishing equivalence symbolically.
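To illustrate the general technique (m-rater's own implementation is not public), the sketch below uses SymPy, an open-source computer algebra system of the kind mentioned in the reports listed below. The function names and the slope-intercept check are illustrative assumptions, not m-rater's API.

    # A minimal sketch, not m-rater's actual code: symbolic equivalence
    # checking with the open-source SymPy computer algebra system.
    import sympy
    from sympy.parsing.sympy_parser import parse_expr

    x, y = sympy.symbols("x y")

    def is_equivalent(response: str, key: str) -> bool:
        # Two expressions are equivalent if their difference simplifies to zero.
        return sympy.simplify(parse_expr(response) - parse_expr(key)) == 0

    def in_slope_intercept_form(equation: str) -> bool:
        # True only if the equation literally reads y = <expression linear in x>.
        lhs, rhs = (parse_expr(side) for side in equation.split("="))
        return lhs == y and sympy.degree(rhs, x) <= 1

    # The response "2*(x + 3)" earns credit even though the key is "2*x + 6":
    assert is_equivalent("2*(x + 3)", "2*x + 6")

    # An equivalent equation in the wrong form might earn only partial credit:
    assert in_slope_intercept_form("y = 2*x + 3")
    assert not in_slope_intercept_form("y - 2*x = 3")

Comparing the simplified difference to zero, rather than comparing strings, is what lets every equivalent form of the key receive the same credit.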
Similarly, graph items can be scored against a key that specifies constraints on the response entered with the graph editor. For some items, many different graphs constitute valid answers, and the m-rater scoring engine can score all of these variants using a single, elegant specification of the key.
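For illustration only (m-rater's key specification language is not public), a constraint-based key might look like the following sketch. Suppose an item accepts any line through the point (1, 3) with positive slope: the key states those two constraints once instead of enumerating the infinitely many acceptable lines.

    # A hypothetical constraint-based key for a graph item, for illustration
    # only (m-rater's key language is not public). The item accepts any line
    # through the point (1, 3) with positive slope; the key checks those two
    # constraints instead of enumerating every acceptable line.
    def score_line(slope: float, intercept: float) -> int:
        passes_through_point = abs(slope * 1 + intercept - 3) < 1e-9
        has_positive_slope = slope > 0
        return 1 if passes_through_point and has_positive_slope else 0

    assert score_line(2.0, 1.0) == 1   # y = 2x + 1 passes through (1, 3)
    assert score_line(-1.0, 4.0) == 0  # hits (1, 3) but slopes downward
    assert score_line(2.0, 0.0) == 0   # positive slope but misses the point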
Of course, many math items are written to elicit short, text-based responses; such items may be better suited to the c-rater™ engine.
Featured Publications
Below are some recent or significant publications that our researchers have authored on the subject of automated scoring of mathematical content.
2017
- The m-rater™ Engine: Introduction to the Automated Scoring of Mathematics Items
J. H. Fife
ETS Research Memorandum RM-17-02
This report provides an introduction to the m-rater™ engine, ETS's automated scoring engine for computer-delivered constructed-response items when the response is a number, an equation (or mathematical expression), or a graph. This introduction is intended to acquaint the reader with the types of items that m-rater can score, the requirements for authoring these items on-screen, the methods m-rater uses to score these items, and the features these items must possess to be reliably scored.
2013
- Automated Scoring of Mathematics Tasks in the Common Core Era: Enhancements to m-rater™ in Support of CBAL® Mathematics and the Common Core Assessments
J. H. Fife
ETS Research Report RR-13-26
This report describes some improvements made to the m-rater™ scoring engine in 2012: (a) the numeric equivalence scoring engine was augmented with an open-source computer algebra system, (b) the graphing of smooth curves in the graph editor was improved, (c) the graph editor was modified to give assessment specialists the option of requiring examinees to set the viewing window, and (d) m-rater advisories were implemented in situations in which construct-irrelevant errors in the response may prevent m-rater from scoring the response.
2012
- Difficulty Modeling and Automatic Generation of Quantitative Items: Recent Advances and Possible Next Steps
E. A. Graf & J. H. Fife
In M. Gierl & T. Haladyna (Eds.), Automatic Item Generation: Theory and Practice (pp. 157–180)
This ETS-authored chapter is part of a book volume that aims to summarize current knowledge about the field of automatic item generation. The chapter appears in Part III of the volume, which covers psychological and substantive characteristics of generated items.
2011
- Automated Scoring of Constructed-Response Literacy and Mathematics Items
R. E. Bennett
Publisher: Arabella Philanthropic Investment Advisors
The Race to the Top assessment consortia have indicated an interest in using "automated scoring" to more efficiently grade student answers. This white paper identifies potential uses and challenges around automated scoring to help the consortia make better-informed planning and implementation decisions.
- Automated Scoring of CBAL® Mathematics Tasks With m-rater™
J. H. Fife
ETS Research Memorandum RM-11-12
The goal of the CBAL® research initiative is to develop a research-based assessment system that provides accountability testing and formative testing in an environment that is a worthwhile learning experience in and of itself. This report describes the m-rater™-related automated scoring work done in CBAL Mathematics in 2009.
2005
- Online Assessment in Mathematics
B. Sandene, R. E. Bennett, J. Braswell, & A. Oranje
In Online Assessment in Mathematics and Writing: Reports from the NAEP Technology-Based Assessment Project (NCES 2005–457)
U.S. Department of Education, National Center for Education Statistics
The Math Online study is one of three field investigations in the National Assessment of Educational Progress Technology-Based Assessment Project, which explores the use of new technology in administering NAEP.