Evaluations of Automated Scoring Systems in Practice
- Author(s):
- Rotou, Ourania; Rupp, Andre A.
- Publication Year:
- 2020
- Report Number:
- RR-20-10
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 18
- Subject/Key Words:
- Human Raters, Constructed-Response Tests, Automated Scoring, Natural Language Processing, Large-Scale Assessment, Evaluation Design
Abstract
This research report describes the process of evaluating the "deployability" of automated scoring (AS) systems from the perspective of large-scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation whose analyses take into account the specific purpose of AS, the test design, the quality of human scores, the data collection design needed to train and evaluate the AS model, and the application of statistics and evaluation criteria. Finally, it notes that an effective evaluation of an AS system requires professional judgment coupled with statistical and psychometric knowledge and an understanding of risk assessment and business metrics.
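One widely used statistic for comparing automated scores against human scores in evaluations like those the abstract describes is quadratic weighted kappa (QWK), which penalizes larger human-machine score discrepancies more heavily than smaller ones. The sketch below is illustrative only; the score scale and example data are hypothetical and not drawn from the report.

```python
from collections import Counter

def quadratic_weighted_kappa(human, machine, num_categories):
    """Quadratic weighted kappa between two integer score vectors
    on the scale 0 .. num_categories - 1."""
    n = len(human)
    # Observed agreement matrix: counts of (human score, machine score) pairs
    obs = [[0.0] * num_categories for _ in range(num_categories)]
    for h, m in zip(human, machine):
        obs[h][m] += 1
    # Marginal score distributions for the chance-expected matrix
    hc, mc = Counter(human), Counter(machine)
    num = den = 0.0
    for i in range(num_categories):
        for j in range(num_categories):
            # Quadratic disagreement weight, 0 on the diagonal
            w = (i - j) ** 2 / (num_categories - 1) ** 2
            expected = hc[i] * mc[j] / n
            num += w * obs[i][j]
            den += w * expected
    return 1.0 - num / den

# Hypothetical human vs. machine scores on a 0-4 scale
human = [0, 1, 2, 3, 4, 2, 3, 1, 2, 4]
machine = [0, 1, 2, 3, 3, 2, 3, 2, 2, 4]
print(round(quadratic_weighted_kappa(human, machine, 5), 3))  # → 0.926
```

In practice, QWK would be one of several agreement statistics (alongside exact agreement, correlations, and subgroup analyses) weighed against operational criteria before an AS system is deemed deployable.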
- https://doi.org/10.1002/ets2.12293