
Distractor Analysis for Multiple-Choice Tests: An Empirical Study With International Language Assessment Data

Author(s):
Haberman, Shelby J.; Liu, Yang; Lee, Yi-Hsuan
Publication Year:
2019
Report Number:
RR-19-39
Source:
ETS Research Report
Document Type:
Report
Page Count:
16
Subject/Key Words:
Item Response Theory (IRT), Nominal Response Model (NRM), Model Fit, Test Scoring, Distractors (Tests), Multiple Choice Items, Test Reliability, Scale Scores, Two-Parameter Logistic Model

Abstract

Distractor analyses are routinely conducted in educational assessments with multiple-choice items. In this research report, we focus on three item response models for distractors: (a) the traditional nominal response (NR) model; (b) a combination of a two-parameter logistic (2PL) model for item scores and an NR model for distractor selection given an incorrect response; and (c) a model in which the item score satisfies a 2PL model and distractor selection and proficiency are conditionally independent given that an incorrect response is selected. Model comparisons involve generalized residuals, information measures, scale scores, and reliability estimates. To illustrate the methodology, the three models are compared using data from an international, high-stakes assessment of the proficiency of nonnative speakers of a single target language.
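For orientation, a minimal sketch of the item response functions involved, written in standard textbook parameterizations (the report's exact parameterization and notation may differ). The 2PL model governs the scored response X_i, the NR model governs the selected option Y_i among the K_i response options, and the third line expresses the conditional-independence assumption of model (c).

P(X_i = 1 \mid \theta) = \frac{\exp\{a_i(\theta - b_i)\}}{1 + \exp\{a_i(\theta - b_i)\}}

P(Y_i = k \mid \theta) = \frac{\exp(a_{ik}\theta + c_{ik})}{\sum_{h=1}^{K_i} \exp(a_{ih}\theta + c_{ih})}

P(Y_i = k \mid X_i = 0, \theta) = P(Y_i = k \mid X_i = 0) \qquad \text{(model (c): distractor choice does not depend on proficiency given an incorrect response)}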
