An Empirical Examination of the IRT Information in Polytomously Scored NAEP Reading Items

Author(s):
Donoghue, John R.
Publication Year:
1993
Report Number:
RR-93-12
Source:
ETS Research Report
Document Type:
Report
Page Count:
30
Subject/Key Words:
Constructed Response Items, Item Response Models, Item Types, National Assessment of Educational Progress (NAEP), Scoring, Weighted Scores

Abstract

One natural question about polytomous items (items whose responses can be scored as ordered categories) concerns the information they contain: how much more information do polytomous items yield than dichotomous items? Using the generalized partial credit IRT model, polytomous items from the 1991 field test of the NAEP Reading Assessment were calibrated together with multiple-choice and short open-ended items, and the expected information of each item type was computed. On average, four-category polytomous items yielded 2.1 to 3.1 times as much IRT information as dichotomous items. These results provide limited support for the ad hoc rule of weighting a k-category polytomous item the same as k-1 dichotomous items when computing total scores. Comparing average values, polytomous items provided more information across the entire proficiency range. They provided the most information about examinees of moderately high proficiency: the information function peaked at proficiency values of 1.0 to 1.5, whereas the mean of the population proficiency distribution was 0. When the extended open-ended items were scored dichotomously, their information decreased sharply, although they still provided more expected information than the other response formats. For reference, a derivation of the information function for the generalized partial credit model is included.
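As context for the information function the report derives: under the generalized partial credit model, an item's information at proficiency theta reduces to D^2 a^2 times the variance of the category score at that proficiency, and the expected information averages this over the population distribution. The sketch below illustrates those two computations; the function names, item parameter values, and the simple grid quadrature over a standard normal population are illustrative assumptions, not taken from the report itself.

```python
import numpy as np

def gpcm_probs(theta, a, b, D=1.7):
    """Category response probabilities under the generalized partial
    credit model. b holds the m step parameters b_1..b_m; category 0
    contributes an empty (zero) sum."""
    z = np.concatenate(([0.0], np.cumsum(D * a * (theta - np.asarray(b)))))
    z -= z.max()                      # guard against overflow
    ez = np.exp(z)
    return ez / ez.sum()

def gpcm_information(theta, a, b, D=1.7):
    """Item information at theta: D^2 a^2 times the variance of the
    category score K, i.e. D^2 a^2 * (E[K^2] - E[K]^2)."""
    p = gpcm_probs(theta, a, b, D)
    k = np.arange(len(p))
    return (D * a) ** 2 * (k @ (p * k) - (k @ p) ** 2)

def expected_information(info_fn, n=81):
    """Expected information over a standard normal proficiency
    distribution, approximated on an evenly spaced grid."""
    theta = np.linspace(-4.0, 4.0, n)
    w = np.exp(-theta ** 2 / 2.0)
    w /= w.sum()
    return float(np.sum(w * np.array([info_fn(t) for t in theta])))

# Illustrative comparison (hypothetical parameters): a four-category
# item (three steps) versus a dichotomous item with the same slope.
# With a single step, the model reduces to the 2PL, whose information
# is D^2 a^2 p(1-p).
poly = expected_information(lambda t: gpcm_information(t, a=0.8, b=[-0.5, 0.3, 1.0]))
dich = expected_information(lambda t: gpcm_information(t, a=0.8, b=[0.3]))
print(poly / dich)
```

With parameters like these, the four-category item's expected information exceeds the dichotomous item's by a factor in the general vicinity of the 2.1 to 3.1 range reported above, though the exact ratio depends entirely on the parameter values chosen.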
