Constructed Response and Differential Item Functioning: A Pragmatic Approach
- Author(s):
- Dorans, Neil J.; Schmitt, Alicia P.
- Publication Year:
- 1991
- Report Number:
- RR-91-47
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 49
- Subject/Key Words:
- Constructed-Response Tests, Differential Item Functioning (DIF), Test Bias
Abstract
Differential item functioning (DIF) assessment attempts to identify items or item types for which subpopulations of examinees exhibit performance differentials that are not consistent with the performance differentials typically seen for those subpopulations on collections of items that purport to measure a common construct. DIF assessment requires a rule for scoring items and a matching variable on which different subpopulations can be viewed as comparable for purposes of assessing their performance on items. Typically, DIF is operationally defined as a difference in item performance between subpopulations, e.g., Black Americans and Whites, that exists after members of the different subpopulations have been matched on some total score. Constructed-response items move beyond traditional multiple-choice items, for which DIF methodology is well defined, towards item types involving selection or identification, reordering or rearrangement, substitution or correction, completion, construction, and performance or presentation. This paper defines DIF, describes two standard procedures for measuring DIF, and indicates how DIF might be assessed for certain constructed-response item types. The description of DIF assessment presented in this paper is applicable to computer-delivered constructed-response items as well as paper-and-pencil items. (50pp.)
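The abstract's operational definition of DIF, comparing item performance between subpopulations after matching on a total score, can be made concrete with a small sketch. The abstract does not name the two standard procedures it describes; the example below illustrates one widely used matched-comparison index in the standardization family (STD P-DIF), which weights focal-minus-reference differences in proportion correct by the focal-group count at each matched score level. All function and variable names here are hypothetical choices for illustration and are not taken from the report itself.

```python
import numpy as np

def std_p_dif(item_correct, total_score, group, focal_label, reference_label):
    """Sketch of a standardization DIF index (STD P-DIF).

    item_correct : 0/1 array of scores on the studied item
    total_score  : matching variable (e.g., total test score)
    group        : array of subpopulation labels for each examinee
    """
    item_correct = np.asarray(item_correct, dtype=float)
    total_score = np.asarray(total_score)
    group = np.asarray(group)

    num = 0.0
    den = 0.0
    for m in np.unique(total_score):                  # each matched score level
        focal = (group == focal_label) & (total_score == m)
        ref = (group == reference_label) & (total_score == m)
        n_f, n_r = focal.sum(), ref.sum()
        if n_f == 0 or n_r == 0:
            continue                                  # level has no basis for comparison
        p_f = item_correct[focal].mean()              # focal proportion correct at level m
        p_r = item_correct[ref].mean()                # reference proportion correct at level m
        num += n_f * (p_f - p_r)                      # focal-group weighting
        den += n_f
    return num / den if den else float("nan")
```

Values near zero suggest comparable item performance for matched examinees; large positive or negative values flag the item for review. This is only one of several DIF statistics in use (the Mantel-Haenszel procedure is another common choice), and a production analysis would also handle polytomous scoring rules of the kind required for constructed-response items.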
- DOI:
- http://dx.doi.org/10.1002/j.2333-8504.1991.tb01414.x