An Evaluation of Three Approximate Item Response Theory Models for Equating Test Scores
- Author(s):
- Douglass, James B.; Marco, Gary L.; Wingersky, Marilyn S.
- Publication Year:
- 1985
- Report Number:
- RR-85-46
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 77
- Subject/Key Words:
- Equated Scores, Item Response Theory (IRT), Mathematical Models, Scholastic Aptitude Test (SAT), Test Theory
Abstract
The primary purpose of this study was to determine the extent to which three item response theory (IRT) models could be used to approximate the three-parameter logistic model in estimating item parameters and in equating test scores. These approximate models were less expensive to apply and in some cases used less data than the full three-parameter model. The results of the study were as follows: (1) item calibrations based on twentieths were closer to the true values and to LOGIST estimates than item calibrations based on fifths; (2) the equating results based on twentieths, however, were generally no more accurate than those based on fifths; (3) the three-parameter model using coarse groupings yielded highly accurate score conversions in equating a test to itself, more accurate in fact than the full three-parameter models studied by Petersen, Cook, and Stocking; and (4) all of the approximate models yielded very accurate equating results. A follow-up analysis indicated that these unexpected equating results were due in large part to the indirect method used to place item parameter estimates on scale through existing score conversions derived from conventional equating methods. The success of the approximate models raises a question about the adequacy of equating a test to itself as a criterion for evaluating equating results. Further research is recommended before any of the approximate models are used operationally. (77pp.)
- Request Copy (specify title and report number, if any)
- http://dx.doi.org/10.1002/j.2330-8516.1985.tb00131.x