A Feasibility Study of On-the-Fly Item Generation in Adaptive Testing

Author(s):
Bejar, Isaac I.; Lawless, René R.; Morley, Mary E.; Wagner, Michael E.; Bennett, Randy E.; Revuelta, Javier
Publication Year:
2002
Report Number:
RR-02-23
GREB-98-12P
Source:
Document Type:
Subject/Key Words:
Adaptive testing, computer-adaptive testing (CAT), item response theory (IRT), expected response function, automated item generation, quantitative reasoning

Abstract

The goal of this study was to assess the feasibility of an approach to adaptive testing based on item models. A simulation study was designed to explore the effects of item modeling on score precision and bias, and two experimental tests were administered: an on-the-fly adaptive quantitative-reasoning test and a linear test. Results of the simulation study showed that, under different levels of isomorphicity, there was no bias, but precision of measurement was eroded, especially in the middle range of the true-score scale. However, the correlation of adaptive test scores with operational Graduate Record Examinations (GRE) test scores matched the test-retest correlation observed under operational conditions. Analyses of item functioning on the linear forms suggested a high level of isomorphicity across items within models. The study provides a promising first step toward significant cost savings and theoretical improvements in test-creation methodology for educational assessment.
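To make the simulation question concrete, the sketch below is an illustrative stand-in rather than the authors' design: it assumes 2PL items, treats a normal perturbation of item difficulty within each item model (a hypothetical sigma_iso parameter) as a rough proxy for the level of isomorphicity, and scores examinees by EAP using only the model-level parameters, so that bias and RMSE can be compared across perturbation levels.

```python
# Illustrative only: within-model perturbation of difficulty (sigma_iso) stands in
# for imperfect isomorphicity; scoring uses the unperturbed model-level parameters.
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def simulate(n_examinees=2000, n_items=28, sigma_iso=0.3):
    """Return (bias, rmse) of EAP ability estimates when responses are generated
    from perturbed item instances but scored with model-level parameters."""
    theta = rng.normal(0.0, 1.0, n_examinees)
    a_model = rng.uniform(0.8, 2.0, n_items)           # model-level discriminations
    b_model = rng.normal(0.0, 1.0, n_items)            # model-level difficulties
    # Instance-level difficulties: each examinee sees a perturbed "isomorph"
    b_inst = b_model + rng.normal(0.0, sigma_iso, (n_examinees, n_items))
    p = p_correct(theta[:, None], a_model[None, :], b_inst)
    x = (rng.uniform(size=p.shape) < p).astype(int)    # simulated responses
    # EAP scoring on a quadrature grid, using the model-level parameters only
    grid = np.linspace(-4.0, 4.0, 81)
    prior = np.exp(-0.5 * grid**2)
    pg = p_correct(grid[:, None], a_model[None, :], b_model[None, :])  # (grid, items)
    loglik = x @ np.log(pg).T + (1 - x) @ np.log(1.0 - pg).T           # (examinee, grid)
    loglik -= loglik.max(axis=1, keepdims=True)        # guard against underflow
    post = np.exp(loglik) * prior
    post /= post.sum(axis=1, keepdims=True)
    theta_hat = post @ grid
    err = theta_hat - theta
    return err.mean(), np.sqrt((err**2).mean())

for sigma in (0.0, 0.2, 0.4):
    bias, rmse = simulate(sigma_iso=sigma)
    print(f"sigma_iso={sigma:.1f}  bias={bias:+.3f}  rmse={rmse:.3f}")
```

Under these assumptions, bias should stay near zero as the perturbation grows while RMSE increases, mirroring the pattern of preserved unbiasedness but eroded precision that the abstract describes.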
