A Feasibility Study of On-the-Fly Item Generation in Adaptive Testing
- Author(s):
- Bejar, Isaac I.; Lawless, Rene; Morley, Mary E.; Wagner, Michael; Bennett, Randy Elliot; Revuelta, Javier
- Publication Year:
- 2002
- Report Number:
- RR-02-23
- Source:
- ETS Research Report
- Document Type:
- Report
- Page Count:
- 44
- Subject/Key Words:
- Computerized Adaptive Testing (CAT), Item Response Theory (IRT), Graduate Record Examinations (GRE), Expected Response Function, Automated Item Generation, Quantitative Reasoning
Abstract
The goal of this study was to assess the feasibility of an approach to adaptive testing based on item models. A simulation study was designed to explore the effects of item modeling on score precision and bias, and two experimental tests were administered: an on-the-fly adaptive quantitative-reasoning test and a linear test. Results of the simulation study showed that, under different levels of isomorphicity, there was no bias, but precision of measurement was eroded, especially in the middle range of the true-score scale. However, the correlation of adaptive test scores with operational Graduate Record Examinations (GRE) test scores matched the test-retest correlation observed under operational conditions. Analyses of item functioning on the linear forms suggested a high level of isomorphicity across items within models. The current study provides a promising first step toward significant cost and theoretical improvements in test creation methodology for educational assessment.
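The following is a minimal sketch, not the authors' code, of the kind of simulation the abstract describes: adaptive testing in which each administered item is an isomorph generated on the fly from an item model. It assumes a 2PL IRT model, EAP scoring on a quadrature grid, and an illustrative perturbation parameter (SIGMA_ISO) controlling the degree of isomorphicity; none of these values or design choices are taken from the report.

```python
# Sketch of on-the-fly item generation in CAT: isomorph parameters are perturbed
# around the item model's expected parameters, responses are generated from the
# isomorph, and scoring uses only the model-level (expected) parameters.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_MODELS = 200        # pool of item models (assumed)
TEST_LENGTH = 28      # items per adaptive test (assumed)
SIGMA_ISO = 0.15      # SD of isomorph parameter perturbations (assumed)
N_SIMULEES = 2000

# Model-level (expected) 2PL parameters: discrimination a, difficulty b
a_model = rng.lognormal(mean=0.0, sigma=0.3, size=N_MODELS)
b_model = rng.normal(0.0, 1.0, size=N_MODELS)

theta_grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * theta_grid**2)          # standard-normal prior for EAP
prior /= prior.sum()

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def simulate_one(theta_true):
    used = np.zeros(N_MODELS, dtype=bool)
    posterior = prior.copy()
    for _ in range(TEST_LENGTH):
        theta_hat = np.sum(theta_grid * posterior)     # current EAP estimate
        # Select the unused item model with maximum information at theta_hat,
        # based on the model-level (expected) parameters.
        crit = np.where(used, -np.inf, info(theta_hat, a_model, b_model))
        j = int(np.argmax(crit))
        used[j] = True
        # Generate an isomorph on the fly by perturbing the model parameters.
        a_iso = a_model[j] * np.exp(rng.normal(0, SIGMA_ISO))
        b_iso = b_model[j] + rng.normal(0, SIGMA_ISO)
        # The simulee responds according to the isomorph's actual parameters...
        x = rng.random() < p_correct(theta_true, a_iso, b_iso)
        # ...but scoring uses only the model-level parameters.
        p = p_correct(theta_grid, a_model[j], b_model[j])
        posterior *= p if x else (1.0 - p)
        posterior /= posterior.sum()
    return np.sum(theta_grid * posterior)              # final EAP score

thetas = rng.normal(0, 1, N_SIMULEES)
estimates = np.array([simulate_one(t) for t in thetas])
errors = estimates - thetas
print(f"bias = {errors.mean():+.3f}, RMSE = {np.sqrt((errors**2).mean()):.3f}")
```

In a sketch like this, increasing SIGMA_ISO mimics a lower degree of isomorphicity and should mainly inflate the RMSE rather than the bias, which parallels the pattern the abstract reports for the simulation study.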
- DOI:
- http://dx.doi.org/10.1002/j.2333-8504.2002.tb01890.x