Fitting New Measurement Models to GRE General Test Constructed-Response Item Data
- Bennett, Randy E.; Sebrechts, Marc M.; Yamamoto, Kentaro
- Publication Year:
- Report Number:
- Document Type:
- Subject/Key Words:
- Algebra; constructed responses; expert systems; item analysis; models; test theory
This exploratory study applied two new cognitively sensitive measurement models to constructed-response quantitative data. The models, intended to produce qualitative characterizations of examinee performance, were fitted to algebra word problem solutions produced by examinees taking the GRE General Test. Two types of response data were modeled--error diagnoses and partial-credit scores--both produced by an expert system. Error diagnoses, analyzed using Yamamoto's (1989a) Hybrid model, detected a class of examinees who tended to miss important pieces of the problem solution but made relatively few errors of other types. Group members were of low quantitative proficiency overall, though considerable variability was evident. Comparisons with matched examinees whose response patterns were better captured by the unidimensional IRT model suggested subtle differences in error frequency rather than sharp qualitative distinctions. In contrast with the error data, partial-credit scores modeled using Rock's (Rock & Pollack, 1987) HOST procedure did not fit well, in part owing to limitations of the task theory being tested. Implications for the development of refined task and error theories, improvements to expert-system scoring procedures, and response modeling are discussed.
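The specifics of the Hybrid model are given in Yamamoto (1989a); as a rough, purely illustrative sketch (not the report's estimation code), its core idea--a mixture of a unidimensional IRT component and discrete latent error classes--can be written out for a single response pattern. All parameter values and names below are hypothetical.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def hybrid_likelihood(x, a, b, class_probs, class_mix, irt_mix, n_quad=41):
    """Marginal likelihood of a 0/1 response pattern x under a
    simplified Hybrid-style mixture: one IRT component (theta ~ N(0,1),
    integrated on a quadrature grid) plus discrete latent error classes.

    x           : 0/1 responses, shape (n_items,)
    a, b        : 2PL discrimination/difficulty, shape (n_items,)
    class_probs : per-class item-success probabilities, (n_classes, n_items)
    class_mix   : class mixing weights, (n_classes,)
    irt_mix     : weight of the IRT component; irt_mix + class_mix.sum() == 1
    """
    # IRT component: numerically integrate over the latent trait theta
    theta = np.linspace(-4.0, 4.0, n_quad)
    w = np.exp(-0.5 * theta ** 2)
    w /= w.sum()                                # discretized N(0,1) weights
    p = irt_2pl(theta[:, None], a, b)           # (n_quad, n_items)
    lik_theta = np.prod(np.where(x == 1, p, 1.0 - p), axis=1)
    irt_part = irt_mix * np.sum(w * lik_theta)

    # Latent-class component: items independent Bernoulli within each class
    lik_class = np.prod(np.where(x == 1, class_probs, 1.0 - class_probs), axis=1)
    class_part = np.sum(class_mix * lik_class)
    return irt_part + class_part
```

In a fitted Hybrid model, examinees whose patterns are more likely under a latent class than under the IRT component (as with the "missing solution pieces" group described above) are flagged as better described by that class.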