Fitting New Measurement Models to GRE General Test Constructed-Response Item Data

Author(s):
Bennett, Randy Elliot; Sebrechts, Marc M.; Yamamoto, Kentaro
Publication Year:
1991
Report Number:
RR-91-60
Source:
ETS Research Report
Document Type:
Report
Page Count:
62
Subject/Key Words:
Graduate Record Examinations Board, Algebra, Constructed Responses, Expert Systems, General Test (GRE), Graduate Record Examinations (GRE), Item Analysis, Models, Test Theory

Abstract

This exploratory study applied two new cognitively sensitive measurement models to constructed-response quantitative data. The models, intended to produce qualitative characterizations of examinee performance, were fitted to algebra word problem solutions produced by examinees taking the GRE General Test. Two types of response data were modeled--error diagnoses and partial-credit scores--both produced by an expert system. Error diagnoses, analyzed using Yamamoto's (1989a) Hybrid model, detected a class of examinees who tended to miss important pieces of the problem solution but made relatively few errors of other types. Group members were of low quantitative proficiency overall, though considerable variability was evident. Comparisons with matched examinees whose response patterns were better captured by the unidimensional IRT model suggested subtle differences in error frequency rather than sharp qualitative distinctions. In contrast with the error data, partial-credit scores modeled using Rock's (Rock & Pollack, 1987) HOST procedure did not fit well, in part owing to limitations of the task theory being tested. Implications for the development of refined task and error theories, improvements to expert-system scoring procedures, and response modeling are discussed.