
A Task Type for Measuring the Representational Component of Quantitative Proficiency

Author(s): Bennett, Randy Elliot; Rock, Donald A.
Publication Year: 1995
Report Number: RR-95-19
Source: ETS Research Report
Document Type: Report
Page Count: 58
Subject/Key Words: Graduate Record Examinations Board, Computer Assisted Testing, Graduate Record Examinations (GRE), Item Types, Quantitative Ability, Representational Competence

Abstract

Two computer-based categorization tasks were developed and pilot tested. In Study I, the task asked examinees to sort mathematical word-problem stems according to prototypes. Results showed that examinees who sorted well tended to have higher GRE General Test scores and college grades than those who sorted less proficiently. Examinees generally preferred this task to multiple-choice items like those found on the General Test quantitative section and felt it was a fairer measure of their ability to succeed in graduate school. In Study II, the task involved rating the similarity of item pairs. Both mathematics test developers and students participated, and the results were analyzed by individual-differences multidimensional scaling. Experts produced more scalable ratings overall and attended primarily to two dimensions. Students used the same two dimensions with the addition of a third. Students whose ratings resembled the experts' in the dimensions used tended to have higher admissions test scores than those who used other criteria. Finally, examinees preferred multiple-choice questions to the rating task and felt that the former was a fairer indicator of their scholastic abilities. The major implication of this work is the identification of a new task type for admissions tests, as well as for instructional assessment products that might help lower-scoring examinees localize and remediate problem-solving difficulties.
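For readers unfamiliar with the analysis named above, the following is a minimal illustrative sketch of scaling pairwise similarity ratings with metric multidimensional scaling in Python. It is not the report's procedure: Study II used individual-differences multidimensional scaling, which additionally fits per-rater dimension weights, while scikit-learn provides only plain MDS, used here as a simplified stand-in. All data, matrix sizes, and variable names below are invented for illustration.

```python
# Sketch: recover a 2-D spatial configuration of items from pairwise
# similarity ratings via metric MDS (simplified stand-in for the
# individual-differences MDS used in Study II).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

n_items = 10  # hypothetical number of item stems
# Hypothetical similarity ratings on a 1-9 scale, symmetrized so that
# rating(i, j) == rating(j, i).
ratings = rng.integers(1, 10, size=(n_items, n_items)).astype(float)
ratings = (ratings + ratings.T) / 2
np.fill_diagonal(ratings, 9)  # an item is maximally similar to itself

# MDS operates on dissimilarities, so invert the similarity scale.
dissimilarity = ratings.max() - ratings
np.fill_diagonal(dissimilarity, 0)

# Fit a two-dimensional configuration, echoing the two dimensions the
# expert raters primarily attended to.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)  # one (x, y) coordinate per item
```

In an individual-differences analysis, each rater contributes a separate dissimilarity matrix and the model estimates how heavily each rater weights each shared dimension; comparing those weights is what allows statements such as "students used the same two dimensions plus a third."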
