Psychometric and Cognitive Functioning of an Underdetermined Computer-Based Response Type for Quantitative Reasoning

Author(s):
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Singley, Mark K.; Rock, Donald A.; Katz, Irvin R.; Nhouyvanisvong, Adisack
Publication Year:
1998
Report Number:
RR-98-29
GREB-95-11R
Subject/Key Words:
Generating examples (GE), reasoning ability, item analysis, differential item difficulty, quantitative tests

Abstract

This study evaluated the psychometric and cognitive functioning of a new computer-delivered response type for measuring quantitative reasoning skill. This open-ended, automatically scorable response type, called generating examples (GE), presents underdetermined problems that can have many right answers. Two GE tests were randomly spiraled among a group of paid volunteers; the tests differed in the manipulation of specific item features hypothesized to affect difficulty. We performed both within-group correlational and between-group experimental analyses addressing internal consistency reliability, relations with external criteria, features contributing to item difficulty, adverse impact, and examinee perceptions. Results showed that GE scores were reasonably reliable but only moderately related to the GRE quantitative section, suggesting that the two tests might be tapping somewhat different skills. In the difficulty analyses, two of the three manipulated item features had the predicted effect: asking examinees to supply more than one correct answer and asking them to identify whether an item was solvable. The impact analyses detected no significant gender differences independent of those associated with the General Test. Finally, examinees were evenly divided on whether GE items provided a fairer indicator of their ability than multiple-choice items, but, as in past studies, they overwhelmingly preferred to take more conventional questions.
