Examining the Validity of a Computer-Based Generating-Explanations Test in an Operational Setting

Author(s):
Bennett, Randy Elliot; Rock, Donald A.
Publication Year:
1998
Report Number:
RR-97-18
GREB-93-01P
Subject/Key Words:
GRE General Test; divergent thinking; convergent thinking; generating explanations (GE); discriminant analysis; predictive validity; constructed-response items; computer-assisted testing

Abstract

Generating explanations (GE) is a computer-delivered item type that presents a situation and asks the examinee to pose as many plausible explanations for it as possible. Previous research suggests that GE measures a divergent thinking ability largely independent of the convergent skills tapped by the GRE General Test. This study was conducted to determine whether prior GE validity results generalized to the GRE candidate population, how population groups performed, what effects partial-credit modeling might have on validity, and what problems were associated with operational administration. Validity results generally supported earlier findings: GE was reliable but only marginally related to the General Test, and it made significant (though small) independent contributions to the explanation of relevant criteria. With respect to population groups, GE produced smaller gender and ethnic group differences than the General Test did and showed the same relations to outside criteria across groups, suggesting that it measured similar skills in each population. Attempts to model GE responses on a partial-credit IRT scale succeeded but produced no improvement in relations with external criteria over those obtained by summing raw item scores. Finally, interviews conducted with examinees to detect potential delivery problems suggested that the directions needed to be shortened.