Examining the Validity of a Computer-Based Generating-Explanations Test in an Operational Setting

Author(s):
Bennett, Randy Elliot; Rock, Donald A.
Publication Year:
1997
Report Number:
RR-97-18
Source:
ETS Research Report
Document Type:
Report
Page Count:
58
Subject/Key Words:
Graduate Record Examinations Board, General Test (GRE), Divergent Thinking, Convergent Thinking, Generating Explanations, Discriminant Analysis, Predictive Validity, Constructed-Response Items, Computer Assisted Testing

Abstract

Generating explanations (GE) is a computer-delivered item type that presents a situation and asks the examinee to pose as many plausible reasons for it as possible. Previous research suggests that GE measures a divergent thinking ability largely independent of the convergent skills tapped by the GRE General Test. This study was conducted to determine whether prior GE validity results generalized to the GRE candidate population, how population groups performed, what effects partial-credit modeling might have on validity, and what problems were associated with operational administration. Validity results generally supported the earlier findings: GE was found to be reliable but only marginally related to the General Test, and it made significant (but small) independent contributions to the explanation of relevant criteria. With respect to population groups, GE produced smaller gender and ethnic group differences than did the General Test and showed the same relations to outside criteria across groups, suggesting that it measured similar skills in each population. Attempts to model GE responses on a partial-credit IRT scale succeeded but produced no improvement in relations with external criteria over those obtained by summing raw item scores. Finally, interviews conducted with examinees to detect potential delivery problems suggested that the directions needed to be shortened.