
Comparability of Computer and Paper-and-Pencil Scores for Two CLEP General Examinations (CLEP CAT)

Author(s):
Checketts, Keith T.; Druesne, Barry; Mazzeo, John; Muhlstein, Alan; Raffeld, Paul C.
Publication Year:
1992
Report Number:
RR-92-14, CBR-91-05
Source:
ETS Research Report
Document Type:
Report
Page Count:
18
Subject/Key Words:
College Board, CLEP General Examination, College-Level Examination Program (CLEP), Computer Assisted Testing, English Composition Tests, Mathematics Tests, Test Administration, Test Reliability

Abstract

This report describes two studies that investigated the comparability of scores from paper-and-pencil and computer-administered versions of the College-Level Examination Program (CLEP) General Examinations in Mathematics and English Composition. The first study used a prototype computer-administered version of each examination. Based on the results of the first study and feedback from the study participants, several modifications were made to these prototype versions. A second study was then conducted using the modified computer versions. Both studies used a single-group counterbalanced equating design. Data for the Mathematics Examination were collected at Southwest Texas State University, and data for the English Composition Examination were collected at Utah State University. The results of Study 1 suggest that, despite efforts to design computer versions of the CLEP Mathematics and English Composition General Examinations that were administratively similar to the paper-and-pencil examinations (i.e., allowed item review and answer changing and were comparably timed), mode-of-administration effects (i.e., changes in average scores as a function of the mode of test delivery) were found. The results of Study 2 suggest that the modifications made to the computer versions eliminated the mode-of-administration effects for the English Composition Examination but not for the Mathematics Examination. The results of both studies underscore the need to determine empirically (rather than merely to assume) the equivalence of computer and paper versions of an examination. (18pp.)
