
Developing and Evaluating a Machine-Scorable, Constrained Constructed-Response Item

Author(s):
Braun, Henry I.; Bennett, Randy Elliot; Frye, Douglas; Soloway, Elliot
Publication Year:
1989
Report Number:
RR-89-30
Source:
ETS Research Report
Document Type:
Report
Page Count:
48
Subject/Key Words:
Advanced Placement Program (AP), Computer Assisted Testing, Computer Science Tests, Constructed Response Items, Expert Systems, Scoring, Test Construction

Abstract

The use of constructed-response items in large-scale standardized testing has been hampered by the cost and difficulty of obtaining reliable scores. The advent of expert systems may signal the eventual removal of this impediment. This study investigated the accuracy with which expert systems could score a new, non-multiple-choice item type. The item type presents a faulty solution to a computer programming problem and asks the student to correct the solution. This item type was administered to a sample of high school seniors enrolled in an Advanced Placement course in Computer Science who also took the Advanced Placement Computer Science (APCS) examination. Results indicated that the expert systems were able to produce scores for between 82% and 97% of the solutions encountered and to display high agreement with a human reader on which solutions were and were not correct. Diagnoses of the specific errors produced by students were less accurate. Correlations with scores on the objective and free-response sections of the APCS examination were moderate. Implications for additional research and for testing practice are offered.
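To make the item type concrete: the report's items presented examinees with a program containing a deliberate fault and asked them to repair it. The following is a purely hypothetical sketch of such an item, not drawn from the report (and written in Python rather than the Pascal used by the APCS examination of that era); the fault shown, an off-by-one loop bound, stands in for whatever specific bugs the actual items contained.

# Hypothetical faulty-solution item (illustrative only; not from the report).
# The examinee would be shown the faulty function and asked to correct it.

def sum_list_faulty(values):
    """Intended to return the sum of all elements, but contains a fault."""
    total = 0
    for i in range(len(values) - 1):  # FAULT: off-by-one, skips the last element
        total += values[i]
    return total

def sum_list_corrected(values):
    """A correct repair: the loop visits every element."""
    total = 0
    for i in range(len(values)):  # corrected loop bound
        total += values[i]
    return total

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5]
    print(sum_list_faulty(data))     # 9  (wrong: last element dropped)
    print(sum_list_corrected(data))  # 14 (correct)

An expert scoring system for such an item would compare the examinee's repaired program against the intended behavior, judging both whether the solution is now correct and, less reliably per the results above, which specific error the examinee had made.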
