Environments for Presenting and Automatically Scoring Complex Constructed-Response Items

Author(s):
Bennett, Randy Elliot
Publication Year:
1992
Report Number:
RM-92-05
Source:
ETS Research Memorandum
Document Type:
Report
Page Count:
38
Subject/Key Words:
Automation, Constructed-Response Tests, Scoring

Abstract

This study centers on a class of complex, computer-delivered, constructed-response tasks for which the answers contain multiple elements, have correct solutions that take many forms, and, although they require judgment to evaluate, are machine scorable. It explores the use of computer-delivered constructed-response tasks in three areas: computer science, algebra, and verbal reasoning. In each area, an experimental, interactive assessment system has been constructed, and the computer presentation interface, the task formats, the scoring method, and the relevant research for each are discussed. It is concluded that these experimental systems represent a first generation of interactive performance assessment tools with "exciting possibilities for improving assessment, particularly by presenting problems more similar to criterion tasks and by providing new kinds of performance information," but that issues related to construct underrepresentation and irrelevant variance, generalizability, efficiency, and response aggregation must first be resolved.