This study centers on a class of complex, computer-delivered, constructed-response tasks whose answers contain multiple elements, have correct solutions that take many forms, and, although they require judgment to evaluate, are machine scorable. It explores the use of computer-delivered constructed-response tasks in three areas: computer science, algebra, and verbal reasoning. In each area, an experimental, interactive assessment system has been constructed, and the computer presentation interface, the task formats, the scoring method, and the relevant research for each system are discussed. It is concluded that these experimental systems represent a first generation of interactive performance assessment tools with "exciting possibilities for improving assessment, particularly by presenting problems more similar to criterion tasks and by providing new kinds of performance information," but that issues related to construct underrepresentation and construct-irrelevant variance, generalizability, efficiency, and response aggregation must first be resolved.