Examinee responses to 48 constructed-response items, administered as part of the Mathematics Intervention Module project, were analyzed to determine the likelihood that common incorrect responses could be identified and the misconceptions leading to those responses diagnosed. For two-thirds of the items, we were able to diagnose the misconceptions for at least 60% of the nonblank incorrect responses. Using Sato’s caution index, unexpected incorrect responses from examinees whose caution index was high were categorized to determine their cause. Of particular interest were responses involving procedural errors that might not have been possible in the more constrained setting of a computer-delivered assessment. Examples are given that show how a computer-delivered assessment can be constrained to allow construct-relevant procedural errors and to prevent construct-irrelevant ones.
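As a point of reference for the flagging procedure described above, Sato’s caution index for an examinee compares the covariance of that examinee’s item-score vector with item difficulties against the covariance of the Guttman (perfect-scalogram) pattern having the same number-correct score; values near 0 indicate expected response patterns, while high values indicate aberrant ones. The sketch below is an illustrative implementation under that standard formulation, not code from the study; the function name and the 0/1 response-matrix layout are assumptions.

```python
import numpy as np

def caution_index(responses: np.ndarray) -> np.ndarray:
    """Sato's caution index for each examinee.

    responses: (n_examinees, n_items) matrix of 0/1 item scores.
    Returns one caution value per examinee; 0 for a perfect
    Guttman pattern, larger values for more aberrant patterns.
    """
    p = responses.mean(axis=0)          # item difficulty: proportion correct
    easiest_first = np.argsort(-p)      # item indices from easiest to hardest
    n_items = responses.shape[1]

    out = np.zeros(responses.shape[0])
    for i, u in enumerate(responses):
        r = int(u.sum())                # examinee's number-correct score
        g = np.zeros(n_items)
        g[easiest_first[:r]] = 1.0      # Guttman pattern with the same score
        num = np.cov(u, p)[0, 1]        # cov(observed pattern, difficulties)
        den = np.cov(g, p)[0, 1]        # cov(Guttman pattern, difficulties)
        # Degenerate cases (all right, all wrong, or equal difficulties)
        out[i] = 1.0 - num / den if den != 0 else 0.0
    return out
```

An examinee who answers hard items correctly while missing easy ones, as with the unexpected procedural errors discussed in the abstract, receives a high index and would be flagged for diagnostic review.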