The Cognitively Based Assessment of, for, and as Learning (CBAL™) research initiative develops innovative K–12 prototype tests based on cognitive competency models. This report presents the statistical results of the four CBAL Grade 8 Writing tests administered to students in 12 states in fall 2009. Specifically, classical item statistics are reported, including rater reliabilities for human-scored items, item p+ values, item-total correlations, item missing-response rates, differential item functioning (DIF), correlations among subscores, and reliabilities of subscores and total scores. Under item response theory, the tests are calibrated and scaled with the generalized partial credit model. In addition, t-tests, multiple comparisons, and mixed models are used to examine factors influencing test scores, including test form, test order, student, school, gender, and socioeconomic status. The results show that these four tests performed reasonably well.
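Two of the classical item statistics named above, item p+ values (mean proportion of maximum points earned) and corrected item-total correlations (each item against the total score with that item excluded), can be sketched as follows. This is a minimal illustration only; the function name, the six-examinee data set, and the per-item point values are hypothetical and are not taken from the report:

```python
import numpy as np

def classical_item_stats(scores, max_points):
    """Classical item statistics for a scored response matrix.

    scores:     (n_examinees, n_items) array of earned points
    max_points: (n_items,) array of maximum points per item
    Returns p+ values and corrected item-total correlations.
    """
    scores = np.asarray(scores, dtype=float)
    max_points = np.asarray(max_points, dtype=float)
    # p+ : mean earned points divided by maximum possible points
    p_plus = scores.mean(axis=0) / max_points
    total = scores.sum(axis=1)
    n_items = scores.shape[1]
    r_it = np.empty(n_items)
    for j in range(n_items):
        rest = total - scores[:, j]  # total score excluding item j
        r_it[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return p_plus, r_it

# Hypothetical data: 6 examinees, 3 items worth 1, 1, and 4 points
scores = [
    [1, 0, 4],
    [1, 1, 3],
    [0, 0, 1],
    [1, 1, 4],
    [0, 0, 0],
    [1, 1, 2],
]
p_plus, r_it = classical_item_stats(scores, [1, 1, 4])
```

Polytomous (human-scored) items are handled the same way as dichotomous ones here, since p+ is defined relative to each item's maximum score; the corrected correlation avoids inflating an item's relationship with a total that contains the item itself.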