This study investigated the convergent validity of expert-system scores for four mathematical constructed-response item formats. A five-factor model was posited, comprising four constructed-response format factors and a GRE General Test quantitative factor. Confirmatory factor analysis was used to test the fit of this model and to compare it with several alternatives. The five-factor model fit well, although a solution comprising two highly correlated dimensions (GRE-quantitative and constructed-response) represented the data almost as well. These results extend the meaning of the expert system's constructed-response scores by relating them to a well-established quantitative measure and by indicating that they reflect the same underlying proficiency across item formats.