This paper presents the results of tests of significance of the hypothesis that the true standard error of prediction of college grades is identical for boys and girls. Seven college samples were involved in the study, encompassing a total of 3546 individuals. The hypothesis was tested separately for each of three predictors: high school grades alone, aptitude test scores alone, and high school grades and aptitude test scores in combination. The hypothesis was tested for each sample for each predictor, and an over-all test of the data from all the colleges was applied for each predictor. Using high school grades as the predictor, a significant sex difference (at the 5% level) in the observed standard errors of prediction was found at four of the seven colleges, and the over-all difference was highly significant. Using aptitude test scores as the predictor, a significant sex difference was not found at any college, and the over-all test was not significant. Using high school grades and aptitude test scores as simultaneous predictors, significance occurred at four colleges, and the over-all test was highly significant. For 18 of the 19 combinations of colleges and predictors, the direction of the observed difference favored the girls; i.e., the square root of the average squared error in the prediction of girls' college grades was almost invariably less than the corresponding quantity for boys. Eight of these 18 differences were significant at the 5% level, and all of the significant differences occurred where high school grades were used as a predictor (alone or in combination). The factor chiefly responsible for the greater predictability of girls' college grades was not higher validity of the predictors for girls than for boys, but rather the greater homogeneity of girls' college grades; i.e., the standard deviation of college grades was smaller for girls than for boys.
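The closing point — that girls' grades were more predictable mainly because their criterion variance was smaller, not because predictor validity was higher — can be sketched with the standard formula for the standard error of estimate, SE = SD_y · √(1 − r²). The numbers below (validity r, grade standard deviations) are hypothetical illustrations, not values from the study:

```python
import math

def standard_error_of_estimate(sd_criterion: float, validity_r: float) -> float:
    """Standard error of prediction of the criterion: SE = SD_y * sqrt(1 - r^2)."""
    return sd_criterion * math.sqrt(1.0 - validity_r ** 2)

# Hypothetical illustration: identical validity for both sexes,
# but a smaller standard deviation of girls' college grades.
r = 0.55                                          # assumed common validity
se_boys = standard_error_of_estimate(0.80, r)     # boys' grade SD (hypothetical)
se_girls = standard_error_of_estimate(0.65, r)    # girls' grade SD (hypothetical)

# Even with equal validity, the smaller criterion SD yields the smaller
# standard error of prediction, mirroring the paper's conclusion.
print(round(se_boys, 3))
print(round(se_girls, 3))
```

With equal r, the ratio of the two standard errors equals the ratio of the two criterion standard deviations, which is why homogeneity of the girls' grades alone suffices to produce the observed direction of difference.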