(110pp.) Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and examinees were matched on expected true scores based on their CAT responses and on estimated item parameters. Both DIF methods performed well. The CAT-based DIF statistics were highly correlated with DIF statistics based on nonadaptive administration of all 75 pool items and with the true magnitudes of DIF in the simulation. DIF methods were also investigated for "pretest items," for which item parameter estimates were assumed to be unavailable. The pretest DIF statistics were generally well-behaved and also had high correlations with the true DIF. The pretest DIF measures, however, tended to be slightly smaller in magnitude than their CAT-based counterparts. Also, in the case of the Mantel-Haenszel approach, the pretest DIF statistics tended to have somewhat larger standard errors than the CAT DIF statistics.
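As a rough illustration of the quantities the abstract refers to, the sketch below implements the standard three-parameter logistic (3PL) item response function, an expected-true-score matching variable of the kind used here, and the conventional Mantel-Haenszel D-DIF statistic (the ETS delta-scale transform, -2.35 ln of the common odds ratio). Function names, the D = 1.7 scaling constant, and the toy numbers are illustrative assumptions; the report's actual CAT-based estimators differ in detail.

```python
import math

def p3pl(theta, a, b, c, D=1.7):
    """3PL item response function: probability of a correct response
    given ability theta and item parameters a (discrimination),
    b (difficulty), c (lower asymptote). D = 1.7 is the usual
    normal-ogive scaling constant (an assumed convention here)."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

def expected_true_score(theta, items):
    """Matching variable: expected true score, i.e. the sum of 3PL
    correct-response probabilities over a set of (a, b, c) items,
    evaluated at an ability estimate theta."""
    return sum(p3pl(theta, a, b, c) for (a, b, c) in items)

def mh_d_dif(strata):
    """Mantel-Haenszel D-DIF from per-stratum 2x2 tables.

    Each stratum is (A, B, C, Dw): reference-group right/wrong counts
    (A, B) and focal-group right/wrong counts (C, Dw) among examinees
    matched on the same score level. Returns -2.35 * ln(alpha_MH),
    the ETS delta-metric DIF measure (negative values indicate items
    harder for the focal group)."""
    num = sum(A * Dw / (A + B + C + Dw) for (A, B, C, Dw) in strata)
    den = sum(B * C / (A + B + C + Dw) for (A, B, C, Dw) in strata)
    return -2.35 * math.log(num / den)
```

For example, two matched score strata with identical right/wrong odds in both groups give a common odds ratio of 1 and hence a D-DIF of 0 (no DIF), while an item answered correctly more often by the matched reference group yields a negative D-DIF.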