An information-correction method for testlet-based tests is introduced. This method takes advantage of both generalizability theory (GT) and item response theory (IRT). The measurement error for the examinee proficiency parameter is often underestimated when a unidimensional conditional-independence IRT model is fit to a testlet dataset. By using a design effect ratio composed of random variances that can easily be derived from a GT analysis, the underestimated measurement error from the unidimensional IRT model can be adjusted to a more appropriate level. This paper demonstrates how the information-correction method can be implemented in the context of a testlet design. A simulation study shows that the underestimated measurement errors from IRT estimates can be adjusted to the appropriate level across varying magnitudes of local item dependence (LID), testlet lengths, balance of testlet length, and numbers of item parameters in the model. A real data example provides more detail about when and how the information-correction method should be used in test analysis. Estimation by the information-correction method should be adequate for practical work, given the robustness of the variance ratio.
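The correction idea can be illustrated with a minimal sketch. The code below uses the classical Kish-style design effect, 1 + (n - 1)ρ, as a stand-in for a variance ratio, where n is a hypothetical testlet length and ρ a hypothetical within-testlet (LID) correlation; the paper's actual ratio is built from GT variance components, so the function names and this particular formula are illustrative assumptions, not the authors' exact method. The standard error from the unidimensional IRT model is then inflated by the square root of the design effect.

```python
import math

def design_effect(testlet_length, lid_correlation):
    # Illustrative Kish-style design effect: 1 + (n - 1) * rho.
    # The paper instead derives a comparable ratio from GT random-variance
    # components; this simple form is an assumption for demonstration.
    return 1.0 + (testlet_length - 1) * lid_correlation

def corrected_se(se_irt, deff):
    # Inflate the (underestimated) IRT standard error by sqrt(design effect),
    # equivalent to dividing the test information by the design effect.
    return se_irt * math.sqrt(deff)

# Hypothetical values: 5-item testlets, within-testlet correlation 0.2,
# nominal IRT standard error 0.30 for a proficiency estimate.
deff = design_effect(5, 0.2)
se_adj = corrected_se(0.30, deff)
print(round(deff, 2), round(se_adj, 4))
```

When ρ = 0 (no LID), the design effect is 1 and the IRT standard error is unchanged, which matches the intuition that the conditional-independence model is only miscalibrated in the presence of local item dependence.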