Federal law requires that modified admissions tests produce scores with essentially the same meaning as scores from standard examinations. Unable to meet this mandate, testing programs typically flag results from modified administrations to warn users that the scores are not comparable. Research indicates that the primary source of noncomparability in paper-and-pencil tests is associated with the provision of extended time. Computer-based tests offer particular promise for improving comparability, in part because timing could be made more generous for all examinees. As data become available, empirical work will be needed to determine whether scores from computer-based tests are more comparable, but so will increased efforts to achieve comparability through assessment design. This paper argues for test modifications that, rather than being restricted to examinees with disabilities, represent general design changes that enhance comparability for everyone.