Research has found that the a‐stratified item selection strategy (STR) for computerized adaptive tests (CATs) may lead to insufficient use of high‐a items at later stages of the test and thus to reduced measurement precision. A refined approach, unequal item selection across strata (USTR), effectively improves test precision over STR by allowing more items to be selected from strata with higher a‐parameter values. However, both approaches ignore the contribution of items' c‐parameters to item information. This study proposes another procedure, maximum information STR (MISTR), that groups items by the maximum Fisher information an item can provide, which is a function of its a‐ and c‐parameters. MISTR can be further modified to select more items from strata with high a‐parameter values (unequal MISTR [UMISTR]). This study evaluated and compared MISTR, UMISTR, STR, and USTR on two aspects of CAT performance: (a) quality of θ estimation and (b) effectiveness of item pool usage. The results showed that both MISTR and UMISTR produced more precise ability estimates than STR for longer tests and when an item‐exposure‐control procedure was used. UMISTR produced slightly less precise ability estimates than USTR but left fewer underused items, indicating more balanced use of the item pool. These findings suggest that MISTR and UMISTR are viable alternatives to STR and USTR.
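The grouping idea behind MISTR can be sketched in code. The snippet below uses the standard closed form for the maximum of the 3PL item information function (Birnbaum, 1968; with scaling constant D = 1), which depends only on the a‐ and c‐parameters, and then partitions a pool into equal‐size strata ordered by that maximum. The item‐tuple representation, function names, and the equal‐size split are illustrative assumptions, not the paper's exact procedure.

```python
import math

def p3pl(theta, a, b, c):
    """3PL probability of a correct response (scaling constant D = 1)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p3pl(theta, a, b, c)
    return a ** 2 * (1.0 - p) / p * ((p - c) / (1.0 - c)) ** 2

def max_info(a, c):
    """Closed-form maximum of the 3PL information function (Birnbaum, 1968).
    Depends only on a and c; the b-parameter merely shifts where the
    maximum occurs on the theta scale, not its height."""
    return (a ** 2 / (8.0 * (1.0 - c) ** 2)
            * (1.0 - 20.0 * c - 8.0 * c ** 2 + (1.0 + 8.0 * c) ** 1.5))

def stratify_by_max_info(pool, n_strata):
    """Illustrative MISTR-style grouping: sort items (a, b, c) by their
    maximum information and cut the ranked pool into equal-size strata.
    Assumes the pool size is divisible by n_strata."""
    ranked = sorted(pool, key=lambda item: max_info(item[0], item[2]))
    size = len(ranked) // n_strata
    return [ranked[k * size:(k + 1) * size] for k in range(n_strata)]
```

For c = 0 the formula reduces to the familiar 2PL maximum a²/4, and for c > 0 the maximum shrinks, which is why two items with the same a but different c should not be treated as interchangeable, as STR and USTR implicitly do.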