This study investigated the strategies subjects adopted to solve stem-equivalent SAT-Mathematics (SAT-M) word problems in constructed-response (CR) and multiple-choice (MC) formats. Parallel test forms of CR and MC items were administered to subjects representing a range of mathematical abilities. Format-related differences in difficulty were more prominent at the item level than for the test as a whole. At the item level, analyses of subjects' problem-solving processes appeared to explain difficulty differences as well as similarities. Differences in difficulty derived more from test-development than from cognitive factors: on items in which large format effects were observed, the MC response options often did not include the erroneous answers initially generated by subjects. Thus, the MC options may have given unintended feedback when a subject's initial answer was not an option, or allowed a subject to choose the correct answer based on an estimate. Similarities between formats occurred because subjects used similar methods to solve both CR and MC items. Surprisingly, when solving CR items, subjects often adopted strategies commonly associated with MC problem solving. For example, subjects appeared adept at estimating plausible answers to CR items and checking those answers against the demands of the item stem. Although there may be good reasons for using constructed-response items in large-scale testing programs, multiple-choice questions of the sort studied here should provide measurement that is generally comparable to that of stem-equivalent constructed-response items.