Test validity is a perennial issue in psychology. In this paper we reanalyze several recent studies that criticize the validity of SAT reading items, and we present new data and analyses to clarify what is meant by a valid reading test. Earlier studies showed that reading items can often be answered correctly at above-chance levels when the passage is missing, an outcome some critics view as proof of an invalid test. We reexamine the SAT validity issue in light of several data sets showing that mergeable information across a set of interrelated SAT reading items (i.e., interitem context) artificially influences the way examinees "guess" the correct answers when the passage is absent. One finding is that, with the passage absent, the percentage of items guessed correctly increases as the number of items increases; this increase reflects greater interitem context, whereby information pooled across items artificially improves "guessability." Other results show that as interitem context increases, so does the correlation between an item's guessed score (when the passage is absent) and its nonguessed score (when the passage is present). We argue that this is a consequence of examinees' search for thematic coherence, an underlying cognitive process that operates similarly whether the passage is present or absent. The paper further considers other processing similarities, as well as processing differences, between the passage-present and passage-absent conditions. Based on results from both a correlational and a percent-pass approach, we conclude that the SAT reading test is probably consistent with a construct-valid test.