Whether portfolio assessment systems can provide credible and useful accountability data remains an unresolved issue. Recent studies raise several concerns, among the most serious being the lack of rating consistency among judges. The current research describes a writing portfolio system in which portfolios were scored by two raters on each of three dimensions. Judgment consistency was much greater than that found in other large-scale efforts, despite the fact that the selection of portfolio contents was relatively unconstrained. These promising results are attributed to a thorough and long-standing institutional effort to develop a common interpretive framework for examining and considering student writing. Features and mechanisms important to developing this shared understanding are reviewed.