Many writing assessments use generic prompts about social issues, yet we currently lack an understanding of how test takers respond to such prompts. Without this understanding, automated scoring systems may be less reliable than they could be and may degrade over time. To move toward a deeper understanding of responses to generic issue prompts, we analyzed topical trends in test takers' responses and correlated them with trends found in the news. We found evidence that many trends are similar across essays and news coverage, but we also observed notable differences between the two. Based on these analyses, we make recommendations in this paper for developers of writing assessments and automated scoring systems.