Prompted perhaps by the recent release of disappointing NAEP reading scores, many state education leaders are asking: what aren’t we getting right about literacy instruction? Recent conferences and panels focus on the “science of reading,” but a larger, related issue is also surfacing: are our schools starving students of important historical, cultural, literary, and technical knowledge by focusing reading instruction exclusively on skill development rather than content? According to some, our approach to developing students’ reading skills over the last 20 years has largely overlooked the role of content knowledge.

In The Knowledge Gap, Natalie Wexler catalogs how a skills-based approach to instruction has drained content knowledge out of classroom teaching, leaving students to learn “reading skills” decontextualized from any meaningful or engaging content. She recounts how one teacher, following the curriculum, tries to get her first-grade students to understand what a symbol is (an important reading strategy) by explaining what the colors of the Kenyan flag represent, without providing any geographical or historical context that would help these first graders understand the first thing about Kenya.

Differences in students’ performance on standardized tests are not due to a skills gap, she argues, but to a knowledge gap. And tests that focus purely on decontextualized skills widen this gap, because they lead to a content-starved curriculum devoted to drilling whatever skills the state test measures. This is especially true for economically disadvantaged students, whose schools narrow the curriculum to prepare for low-quality, skills-based tests. She calls for a richer, knowledge-based approach for all students.

The Impact of Low-Quality Tests

As an assessment developer, I am keenly aware of the negative impact that low-quality tests have on classroom practice, driving teachers to focus on tested “reading skills” rather than engaging students in deep consideration of texts and the worlds they represent. As Wexler points out, reading deficiencies due to a lack of content knowledge multiply according to the Matthew effect: to those who have rich content knowledge, more will be given through successful reading; to those whose knowledge is limited, reading is a struggle, hobbling their ability to learn.

And this Catch-22 isn’t purely a socio-economic issue; it is grounded in the science of reading. Research into how we make meaning of texts identifies the important role played by prior knowledge. Readers first construct their understanding of a text by linking words and sentences into ideas; decoding skills are critical at this point, as is vocabulary. Comprehension occurs, however, when those ideas activate our prior knowledge and connect to the structured concepts stored in our long-term memory. Our understanding of a text is thus a fusion of the ideas the text proposes and the prior knowledge we bring to it.

Assessing readers’ ability to construct the meaning of a text by blending its ideas with prior knowledge is a vexing problem for test designers. We tried to sort through these issues some years ago, when colleagues and I conducted a study of how the SAT measures reading comprehension. We could predict students’ scores based on the strength of their vocabulary, their ability to decode sentence structure, and their ability to infer and use text structures and organization (e.g., narrative, problem-solution, cause and effect). The missing piece, however, was background knowledge. The SAT was not designed to measure it; in fact, test developers generally try to control for variations in background knowledge so as not to privilege students who have had broader and richer cultural experiences. And yet we could not say how much students’ scores were driven by their prior knowledge of the topics in the passages they were reading.
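To make the logic of that study concrete, here is a minimal, hypothetical sketch of the kind of prediction model it describes: a regression of reading scores on measured subskills, with the unexplained variance standing in for factors, like background knowledge, that the test was never designed to measure. The data, feature names, and coefficients below are invented for illustration and are not from the actual study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic, standardized subskill measures: vocabulary strength,
# sentence decoding, and text-structure inference (all invented).
vocabulary = rng.normal(size=n)
decoding = rng.normal(size=n)
text_structure = rng.normal(size=n)

# Unmeasured prior knowledge also contributes to the "true" score,
# but the test is not designed to capture it.
prior_knowledge = rng.normal(size=n)
score = (500 + 40 * vocabulary + 30 * decoding + 20 * text_structure
         + 25 * prior_knowledge + rng.normal(scale=10, size=n))

# Fit a regression using only the measured subskills.
X = np.column_stack([vocabulary, decoding, text_structure])
model = LinearRegression().fit(X, score)

print(f"R^2 from measured skills alone: {model.score(X, score):.2f}")
# The leftover variance includes the prior-knowledge effect that the
# measured skills cannot account for.
```

In a sketch like this, a high R-squared from the measured subskills can mask a real prior-knowledge effect hiding in the residual, which mirrors the study’s dilemma: the model predicts scores well, yet cannot say how much of each score reflects what students already knew about the passage topics.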

Addressing Inequities

How can we address the inequities of the knowledge gap? And what are the implications for assessment? Here are some ideas:

Wexler calls for a “knowledge rich curriculum” focused on building students’ historical, civic, cultural, literary, and technical knowledge by having teachers, even in the early grades, read aloud from rich texts across the disciplines. This requires making more room in the curriculum for science and social studies and spending less time on decontextualized skills. Expanding students’ knowledge base, she argues, will improve their reading comprehension as much as, or more than, teaching decoding and text-analysis skills.

Another approach being pursued by states and education policy makers is to embed assessments into the curriculum so that students’ reading proficiency is assessed in the context of the content they have learned. This is what Louisiana is attempting in its Innovative Assessment Demonstration Authority pilot, which integrates English language arts with social studies and tests students throughout the year on a set of texts that are part of the taught curriculum. The state’s vision for the pilot is to evolve testing “to promote equity, deepen instructional focus via knowledge- and text-rich pedagogy, and build integration of knowledge across subject areas.”

Louisiana is currently piloting this approach in 20 high schools across three districts and two charter networks, all of which have adopted the state’s optional ELA curriculum, which is organized around common topics and anchor texts. This common curriculum is what enables the statewide assessment to align with what is being taught in schools. Few states enjoy this advantage, however; in most, the statewide assessment must remain curriculum-neutral.

How, then, can a state assessment that is not directly aligned to the taught curriculum address the very real issues and inequities of the knowledge gap? By focusing not on decontextualized reading skills, like “find the main idea,” but on asking students to demonstrate their ability to analyze and evaluate texts using the content and tools of the discipline. Today’s next-generation standards dramatically raised the bar by setting the expectation that students would not simply recall decontextualized information but apply their knowledge using the “practices” of the discipline or subject area. Practices are the tools, procedures, and knowledge structures employed by experts in the field; they are what we mean when we say “investigate like a scientist” or “analyze like a historian.”

High-quality tests demand these practices, including analysis, synthesis, and application of new information. Work of this kind deepens students’ conceptual understanding and engages them in the sense-making that supports further inquiry, discovery, and learning.

Most statewide assessments will not be curriculum-aligned; states generally choose to leave curriculum decisions to their local districts. But states can deepen the connections between knowledge and skills by developing assessments that ask students to analyze, synthesize, and apply knowledge to solve problems and to express their solutions in writing. High-quality Research Simulation Tasks do just that: they provide the context, information, and tools that students then employ to address the challenge of the task, whether it be mathematically modeling a real-world problem or drawing on multiple texts and videos to evaluate Franklin D. Roosevelt’s economic policies during the Great Depression.

By including these kinds of content- and context-rich tasks on the summative assessment, states model and influence the depth and rigor of academic tasks that students should encounter in the classroom—which is one way they can address the inequities of the knowledge gap.