What Aren’t We Getting Right About Literacy Instruction?
Prompted perhaps by the recent release of disappointing NAEP reading scores, many state education leaders are asking: What aren’t we getting right about literacy instruction? Recent conferences and panels focus on the “science of reading,” but a larger, related issue is also surfacing: are our schools starving students of important historical, cultural, literary, and technical knowledge by focusing reading instruction exclusively on developing reading skills without teaching content knowledge? According to some, our approach to developing students’ reading skills over the last 20 years has completely overlooked the role of content knowledge.

In The Knowledge Gap, Natalie Wexler catalogs how a skills-based approach to instruction has drained content knowledge out of classroom teaching, with students learning “reading skills” decontextualized from any meaningful or engaging content. She recounts how one teacher, following the curriculum, tries to get her first-grade students to understand what a symbol is—an important reading strategy—by explaining what the colors of the Kenyan flag represent, without providing any geographical or historical context that would help these young students understand the first thing about Kenya.
Differences in students’ performance on standardized tests are due not to a skills gap, she argues, but to a knowledge gap. And tests that focus purely on decontextualized skills widen this gap, because they lead to a content-starved curriculum centered on drilling the skills that will appear on the state test. This is especially true for economically disadvantaged students, whose schools narrow the curriculum to prepare for low-quality, skills-based tests. She calls for a richer, knowledge-based approach for all students.
The Impact of Low-Quality Tests
As an assessment developer, I am keenly aware of the negative impacts that low-quality tests have on classroom practice, driving teachers to focus on tested “reading skills” rather than engaging students in deep consideration of texts and the worlds they represent. As Wexler points out, reading deficiencies due to lack of content knowledge multiply according to the Matthew effect: to those who have rich content knowledge, more will be given through successful reading; to those whose knowledge is limited, reading is a challenge, hobbling their ability to learn.
And this Catch-22 isn’t purely a socioeconomic issue; it is grounded in the science of reading. Research into how we make meaning of texts identifies the important role played by prior knowledge. Readers first construct their understanding of a text by linking words and sentences into ideas; decoding skills and vocabulary are critical at this stage. Comprehension occurs, however, when those ideas activate our prior knowledge and connect to the structured concepts stored in long-term memory. Our understanding of a text is a fusion of the ideas the text proposes and the prior knowledge we bring to it.
Assessing readers’ ability to construct the meaning of a text by blending its ideas with prior knowledge is a vexing problem for test designers. We tried to sort through these issues some years ago, when colleagues and I conducted a study of how the SAT measures reading comprehension. We could predict students’ scores based on the strength of their vocabulary, their ability to decode sentence structure, and their ability to infer and use text structures and organization (e.g., narrative, problem-solution, cause and effect). The missing piece, however, was background knowledge. The SAT was not designed to measure that; in fact, test developers generally try to control for variations in background knowledge so as not to privilege students who have had broader and richer cultural experiences. And yet we could not say how much students’ scores were driven by their prior knowledge of the topics in the passages they were reading.
Addressing Inequities
How can we address the inequities of the knowledge gap? And what are the implications for assessment? Here are some ideas:
Wexler calls for a “knowledge-rich curriculum” focused on building students’ historical, civic, cultural, literary, and technical knowledge by having teachers, even in the early grades, read aloud from rich texts across disciplines. This requires making more room in the curriculum for science and social studies and spending less time on decontextualized skills. Expanding students’ knowledge base, she argues, will improve their reading comprehension as much as or more than teaching decoding and text analysis skills.
Another approach being pursued by states and education policy makers is to embed assessments into the curriculum so that students’ reading proficiency is assessed in the context of the content they have learned. This is what Louisiana is attempting in its Innovative Assessment Demonstration Authority pilot, which integrates English language arts with social studies and tests students throughout the year on a set of texts that are part of the taught curriculum. The state’s vision for the pilot is to evolve testing “to promote equity, deepen instructional focus via knowledge- and text-rich pedagogy, and build integration of knowledge across subject areas.”
Louisiana is currently piloting this approach with 20 high schools across three districts and two charter networks, all of which have adopted the state’s optional ELA curriculum that is organized around common topics and anchor texts. This common curriculum is what enables the statewide assessment to align to what is being taught in schools. Few states enjoy this advantage, however, and the statewide assessment typically must remain curriculum-neutral.
How, then, can a state assessment that is not directly aligned to the taught curriculum address the very real issues and inequities of the knowledge gap? By focusing not on decontextualized reading skills, like finding the main idea, but rather on asking students to demonstrate their ability to analyze and evaluate texts using the content and the tools of the discipline. Today’s next-generation standards dramatically raised the bar by setting the expectation that students not simply recall decontextualized information but apply their knowledge using the “practices” of the discipline or subject area. Practices are the tools, procedures, and knowledge structures employed by experts in the field. They are what we mean when we say “investigate like a scientist” or “analyze like a historian.”
High-quality tests demand these practices, including analysis, synthesis, and application of new information. In doing so, they deepen students’ conceptual understanding and engage them in building the kind of understanding that supports further inquiry, discovery, and learning.
Most statewide assessments will not be curriculum-aligned (most states will choose to leave curriculum decisions up to their local districts). But states can deepen connections between knowledge and skills by developing assessments that ask students to analyze, synthesize, and apply knowledge to solve problems and express their solutions in writing. High-quality Research Simulation Tasks do just that by providing context, information, and tools students then employ to address the challenge of the task, whether it be mathematically modeling a real-world problem or drawing from multiple texts and videos to evaluate Franklin D. Roosevelt’s economic policies during the Great Depression.
By including these kinds of content- and context-rich tasks on the summative assessment, states model and influence the depth and rigor of academic tasks that students should encounter in the classroom—which is one way they can address the inequities of the knowledge gap.

In Practice

A look at how educators are developing innovative strategies to use data to inform instruction in the classroom 
In D.C. School, ‘Assessment Becomes a Tool’
Anecca Robinson is no stranger to interim assessment. In fact, she creates her own.
As assistant director at the Paul Public Charter School in Washington, D.C., which serves middle school and high school students, Robinson coordinates all testing and manages curriculum and intervention programs. A few years ago, she and her team began developing interim assessments to prepare students for the end-of-year summative tests.
“We’ve found from our own analysis that sometimes our assessments are even more rigorous than the summative end-of-year assessment,” she said. “So when our kids do well on the in-house tests, we have a pretty good guess, by conjecture, that our students will do well on the summative.”
Washington, D.C., is a success story in the national education landscape, according to the latest scores on the National Assessment of Educational Progress, also known as “the nation’s report card” or simply NAEP.
It joined Mississippi as one of only two jurisdictions to improve substantially on most of the metrics that are tracked, and it posted the largest gains in eighth-grade math and fourth-grade reading in the 30 years since NAEP testing began. While there are many reasons for Washington’s success, efforts by teachers and administrators like Robinson play their part.
Robinson said she combines test items that have been made public with those from an item bank to create interim tests that give teachers timely feedback. The tests are carefully calibrated. “I pay attention to question type, level of rigor and how it’s assessing a particular standard,” she said.
The standards steer almost everything, Robinson said. “We are using the standards for the foundation of our curriculum updates, how we plan for instruction and how we plan for professional development,” she said. “It guides our efforts.”
“When I started here in 2017, we did a lot of professional development around reading and math, focusing on Common Core,” she said. “Our teachers said we need to design our curriculum so that the bulk of our efforts are focused on the key skills that are spelled out in the standards.”
The interim assessments used by the school also closely align to the standards, Robinson said. She rejects the idea that interim assessments encourage “teaching to the test.”
“What I always try to emphasize to the teachers is that we always have standards, and those standards inform how we teach and what we teach,” she said. “As educators, we have to make decisions on how to pace instruction, how to modify curriculum based on class and individual student needs, but without watering down the standards. Assessment becomes a tool.”
Editor’s Note: For those who want to follow Ms. Robinson’s example, New Meridian makes available exemplar classroom tasks aligned to grade-level standards and assessments.

Voices in Education

Tell us what you think about critical issues in assessment
Should NAEP Be More Aligned to State Standards?
With much of the education community searching for answers in the wake of last year’s disappointing NAEP scores, one question is drawing particular attention: how well does the National Assessment of Educational Progress align to state education standards and assessments, and should that alignment improve?
It’s a question that has been asked for years and one that NAEP officials have addressed head-on, most recently in an October report by the NAEP Validity Studies Panel.
“NAEP is meant to be reflective of the entirety of what is taught in the United States, and the many changes to standards in the past 10 years have led to questions about the extent to which NAEP continues to meet this objective,” the report said. “The NVS Panel has conducted several studies to investigate this issue … and has found some variations in the alignment between state and NAEP standards across different NAEP grades and subjects.”
NAEP has initiated many studies in recent years to address alignment, and the results are of obvious interest to states and districts, which are required by federal law to administer assessments aligned to state standards yet are often judged by NAEP scores that may not be so closely aligned.
“States and districts have informally posited that, if the alignment between NAEP frameworks and their own content standards were closer, then their NAEP scores might be higher,” the report said. “Stated another way, there are concerns that NAEP may be underreporting the actual abilities of their students—and trends in achievement—because of some degree of misalignment.”
Tell us what you think: should NAEP be more aligned to state education standards? And if so, how and why? Please visit this page and use the comments section to tell us your viewpoint.

 

Inside New Meridian

Ask Us
‘Technological Change Has Opened New Possibilities’
Expert: Kristopher John, Vice President, Product Strategy
Question: How do you think assessment will change in the next 5 to 10 years?
Answer: There’s been a major shift in how assessments are delivered over the last decade. It’s really an extraordinary change. Ten years ago, you had to contend with the logistics of paper and pencil and all that entailed. The technological change has opened new possibilities—and that’s where I believe we will see the most change. You can already see it in the Innovative Assessment Demonstration Authority pilots currently underway. States can pursue models of assessment that would have been too cumbersome before these shifts occurred.
Many of the options we can now pursue weren’t possible even five years ago. It is hard to overestimate how much mobile and new network technologies will impact how assessments are delivered and, consequently, how they can be used more effectively. A 5G mobile network that is fast enough to facilitate driverless cars and artificial intelligence could have an enormous impact on how assessments work in the classroom.
Do you have a question about assessment that you want answered? Ask us your question and a New Meridian expert may take it on. Contact us at info@newmeridiancorp.org.

Must Reads

In case you missed it, assessment-related news worth reading
The 74 Interview: Outgoing Louisiana Chief John White
Startling Science Scores in California
The Nation’s English-Learner Population Has Surged
25% of Students Reported Taking Engineering-Related Classes in 2018