‘What is Tested is What Gets Taught’ 

Recent disappointing results on NAEP and PISA have prompted many policy makers and pundits to question the billions of dollars invested in large-scale, standards-based reform initiatives dating back to No Child Left Behind, Race to the Top, the Common Core and next-generation college and career readiness assessments. U.S. student performance on PISA has been stagnant since 2000, and achievement gaps have widened. NAEP saw declines this year across grades and subjects.

As Daniel Koretz, a professor at the Harvard Graduate School of Education, recently told The New York Times, “it’s really time to rethink the entire drift of policy reform because it just isn’t working.”

Or maybe it’s just a really intractable problem and “frustration is understandable,” as William McCallum, a mathematician who helped write the Common Core State Standards, said in the same Times piece.

Indeed, national efforts to address the problem run headlong into our historical commitment to local control, whereby 15,000 local school boards are trying to figure out how to improve student outcomes for an increasingly diverse population facing an increasingly complex world. Maybe we do know what works but lack the mechanisms to implement it at scale. Where there has been strong leadership with a clear commitment to high standards and quality instruction, outcomes have improved.

Washington DC, for example, with its focus on rigorous standards and assessments, early childhood education, teacher quality and principal leadership, has shown tremendous gains on NAEP since 2003. Mississippi, with its focus on research-based reading instruction, also saw strong gains.

What’s immediately apparent if you look at these gold-standard assessments is that they emphasize critical thinking and problem solving, going well beyond the low-level tests of basic skills common in the NCLB era. They include more open-ended tasks and complex problems.

Students deemed “proficient” on NAEP have not only demonstrated mastery of challenging academic content, but also the ability to analyze, reason, plan and creatively solve real-world problems.

Consider a fourth-grade NAEP reading question using the story Five Boiled Eggs, which asks students, “Do you think that the innkeeper changes in the story? Use specific information from the beginning and end of the story to support your opinion.” It requires students to think across the entire text, locate and recall specific passages and form a reasoned analysis.

The same is true of PISA, which not only assesses whether students can locate and recall knowledge, but also whether they can extrapolate from what they have learned and apply their knowledge in new situations.

Consider a PISA math question that asks students to help a nurse calculate the drip rate for an infusion in drops per minute, using a formula. The question then asks how the drops per minute change if the hours are doubled while the drop factor and the volume of the solution remain constant. The question shows the impact of change and mathematical relationships in an occupational setting.
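The relationship the item tests can be sketched with the standard infusion-rate formula, D = (d × v) / (60 × n), where d is the drop factor in drops per mL, v the volume in mL and n the time in hours. The numbers below are illustrative, not the released item's values:

```python
def drip_rate(drop_factor, volume_ml, hours):
    """Drops per minute for an infusion.

    Standard nursing formula: D = (d * v) / (60 * n),
    where d is the drop factor in drops per mL,
    v the volume in mL, and n the infusion time in hours.
    """
    return (drop_factor * volume_ml) / (60 * hours)

# Illustrative values: 500 mL, drop factor 25 drops/mL, over 3 hours.
base = drip_rate(25, 500, 3)      # ~69.4 drops per minute

# Doubling the hours while holding drop factor and volume constant
# halves the rate, because n appears once in the denominator.
doubled = drip_rate(25, 500, 6)   # ~34.7 drops per minute
```

The point of the item is that reasoning, not the arithmetic itself: students must see how the rate varies with one quantity when the others are held fixed.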

To prepare students to perform on these more rigorous assessments, schools will need to provide regular opportunities for students to practice high-level skills such as solving complex problems, conducting research, developing a persuasive argument and using technologies to find, analyze and evaluate information.

Research confirms that an inquiry-based approach that expects students to delve deeply into an area of study and make connections among ideas and areas of knowledge within and across disciplines strengthens understanding, retention and the transfer of knowledge.

Schools will also have to develop higher-quality assessments that measure these deeper learning skills. Low-quality assessments that focus on low-level recall have an insidious effect, especially on students of color and low-income students whose schools have narrowed the curriculum to mirror low-quality tests. The result is that these students are often denied access to a thinking curriculum and are instead relegated to remedial, rote-oriented and often scripted courses of study.

It is because of our commitment to an equitable education for all students that New Meridian is committed to providing states with high-quality assessments that measure these deeper learning skills. We understand that what is tested is what gets taught. That is why we are committed to helping states develop assessments that include more authentic investigation into real-world challenges, so that we can help ensure that every student is given the opportunity to develop these higher-level skills that will be so critical to their future success.

Changemakers

Examining how states and systems are rethinking education, assessments and accountability to improve student outcomes

Assessment is Vital to DoDEA’s Mission

There are plenty of U.S. school systems that are larger than the Department of Defense Education Activity, with its 164 schools and 72,000 students. Los Angeles, New York, and Houston are just a few.

But none can say they operate across 10 different time zones, with schools in 11 foreign countries, seven states and two U.S. territories.

Broader still is the mission: providing quality K-12 education to highly mobile military families who could find themselves living in Germany, Korea and the Washington suburbs all in the span of a decade of service. For Thomas M. Brady, director of DoDEA for the last five years, facilitating that mobility with educational support is a vital component of America’s military readiness.

“If the men and women we’re sending in harm’s way … are comfortable that their children are in a good quality school system, then they can focus on their mission and they don’t have to worry,” Brady said. “That’s one less distraction from what they do. It’s a very important mission that we take very seriously.”

For Brady, who ran school systems in Washington DC, Philadelphia and Providence, the mission is close to home. During 25 years in the Army, his family moved roughly 19 times. His oldest daughter attended four different high schools. “Moving is part of the Department of Defense fabric,” he said.

For DoDEA, however, that mobility presents special challenges. The system serves children of families in all five branches of the service, plus civilian Department of Defense workers. Not only do students move frequently between DoDEA schools, but they also move in and out of civilian school systems. Standards and continuity across subjects and grade levels are vital.

To ensure that continuity, Brady and his team created an assessment system with both summative and interim components. Adopting national college and career readiness standards, DoDEA began summative assessments two years ago, administering annual tests similar to those used by states.

“Because we are one of two federally operated school systems, the No Child Left Behind and ESSA mandates did not have a direct impact on the operation of our schools,” Brady said. “However, we believe in accountability to our stakeholders and the importance of a viable assessment program, and after the first full year of implementation, we’re doing remarkably well.”

In this year’s National Assessment of Educational Progress, known as the nation’s report card, fourth graders at DoDEA schools led the nation in both the reading and math assessments. Eighth graders led the reading assessment, and ranked second in math. DoDEA scores beat the national averages by 10 to 18 points.

“If we’re teaching the standards at or above [the national] level, then our children will not have a problem when they get to different school districts,” Brady said.

Yet DoDEA did not stop with summative assessment. This year, the organization implemented an interim assessment program with a goal of introducing relevant and timely feedback. Teachers in DoDEA schools now see the results of interim assessments in as little as two weeks. As system processes and data flows are refined, Brady sees test results being available to teachers within days.

“The end-of-year summative assessment is important, obviously,” Brady said. “It’s a very good after-the-fact measurement as to what you did, and it can help you prepare for the following year. But when you’re looking at improving student achievement, it’s our belief that you need to get the results to teachers as the year progresses. The faster the interim results can be brought back to those teams of teachers, the faster they can make the necessary changes.”

Brady said DoDEA is still rolling out interim assessments at schools worldwide and there are still questions that need to be addressed in the months and years ahead. But he is confident that interim assessment will benefit students.

“Interim assessments, I think, are the key to student achievement and improvement,” he said, “and so that’s what we’re doing this year.”

Q&A

Juan D’Brot and Erika Landl Discuss Interim Assessment

Juan D’Brot and Erika Landl are senior associates at the National Center for the Improvement of Educational Assessment, commonly known as the Center for Assessment. Both were instrumental in producing this year’s Reidy Interactive Lecture Series, which focused on interim assessment, and the Center’s Interim Assessment Specifications Process. The Prime caught up with them as they finished a busy year.

How do you define interim assessment?

An interim assessment is a tool that provides information about student learning throughout a course, grade or instructional sequence. It is differentiated from formative assessment in that it is not a process that occurs during or in concert with instruction. The results can be used to inform instruction (i.e., for formative purposes) but they do not constitute formative assessment. Similarly, interim assessments differ from summative assessments in that the results are not used to make decisions about the full range of knowledge, skills and abilities acquired by the end of a course or grade.

Why are interim assessments valuable in today’s testing environment?

If designed appropriately, interim assessments can be used to inform several types of educational decisions, including those related to:

  • the quality of programs and initiatives
  • how students are progressing in a particular content area
  • where additional instruction is required
  • where professional development may be required
  • how well curriculum and instruction reflect the standards.

In addition, they can be used to measure standards that are difficult to assess, facilitate self-monitoring by students and help educators understand how items can be designed to reflect the complexity of standards. However, the value of an interim assessment is dependent on how well it is aligned to a specified purpose and use.

What is the proper role of interim assessment in a balanced assessment system?

A balanced assessment system is one where various assessments in the system (1) are coherently linked through a clear specification of learning targets, (2) are comprehensive in scope, providing evidence that supports educational decision-making by different stakeholders, and (3) serve to continuously document student progress over time. Interim assessments can contribute to the utility of assessment systems if thoughtfully developed to address a particular need, but they are not a required component of a balanced assessment system. Consequently, the “proper” role of interim assessment, if any, will vary depending on the portfolio of assessments that already exist and how the results from the assessment are intended to be used.

What insight can an interim assessment provide that we might not otherwise have?

A well-designed interim assessment provides a way of supplementing and corroborating existing knowledge of student performance, curricular effectiveness or instructional delivery. For example, a granular interim assessment could be used to formally confirm estimates of student achievement and progress based upon educators’ formative assessment practices. Alternatively, an assessment that is broader in scope and content coverage might be used to evaluate how well curriculum is being delivered by educators or received by students. The intended purpose and use of an interim assessment must be defined in advance to ensure that districts and schools choose options that are aligned with their goals.

How can states and districts maximize effective use of interim assessment data?

Effective use of interim assessment information is dependent on whether the assessment fits into a well-defined vision for teaching and learning. Before implementing an interim assessment, users should consider the following:

  • What purpose is the interim assessment intending to serve? How are stakeholders (e.g., instructional leaders, teacher teams, educators and students) expected to use information from the assessment?
  • How does information from an interim assessment inform progress toward teachers’ instructional processes and student learning goals?
  • How does the interim assessment supplement existing assessment information or local assessment practices in the district or school?

We believe that leveraging a thoughtful process that enables districts and schools to evaluate and select interim assessments that are well aligned to their needs and goals will lead to more valuable applications of interim assessments.

The Center is developing a tool to guide states and districts through a process to evaluate interim assessments and identify those that best meet their needs. What can you tell us about it?

The Interim Assessment Specifications Process was developed to help those tasked with identifying, developing and/or procuring local assessments do so through a systems-based lens. There are three phases to the process:

  • In Phase 1, leaders work together to articulate the state/district vision of teaching and learning and the intended role of assessment within that vision; discuss existing assessments within the system and whether they support that vision; and identify and prioritize gaps in the information believed necessary to help students, educators and schools improve.
  • In Phase 2, educational leaders articulate the characteristics and features an assessment must demonstrate to support the intended use and identify the specific types of evidence necessary to support decisions about technical quality.
  • In Phase 3, users evaluate whether assessments are (1) aligned to the state’s vision, (2) being used as intended and (3) facilitating desired changes in behavior.


What advice would you give decision makers who are considering interim assessment?

Though it is often quoted, one of Stephen Covey’s 7 Habits of Highly Effective People applies here: begin with the end in mind. While a state or district may have a good general idea of what it wants from an interim assessment, it is easy to lose sight of desired outcomes if the intended purpose, use and desired characteristics are not clearly defined. Therefore, we encourage states and districts to place the intended use and utility of interim assessments at the forefront of the evaluation process by: 1) clearly defining the role interim assessments should play, if any; 2) determining and prioritizing assessment needs; and 3) articulating the specific assessment design characteristics necessary to meet those needs.


Inside New Meridian

Ask Us
We’ll Hear More About ‘Assessment for Learning’ in the Year Ahead

Expert: Matt Lisk, Senior Vice President for Sales

Question: What are the most important trends in assessment we will see in 2020?

Answer: In my role, I spend a great deal of time discussing assessment with state officials and vendors of all kinds. One area where I think we’ll continue to see momentum is formative and interim assessment. There is a great deal of discussion about using assessment for learning, rather than conducting assessments of learning. Summative assessments will always be valuable. But I have seen a definite shift in K-12 institutions and vendors toward increased interest in classroom-based tests that give teachers quick, actionable feedback. I also think we’ll see growth in performance-based assessments, which go beyond measuring a student’s knowledge of content to measure their ability to apply that knowledge in real-world situations. New Meridian supports these trends with our bank of test items that are aligned to state standards and measure knowledge in many different ways. I think we’ll hear a lot about these trends in 2020. The flexibility provided by ESSA will continue to encourage states to find innovative ways to understand student performance more holistically.

Do you have a question about assessment that you want answered? Tell us your question and a New Meridian expert may take it on. Contact us at info@newmeridiancorp.org.