Dr. Tracy Gardner, who leads New Meridian’s test design and development efforts, has more than 20 years of operational experience with considerable expertise in measurement, assessment design and development, and psychometrics. Prior to joining New Meridian, she was Senior Director of Assessment for the next-generation GED assessment, where she contributed to all aspects of the assessment development process at GED Testing Service. Previously, she was Senior Psychometrician and Manager of Psychometrics and Research Services at Pearson, where she led and supported the psychometrics for more than two dozen large-scale testing programs.
Thinking back to your high school and college days, was there a particular moment or seminal event that led you into this field of study?
I was a double major in psychology and mathematics in college. My favorite areas were developmental psychology and cognitive psychology because I was passionate about studying how children learn. Psychology has a strong focus on research, and since I had a strong mathematics background, I found the statistics courses in the mathematics department to be particularly interesting.
Statistics and research methods classes lit my fire, and a professor of mine recognized this passion in me (apparently, that was rare—who knew?). During my junior year, my advisor suggested that I look into quantitative psychology and/or research methodology Ph.D. programs. As soon as I saw the course listings for research methodology/psychometrics, I knew it was the path for me. As a Ph.D. student at the University of Pittsburgh, I got to take courses in educational statistics, measurement, psychometrics, and test design. I always loved taking tests as a kid (particularly the New York State Regents exams), so this field that combined my love of psychology, mathematics, and statistics was a perfect path.
You’ve been on the vanguard of assessment design and development for the better part of two decades. How has test design and assessment development changed from the time you first entered the field?
As a graduate student in the 1990s, I was fortunate to have an internship position on the Maryland State Performance Assessment Program (MSPAP), led by Dr. Suzanne Lane. MSPAP was an innovative, interactive, multi-disciplinary assessment given to groups of students in classrooms.
Students worked in groups on complex, multi-disciplinary, performance-assessment tasks over the course of several days. These challenging tasks had students read for information, use manipulatives, interpret graphics, make predictions, perform calculations, show their work, and explain their reasoning. While students did not receive individual scores, schools did get scores, which they used to evaluate the strength of their curriculum and instruction.
This innovative program was the first assessment that I ever worked on, and I became an immediate supporter of performance-based assessment. I graduated in 2000, before the No Child Left Behind (NCLB) legislation was approved. NCLB had lofty goals to get all students proficient and brought much more federal oversight to the assessment process. For the first 10 years of my career, I worked on NCLB-style tests, which required reliable and valid student scores and annual testing in grades 3-8 and at least one high-school grade. Performance assessments like MSPAP were no longer an option for testing within the new federal guidelines.
Testing changed again with the introduction of the Common Core State Standards and ESSA, and it is continuing to change as the field swings back to researching more innovative approaches to testing (e.g., the Innovative Assessment Pilot). Change is inevitable, but the good news is that the field is growing, and technology is improving. Ideally, I would love to see the field continue to move toward a system of assessments that allows multiple measures to be combined into a portfolio of evidence.
One of the criticisms of assessments, particularly in the context of accountability, is that too much time is diverted from instruction toward test preparation. How do you respond to critics who say high-stakes, summative tests force teachers to teach to the test?
As a mother of six children who range in age from 3 to 14, this question is particularly important to me, both as a parent and as an assessment expert. In my opinion, a good assessment enables teachers to be more informed about students’ understanding so the teacher can be a better guide to help students learn. Instruction that is better informed by good assessment is time better spent in the classroom.
For example, the ELA assessments that New Meridian develops measure students’ ability to read literary and informational texts critically, make inferences, draw conclusions, and then write extended responses while citing evidence from the text. I believe that classroom time spent on developing these types of skills is time well spent. As a parent of school-age children, these are exactly the kinds of skills that I want my children to be working on in their daily classroom instruction, because these are the kinds of skills that they will likely need to succeed in their future careers.
There are more than 14,000 school districts in the U.S. and each of them has the autonomy to develop its own curricula. With so many different curricula in use around the country, is it realistic to believe that a single test—or tests produced from a common pool of performance tasks—will work for every school district?
I can’t help but reflect on my time in graduate school, when I was evaluating the validity of the MSPAP. We found that the extended tasks benefited students from all walks of life. The tasks represented the same kinds of challenges and assignments that students were likely to face as adults in real-world jobs. New Meridian tasks are similar in that they require critical thinking, modeling, reasoning, communication, and research skills. Whether a student attends a public school, private school, or charter/magnet school, these skills are critically important for career and college readiness.
As you look at the education landscape in America, what are you most optimistic about?
I am excited to see the pendulum shifting back toward more local control of assessment practices. While I definitely see the benefit of the standard assessment and accountability practices that were introduced through NCLB and ESSA legislation, I am excited to see more opportunities for through-course, interim, and formative assessments that may allow a swing back toward more performance-based assessment as one piece of the assessment system.
As a psychometrician and measurement professional, I have spent my career trying to educate stakeholders on the need for multiple measures. I learned more than 25 years ago that no single assessment can do everything, but a comprehensive portfolio of multiple measures can provide an informative profile of student strengths and weaknesses. I am optimistic that we are moving toward a system of assessments that will take the pressure off any one assessment and allow multiple measures to build a more complete profile of student achievement.