By Nancy Doorey, Nancy Doorey Educational Consulting
(Article source: The Hunt Institute)

State plans for the Every Student Succeeds Act (ESSA) are due by September 18, 2017, although the timelines could be altered by the new administration. Perhaps the most controversial section of those plans is the one that addresses testing and accountability systems. In nearly every state, leaders hear from some who want fewer, shorter tests, from others who want tests that emphasize the types of complex skills needed in higher education and the workplace, and from still others who simply want costs reduced. How can state leaders develop systems that will have enough public support to remain in place? Exploring the following questions and options may help.

  • Which skills and knowledge are essential to assess at each grade level? Even though a state has adopted content standards, it’s worth taking time to be clear about what you, as a state, value enough to include in the assessments. Some of the skills that received much greater emphasis within the current generation of standards, due to the documented need for them in postsecondary training and education programs, are more expensive and time-consuming to assess. These include competencies such as building knowledge using content-rich, non-fiction texts; writing a well-defended claim that uses evidence drawn from multiple sources; and selecting and applying multiple mathematical skills and procedures to solve a complex problem. Are they important enough to include in your testing program?
  • How will you strike an informed balance between test quality, testing time, and cost? Generally speaking, the complex skills that higher education faculty and employers cite as currently being inadequately developed in K-12 systems are also more costly to assess than the “basics” and require more testing time. A completely custom state assessment that measures all of these critical skills is likely too expensive for any state to deploy. But several options exist to help states strike a balance:
    • Pooled buying power through multi-state consortia and shared tests: Whether through one of the consortia that received federal funds or another arrangement, it makes sense to avoid duplicating costs. After all, the large majority of reading, writing, and math skills that states want to assess are the same.
    • Customized tests that include some licensed items: Fortunately, both PARCC and Smarter Balanced now allow states to license the use of tests or sets of items within a customized state test. For expensive-to-measure skills that might otherwise have to be dropped from the assessments due to cost, this option is helpful.
    • De-couple some pieces from the high-stakes accountability system: A big driver of testing cost is the need to have legally defensible results at the individual student level. States could choose to test and report some of the expensive-to-measure skills at the school, district or state level. For example, if a state decides it simply can’t afford to test writing to sources in a quality way within the main state assessment at each grade level, the state could include, at some or all levels, writing tasks that are then scored by teachers (perhaps using one of the consortia’s electronic scoring platforms with continuous monitoring of consistency in scoring), and report the results at the school level. Scores could even be included within course grades, as appropriate. This type of approach has been used by states in the past as a way to clearly signal what is important in instruction and to provide meaningful data to taxpayers about the performance of schools while reducing costs.
    • Consider an off-the-shelf test from a vendor: The evaluations by the Thomas B. Fordham Institute and the Human Resources Research Organization (HumRRO) of several current-generation tests revealed significant differences in the strengths and weaknesses of each. Some simply failed to measure some of the high-priority skills within college- and career-readiness standards. If considering an off-the-shelf test, states should carefully review whether their “critical skills” are tested in meaningful ways, and may need to augment the test accordingly.

Testing will almost certainly continue to be a lightning rod. But taking the time to gain consensus on what is worth assessing and how best to balance test quality, testing time, and cost may help states stay the course.

Reach Nancy Doorey at nancy.doorey@gmail.com.
