Many of us heard Jim Shelton, Assistant Deputy Secretary for Innovation and Improvement at the U.S. Department of Education (ED), speak to the education publishing industry last week about the $650 million fund now called “Investing in Innovation” (i3). Through i3, Shelton wants to fund the scaling up of innovations that have some evidence they are worth investing in. These i3 grants could be as large as $50 million.
With that amount at stake, it makes sense for government funders to look for some track record of scientifically documented success. The frequent references in ED documents to processes of “continuous improvement” as part of innovations suggest that proposers would do well to supplement the limited evidence for their innovation by showing how scientific evidence can be generated as an ongoing part of a funded project, that is, how in-course corrections and improvements can be made to the innovation as it is being put into place in a school system.
In his speech to the education industry, Shelton complained about the low quality of the evidence currently being put forward. Although some publishers have taken the initiative and done serious tests of their products, there has never been a strong push for them to produce evidence of effectiveness.
School systems usually haven’t demanded such evidence, partly because there are often more salient decision criteria and partly because little solid evidence exists, even for programs that are effective. Moreover, district decision makers may find studies of a product conducted in schools unlike their own to have marginal relevance, regardless of how “rigorously” the studies were conducted.
The ED appears to recognize that it will be counter-productive for grant programs such as i3 to depend entirely on the pre-existing scientific evidence. An alternative research model based on continuous improvement may help states and districts to succeed with their i3 proposals—and with their projects, once funded.
Now that improved state and district data systems are increasing the ability of school systems to quickly reference several years of data on students and teachers, i3 can start looking at how rigorous research is built into the innovations it funds—not just the one-time evaluation typically built into federal grant proposals.
This kind of research for continuous improvement is an innovation in itself—one that may start with the “data-driven decision making” mode, in which data are explored to identify an area of weakness or a worrisome trend. But the real innovation in research will consist of states and districts building their own capacity to evaluate whether the intervention they decided to implement actually strengthened the weakness or arrested the trend they chose to address. Perhaps it did so for some schools but not others, or caught on with some teachers but not all. The ability of educators to examine this progress in relation to the initial goals completes the cycle of continuous improvement and sets the stage for refocusing, tweaking, or fully redesigning the intervention under study.
We predict that i3 reviewers, rather than depending solely on strong existing evidence, will look for proposals that also include a plan for continuous improvement as part of how the innovation ensures its success. In this model, research need not be limited to the activity of an “external evaluator” that absorbs 10% of the grant. Instead, routine use of research processes can itself be an innovation, one that builds the internal capacity of states and districts for continuous improvement.