We’ve heard administration officials say that the stimulus programs provide a laboratory for ideas that can be built into the reauthorization of ESEA (aka NCLB), as well as into the reauthorization of the Education Sciences Reform Act. So we are closely studying the RFAs, draft RFAs, and other guidance the US Department of Education has issued for stimulus programs such as Race to the Top (R2T), Investing in Innovation (i3), Enhancing Education Through Technology (EETT), and the State Longitudinal Data Systems (SLDS), looking for clues about the new research agenda. They are not hard to find.
As a general trend, there is no doubt that the new administration is seriously committed to evidence-based policy. Peter Orszag, director of the White House Office of Management and Budget, recently called for (and made the case for, in his blog) systematic evaluation of federal programs, consistent with the president’s promise to “restore science to its rightful place.” But how does this play out at ED?
First, we are seeing a major shift from a static notion of “scientifically based research” (SBR) to a much more dynamic approach centered on continuous improvement. NCLB referred constantly to SBR as a necessary precondition for spending ESEA funds on products, programs, or services. In some cases this meant that a product’s developers had to have consulted rigorous research; in others, it was interpreted to mean that rigorous research had to show the product itself was effective. In either case, the SBR had to precede the purchase.
Evidence of a more dynamic approach is found in all of the competition-based stimulus programs. Take, for example, the discussion of “instructional improvement systems.” While this term usually refers to classroom-based systems for formative testing, with feedback that allows the teacher to differentiate instruction, it is used in a broader sense in the current RFAs and guidance documents. The definition in the R2T draft RFA reads as follows (bullets added for clarity):
“Instructional improvement systems means technology-based tools and other strategies that provide
* teachers,
* principals,
* and administrators
with meaningful support and actionable data to systemically manage continuous instructional improvement, including activities such as:
* instructional planning;
* gathering information (e.g., through formative assessments (as defined in this notice), interim assessments (as defined in this notice), summative assessments, and looking at student work and other student data);
* analyzing information with the support of rapid-time (as defined in this notice) reporting;
* using this information to inform decisions on appropriate next steps;
* and evaluating the effectiveness of the actions taken.”
It is important to notice, first of all, that the tools are provided to administrators, not just to teachers. Moreover, the final activity in the cycle is evaluating the effectiveness of the actions taken. (Joanne Weiss, who heads up the R2T program, used the same language, including effectiveness evaluation by district administrators, in a recent speech.)
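To make the shape of this cycle concrete, here is a minimal sketch of one pass through it in Python. Everything in it, the function names, the target threshold, and the toy scores, is our own invention for illustration; nothing here comes from the RFA or from any actual instructional improvement system.

```python
# Hypothetical sketch of the R2T cycle: plan -> gather information ->
# analyze -> decide on next steps -> evaluate effectiveness.
# All names and data are invented for illustration.

def gather(scores_by_student):
    """Gathering information: here, just formative assessment scores."""
    return scores_by_student

def analyze(data):
    """Rapid-time reporting, reduced to a simple class average."""
    return sum(data.values()) / len(data)

def decide(average, target=0.75):
    """Use the information to inform a decision on next steps."""
    return "reteach the unit" if average < target else "advance to the next unit"

def evaluate(before, after):
    """Evaluate the effectiveness of the action taken."""
    return after - before

# One pass through the cycle with toy data.
fall = gather({"s1": 0.62, "s2": 0.70, "s3": 0.58})
action = decide(analyze(fall))
winter = gather({"s1": 0.71, "s2": 0.78, "s3": 0.69})
print(action, f"gain = {evaluate(analyze(fall), analyze(winter)):.2f}")
```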
We have pointed out in a previous entry that the same cycle of needs analysis, action, and evaluation that works for teachers in the classroom also works for district-level administrators. The same assessments that help teachers differentiate instruction can, in many cases, be aggregated up to the school and district level, where broader actions, programs, and policies can be implemented and evaluated against the needs initially identified. There is an important difference between these parallel activities at the classroom and central-office levels, however. At the district level, where larger datasets extend over longer periods, formal evaluation design and statistical analysis are called for. Information at this level, in fact, calls for scientifically based research.
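As a sketch of what “aggregating up” and a district-level evaluation might look like, assume student scores are rolled up to school means and schools that adopted a program are compared with those that did not. The data, column names, and the choice of a simple t-test are our illustrative assumptions, not anything prescribed in the RFAs.

```python
# Hypothetical sketch: aggregate student assessment scores up to the
# school level, then compare schools that adopted a program against
# comparison schools. Toy data; illustration only.

import pandas as pd
from scipy import stats

students = pd.DataFrame({
    "school":  ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    "program": [True] * 6 + [False] * 6,  # schools A, B adopted; C, D did not
    "score":   [72, 75, 78, 74, 79, 81, 68, 70, 73, 71, 69, 72],
})

# Aggregate up: student scores become school-level means.
school_means = students.groupby(["school", "program"])["score"].mean().reset_index()

# A bare-bones district-level comparison of adopters vs. non-adopters.
adopters = school_means.loc[school_means["program"], "score"]
others = school_means.loc[~school_means["program"], "score"]
t_stat, p_value = stats.ttest_ind(adopters, others)
print(f"adopter mean = {adopters.mean():.1f}, "
      f"comparison mean = {others.mean():.1f}, p = {p_value:.3f}")
```

This, of course, is exactly where real evaluation design matters: with only a handful of schools, a naive comparison like this one has little statistical power and no protection against selection effects, which is why analysis at this level calls for scientifically based research.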
Research is now viewed as integral to the cycle of continuous improvement. It may be carried out by the district’s or state’s own research department, or data may be made available to outside researchers, as called for in the SLDS and other RFAs. The fundamental difference is that research conducted and published before federal funds are spent is no longer the only research that counts. ED still strongly prefers (and, at the highest funding level in i3, requires) that programs come with prior evidence. But further evidence gathering is now also required, both in the form of a separate evaluation and in the sense that funds are to be put toward continuous improvement systems that build research into the innovation itself.
Our recent news item about the i3 program notes other important ideas about the research agenda that we can expect to influence the reauthorization of ESEA. It is worth noting that the methods called for in i3 are also the most appropriate and practical for local district evaluations of programs. We welcome this new perspective on research as part of the cycle of continuous instructional improvement. — DN
Tuesday, November 17, 2009