After a massive infusion of stimulus money into K-12 technology through the Title IID “Enhancing Education Through Technology” (EETT) grants, also known as “ed-tech” grants, the administration is planning to cut funding for the program in future budgets.
Well, the administration is not exactly “cutting” funding for technology so much as consolidating the dedicated technology funding stream into a larger enterprise, awkwardly named the “Effective Teaching and Learning for a Complete Education” program. For advocates of educational technology, here’s why this may be not so much a blow as a challenge and an opportunity.
Consider the approach stated in the White House “fact sheet”:
“The Department of Education funds dozens of programs that narrowly limit what states, districts, and schools can do with funds. Some of these programs have little evidence of success, while others are demonstrably failing to improve student achievement. The President’s Budget eliminates six discretionary programs and consolidates 38 K-12 programs into 11 new programs that emphasize using competition to allocate funds, giving communities more choices around activities, and using rigorous evidence to fund what works... Finally, the Budget dedicates funds for the rigorous evaluation of education programs so that we can scale up what works and eliminate what does not.”
From this, technology advocates might worry that policy is being guided by the findings of “no discernible impact” from a number of federally funded technology evaluations (including the evaluation mandated by the EETT legislation itself).
But this is not the case. The White House declares, “The President strongly believes that technology, when used creatively and effectively, can transform education and training in the same way that it has transformed the private sector.”
The administration is not moving away from the use of computers, electronic whiteboards, data systems, Internet connections, web resources, instructional software, and so on in education. Rather, the intention is that these tools be integrated, where appropriate and effective, into all of the other programs.
This does put technology funding on a very different footing. It is no longer in its own category. Where school administrators are considering funding from the “Effective Teaching and Learning for a Complete Education” program, they may weigh a technology option against an approach to lowering class size, a professional development program, or other innovations that may integrate technologies as a small piece of an overall intervention. Districts would no longer write proposals to EETT to obtain financial support to invest in technology solutions. Technology vendors will increasingly compete for the attention of school district decision-makers on the basis of the comparative effectiveness of their solutions—not just in comparison to other technologies but in comparison to other innovative solutions. The administration has clearly signaled that innovative and effective technologies will be looked upon favorably. It has also signaled that effectiveness is the key criterion.
As an Empirical Education team prepares for a visit to Washington, DC, for the conference of the Consortium for School Networking and the Software and Information Industry Association’s EdTech Government Forum (we are active members of both organizations), we have to consider our message to the education technology vendors and school system technology advocates. (Coincidentally, we will also be presenting research at the annual conference of the Society for Research on Educational Effectiveness, also held in DC that week.) As a research company, we are constrained from taking an advocacy role—in principle, we have to maintain that the effectiveness of any intervention is an empirical issue. But we do see the infusion of short-term stimulus funding into educational technology through the EETT program as an opportunity for schools and publishers. Working jointly to gather the evidence from the technologies put in place this year and next will put schools and publishers in a strong position to advocate for continued investment in the technologies that prove effective.
Technology may have seemed inherently innovative in 1993, when the U.S. Department of Education’s Office of Educational Technology was first established, but it can no longer be considered so. The proposed federal budget is asking educators and developers to innovate to find effective technology applications. The stimulus package is giving the short-term impetus to get the evidence in place. — DN
Thursday, July 9, 2009
The Problem with National Experiments
We welcome the statement by Peter R. Orszag, director of the Office of Management and Budget (OMB), issued as a blog entry and calling for the use of evidence.
“I am trying to put much more emphasis on evidence-based policy decisions here at OMB. Wherever possible, we should design new initiatives to build rigorous data about what works and then act on evidence that emerges — expanding the approaches that work best, fine-tuning the ones that get mixed results, and shutting down those that are failing.”
This suggests a continuous process of improving programs based on evaluations built into the fabric of program implementations, which sounds very valuable. Our concern, however, at least in the domain of education, is that Congress or the Department of Education will contract for a national experiment to prove a program or policy effective. In contrast, we advocate a more localized and distributed approach based on the argument Donald Campbell made in the early 1970s in his classic paper “The Experimenting Society” (updated in 1988). He observes that “the U.S. Congress is apt to mandate an immediate, nationwide evaluation of a new program to be done by a single evaluator, once and for all, subsequent implementations to go without evaluation.” Instead, he describes a “contagious cross-validation model for local programs” and recommends a much more distributed approach that would “support adoptions that included locally designed cross-validating evaluations, including funds for appropriate comparison groups not receiving the treatment.” Using such a model, he predicts that “After five years we might have 100 locally interpretable experiments” (p. 303).
Dr. Orszag’s adoption of the “top tier” language from the Coalition for Evidence-Based Policy buys into the idea that an educational program can be proven effective in a single large-scale randomized experiment. There are several weaknesses in this approach.
First, the education domain is extremely diverse and, without the “100 locally interpretable experiments,” it is unlikely that educators would have an opportunity to see a program at work in a sufficient number of contexts to begin to build up generalizations. Moreover, as local educators and program developers improve their programs, additional rounds of testing are called for (and even the “top tier” programs should engage in continuous improvement).
Second, the information value of local experiments is much higher for the decision-maker, who will always be concerned with performance in his or her own school or district. National experiments generate average impact estimates while giving little information about any particular locale. Because communities differ in which achievement gaps between specific populations concern them most, a local experiment can treat reducing a specific gap, rather than the overall average effect, as the effect of primary interest.
Third, local experiments are vastly less expensive than nationally contracted experiments, even while obtaining comparable statistical power. Local experiments can easily be one-tenth the cost of national experiments, so conducting 100 of them is quite feasible. (We say more about the reasons for the cost differential in a separate policy brief.) Better yet, local experiments can be completed in a more timely manner—it need not take five years to accumulate a wealth of evidence. Ironically, one factor making national experiments expensive, as well as slow, is the review process required by OMB!
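To make the point about power concrete, here is a rough sketch of our own (the intraclass correlation, school counts, and other parameter values below are illustrative assumptions, not figures from any particular study) using the standard minimum detectable effect size approximation for a design that randomly assigns whole schools to treatment or control:

```python
# Hypothetical illustration: minimum detectable effect size (MDES) for a
# two-level trial in which whole schools are randomly assigned.
# All parameter values here are assumptions chosen for illustration.
from statistics import NormalDist

def mdes(n_schools, students_per_school, icc=0.15, p_treated=0.5,
         alpha=0.05, power=0.80):
    """Approximate MDES in standard-deviation units, ignoring covariate
    adjustment (pretest covariates would typically shrink these values)."""
    z = NormalDist()
    m = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # about 2.8 for 80% power
    j, n, p = n_schools, students_per_school, p_treated
    variance = (icc / (p * (1 - p) * j)
                + (1 - icc) / (p * (1 - p) * j * n))
    return m * variance ** 0.5

# A district-scale experiment with 40 schools of 60 tested students each can
# detect an effect of roughly 0.36 standard deviations:
print(round(mdes(40, 60), 2))
# A national study with ten times as many schools detects smaller effects
# (roughly 0.11 SD), but at many times the cost of a single local study:
print(round(mdes(400, 60), 2))
```

Under these assumptions, a single district’s 40-school experiment can already detect effects in a range that matters for program decisions, and the extra precision bought by the national design applies to the average effect, which, as noted above, tells an individual district relatively little.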
So while we applaud Dr. Orszag’s leadership in promoting evidence-based policy decisions, we will continue to be interested in how this impacts state and local agencies. We hope that, instead of contracting for national experiments, the OMB and other federal agencies can help state and local agencies to build evaluation for continuous improvement into the implementation of federally funded programs. If nothing else, it helps to have OMB publicly making evidence-based decisions. —DN
Campbell, D. T. (1988). The experimenting society. In E. S. Overman (Ed.), Methodology and epistemology for social science: Selected papers (p. 303). Chicago: University of Chicago Press.