National Study of Educational Software a Disappointment
Friday, June 15, 2007
The recent report on the effectiveness of reading and mathematics software products provides strong evidence that, on average, teachers who are willing to pilot a software product and try it out in their classrooms for most of a year are not likely to see much benefit in terms of student reading or math achievement. What does this tell us about whether schools should continue purchasing instructional software systems such as those tested? Unfortunately, not as much as it could have. The study was conducted under the constraint of having to report to Congress, which appropriates funds for national programs, rather than to school district decision-makers, who make local decisions based on a constellation of school performance, resource, and implementation issues. Consequently, we are left with no evidence either way as to the impact of software when purchased and supported by a district and implemented systematically.
By many methodological standards, the study, which cost more than $10 million, is quite strong. The use of random assignment of teachers to take up the software or to continue with their regular methods, for example, ensures that bias from self-selection did not play a role, as it does in many other technology studies. In our opinion, the main weakness of the study was that it spread the participating teachers over a large number of districts and schools and tested each product in only one grade. This approach encompasses a broad sample of schools but often leaves an individual teacher as the lone implementer in the school and one of only a few in the district. This potentially reduces the support that would normally be provided by school leadership and district resources, as well as the mutual support of a team of teachers in the building.
We believe that a more appropriate and informative experiment would focus on the implementation in one or a small number of districts and in a limited number of schools. In this way, we can observe an implementation, measuring characteristics such as how professional development is organized and how teachers are helped (or not helped) to integrate the software with district goals and standards. While this approach allows us to observe only a limited number of settings, it provides a richer picture that can be evaluated as a small set of coherent implementations. The measures of impact, then, can be associated with a realistic context.
Advocates for school technology have pointed out limitations of the national study. Often the suggestion is that a different approach or focus would have demonstrated the value of educational technology. For example, a joint statement from CoSN, ISTE, and SETDA released April 5, 2007, quotes Dr. Chris Dede, Wirth Professor in Learning Technologies at Harvard University: “In the past five years, emerging interactive media have provided ways to bring new, more powerful pedagogies and content to classrooms. This study misestimates the value of information and communication technologies by focusing exclusively on older approaches that do not take advantage of current technologies and leading edge educational methods.” While Chris is correct that the research did not address cutting-edge technologies, it did test software that has been and, in most cases, continues to be successful in the marketplace. It is unlikely that technology advocates would call for taking the older approaches off the market. (Note that Empirical Education is a member of and active participant in CoSN.)
Decision-makers need some basis for evaluating the software that is commercially available. We can’t expect federally funded research to provide sufficiently targeted or timely evidence. This is why we advocate that school districts get into the routine of piloting products on a small scale before a district-wide implementation. If the pilots are done systematically, they can be turned into small-scale experiments that inform the local decision. Hundreds of such experiments can be conducted quite cost-effectively as vendor-district collaborations and will have the advantage of testing exactly the product, professional development, and support for implementation under exactly the conditions that the decision-maker cares about. —DN