Sunday, June 1, 2008

How Do Districts Use Evidence?

The research journal Educational Policy published an article this month that is important for understanding how data and evidence are used at the school district level: “Evidence-Based Decision Making in School District Central Offices” by Meredith Honig and Cynthia Coburn, both alumnae of Stanford’s Graduate School of Education (Honig & Coburn, 2008). Keep in mind that most data-driven decision-making research (and most decision-making based on data) occurs at the classroom level, where teachers get immediate and actionable information about individual students. Honig and Coburn, however, are talking about central office administrators. Data at the district level are more complicated and, as the authors document, infused with political complications. When district leaders are deciding which products or programs to adopt, evidence of the scientific sort is at best one element among many.
Honig and Coburn review three decades of research and, after eliminating purely anecdotal and openly advocacy-driven pieces, find 52 books and articles of substantial value. What they document parallels our own experience at Empirical Education in many respects. That is, rigorous evidence, whether it is gathered by reading scientific reviews or by conducting local program evaluations, is never used “directly.” It is not a matter of the evidence dictating the decision. Rather, they document that scientific evidence is incorporated into a wide range of other kinds of information and evidence: teacher feedback, implementation issues, past experience, or what the neighboring district’s superintendent said about the program. All of these are legitimate sources of information that need to be incorporated into the thinking about what to do. This “working knowledge” is practical and “mediates” between information sources and decisions.
The other aspect of decision-making that Honig and Coburn address is the organizational or political context of evidence use. In many cases the decision to move forward has been made before the evaluation is complete or even started; the evidence from it is then used (or ignored) to support that decision or to maintain enthusiasm for it. As in any policy organization or administrative agency, there is a strong element of advocacy in how evidence is filtered and used. The authors suggest that this filtering for advocacy can be useful in helping administrators make the case for potentially worthwhile programs.
In other words, there is a cognitive/organizational reality that “mediates” between evidence and policy decisions. The authors contrast this reality with the position they attribute to federal policy makers and the authors of NCLB: that scientific evidence ought to be used “directly,” or instrumentally, to make decisions. In fact, they read federal policy as arguing that “these other forms of evidence are inappropriate or less valuable than social science research evidence and that reliance on these other forms is precisely the pattern that federal policy makers should aim to break” (p. 601). This is where their argument is weakest. The contrast they set up, between practical knowledge mediating between evidence and decisions and the idea that evidence should be used directly, is a false dichotomy, and the “advocate for direct use of evidence” is a straw man. There are certainly researchers and research methodologists who do not study, and are not familiar with, how evidence is used in district decisions. But not being experts in decision processes does not make them advocates for a particular process called “direct.” Federal policy is not aimed at decision processes. It aims instead to raise the standards of evidence in formal research that claims to measure the impact of programs, so that when such evidence is integrated into decision processes and weighed against practical concerns of local resources, local conditions, local constraints, and local goals, its information value is positive. Federal policy is not trying to remove decision processes; it is trying to remove research reports that purport to provide research evidence but reach unwarranted conclusions because of poor research design, incorrect statistical calculations, or bias.
We should also not mistake Honig and Coburn’s descriptions of decision processes for descriptions of deep, underlying, and unchangeable human cognitive tendencies. District decision-makers can certainly learn to be better consumers of research: to distinguish weak advocacy studies from stronger designs and to judge whether a particular report can usefully be generalized to their local conditions. We can also anticipate an improvement in the level of the conversation among districts’ evaluation, curriculum, and IT departments, so that local evaluations are conducted to answer critical questions and to provide useful information that can be integrated with other local considerations into a decision. —DN
Honig, M. I. & Coburn, C. (2008). Evidence-Based Decision Making in School District Central Offices. Educational Policy, 22(4), 578-608.