In September’s “The Continued Search for Best Practice,” I suggested that the federally funded Cooperative Research Program in First-Grade Reading Instruction (aka the “first-grade studies”), conducted in the 1960s, remains a “scintillating study” today. A prominent finding was that a comparison of different approaches to teaching beginning reading revealed none to be the most effective. All of the approaches worked well in some contexts and not so well in others. That finding calls attention to the prominent role of context in today’s understanding of “evidence-based best practice.” Here, I extend that view and address two pertinent questions I raised then.
If context is central, does anything go?
Certainly not. It simply means that an informed, experienced, and dedicated practitioner working within a particular context, not research, is at the center of best practice. That was Gerald Duffy’s (1994) point when he said “Viewing research findings as…technical information ignores the reality that teachers must make strategic decisions about when [emphasis added] to apply findings…and when it might be appropriate to ignore findings altogether” (p. 19). (See also David Pearson, 2007). Or, as the first-grade studies suggested decades ago, matching the right action to particular circumstances at the right time is best practice.
Nonetheless, knowing the relevant research is a professional obligation, if for no other reason than to know when it is necessary to justify practice not aligned with it. On the other hand, the body of education research, on the whole, is relatively limited and equivocal, leaving much room for interpretation.* Further, it overwhelmingly leans toward measurable achievement, giving short shrift to valued, but less measurable, goals and to other pedagogically relevant factors. We know little about the efficiency, appeal, and negative collateral outcomes of even the most researched approaches and practices. So, research findings may be a useful, albeit limited, resource for considering informed practice, but they are not a prescription for success or the final arbiter of best practice. They are a starting point for reflective, discriminating practice, not a substitute for it.
The medical profession has a model for evidence-based practice that provides a more balanced and enlightened perspective. Evidence-based practice in health care has been argued to exist at the intersection of research, professional knowledge and experience, local data and information, and patient experiences and preferences (Rycroft-Malone et al., 2004). Anything does not go, but best practice varies from case to case because it takes into account four sources of input, three of which are contextual.
A widely accepted set of general principles defining “good” (not best) practice would also be a hedge against “anything goes.” It might even include defining malpractice, which Jim Cunningham (1999) has argued is necessary to call ourselves a profession. As far as I know, we have no broad consensus on such principles, let alone an operational definition of malpractice. Why not, I wonder?
Might literacy research better align with a more contextual view of best practice?
I think so. The bulk of our research literature is grounded in two metaphors: the laboratory for quantitative experimental research and the lens for qualitative naturalistic research. The former necessarily treats a vast array of dynamic, interacting, and potentially influential contextual factors in classrooms as random variation. It generates broad generalizations with the implicit assumption that, at best, “when all other things are equal, we can say that…” But, as any teacher knows, all things are never equal, and contending with that reality defines the essence of professional practice. The lens metaphor, too, is limited. It enables deep analysis of instructional contexts, but usually with no deliberate investment in understanding how contextual factors might be managed for the sake of improving practice.
There is a third alternative that is gradually taking hold in literacy research. It is generally referred to as design-based research. As implied by the word design, it is grounded in an engineering metaphor (see Reinking, 2011). This approach rigorously studies how an instructional intervention can be designed and implemented to accomplish a valued pedagogical goal. It asks questions such as: What contextual factors enhance or inhibit effectiveness, efficiency, and appeal? What iterative adaptations to the intervention make sense in light of those factors? What unanticipated outcomes does the intervention bring about? Does the intervention transform teaching and learning? What pedagogical principles might be learned by trying to make something work, and do those principles stand up across diverse contexts?
In short, it is an approach to research that aligns with the deeply contextual nature of teaching and the need for informed guidance derived from authentic practice, not unequivocal prescriptions for best practice.
In my final installment, I will summarize several published studies that illustrate this approach and how it might inform practice.
*See David Labaree’s (1998) argument that education research is a lesser form of knowledge. See also John Hattie’s (2009) analysis of more than 50,000 experimental studies involving more than 2 million students, leading to his conclusion that the overall effect sizes are moderate. Also noteworthy is the remarkably small number of published experimental studies in literacy that meet the most rigorous standards of the U.S. Department of Education’s What Works Clearinghouse (27 of 836 in one year; see also http://blogs.edweek.org/edweek/inside-school-research/2013/11/useful_reading_research_hard_t.html).
David Reinking is the Eugene T. Moore Professor of Education in the School of Education at Clemson University. During the 2012–2013 academic year, he was a visiting distinguished professor in the Johns Hopkins University School of Education, and in the spring of 2013, he was a visiting professor at the Università degli Studi della Tuscia in Viterbo, Italy.
The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.
References
Cunningham, J.W. (1999). How we can achieve best practices in literacy instruction. In L.B. Gambrell, L.M. Morrow, S.B. Neuman, & M. Pressley (Eds.), Best practices in literacy instruction (pp. 34–45). New York, NY: Guilford.
Duffy, G.G. (1994). How teachers think of themselves: A key to mindfulness. In J.N. Mangieri & C.C. Block (Eds.), Creating powerful thinking in teachers and students: Diverse perspectives (pp. 3–25). Fort Worth, TX: HarperCollins.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London, UK: Routledge.
Labaree, D.F. (1998). Educational researchers: Living with a lesser form of knowledge. Educational Researcher, 27(8), 4–12.
Pearson, P.D. (2007). An endangered species act for literacy education. Journal of Literacy Research, 39(2), 145–162.
Reinking, D. (2011). Beyond the laboratory and lens: New metaphors for literacy research. In P.L. Dunston, L.B. Gambrell, S.K. Fullerton, P.M. Stecker, V.R. Gillis, & C.C. Bates (Eds.), 60th yearbook of the Literacy Research Association (pp. 1–17). Oak Creek, WI: Literacy Research Association.
Rycroft-Malone, J., Seers, K., Titchen, A., Harvey, G., Kitson, A., & McCormack, B. (2004). What counts as evidence in evidence-based practice? Journal of Advanced Nursing, 47(1), 81–90.