Literacy Now


    Teach “Sight Words” As You Would Other Words

    By Nell K. Duke and Heidi Anne E. Mesmer
     | Jun 23, 2016

    In many classrooms we visit, “sight words” receive a very different kind of instruction from other words: they are taught primarily as an exercise in visual memorization. In this post, we explain why sight words should be taught much as you would teach any other words.

    First, a note about terminology: The term sight word means any word that can be read automatically (Ehri, 2005). Ultimately, any word can and should be a sight word, not just words from the Dolch or Fry lists, for example. For skilled readers, virtually all words have already become sight words. At this point, readers no longer need to engage in decoding (e.g., /c/-/a/-/t/ = /cat/); using an analogy (e.g., cat: like bat with a c); or using sentence context to figure out the words (Ehri, 2005)—they can now read them automatically, without conscious attention. In contrast, people often use the term sight words to mean high-frequency words, many of which do not follow typical English letter–sound relationships (e.g., said, some). They think that these high-frequency words must be learned by sight, without graphophonemic analysis, because of their irregularities. In the remainder of this post, we explain that this is not the case, and we use the term high-frequency words to mean words that are very common in English, whether regularly or irregularly spelled.

    Memorizing high-frequency words holistically is not the answer. The most powerful mechanism for eventually accessing words by sight is use of the graphophonemic structure, a process that amalgamates the word’s units into memory (Ehri, 1978). Here are five principles to keep in mind when teaching high-frequency words:

    Principle One: Teach high-frequency words along with phonemic awareness, individual letter–sound relationships, and a concept of word (e.g., Flanigan, 2007). In our observation, a great deal of high-frequency word instruction occurs too early—before children have these important pieces in place. For example, some children are memorizing letter sequences in “sight words” before they even have a concept of word—an understanding of word boundaries in print and how these map to speech. Similarly, they are chanting words before they even understand the alphabetic principle. Without a concept of word or alphabetic insight, children will have the mistaken impression that words are unsystematic, and learning will be inefficient in any case. High-frequency word instruction should proceed at basically the same pace as instruction in word decoding in general.

    Principle Two: Ask students to use graphophonemic analysis to read high-frequency words (Ehri, 2005), but be sure that instruction matches children’s developmental stage (e.g., Bear, Invernizzi, Templeton, & Johnston, 2012). For example, when working with an emergent reader who is solidifying consonant sounds, focus attention on the /t/ in to. When working with a full alphabetic reader, teach that in the word and, the a says /ă/, the n says /n/, and the d says /d/. Do this even for words that are not spelled using common letter–sound correspondences. For example, for the word was, we teach that w says /w/, a says /ŭ/, and the s says /z/. This kind of instruction builds a phonological representation of the word, which supports learning of the word.

    Principle Three: Teach high-frequency words in groups that have similar patterns. For example, instead of teaching the word some as a rule breaker, explain that it is like come, above, and love.

    Principle Four: Use high-frequency words to help children learn to decode new words. In one study, children were taught high-frequency words, such as long, can, and her, either with relatively little attention to the letter–sound relationships within them or with extensive analysis of those relationships (Ehri, Satlow, & Gaskins, 2009). Children taught the words with full graphophonemic analysis were able earlier to analogize from those words to new words—for example, to say, “If I know long, then I know strong.”

    Principle Five: Practice reading high-frequency words in sentences and books. Although we want children to analyze words individually, they also must read them within the context of sentences and books. It is critical that young children understand that reading high-frequency words enables them to unlock meaning within texts of interest to them.

    In sum, we recommend you approach the teaching of high-frequency words, or what you might have been referring to as “sight words,” much as you approach the teaching of other words. Such continuity in instructional approach would be out of “sight”!

    Nell K. Duke is a professor of Literacy, Language, and Culture at the University of Michigan, a member of the ILA Literacy Research Panel, and the author of Inside Information: Developing Powerful Readers and Writers of Informational Text Through Project-Based Instruction. Heidi Anne E. Mesmer is an associate professor of Literacy at Virginia Tech and a member of the ILA Literacy Research Panel. Her research focuses on text and beginning reading instruction.

    The ILA Literacy Research Panel uses this blog to connect ILA members around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Bear, D.R., Invernizzi, M., Templeton, S., & Johnston, F. (2012). Words their way: Word study for phonics, vocabulary, and spelling instruction (5th ed.). Boston, MA: Pearson.

    Ehri, L.C. (1978). Beginning reading from a psycholinguistic perspective: Amalgamation of word identities. In F.B. Murray (Ed.), The development of the reading process (International Reading Association Monograph No. 3). Newark, DE: International Reading Association.

    Ehri, L.C. (2005). Learning to read words: Theory, findings, and issues. Scientific Studies of Reading, 9(2), 167–188.

    Ehri, L.C., Satlow, E., & Gaskins, I. (2009). Grapho-phonemic enrichment strengthens keyword analogy instruction for struggling young readers. Reading & Writing Quarterly: Overcoming Learning Difficulties, 25(2–3), 162–191.

    Flanigan, K. (2007). A concept of word in text: A pivotal event in early reading acquisition. Journal of Literacy Research, 39(1), 37–70.


    The Influence of Mandated Tests on Literacy Instruction

    By Gay Ivey
     | May 12, 2016

    In their recent study, Dennis Davis and Angeli Willson set out to illuminate the relationship between literacy instruction and a mandated achievement test in Texas in their Reading Research Quarterly article “Practices and Commitments of Test-Centric Literacy Instruction: Lessons From a Testing Transition.” At the time, schools were undergoing a transition to a new test, the State of Texas Assessments of Academic Readiness (STAAR), a context Davis and Willson believed would magnify the complexities of the teaching–testing dynamic. They interviewed 12 teachers twice, over a period spanning the first and second years of test implementation, and conducted a focus group meeting with teachers from the larger sample. They also examined documents publicly available on the Texas Education Agency’s website intended to explain the transition to the STAAR and to provide teachers and parents with information about the new tests and their links to the state standards, which had not changed from the previous test.

    Here is a summary of their findings:

    First, instructional practices favoring the items, language, and limitations of the tests were pervasive. “Strategies” for test taking (e.g., prescribed annotations, acronyms for analyzing poems) were frequently substituted for cognitive and metacognitive reading strategies and were legitimized as comprehension processes despite a lack of research supporting their use. Writing instruction was tailored to the tests’ short length requirement and to particular genres. Study participants questioned the time-consuming benchmark testing in terms of both item quality and adherence to good practices in measurement design. They worried that a percentage-passing metric used to evaluate and compare classrooms and schools failed to account for individual student growth over time or differences in prior achievement across groups of students.

    Second, although existing standards did not change when the STAAR was introduced, new uncertainties arose about how and what to teach. Specifically, there was confusion over what it meant to increase rigor—for example, whether that meant merely teaching harder, providing more difficult tasks, or something else entirely. It was commonly understood that the new tests would require students to understand passages holistically rather than just read for retrieval and that students would be expected to read a wider range of texts. However, teachers felt they needed sample test items to guide decisions about teaching particular standards.

    Third, Davis and Willson theorized about why these test-centric practices were perpetuated even among teachers who found them to be problematic. Their analysis led them to the following understandings:

    • Teachers were compelled, for students’ sakes, to minimize the differences between what students experienced in class and what they would encounter on the tests.
    • Teachers broke down reading and writing processes into small pieces so they could publicize them (e.g., written objectives on the board) for administrators’ approval, particularly the skills most likely to be included on STAAR items.
    • Inappropriate inferences using benchmark test data had become normalized and accepted—for instance, analysis of a single test item to make inferences about a student’s competence with a standard, or evaluations of a teacher’s quality with no reference to student starting points.

    The authors describe a phenomenon that is far more consequential than “teaching to the test.” They sum up their perspective on the test-centric instruction teachers reported in this way: “Instead of instructional practices bending to align with a test, we see the test being allowed to enlarge and encircle all aspects of instructional practice” (p. 374).

    How can teachers, feeling professionally and morally compromised by such a trend, regain a sense of agency about their work? Because these practices have become normalized and entrenched in schools, Davis and Willson say the first step is to notice and name these indicators of test-centric practices: (1) use of test-like passages for instruction, (2) time spent teaching students how to document evidence of prescribed test-taking strategies, (3) the use of test-like questions as the basis of classroom discussion, and (4) discussions of data from test-formatted practice tests. Awareness of these and similar practices, they suggest, is the first step to principled resistance (Achinstein & Ogawa, 2006).

    Gay Ivey, PhD, is the Tashia F. Morgridge Chair in Reading at the University of Wisconsin-Madison. She is a member of the ILA Literacy Research Panel and currently serves as vice president of the Literacy Research Association.

    The ILA Literacy Research Panel uses this blog to connect ILA members around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Achinstein, B., & Ogawa, R.T. (2006). (In)fidelity: What new teacher resistance reveals about professional principles and prescriptive educational policies. Harvard Educational Review, 76(1), 30–63.

    Davis, D.S., & Willson, A. (2015). Practices and commitments of test-centric literacy instruction: Lessons from a testing transition. Reading Research Quarterly, 50(3), 357–379.

    Getting on the Same Page About Reading by Third Grade in Michigan

    By Nell K. Duke
     | Apr 28, 2016

    There is considerable interest across the United States in increasing the number of children who are reading at grade level by the end of third grade (e.g., Rose, 2012). Some responses to this interest, such as mandatory retention policies, are not supported by the weight of research evidence (e.g., Reschly & Christenson, 2013). In contrast, research offers substantial support for the impact of professional development, coaching, and specific instructional practices on literacy growth (e.g., Carlisle & Berebitsky, 2011; Purcell-Gates, Duke, & Stouffer, in press; Yoon, Duncan, Lee, Scarloss, & Shapley, 2007).

    In Michigan, an Early Literacy Task Force has been formed to support professional development, coaching, and the use of research-supported instructional practices statewide. This is no small task. Michigan has 540 Local Education Agencies (LEAs) and 56 Intermediate School Districts charged with providing various kinds of support to those LEAs, as well as a variety of nonprofit and other organizations that interact with literacy education.

    To provide leadership in this context, Michigan’s Association of Intermediate School Administrators (MAISA), through its General Educational Leadership Network (GELN), formed the Early Literacy Task Force. The Task Force comprises representatives from a number of relevant organizations in Michigan, including not only Intermediate School Districts, but also the Michigan Reading Association, the Michigan Department of Education, the Michigan Association for Computer Users in Learning, the Michigan Elementary and Middle School Principals Association, the University of Michigan, Michigan State University, and many others.

    In our first meeting, we agreed there is an enormous need in Michigan to get on the same page about effective early literacy instruction—on the same page about the content of early literacy professional development for Michigan teachers, the focus of literacy coaching, and the literacy instructional practices we want children to experience. Toward that end, we developed two documents, which you can access at the following links:

    Essential Instructional Practices in Early Literacy: Prekindergarten

    Essential Instructional Practices in Early Literacy: Grades K to 3

    In developing the documents, we relied heavily on research and focused on high-utility instructional practices (for further information about the purposes and use of the documents, please see their introductory sections). Given the effectiveness and range of these practices, we believe that focusing professional development and coaching on them could make a measurable difference in reading-by-third-grade outcomes. We are pleased that the documents have already received considerable attention—not only in Michigan but elsewhere in the United States and beyond. Plans are underway to create professional development offerings and materials, including an extensive library of video clips, to support learning about the practices.

    Additional documents, such as Essential Practices in Literacy Coaching and a Literacy Essentials School-Level Companion Document, are also in the works. Task Force leaders Joanne Hopper (MAISA GELN Director), Naomi Norman (Interim Assistant Superintendent, Achievement & Student Services at the Washtenaw Intermediate School District and the Livingston Education Agency), and Susan Townsend (Director of Instruction & Learning Services at the Jackson Intermediate School District) report a degree of collaboration and unity among education stakeholders that is unprecedented in Michigan. We are now in the same book and, with continued effort, we will be on the same page as well.

    Nell K. Duke is a professor of Literacy, Language, and Culture at the University of Michigan, a member of the ILA Literacy Research Panel, and author of Inside Information: Developing Powerful Readers and Writers of Informational Text Through Project-Based Instruction.

    The ILA Literacy Research Panel uses this blog to connect ILA members around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Carlisle, J.F., & Berebitsky, D. (2011). Literacy coaching as a component of professional development. Reading and Writing, 24(7), 773–800.

    Michigan Association of Intermediate School Administrators General Education Leadership Network Early Literacy Task Force (2016). Essential instructional practices in early literacy: Prekindergarten. Lansing, MI: Authors.

    Michigan Association of Intermediate School Administrators General Education Leadership Network Early Literacy Task Force (2016). Essential instructional practices in early literacy: K to 3. Lansing, MI: Authors.

    Purcell-Gates, V., Duke, N.K., & Stouffer, J. (in press). Teaching literacy: Reading. In D.H. Gitomer & C.A. Bell (Eds.), Handbook of research on teaching (5th ed.). Washington, DC: American Educational Research Association.

    Reschly, A.L., & Christenson, S.L. (2013). Grade retention: Historical perspectives and new research. Journal of School Psychology, 51(3), 319–322. Retrieved from http://dx.doi.org/10.1016/j.jsp.2013.05.002

    Rose, S. (2012). Third grade reading policies. Denver, CO: Education Commission of the States. Retrieved from www.ecs.org/clearinghouse/01/03/47/10347.pdf

    Yoon, K.S., Duncan, T., Lee, S.W.-Y., Scarloss, B., & Shapley, K.L. (2007). Reviewing the evidence on how teacher professional development affects student achievement (Issues & Answers Report, REL 2007–No. 033). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southwest. Retrieved from http://ies.ed.gov/ncee/edlabs/projects/project.asp?ProjectID=70

    From Dialogic Tools to a Dialogic Stance

    By Mary M. Juzwik, Mandie Dunn, and Ashley Johnson
     | Apr 14, 2016

    We did take a more exploratory, student-driven discussion during class where I was just sparking an idea that the students would run with. At one point, they literally turned in their seats toward each other, and that’s when I knew they were super engaged not only with me, but with each other. It was AWESOME.

    So reflected a preservice teacher we work with, following a lively discussion about Black Lives Matter in an urban 11th-grade English classroom. We are struck by this moment and by the teacher’s excitement. For her, this moment is unusual and exemplary. Our work centers on the question: How can such moments of dialogic teaching become more typical, rather than remarkable, in literacy classrooms? Mary and her colleagues defined dialogic teaching as “the instructional designs and practices that provide students with frequent and sustained opportunities to engage in learning talk” (Juzwik, Borsheim-Black, Caughlan, & Heintz, 2013, p. 5). When teachers create space for such talk, students have an opportunity to build on their own and each other’s ideas and connect them into coherent lines of thinking and inquiry over time (Alexander, 2008; Boyd, 2016). When teachers purposefully nurture and sustain such a stance, they make a dialogic classroom environment possible. Dialogic classroom environments bolster student literacy achievement growth (e.g., Murphy, Wilkinson, Soter, Hennessy, & Alexander, 2009), prepare students for participation in democratic life (Juzwik et al., 2013), foster student engagement (Kelly, 2008), and create more humane and sustainable workplaces for teachers.

    Dialogic tools

    Mary’s research team identified dialogic tools as a key component of literacy teaching that successfully provided students with opportunities for learning talk (Juzwik et al., 2013). They identified both teacher- and student-centered tools such as anticipation guides, teacher-scripted questions, four corners, fishbowls, and literature circles. Teachers and students collaboratively use these tools in planning and classroom practice to scaffold learning talk (Alexander, 2008; Juzwik et al., 2013). We and the teachers we work with find these tools helpful for instructional planning, both short-term (lesson) planning and long-term (unit or yearlong) planning. For example, English teacher Liz Krause puts up a word chart of dialogic tools behind her desk as a reminder while she plans. Others provide students with sentence stems, discussion phrases, or rubrics to focus their attention on dialogic moves.
    Dialogic tools embedded in dialogic stance

    Talking to learn is more than just increasing student talk or implementing particular tools. Using dialogic tools is more effective when they are embedded in a broader dialogic stance over time: “A teacher adopting a dialogic stance listens, leads and follows, responds and directs” (Boyd & Markarian, 2015, p. 273). A dialogic stance involves more than successfully enacting some dialogic tool; it further entails a sustained focus on the potential of student and teacher ideas to promote learning and inquiry. For example, a fishbowl should focus on the students and teacher building ideas together, not on students performing the elements of a good discussion. At the end of a fishbowl, instead of evaluating how the discussion went, students can consider questions about which ideas challenged them most or supported their thinking about a text. These questions emphasize listening, learning, and talking with each other. When teachers orient their classroom practices toward learning talk over the long term, a dialogic classroom environment where students and teachers learn together becomes possible.

    Mary M. Juzwik is a professor at Michigan State University. She is also the coeditor of Research in the Teaching of English and coauthor of Inspiring Dialogue: Talking to Learn in the English Classroom. Mandie Dunn and Ashley Johnson are doctoral students in curriculum, instruction, and teacher education at Michigan State University.

    The ILA Literacy Research Panel uses this blog to connect ILA members around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Alexander, R. (2008). Towards dialogic teaching: Rethinking classroom talk (4th ed.). York, England: Dialogos.

    Boyd, M. (2016). Connecting “man in the mirror”: Developing a classroom teaching and learning trajectory. L1-Educational Studies in Language and Literature, 15, 1–26.

    Boyd, M., & Markarian, W. (2015). Dialogic teaching and dialogic stance: Moving beyond interactional form. Research in the Teaching of English, 49(3), 272–296.

    Juzwik, M.M., Borsheim-Black, C., Caughlan, S., & Heintz, A. (2013). Inspiring dialogue: Talking to learn in the English classroom. New York, NY: Teachers College Press.

    Kelly, S. (2008). Race, social class, and student engagement in middle school English classrooms. Social Science Research, 37(2), 434–448.

    Murphy, P.K., Wilkinson, I.A.G., Soter, A.O., Hennessy, M.N., & Alexander, J.F. (2009). Examining the effects of classroom discussion on students’ comprehension of text: A meta-analysis. Journal of Educational Psychology, 101(3), 740–764.


     
    Read More
    • Policymaker
    • Topics
    • Librarian
    • Research
    • Curriculum Development
    • Literacy Research
    • Professional Development
    • Education Standards (General)
    • Assessment
    • Research & Practice: Viewpoints
    • Classroom Teacher
    • Administrator
    • Blog Posts
    • Tutor
    • Teacher Educator
    • Special Education Teacher
    • Reading Specialist
    • Literacy Education Student
    • Job Functions
    • Literacy Coach
    • Content Types

    Better Than CBM: Assessments That Inform Instruction

    By Peter Johnston and Deborah Rowe
     | Mar 31, 2016

    An earlier post noted that Curriculum-Based Measurement has come to dominate classroom assessment, and thus curriculum, and that it is not helpful in informing instruction. Teachers need information that helps them see reading and writing through the learners’ eyes. What do students know about print—its purposes, forms, and content? What strategies do they use to make meaning as they read an author’s text or compose one of their own? Effective instruction builds from children’s current strengths while “nudging” them to form new understandings about literacy processes, purposes, and content just beyond their current reach.

    Teachers using ongoing curriculum-based assessments are actually good at determining whether a child’s literacy is more or less developed (Taylor, Anselmo, Foreman, Schatschneider, & Angelopoulos, 2000). For example, in the early grades there is much information in a child’s writing. Merely getting children involved in making books yields a great deal of information about each child’s knowledge of print and how books work, while simultaneously engaging them in long-term projects that build writing stamina. Observing and conferring with young writers provides teachers with information about children’s composition processes and word making, without having to do any testing (e.g., Ray & Cleaveland, 2004; Rowe & Wilson, 2015). For example, from a child’s invented spelling patterns a teacher might recognize that assistance with phonemic awareness is needed (wt = wanted) or not needed (trubl = trouble) and which words are well known and can be used as instructional anchors. But we can also learn about the child’s sense of genre, his or her language choices, punctuation knowledge, how his or her illustrations enhance the textual meanings, revision strategies, attention span, and so forth.

    Records of children’s oral reading, such as running records (Clay, 2000) and associated miscue analysis (Wilde, 2000), even abbreviated forms (Vellutino & Scanlon, 2002), provide similarly productive information about children’s strategic processing of print. For example, we can tell whether children are monitoring and self-correcting when their reading doesn’t make sense or when it makes sense but doesn’t match the print. We can tell what strategies students use to figure out unknown words with and without support. This information can substantively inform our instruction.

    Checklists, too, particularly ones that require supporting evidence, can be useful (Scanlon, Vellutino, Small, Fanuele, & Sweeney, 2005). Instructional book levels such as those used by Reading Recovery or Fountas and Pinnell (1996) also can indicate progress. We are not advocating assigning children books to read by level. Instead, we suggest that teachers keep track of the estimated difficulty of the children’s book choices, the actual difficulty for the particular child, and a record of the strategies the child uses in reading the book. This would indicate whether the level of difficulty is appropriate, a necessary condition for children to be in control of their learning and for building persistence (Gersten, Fuchs, Williams, & Baker, 2001). Indeed, appropriate task difficulty is a core feature of successful interventions in learning disabilities (Swanson & Hoskyn, 1998) but commonly not the fate of less accomplished readers (Allington, 1983).

    Screening by testing all children is unnecessary because teachers who use formative assessments readily identify children making more and less adequate progress. Indeed, teachers unable to do so are poorly prepared to teach any of the children, let alone those experiencing difficulty. The small group of children teachers identify as not progressing well might be given a more detailed, instructionally informative assessment, such as the Observation Survey (Clay, 2004), which provides highly reliable and valid screening information while documenting in detail children’s knowledge of print and strategic sense making (Denton, Ciancio, & Fletcher, 2006). This level of precision, however, is mostly necessary for high-resource decisions, such as additional 1:1 instruction. For older children, assessments like the QRI-V also examine the child’s reading strategies.

    Keeping individual student folders containing multiple data sources (writing, running records, details about word/letter recognition and representation, reading and writing conference notes, records of children’s book discussions, etc.) allows routine, collaborative stocktaking and analysis of children’s development (e.g., McGill-Franzen, Payne, & Dennis, 2010).

    Overall, teachers need support for engaging in assessment practices that help them understand students’ approaches to reading and writing—the attitudes, skills, and strategies students actually use. Assessment results that compare students with normative expectations for reading rate, or that reflect the number of questions answered correctly on a comprehension test, fail to provide the kind of specific information on students’ literacy processes that teachers need. 

    In the end, though, we also need formative assessment of our own teaching practices through analysis of recordings (e.g., of book discussions and 1:1 interactions) and collaborative, data-based observations. Obviously, when children are not successful in our classrooms, our teaching must come under as much scrutiny as the child’s literate development so that we can be responsive to the child’s needs. This is work best done collaboratively with peers because it is equally important that we learn alternatives from those teachers who are meeting with greater success (Bryk, 2015). 

    Peter Johnston, PhD, is Professor Emeritus at the University at Albany, SUNY. He is a member of the ILA Literacy Research Panel. Deborah Rowe is an associate professor in the Department of Teaching and Learning in the Peabody College of Education and Human Development at Vanderbilt University in Nashville, TN.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Allington, R.L. (1983). The reading instruction provided readers of differing reading abilities. Elementary School Journal, 83(5), 548–559.
    Bryk, A.S. (2015). Accelerating how we learn to improve. Educational Researcher, 44(9), 467–477.
    Clay, M.M. (2000). Running records for classroom teachers. Portsmouth, NH: Heinemann.
    Clay, M.M. (2004). An observation survey of early literacy achievement (2nd ed.). Portsmouth, NH: Heinemann.
    Denton, C.A., Ciancio, D.J., & Fletcher, J.M. (2006). Validity, reliability, and utility of the Observation Survey of Early Literacy Achievement. Reading Research Quarterly, 41(1), 8–34.
    Fountas, I.C., & Pinnell, G.S. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.
    Gersten, R., Fuchs, L.S., Williams, J.P., & Baker, S. (2001). Teaching reading comprehension strategies to students with learning disabilities: A review of research. Review of Educational Research, 71(2), 279–320.
    McGill-Franzen, A., Payne, R.L., & Dennis, D.V. (2010). Responsive intervention: What is the role of appropriate assessment? In P.H. Johnston (Ed.), RTI in literacy—Responsive and comprehensive (pp. 115–132). Newark, DE: International Reading Association.
    Ray, K.W., & Cleaveland, L.B. (2004). About the authors: Writing workshop with our youngest writers. Portsmouth, NH: Heinemann.
    Rowe, D.W., & Wilson, S.J. (2015). The development of a descriptive measure of early childhood writing: Results from the Write Start! writing assessment. Journal of Literacy Research, 47(2), 245–292.
    Scanlon, D.M., Vellutino, F.R., Small, S.G., Fanuele, D.P., & Sweeney, J.M. (2005). Severe reading difficulties—Can they be prevented? A comparison of prevention and intervention approaches. Exceptionality, 13(4), 209–227.
    Swanson, H.L., & Hoskyn, M. (1998). Experimental intervention research on students with learning disabilities: A meta-analysis of treatment outcomes. Review of Educational Research, 68(3), 277–321.
    Taylor, H.G., Anselmo, M., Foreman, A.L., Schatschneider, C., & Angelopoulos, J. (2000). Utility of kindergarten teacher judgments in identifying early learning problems. Journal of Learning Disabilities, 33(2), 200–210.
    Vellutino, F.R., & Scanlon, D.M. (2002). The interactive strategies approach to reading intervention. Contemporary Educational Psychology, 27(4), 573–635.
    Wilde, S. (2000). Miscue analysis made easy: Building on student strengths. Portsmouth, NH: Heinemann.

    Resources

    The Teachers College Reading and Writing Project website has good free resources for reading benchmark assessments linked to the Common Core State Standards: http://readingandwritingproject.org/