Thursday, September 15, 2016

We Broke Fluency

    I've been thinking about fluency lately. Listening to my students read aloud is a September ritual, and since I am teaching sixth grade again, the practice has led me to reflect. Fluency has woven its way through my 19 years of teaching, with different years bringing different philosophies. I've been teaching long enough to remember when fluency instruction first started to come back into style.

    1999: Maybe we should listen to students read aloud.


    It sounds ridiculous now, but this wasn't at all the standard instructional practice where I was teaching. It was the late 1990s, Chumbawamba was on the radio, and I was driving a Saturn. I was teaching from a whole language curriculum in which we focused on purposes for writing and purposes for reading--but had huge professional decision-making power over what we taught when. No Child Left Behind hadn't yet been heard of, and yearly tests were still several years away. We started listening to kids read aloud and doing Running Records. I loved the moment of listening to a reader problem-solve a word. I could learn so much about what was going on inside the reading brain by spending a few moments listening!
 

   2002: Let's try to improve student fluency over time.


   Over the next five years--as the 90s melted into the aughts, I traded in one Saturn for another, and our son born in 1999 grew from an infant to a preschooler--fluency instruction exploded. We went from musing about whether to listen to students reading aloud to being fully outfitted with timers, reading selections, and the Fluency Formula. It all made so much sense! Students who read faster can read more, students who read more read better, and if we could only get students to read faster they would read more and better. I taught with speed drills, I taught with phrase-cued text. Students read aloud to partners, they read aloud chorally, and we practiced reading smoothly and with expression.
    I noticed, though, that fluency data was--well, weird. Some kids showed the beautiful sloping increase, going from reading 100 words per minute to 110 to 120. Great! Others showed more stagnant growth patterns, though, and some perplexing kids even decreased in their words read correctly per minute.

    2006: Let's see which kids are at risk of reading failure by looking at fluency.


   Okay, well, I'm keeping all of this data anyway. In 2006 my husband and I were parenting a toddler and a first grader and working through graduate school together. I liked working with fluency instruction. As I was working on my summarizing book I also read many research journal articles about fluency, and there was a link between fluency and comprehension. A student's oral reading fluency score can help to screen for readers who are at risk of reading failure. It wasn't really big news to me--I could already spot troubled readers--but using fluency data to target specific kids for intervention seemed like a great idea.
    This was a year with some of my favorite students ever--and some of my most puzzling. One student (I'll call him Tim) performed disastrously on oral reading fluency tests. He had miscues all over, read very slowly, and showed little prosody. However, he could give amazing insights into what we were reading, and on tests of comprehension showed grade level appropriate scores. What was going on with him? I wished that I could figure out what compensation strategies he was using so that I could teach them to others!

2010: Let's use fluency scores to see if teachers are effective.


    By 2010, our oldest son had grown into a capable, confident reader. His early "strategic" DIBELS scores were completely in the past, and I was starting to wonder about fluency as a screening tool.
    It was probably in this year that I started to feel my deep distrust for consultants. At a meeting about RTI, a consultant who had never been a classroom teacher started talking about how to know if an intervention was effective. "If a student is behind in oral reading fluency, then that student should be gaining at least two words per minute per week to catch up."
    Wait, what?
    I had been keeping fluency data long enough to know that kids almost never show a consistent upward trend. I had also been listening to kids read aloud for long enough to know that weekly progress monitoring of fluency was time-consuming and not all that useful. I'd also noticed that readers slow down when dealing with surprising or incongruent information in a text--which is exactly what we want them to do! A goal of gaining two words per week in fluency twists everything that fluency instruction was meant to do. Fluency isn't the goal; comprehension is the goal, and fluency is just a way to check in on comprehension. Right? Well, it seemed that was wrong.
    Using fluency data to keep tabs on teachers led to some really poor classroom practices. I was shocked when I first heard of first grade teachers doing nonsense word practice so that students could get better DIBELS scores. Nonsense word fluency is a measure, not a goal in itself, and emphasizing the reading of nonsense words seems to show kids that reading isn't supposed to make sense. Fluency instruction was winning at the expense of comprehension instruction--because oral reading speed can be measured much more easily. This kind of measurement was vital for RTI and for seeing if teachers were doing interventions appropriately.

2014: Fluency is broken.


   I started hearing odd things when students read aloud to me. They would take a deep gulp of air before starting to read--the better to rattle off as many words as they could in one minute. Students got used to reading only pieces of a text during one-minute fluency probes, getting as far as they could and then stopping, leaving the story behind and never figuring out what happens. Instead of sounding out a word, students would mumble something close and blunder on to read as much as possible. I always wanted to spend more time listening to students read aloud, but ongoing progress monitoring, assessment, and implementation of Common Core standards always seemed to keep this from happening.
    Fluency (as measured by words correct per minute) stopped having much meaning to me. I had grown discouraged with keeping copious pages of data and not seeing much progress. Students were so used to reading as quickly as possible that I couldn't get much insight into their reading processes by doing a fluency probe. Changing fluency test "cut scores" meant that some students who could read beautifully were flagged for fluency intervention, while a few kids who were reading well but not comprehending slipped on by.

2016: Maybe we should listen to students read aloud.

    So this year, I'm going back to the start. 
    I'm going to listen to students read aloud--not for an oral reading fluency probe, not for progress monitoring, not for data or numbers. I'm going to listen to students read aloud so that I can learn about their problem-solving process. I'm going to talk about the text with readers and read together.
    It will take some time to break them out of their progress-monitored habits of taking a deep breath and rattling off as many words as they can. It will be a process to talk them through trying out a word that they do not know, pausing when they get to contradictory details, thinking through a text. But this work is totally worthwhile.
    Maybe I'll even put on my Doc Martens and listen to some Chumbawamba too.
 

1 comment:

  1. What an excellent post! I have been teaching long enough to notice this trend as well. I especially relate to 2010.
