Sunday, September 18, 2016

The Cult of Finished Work

    When I was in second grade, I had a wonderful teacher. Quiet and soft-spoken, she engaged us in reading self-selected books in an age of basal readers, encouraged us to complete cooperative projects, and taught eight-year-olds how to summarize with strategies I still use today.
    I clearly remember one episode from my days spent with her. One day, as I was probably reading Hitty: Her First Hundred Years, I got the idea to create a newspaper for dolls. During seatwork time I called my teacher over to show her my idea. "I love it," she said. "Make more."
    She didn't mean someday, she meant now. So I went and worked with a few other interested students to make a tiny newspaper. We created advertisements for things that dolls would like, tried to think about articles for dolls, and wrote in super-tiny writing.
    Did we finish? I forget. I certainly don't have a finished product, and I don't think we continued making more past that magical morning. But the thrill of creation has stayed with me.

    I come back to thinking about this story a lot as I think about school. In many schools, there is an obsession with Finished Work. Students must finish "their work", which means that every line is filled in on the worksheet, every problem completed on the math page, every paragraph written of the essay. And yet I wonder--is this necessary?

Conflation of Learning with Completing
    In some ways, this obsession with finished work conflates learning with task completion. It's easy to see why these two processes are confused. Learning is messy, recursive, and difficult to measure. On the other hand, task completion is a simple yes/no question: Is it done?
    But every teacher knows that a student who has completed a task has not necessarily mastered a concept. The reverse is also true. Using task completion as a substitute for learning is shaky at best and damaging at worst.

Linear vs. Recursive Processes
    How can a focus on task completion be damaging? Research into higher-level thinking tasks like synthesizing reveals that the best work comes from those who use a recursive process. In a recursive process, the learner moves back and forth between tasks like reading and writing. Learners take time to consider information and act upon it. Sometimes they find dead ends which must be abandoned.
    But a focus on task completion encourages a linear process. Students get an assignment, finish it, and move on to the next. Students rarely look back at finished tasks, because they are finished. They do not engage in recursive thinking, because they are busy looking on to what they must do next.

Breaking Free
    Moving beyond the cult of Finished Work can be hard for a teacher. At first, you feel as if you are not a Good Teacher. Good Teachers have neat little gradebooks filled with marks that show exactly what students have learned. Allowing a student to "get away with" not completing an assignment makes one feel like a shirker, a wishy-washy impostor who doesn't have high standards.
    But the feeling changes when we think about the true purpose of the work. The activities we assign are learning tools. If an activity has no value beyond the learning, then why must it be completed?
    Moving beyond Finished Work requires a teacher to ask two questions:

What is the learning goal for this assignment?

Does this assignment have value beyond the learning goal?

What is the learning goal?
    This question seems simple but is deceptively complex. What do we want students to learn from an assignment? If the task is practice, then how much practice is needed before we can say that students have attained the skill? These are hard questions, but they are well worth the reflection.

Does this assignment have value beyond the learning goal?
    To think about this question, a teacher has to consider the WHY of an assignment. An assignment that is authentic and has real world value may very well have a purpose beyond the learning. Students may be motivated and excited to complete this task even if they know the concepts behind it very well.
    Many school assignments, however, have little long-term value. Consider a page of 30 long division problems. If a student can do 10 with no problems, what is the purpose of having that student complete all 30? Is it so that the page is "done"? That student's time would be better spent on some kind of enrichment activity in which he or she is challenged to go beyond the long division algorithm and consider its utility in the real world.


Solé, I., Miras, M., Castells, N., Espino, S., & Minguela, M. (2013). Written Communication, 30(1), 63-90.

Thursday, September 15, 2016

We Broke Fluency

    I've been thinking about fluency lately. Listening to my students read aloud is a September ritual, and now that I'm in sixth grade again the practice has led me to reflect. Fluency has woven its way through my 19 years of teaching, with different years bringing different philosophies. I've been teaching long enough to remember when fluency instruction first started to come back into style.

    1999: Maybe we should listen to students read aloud.

    It sounds ridiculous now, but this wasn't at all the standard instructional practice where I was teaching. It was the late 1990s, Chumbawamba was on the radio, and I was driving a Saturn. I was teaching from a whole language curriculum in which we focused on purposes for writing and purposes for reading--and had huge professional decision-making power over what we taught and when. No one had heard of No Child Left Behind yet, and yearly tests were still several years away. We started listening to kids read aloud and doing Running Records. I loved the moment of listening to a reader problem-solve a word. I could learn so much about what was going on inside the reading brain by spending a few moments listening!

   2002: Let's try to improve student fluency over time.

   Over the next five years--as the 90s melted into the aughts, I traded in one Saturn for another, and our son born in 1999 grew from an infant to a preschooler--fluency instruction exploded. We went from musing about whether to listen to students reading aloud to being fully outfitted with timers, reading selections, and the Fluency Formula. It all made so much sense! Students who read faster can read more, students who read more read better, and if we could only get students to read faster they would read more and better. I taught with speed drills, I taught with phrase-cued text. Students read aloud to partners, they read aloud chorally, and we practiced reading smoothly and with expression.
    I noticed, though, that fluency data was--well, weird. Some kids showed the beautiful sloping increase, going from reading 100 words per minute to 110 to 120. Great! Others showed more stagnant growth patterns, though, and some perplexing kids even decreased in their words read correctly per minute.

    2006: Let's see which kids are at risk of reading failure by looking at fluency.

   Okay, well, I'm keeping all of this data anyway. In 2006 my husband and I were parenting a toddler and a first grader and working through graduate school together. I liked working with fluency instruction. As I was working on my summarizing book I also read many research journal articles about fluency, and there was a link between fluency and comprehension. A student's oral reading fluency score can help to screen for readers who are at risk of reading failure. It wasn't really big news to me--I could already spot troubled readers--but using fluency data to target specific kids for intervention seemed like a great idea.
    This was a year with some of my favorite students ever--and some of my most puzzling. One student (I'll call him Tim) performed disastrously on oral reading fluency tests. He had miscues all over, read very slowly, and showed little prosody. However, he could give amazing insights into what we were reading, and on tests of comprehension showed grade level appropriate scores. What was going on with him? I wished that I could figure out what compensation strategies he was using so that I could teach them to others!

2010: Let's use fluency scores to see if teachers are effective.

    By 2010, our oldest son had grown into a capable, confident reader. His early "strategic" DIBELS scores were completely in the past, and I was starting to wonder about fluency as a screening tool.
    It was probably in this year that I started to feel my deep distrust for consultants. At a meeting about RTI, a consultant who had never been a classroom teacher started talking about how to know if an intervention was effective. "If a student is behind in oral reading fluency, then that student should be gaining at least two words per minute per week to catch up."
    Wait, what?
    I had been keeping fluency data long enough to know that kids almost never show a consistent upward trend. I had also been listening to kids read aloud for long enough to know that weekly progress monitoring of fluency was time-consuming and not all that useful. I'd also noticed that readers slow down when dealing with surprising or incongruent information in a text--which is exactly what we want them to do! A goal of gaining two words per week in fluency twists everything that fluency instruction was meant to do. Fluency isn't the goal; comprehension is the goal, and fluency is just a way to check in on comprehension. Right? Well, it seemed that was wrong.
    Using fluency data to keep tabs on teachers led to some really poor classroom practices. I was shocked when I first heard of first grade teachers doing nonsense word practice so that students could get better DIBELS scores. Nonsense word fluency is a measure, not a goal in itself, and emphasizing the reading of nonsense words seems to show kids that reading isn't supposed to make sense. Fluency instruction was winning at the expense of comprehension instruction--because oral reading speed can be measured much more easily. This kind of measurement was vital for RTI and for seeing if teachers were doing interventions appropriately.

2014: Fluency is broken.

    I started hearing odd things when students read aloud to me. They would take a deep gulp of air before starting to read--the better to rattle off as many words as they could in one minute. Students got used to reading only pieces of a text during one-minute fluency probes, getting as far as they could and then stopping, leaving the story behind and never figuring out what happens. Instead of sounding out a word, students would mumble something close and blunder on to read as much as possible. I always wanted to spend more time listening to students read aloud, but ongoing progress monitoring, assessment, and implementation of Common Core standards always seemed to keep this from happening.
    Fluency (as measured by words correct per minute) stopped having much meaning to me. I had grown discouraged with keeping copious pages of data and not seeing much progress. Students were so used to reading as quickly as possible that I couldn't get much insight into their reading processes by doing a fluency probe. Changing fluency test "cut scores" meant that some students who could read beautifully were flagged for fluency intervention, while a few kids who were reading well but not comprehending slipped on by.

2016: Maybe we should listen to students read aloud.

    So this year, I'm going back to the start. 
    I'm going to listen to students read aloud--not for an oral reading fluency probe, not for progress monitoring, not for data or numbers. I'm going to listen to students read aloud so that I can learn about their problem-solving process. I'm going to talk about the text with readers and read together.
    It will take some time to break them out of their progress-monitored habits of taking a deep breath and rattling off as many words as they can. It will be a process to talk them through trying out a word that they do not know, pausing when they get to contradictory details, thinking through a text. But this work is totally worthwhile.
    Maybe I'll even put on my Doc Martens and listen to some Chumbawamba too.

Saturday, September 10, 2016

The Potato Chip Rubric: Teaching Students How to Understand Rubrics

     Well, I've started my 20th year of teaching! I'm eleven days into the school year, and I just love my class. Because I moved from fourth grade to sixth grade I've had these students before, and it's just wonderful to see how they have grown and matured. Plus they are SO MUCH FUN.

    Another side effect of my move to sixth grade is that I've been looking back at my files and resources, rediscovering lessons that I taught before. One of my old favorites is The Reviser's Toolbox by Barry Lane. In this book, Barry Lane suggests talking with students about "the horserace of criteria" and working with students to create their own rubrics for something real and tangible.

    This sounded like the perfect plan for this week! I've started the year with Flood Warning (free) from my Summary and Analysis series. This series includes short articles with summary prompts and analysis responses.

    As students prepared to discuss and turn in their first response, I wanted them to really understand the rubric that my co-teacher and I would be using. For many students, rubrics are rather opaque tools that just don't make much sense.

    Enter the Potato Chip rubric! On Day 1, we talked about how we could create a rubric for potato chips. Here were some highlights from our discussion:

-What would we include on a rubric for potato chips?
-What makes the ideal potato chip? How do you know that you are tasting a really excellent chip?
-What score points would we like to have? (This led to a conversation of 3-point versus 4-point rubrics.)
-What criteria do we use to score potato chips? (Interesting discussions at tables resulted from this--students debated whether "flavor" and "taste" meant the same thing, whether a potato chip's size should be evaluated, and whether we should look only at plain potato chips or include flavored chips.)

   Then, I connected the potato chip rubrics to our summary rubric. What criteria do we use to score a summary? Why? Students scored their own summaries using the rubric. "I never knew what a rubric really meant before," one student confided. "This is so hard," another student said.

    "Could we rate apple cider using the potato chip rubric?" I asked, and of course the students laughed at the idea of crunchy apple cider. Just as you can't rate apple cider with a potato chip rubric, you can't rate an analysis with a summary rubric. We discussed the differences between the two rubrics and students scored themselves on this one as well.

    Okay, so I could have stopped there. But what would be the fun in that? The next day, my co-teacher and I both brought in some potato chips for students to taste. We had blind taste tests and students used their own rubrics to rate the chips! Kids had so much fun, and of course the teachers did too. Students noticed how a chip could get an excellent rating for texture but not for taste, and vice versa. We also learned that a blind taste test can lead to surprising results--all of us expected to dislike the Lay's Classic chips, but those turned out to be a favorite of many! This led to conversations about how teachers often score work without looking at which student wrote the paper.

    Teaching students for a second time means that I've been able to dive into curriculum more quickly this year than in other years. But lessons like The Potato Chip Rubric are still important for the start of the year. These kinds of activities give our class a shared foundation and help to demystify some of the tasks of teaching and learning.