March 2006 -- vol. 95, no. 4

The Trouble with Rubrics

Once upon a time I vaguely thought of assessment in dichotomous terms: The old approach, which consisted mostly of letter grades, was crude and uninformative, while the new approach, which included things like portfolios and rubrics, was detailed and authentic. Only much later did I look more carefully at the individual floats rolling by in the alternative assessment parade -- and stop cheering.

For starters, I realized that it’s hardly sufficient to recommend a given approach on the basis of its being better than old-fashioned report cards. By that criterion, just about anything would look good. I eventually came to understand that not all alternative assessments are authentic. My growing doubts about rubrics in particular were prompted by the assumptions on which this technique rested and also by the criteria by which rubrics (and assessment itself) were typically judged. These doubts were stoked not only by murmurs of dissent I heard from thoughtful educators but by the case made for this technique by its enthusiastic proponents. For example, I read in one article that “rubrics make assessing student work quick and efficient, and they help teachers to justify to parents and others the grades that they assign to students.”[1] To which the only appropriate response is: Uh-oh.

First of all, something that’s commended to teachers as a handy strategy of self-justification during parent conferences (“Look at all these 3’s, Mrs. Grommet! How could I have given Zach anything but a B?”) doesn’t seem particularly promising for inviting teachers to improve their practices, let alone rethink their premises.

Second, I’d been looking for an alternative to grades because research shows three reliable effects when students are graded: They tend to think less deeply, avoid taking risks, and lose interest in the learning itself.[2] The ultimate goal of authentic assessment must be the elimination of grades. But rubrics actually help to legitimate grades by offering a new way to derive them. They do nothing to address the terrible reality of students who have been led to focus on getting A’s rather than on making sense of ideas.

Finally, there’s the matter of that promise to make assessment “quick and efficient.” I’ve graded enough student papers to understand the appeal here, but the best teachers would react to that selling point with skepticism, if not disdain. They’d immediately ask what we had to sacrifice in order to spit out a series of tidy judgments about the quality of student learning. To ponder that question is to understand how something that presents itself as an innocuous scoring guide can be so profoundly wrongheaded.
Consistent and uniform standards are admirable, and maybe even workable, when we’re talking about, say, the manufacture of DVD players. The process of trying to gauge children’s understanding of ideas is a very different matter, however. It necessarily entails the exercise of human judgment, which is an imprecise, subjective affair. Rubrics are, above all, a tool to promote standardization, to turn teachers into grading machines or at least allow them to pretend that what they’re doing is exact and objective. Frankly, I’m amazed by the number of educators whose opposition to standardized tests and standardized curricula mysteriously fails to extend to standardized in-class assessments.

The appeal of rubrics is supposed to be their high interrater reliability, finally delivered to language arts. A list of criteria for what should be awarded the highest possible score when evaluating an essay is supposed to reflect near-unanimity on the part of the people who designed the rubric and is supposed to assist all those who use it to figure out (that is, to discover rather than to decide) which essays meet those criteria.

Now some observers criticize rubrics because they can never deliver the promised precision; judgments ultimately turn on adjectives that are murky and end up being left to the teacher’s discretion. But I worry more about the success of rubrics than their failure. Just as it’s possible to raise standardized test scores as long as you’re willing to gut the curriculum and turn the school into a test-preparation factory, so it’s possible to get a bunch of people to agree on what rating to give an assignment as long as they’re willing to accept and apply someone else’s narrow criteria for what merits that rating. Once we check our judgment at the door, we can all learn to give a 4 to exactly the same things.

This attempt to deny the subjectivity of human judgment is objectionable in its own right. But it’s also harmful in a very practical sense. In an important article published in 1999, Linda Mabry, now at Washington State University, pointed out that rubrics “are designed to function as scoring guidelines, but they also serve as arbiters of quality and agents of control” over what is taught and valued. Because “agreement among scorers is more easily achieved with regard to such matters as spelling and organization,” these are the characteristics that will likely find favor in a rubricized classroom. Mabry cites research showing that “compliance with the rubric tended to yield higher scores but produced ‘vacuous’ writing.”[3]

To this point, my objections assume only that teachers rely on rubrics to standardize the way they think about student assignments. Despite my misgivings, I can imagine a scenario where teachers benefit from consulting a rubric briefly in the early stages of designing a curriculum unit in order to think about various criteria by which to assess what students end up doing. As long as the rubric is only one of several sources, as long as it doesn’t drive the instruction, it could conceivably play a constructive role. But all bets are off if students are given the rubrics and asked to navigate by them. The proponent I quoted earlier, who boasted of efficient scoring and convenient self-justification, also wants us to employ these guides so that students will know ahead of time exactly how their projects will be evaluated.
In support of this proposition, a girl who didn’t like rubrics is quoted as complaining, “If you get something wrong, your teacher can prove you knew what you were supposed to do.”[4] Here we’re invited to have a good laugh at this student’s expense. The implication is that kids’ dislike of these things proves their usefulness – a kind of “gotcha” justification.

Just as standardizing assessment for teachers may compromise the quality of teaching, so standardizing assessment for learners may compromise the learning. Mindy Nathan, a Michigan teacher and former school board member, told me that she began “resisting the rubric temptation” the day “one particularly uninterested student raised his hand and asked if I was going to give the class a rubric for this assignment.” She realized that her students, presumably grown accustomed to rubrics in other classrooms, now seemed “unable to function unless every required item is spelled out for them in a grid and assigned a point value. Worse than that,” she added, “they do not have confidence in their thinking or writing skills and seem unwilling to really take risks.”[5]

This is the sort of outcome that may not be noticed by an assessment specialist who is essentially a technician, in search of practices that yield data in ever-greater quantities. Consider the usual argument for rubrics: A B+ at the top of a paper tells a student very little about its quality, whereas a rubric provides more detailed information based on multiple criteria; therefore, a rubric is a superior assessment. The fatal flaw in this logic is revealed by a line of research in educational psychology showing that students whose attention is relentlessly focused on how well they’re doing often become less engaged with what they’re doing. There’s a big difference between thinking about the content of a story you’re reading (for example, trying to puzzle out why a character made a certain decision), and thinking about your own proficiency at reading. “Only extraordinary education is concerned with learning,” the writer Marilyn French once observed, whereas “most is concerned with achieving: and for young minds, these two are very nearly opposites.”[6]

In light of this distinction, it’s shortsighted to assume that an assessment technique is valuable in direct proportion to how much information it provides. At a minimum, this criterion misses too much. But the news is even worse than that. Studies have shown that too much attention to the quality of one’s performance is associated with more superficial thinking, less interest in whatever one is doing, less perseverance in the face of failure, and a tendency to attribute the outcome to innate ability and other factors thought to be beyond one’s control.[7] To that extent, more detailed and frequent evaluations of a student’s accomplishments may be downright counterproductive.

As one sixth grader put it, “The whole time I’m writing, I’m not thinking about what I’m saying or how I’m saying it. I’m worried about what grade the teacher will give me, even if she’s handed out a rubric. I’m more focused on being correct than on being honest in my writing.”[8] In many cases, the word “even” in that second sentence might be replaced with “especially.” But, in this respect at least, rubrics aren’t uniquely destructive. Any form of assessment that encourages students to keep asking, “How am I doing?” is likely to change how they look at themselves and at what they’re learning, usually for the worse.
What all this means is that improving the design of rubrics, or inventing our own, won’t solve the problem because the problem is inherent to the very idea of rubrics and the goals they serve. This is a theme sounded by Maja Wilson in her extraordinary new book, Rethinking Rubrics in Writing Assessment.[9] In boiling “a messy process down to 4-6 rows of nice, neat, organized little boxes,” she argues, assessment is “stripped of the complexity that breathes life into good writing.” High scores on a list of criteria for excellence in essay writing do not mean that the essay is any good because quality is more than the sum of its rubricized parts. To think about quality, Wilson argues, “we need to look to the piece of writing itself to suggest its own evaluative criteria” – a truly radical and provocative suggestion.

Wilson also makes the devastating observation that a relatively recent “shift in writing pedagogy has not translated into a shift in writing assessment.” Teachers are given much more sophisticated and progressive guidance nowadays about how to teach writing but are still told to pigeonhole the results, to quantify what can’t really be quantified. Thus, the dilemma: Either our instruction and our assessment remain “out of synch” or the instruction gets worse in order that students’ writing can be easily judged with the help of rubrics.

Again, this is not a matter of an imperfect technique. In fact, when the how’s of assessment preoccupy us, they tend to chase the why’s back into the shadows. So let’s shine a light over there and ask: What’s our reason for trying to evaluate the quality of students’ efforts? It matters whether the objective is to (1) rank kids against one another, (2) provide an extrinsic inducement for them to try harder, or (3) offer feedback that will help them become more adept at, and excited about, what they’re doing.

Devising more efficient rating techniques – and imparting a scientific luster to those ratings – may make it even easier to avoid asking this question. In any case, it’s certainly not going to shift our rationale away from (1) or (2) and toward (3). Neither we nor our assessment strategies can be simultaneously devoted to helping all students improve and to sorting them into winners and losers. That’s why we have to do more than reconsider rubrics. We have to reassess the whole enterprise of assessment, the goal being to make sure it’s consistent with the reason we decided to go into teaching in the first place.
NOTES

1. Heidi Goodrich Andrade, “Using Rubrics to Promote Thinking and Learning,” Educational Leadership, February 2000, p. 13.

2. I review this research in Punished by Rewards (Houghton Mifflin, 1993) and The Schools Our Children Deserve (Houghton Mifflin, 1999), as well as in “From Degrading to De-Grading,” High School Magazine, March 1999.

3. Linda Mabry, “Writing to the Rubric,” Phi Delta Kappan, May 1999, pp. 678, 676.

4. Quoted by Andrade, “Understanding Rubrics,” in http://learnweb.harvard.edu/alps/thinking/docs/rubricar.htm. Another educator cites this same quotation and adds: “Reason enough to give rubrics a closer look!” It’s also quoted on the RubiStar website, which is a sort of on-line rubric-o-matic.

5. Mindy Nathan, personal communication, October 26, 2004. As a student teacher, Nathan was disturbed to find that her performance, too, was evaluated by means of a rubric that offered a ready guide for evaluating instructional “competencies.” In an essay written at the end of her student-teaching experience, she commented, “Of course, rubrics don’t lie; they just don’t tell the whole story. They crunch a semester of shared learning and love into a few squares on a sheet that can make or break a career.” That’s why she vowed, “I won’t do this to my students. My goal as a teacher will be to preserve and present the human aspects of my students that defy rubric-ization.”

6. Marilyn French, Beyond Power: On Women, Men, and Morals (New York: Summit, 1985), p. 387.

7. For more on the distinction between performance and learning -- and the detrimental effects of an excessive focus on performance -- see The Schools Our Children Deserve, chap. 2, which reviews research by Carol Dweck, Carole Ames, Carol Midgley, John Nicholls, and others.

8. Quoted in Natalia Perchemlides and Carolyn Coutant, “Growing Beyond Grades,” Educational Leadership, October 2004, p. 54. Notice that this student is actually making two separate points. Even some critics of rubrics, who are familiar with the latter objection -- that honesty may suffer when technical accuracy is overemphasized -- seem to have missed the former one.

9. Maja Wilson, Rethinking Rubrics in Writing Assessment (Portsmouth, NH: Heinemann, 2006).
Copyright © 2006 by Alfie Kohn. This article may be downloaded, reproduced, and distributed without permission as long as each copy includes this notice along with citation information (i.e., name of the periodical in which it originally appeared, date of publication, and author's name). Permission must be obtained in order to reprint this article in a published work or in order to offer it for sale in any form. Please write to the address indicated on the Contact Us page.