In the week before our school year started, our in-service professional development days focused on the topic of formative assessment and what techniques and strategies make for the most effective types of formative assessment. (If you’d like a more thorough recap of those sessions, check out the write-up in Fine Print, one of my school’s online publications.)
Leading us through these sessions was Jan Chappuis, who has written a number of books on formative assessment. Her presentation focused on her Seven Strategies book and the ways we could both implement these techniques and use them to help improve student learning and foster a “learning orientation” within students.
Jan’s day-and-a-half presentation was really dense and filled with more specific suggestions and ideas about restructuring classroom activities than one could possibly hope to implement in a single year (let alone in the last few days before a new school year). She did, however, note at the end of her presentation that the best approach for integrating formative assessment into one’s classroom is to “start small and keep going.”
With that admonition to adopt and work to implement something from her presentation, I gravitated toward her suggestions to use rubrics (framed around specific learning goals rather than check-list, task-completion goals) and sample student work (of both excellent and not so excellent quality) as a way to help students understand both what they’re supposed to be learning and how they can become more adept at self-assessment.
Rubrics of Yesteryear
My interest in using more effective rubrics, however, was not spurred entirely by this presentation. Last year my colleague Kate and I experimented with the “single-point rubric” as a way to get away from the overwhelming check-box features of traditional rubrics. That change had the benefit of pushing me to explicitly articulate (and visually center) the major learning goals for a particular assignment.
I used the “single-point” rubric for a seminar on disability history that I taught last fall and found it a useful framework for explaining to students how they were doing on the various learning goals of the assignment. I even wrote a whole post for my students about my rationale for using this rubric and what I hoped they’d gain from it. The labor involved in articulating the positive and not-so-positive aspects of each piece of student work, however, ended up being pretty overwhelming by the end of the semester.
I ended the term feeling unsure about the net benefit of this framework. Yes, it gave a lot of feedback, but how effective was it when I shared both positive and negative aspects for a single learning goal? Did it always give students a clear sense of what to work on to improve? Unfortunately, I didn’t survey my students about their reactions to this rubric format, so I don’t have a clear idea about how well it worked. Missed opportunity [sigh].
Everything Old (or at least rubrics) Is New Again
So, when Jan Chappuis made learning goal-centered rubrics a centerpiece of her presentation as a way to do less grading and commenting while also providing more effective and timely feedback, I was intrigued.
Jan recommended that rubrics should be written in student-friendly language (often using the first person—a stylistic choice that makes the learning goals more accessible and thereby helps students self-assess more readily) and only include as many different tiers/levels as there are gradations of mastery. In other words, if you only see four different levels of student skill for a learning goal, there should only be four potential outcomes on that rubric.
These guidelines ultimately recommended (and many of Jan’s models confirmed) using a more traditional-looking rubric with lots of boxes and descriptions of performance at various levels.
With our shift to Canvas this year, those recommendations ended up being good news because (at present) Canvas’ rubric creation tool doesn’t allow one to create a single-point rubric. Instead, the tool creates the fairly traditional rubrics with lots of boxes and descriptors—essentially the kind Jan recommended using with students.
I used this style of rubric for the first time this year for a comparative writing assignment about our summer reading books for AP European History: Stephen Greenblatt’s The Swerve and Joyce Appleby’s Shores of Knowledge. Because I use this introductory writing assignment to get a sense of students’ ability to structure an argument, use evidence, and offer analytical commentary, I only have them write a two-paragraph response—an introduction with a thesis and one body paragraph. Given this narrow focus, I similarly made my rubric focus only on the learning/writing goals that apply to those parts of an essay. Here’s what I’ve developed/adapted from an excellent writing rubric created by my colleague Kate:
I spent a day walking students through the rubric and reading two sample essays, which gave them the opportunity to put the rubric into action. By working with the students through one strong and one weak example, I hoped to both give them a sense of what I’m looking for in this assignment and give them some practice at identifying those characteristics in anonymous student work. By the end of that day, students had become pretty adept at evaluating these elements in sample work and grounding their assessments in the particular language of the rubric.
Although this marks a good start for me in terms of using rubrics and sample student work more extensively this year, it nevertheless leaves me with the remaining challenge of figuring out how to translate those “learning goal”-based rubrics into grades that are recognizable on the traditional grading scale. In experimenting with this task, I was heartened by a comment Jan made during her visit. I’m paraphrasing, but it was something to the effect of: “it doesn’t matter what type of grading system you have so long as those grades are based on the learning goals of the course.”
But how do you grade it?
Good question, Italicized Header 3! Before creating this rubric, I did some research into how others have gone about translating learning goal or standards-based grades into a more traditional format. Here are a few links that I found useful in explaining potential solutions for that process:
- Always Formative, “Translating to a Letter Grade”
- MeTA Musings, “Standards-Based Grading: Converting to Letter Grades”
- Sam Shah, “My SBG Rubric”; “My SBG System”
Of all the systems explained in those posts (and others I haven’t linked to), I found the “Logic” or “Piecewise Function” for converting learning goal-based grades into traditional grades (explained in the Always Formative post above) the most compelling and adaptable. With that inspiration, I went about drafting, getting feedback on, and revising my own “Piecewise Function” for this particular assignment. Here’s what I settled on:
At present, I’ve only used this translation table for 11 essays, but I think it’s producing predictable results, similar to what I’ve gotten in previous years when using a more holistic approach to evaluating assignments like this one. My hope, however, is that this rubric provides students with clear feedback that will help them see where they should focus their attention on upcoming writing assignments. I’ll certainly have more to say on all these topics once I’ve finished grading the essays and gotten some feedback from the students.
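For readers curious what a “Logic” or “Piecewise Function” conversion can look like in practice, here is a minimal sketch in Python. To be clear, this is not my actual translation table: the four-point scale, the number of learning goals, and the letter-grade thresholds below are all hypothetical stand-ins, chosen just to show the shape of the approach (a cascade of conditions over the set of per-goal scores rather than a simple average).

```python
# Hypothetical "piecewise function" for turning per-learning-goal rubric
# scores into a traditional letter grade. The 1-4 scale and the cutoffs
# below are illustrative only, not the table I actually used.

def letter_grade(scores):
    """Convert a list of rubric scores (1 = beginning, 4 = mastery)
    into a letter grade using a cascade of logical conditions."""
    if all(s >= 4 for s in scores):      # mastery on every goal
        return "A"
    if all(s >= 3 for s in scores):      # proficient or better on every goal
        return "B"
    if all(s >= 2 for s in scores):      # at least developing on every goal
        return "C"
    if any(s >= 2 for s in scores):      # some goals still at the lowest level
        return "D"
    return "F"                           # beginning level across the board

# Example: three learning goals (thesis, evidence, analysis)
print(letter_grade([4, 4, 4]))  # A
print(letter_grade([3, 4, 3]))  # B
print(letter_grade([2, 3, 2]))  # C
```

The appeal of this style over averaging is that one weak learning goal caps the grade, so students can see exactly which goal is holding them back rather than having it washed out by stronger scores elsewhere.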
I’d love to hear how others have used systems like this one and what advice they have. Given that the blog posts from which I drew my inspiration and models were written by math and science teachers, I’d particularly love to hear insights from humanities (and especially history) teachers who have used a similar model. What types of scales have others used? How have students reacted to the feedback from the rubrics versus the translated grade? How has this system worked when the learning goals aren’t as explicitly skill-based but are more focused on content?
There’s a whole boatload of material online about what formative assessment is and how best to implement it in the classroom, but I’ll leave that to your Google or YouTube searching. Here’s just one example of the sort of tutorial/instructional materials that you can find (thanks to my colleague, Wendell, for passing along the following video) that addresses the benefits and best methods for implementing formative assessment: