Academic Skills, Grading, Rubrics, Teaching

“Gettin’ ‘Bric-y Wit It”

If this post’s title made you think of the canonical Will Smith song, “Gettin’ Jiggy Wit It,” then congratulations, you got my terrible allusion! You now likely have that song stuck in your head. As recompense for suffering that indignity, you might just find an exciting surprise if you read through this post to the end.

But Will Smith isn’t really the point of this post. Rubrics are! (That’s the cruelest bait-and-switch of all time; I’m sorry).


“Rubrics, you say? Now I feel like this!” – via GIPHY

In my last post, I wrote about using a learning goal-based rubric as a formative assessment technique. In that case, I used a rubric focused on five writing skills to first evaluate sample essays with my students; then I used it to evaluate my students’ own writing on a similar prompt.

That process worked pretty successfully, I think. Although I’ve not had a ton of follow-up conversations with students about that first assignment, the few chats I have had focused on how the student did in terms of those specific learning goals. Furthermore, we ended those conversations with the student having clear and specific ideas about how to improve on those skills moving forward. In other words, they weren’t just “bottom line” conversations about the grade on the assignment, which is what I’d hoped to achieve.

As a way to carry this momentum forward, I wanted to make a rubric for one of the types of assessments I use most frequently in my history classes: ID Terms.



“…and I’m historically significant because:”


I remember ID terms as a central feature of my own history classes in high school and college. The guidance I received about how best to approach these terms remained pretty consistent throughout my own education, and I’ve carried those guidelines into my own teaching. For over a decade now, I’ve explained that good ID term responses should do two things:

  1. Explain WHAT the term is.
  2. Explain WHY that term is significant.

However, I’ve always verbally articulated those expectations to my students. After that discussion, I’ve then given students practice in writing IDs, using their sample IDs as fodder for feedback about the ways in which their responses are strong and how they could improve.

In the hopes of providing students with something more codified to use while studying and writing ID terms, I thought I should put those general expectations into a rubric framed around what I perceive to be the main learning goals of historical ID terms.

So, below is my first draft of a rubric that captures the two key elements of ID terms, puts my expectations into (hopefully) clear language, and gives students clear guidance on what they’re striving for when writing ID terms and conducting historical analysis in general.


As you can perhaps tell from the screenshot, I’ve built this rubric in Canvas with the hope of using it frequently to give students feedback on practice ID terms they write and submit digitally. So far, I’ve not figured out how to use multiple versions of this rubric on a single assessment, which would be helpful, for instance, if an online quiz or test included multiple ID terms.

That issue, however, is a problem for another day, so in the meantime, I’ll leave you with a request for feedback and suggestions:

  • What language am I missing in this rubric?
  • How could I reframe these criteria differently or more effectively for students?
  • Are the distinctions between the various levels of mastery clear enough in the language?
  • Any other thoughts?

And now, I’ll really leave you with what you’ve been waiting for this whole post!

Grading, Rubrics, Teaching, Writing

Formative Assessment, Rubrics, and (that pesky ol’ issue of) Grades

In the week before our school year started, our in-service professional development days focused on the topic of formative assessment and what techniques and strategies make for the most effective types of formative assessment. (If you’d like a more thorough recap of those sessions, check out the write-up in Fine Print, one of my school’s online publications).


Leading us through these sessions was Jan Chappuis, who has written a number of books on formative assessment. Her presentation focused on her Seven Strategies book and the ways we could both implement these techniques and use them to help improve student learning and foster a “learning orientation” within students.


Jan’s day-and-a-half presentation was really dense and filled with more specific suggestions and ideas about restructuring classroom activities than one could possibly hope to implement in a single year (let alone in the last few days before a new school year). She did, however, note at the end of her presentation that the best approach for integrating formative assessment into one’s classroom is to “start small and keep going.”

With that admonition to adopt and work to implement something from her presentation, I gravitated toward her suggestions to use rubrics (framed around specific learning goals rather than check-list, task-completion goals) and sample student work (of both excellent and not-so-excellent quality) as a way to help students understand both what they’re supposed to be learning and how they can become more adept at self-assessment.

Rubrics of Yesteryear

My interest in using more effective rubrics, however, was not spurred entirely by this presentation. Last year my colleague Kate and I experimented with using the “single-point rubric” as a way to get away from the overwhelming check-box features of traditional rubrics. This change had the benefit of pushing me to explicitly articulate (and visually center) the major learning goals for a particular assignment.



Sample “Single-Point” Rubric via the Cult of Pedagogy. The “Breakfast in Bed” assignment is obviously one of the most important encapsulations of learning in any history course.

I used the “single-point” rubric for a seminar on disability history that I taught last fall and found it a useful framework for explaining to students how they were doing on the various learning goals of the assignment. I even wrote a whole post for my students about my rationale for using this rubric and what I hoped they’d gain from it. The labor involved in articulating the positive and not-so-positive aspects of each piece of student work, however, ended up being pretty overwhelming by the end of the semester.

I ended the term feeling unsure about the net benefit of this framework. Yes, it gave a lot of feedback, but how effective was it when I shared both positive and negative aspects for a single learning goal? Did it always give students a clear sense of what to work on to improve? Unfortunately, I didn’t survey my students about their reactions to this rubric format, so I don’t have a clear idea about how well it worked. Missed opportunity [sigh].

Everything Old (or at least rubrics) Is New Again

So, when Jan Chappuis made learning goal-centered rubrics a centerpiece of her presentation as a way to do less grading and commenting while also providing more effective and punctual feedback, I was intrigued.

Jan recommended that rubrics should be written in student-friendly language (often using the first person, a stylistic choice that makes the learning goals more accessible and thereby helps students self-assess more readily) and only include as many different tiers/levels as there are gradations of mastery. In other words, if you only see four different levels of student skill for a learning goal, there should only be four potential outcomes on that rubric.

These guidelines ultimately recommended (and many of Jan’s models confirmed) using a more traditional-looking rubric with lots of boxes and descriptions of performance at various levels.

With our shift to Canvas this year, those recommendations ended up being good news because (at present) Canvas’ rubric creation tool doesn’t allow one to create a single-point rubric. Instead, the tool creates fairly traditional rubrics with lots of boxes and descriptors: essentially the kind Jan recommended using with students.


“I call this one, ‘Rubrique Vintage‘”

I used this style of rubric for the first time this year for a comparative writing assignment about our summer reading books for AP European History: Stephen Greenblatt’s The Swerve and Joyce Appleby’s Shores of Knowledge. Because I use this introductory writing assignment to get a sense of students’ ability to structure an argument, use evidence, and offer analytical commentary, I only have them write a two-paragraph response: an introduction with a thesis and one body paragraph. Given this narrow focus, I similarly made my rubric focus only on the learning/writing goals that apply to those parts of an essay. Here’s what I’ve developed/adapted from an excellent writing rubric created by my colleague Kate:


I spent a day walking students through the rubric and reading two sample essays, which gave them the opportunity to put the rubric into action. By working with the students through one strong and one weak example, I hoped to both give them a sense of what I’m looking for in this assignment and give them some practice at identifying those characteristics in anonymous student work. By the end of that day, students had become pretty adept at evaluating these elements in sample work and grounding their assessments in the particular language of the rubric.

Although this marks a good start for me in terms of using rubrics and sample student work more extensively this year, it nevertheless leaves me with the remaining challenge of figuring out how to translate those “learning goal”-based rubrics into grades that are recognizable on the traditional grading scale. In experimenting with this task, I was heartened by a comment Jan made during her visit (I’m paraphrasing, but it was something to the effect of): “it doesn’t matter what type of grading system you have so long as those grades are based on the learning goals of the course.”

But how do you grade it?

Good question, Italicized Header 3! Before creating this rubric, I did some research into how others have gone about translating learning goal or standards-based grades into a more traditional format. Here are a few links that I found useful in explaining potential solutions for that process:

Of all the systems explained in those posts (and others I haven’t linked to), I found the “Logic” or “Piecewise Function” approach for converting learning goal-based grades into traditional grades (explained in the Always Formative post above) the most compelling and adaptable. With that inspiration, I went about drafting, getting feedback on, and revising my own “Piecewise Function” for this particular assignment. Here’s what I settled on:


At present, I’ve only used this translation table on 11 essays, but I think it’s producing results similar to (and as predictable as) what I’ve gotten in previous years using a more holistic approach to evaluating assignments like this one. My hope, however, is that this rubric provides students with clear feedback that will help them see where they should focus their attention on upcoming writing assignments. I’ll certainly have more to say on all these topics once I’ve finished grading all the essays and gotten some feedback from the students.
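For readers curious what “piecewise function” conversion logic can look like in practice, here’s a minimal sketch in Python. To be clear, the rubric levels, thresholds, and letter grades below are hypothetical placeholders for illustration, not the actual values from my table; the key idea is just that the grade depends on the pattern of learning-goal scores rather than a simple average:

```python
# Hypothetical sketch of a "piecewise function" grade conversion.
# Assumes each learning goal is scored on a 4-level rubric
# (1 = Beginning ... 4 = Mastery); the thresholds and letter
# grades are illustrative placeholders, not values from the post.

def convert_to_letter(scores):
    """Map a list of per-goal rubric scores to a traditional letter grade."""
    if not scores:
        raise ValueError("need at least one learning-goal score")
    lowest = min(scores)
    # Piecewise logic: one weak area caps the grade, so averaging
    # a 4 and a 1 can't quietly produce a "B."
    if lowest >= 4:
        return "A"  # mastery on every goal
    if lowest >= 3:
        return "B"  # proficient or better on every goal
    if lowest >= 2 and sum(s >= 3 for s in scores) >= len(scores) / 2:
        return "C"  # developing, with proficiency on at least half the goals
    return "D"      # one or more goals still at the beginning level

print(convert_to_letter([4, 4, 4]))  # → "A"
print(convert_to_letter([3, 4, 2]))  # → "C"
```

One design choice worth noting: using the minimum score (rather than the mean) is what makes this “piecewise” in spirit, since it prevents strength in one learning goal from masking real weakness in another.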

I’d love to hear how others have used systems like this one and what advice they have. Given that math and science teachers wrote the blog posts from which I drew my inspiration and models, I’d love to hear insights from humanities (and especially history) teachers who have used a similar model. What types of scales have others used? How have students reacted to the feedback from the rubrics versus the translated grade? How has this system worked when the learning goals aren’t as explicitly skill-based but are more focused on content?

Nota Bene

There’s a whole boatload of material online about what formative assessment is and how best to implement it in the classroom, but I’ll leave that to your Google or YouTube searching. Here’s just one example of the sort of tutorial/instructional material you can find (thanks to my colleague, Wendell, for passing along the following video), addressing the benefits and best methods for implementing formative assessment: