Rubrics and Numerical Grades - Hacking the 100-Point Scale, Part 3

As part of thinking through my 100-point scale redesign, I'd like you to share some of your thoughts on a rubric scenario.

Rubrics are great because they clearly break an assessment into its component parts for a given task. Ideally, they also use language that tells students what they did well and where they fell short on that assessment. Here's an example rubric with three performance levels and three categories for a generic assignment:

[Image: example rubric with three performance levels and three categories, with one student's scores marked]

I realize some of you might be craving some details of the task and associated descriptors for each level. I'm looking for something here that I think might be independent of the task details.

The student shown above has scores of 1, 2, and 3 respectively for the three categories on this assignment, and all three categories are equally important. Suppose also that in my assessment system, I need to identify a student as being a 1, 2, 3, or 4 in the associated skills based on this assessment.

More generally, I want to be able to take a set of three scores on the rubric and generate a performance level for the student who earned them. I'd like to get your sense of how to classify students into the four levels this way.
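For concreteness, here is one hypothetical way such a mapping might work — this is *not* the author's method (the whole point of the post is to gather opinions on what the mapping should be), just a sketch assuming equal weights and some made-up cutoffs on the total score:

```python
def performance_level(scores, cutoffs=(4, 6, 8)):
    """Map three rubric scores (1-3 each) to a single level from 1 to 4.

    One hypothetical scheme: sum the three equally weighted scores
    (total ranges from 3 to 9) and bump the level up once for each
    cutoff the total reaches. The cutoffs here are illustrative only.
    """
    if len(scores) != 3 or any(s not in (1, 2, 3) for s in scores):
        raise ValueError("expected three scores, each 1, 2, or 3")
    total = sum(scores)
    level = 1
    for cutoff in cutoffs:
        if total >= cutoff:
            level += 1
    return level

# The student above scored 1, 2, and 3 (total 6):
print(performance_level([1, 2, 3]))  # prints 3 under these cutoffs
```

Notice that under this scheme a student with scores of 1, 2, 3 and a student with scores of 2, 2, 2 land on the same level — whether that's acceptable is exactly the kind of priority question the weights should be decided from.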

Here are the rubrics I'd like your help with:

I've created a Desmos Activity using Activity Builder to collect your thoughts. I chose Activity Builder because (a) Desmos is awesome, and (b) the internet is keeping me from Google Docs.

You can access that activity here.

I'll be using the results as an input for a prototype idea I have to make this process a bit easier for all involved. Thanks in advance!

4 thoughts on “Rubrics and Numerical Grades - Hacking the 100-Point Scale, Part 3”

    1. I agree that this is necessary. It's also done in a haphazard way that involves setting points or weights that seem 'good enough'. My argument (which I will develop later on) is that we can decide what our priorities are in determining achievement levels, and the weights can develop from those priorities.

    1. This is a totally valid question, but I'm excluding it for a practical reason: we don't need the number of rubric categories to match our number of levels. We make categorizations of students all the time based on our experience, and the mismatch in dimensions here forces us to make choices about what matters. I'll be clarifying my reasons further in my next post.
