- I gave a quiz not long ago with the following question adapted from the homework:

The value of 5 points for the problem came from the following rubric I had in my head while grading it:

- +1 point for a correct free body diagram
- +1 for writing the sum of forces in the y-direction and setting it equal to ma_y
- +2 for recognizing that gravity was the only force acting at the minimum speed
- +1 for the correct final answer with units
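For reference, the physics behind the 2-point step in the rubric: at the minimum speed at the top of the loop, the normal force drops to zero, so gravity alone supplies the centripetal force, giving mg = mv²/r and v_min = √(gr). A minimal sketch of that calculation (the 10 m radius is hypothetical, just for illustration):

```python
import math

def min_speed_at_top(radius_m, g=9.8):
    """Minimum speed at the top of a vertical loop.

    At minimum speed the normal force is zero, so gravity alone
    provides the centripetal force: m*g = m*v**2 / r  =>  v = sqrt(g*r).
    """
    return math.sqrt(g * radius_m)

# Hypothetical 10 m loop radius:
print(round(min_speed_at_top(10.0), 2))  # 9.9 m/s
```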

Since learning to grade Regents exams back in New York, I have always needed to have some sort of rubric like this to grade anything. Taking off random quantities of points without being able to consistently justify a reason for a 1 vs. 2 point deduction just doesn't seem fair or helpful in the long run for students trying to learn how to solve problems.

As I move ever closer toward implementing a standards-based grading system, using a clearly defined rubric in this way makes even more sense since, ideally, questions like this let me test student progress relative to standards. Each check-mark on this rubric is really a binary statement about a student relative to the following standards-related questions:

- Does the student know how to properly draw a free body diagram for a given problem?
- Can a student properly apply Newton's 2nd law algebraically to solve for unknown quantities?
- Can a student recognize conditions for minimum or maximum speeds for an object traveling in a circle?
- Does a student provide answers that are numerically consistent with the rest of the problem and include units?

It makes it easy to have the conversation with the student about what he/she does or does not understand about a problem. It becomes less of a conversation about 'not getting the problem' and more about not knowing how to draw a free body diagram in a particular situation.

The other thing I realize about doing things this way is that it changes the actual process of students taking quizzes when they are able to retake. Normally during a quiz, I answer no questions at all - it is supposed to be time for a student to answer a question completely on their own to give them a test-like situation. In the context of a formative assessment, though, I can see how this philosophy can change. Today I had a student who had done the first two parts correctly but was stuck.

**Him: I don't know how to find the normal force. There's not enough information.**

**Me: All the information you need is on the paper. [Clearly this was before I flip-flopped a bit.]**

**Him: I can't figure it out.**

I decided, with this rubric in my head, that if I was really using this question to assess the student on these five things, I could give the student what was missing and still assess the remaining 3 points. After telling the student that the normal force was zero, the student proceeded to finish the rest of the problem correctly. The student therefore received a score of 3/5 on this question. That seems to be a good representation of what the student knew in this particular case.

Why this seems slippery and slopey:

- In the long term, he doesn't get this sort of help. On a real test in college, he isn't getting this help. Am I hurting him in the long run by doing this now?
- Other students don't need this help. To what extent am I lowering my standards by giving him information that others don't need to ask for?
- I always talk about the real problem of students not truly seeing material on their own until the test. This is why there are so many students who say they get it during homework, but not during the test - in reality, when these students 'got it' on the homework, they usually had friends, the teacher, example problems, and a recent class discussion of the concept on their side.

Why this seems warm and fuzzy, and most importantly, a good idea in the battle to help students learn:

- Since the quizzes are formative assessments anyway, it's a chance to see where he needs help. This quiz question gave me that information and I know what sort of thing we need to go over. He doesn't need help with FBDs. He needs help knowing what happens in situations where an object is on the verge of leaving uniform circular motion. This is not a summative assessment, and there is still time for him to learn how to do problems like this on his own.

- This is a perfect example of how a student can learn from his/her mistakes. It's also a perfect example of how targeted feedback helps a student improve.

- For a student stressed about assessments anyway (as many tend to be) this is an example of how we might work to change that view. Assessments can be additional sources of feedback if they are carefully and deliberately designed. If we are to ever change attitudes about getting points, showing students how assessments are designed to help them learn instead of being a one-shot deal is a really important part of this process.

To be clear, my students are given one-shot tests at the end of units. It's how I test retention and the ability to apply the individual skills when everything is on the table, which I think is a distinctly different animal than the small scale skills quizzes I give and that students can retake. I think those are important because I want students to be able to both apply the skills I give them and decide which skills are necessary for solving a particular problem.

That said, it seems like a move in the right direction to have tried this today. It is yet one more way to start a conversation with students to help them understand rather than to get them points. The more I think about it, the more I feel that this is how learning feels when you are an adult. You try things, get feedback and refine your understanding of the problem, and then use that information to improve. There's no reason learning has to be different for our students.

Perhaps indicating that Fn is zero signals to the student that there is nothing else to consider, and the problem becomes obvious plug and chug. I wonder if an intermediate scaffold might allow you to make a better judgment. Lead with a conceptual question: draw an FBD for the coaster at the top of the loop at two significantly different speeds and provide a rationale for your decision. Then pose the quantitative minimum-velocity question. If the kid calls you over at that point and you examine the concept question and see the normal force pointing up, then discriminating forces is the predominant issue, not the nuance of the circular motion problem. If the normals are correctly drawn, direct attention to the concept question, have them think about the general trend in the diagrams as the velocity goes to a minimum, and walk away. If they can't get it from there, the issue is their ability to reason, since you set them up to get the problem right without telling them that Fn is zero. This setup may also facilitate any remediation session as well.

Thanks for the comment, Shannon.

I agree that giving them the condition of Fn = 0 does kind of take a big chunk of the reasoning out of the problem. In the rubric I used, this concept was really 20% of the points allotted to the problem, so a student that can do the rest is really showing things that other questions will likely be able to assess.

The other piece is that since the student is right in front of me, I CAN look at his/her free body diagram and decide if the Fn = 0 is even a place to start the conversation. If there was another issue, say a tension drawn or an upwards force at that point, this is showing more fundamental errors in the student's reasoning related to drawing a reasonable FBD for the problem. That's the nice freedom I have being able to react on the spot to what the students are doing. It also is something I couldn't do with 25 students in the room, which I am thankful not to have to manage.

This is something I really wrestle with. This year, since I'm a big outlier in terms of answering questions, I've decided to try making my students fly solo and do assessments without any help from me. This is mainly because I don't want them to be dependent on a teacher's help when they get to later classes. But I see lots of value in what you are doing as well. I'm also not quite sure how I would incorporate this into the binary SBG system I use at the moment. Almost any help from me would show you haven't mastered it, so that wouldn't really encourage students to ask me questions.

What I'd really like to do is simply be able to make assessments just conversations between students and myself to figure out where they're struggling and then have some easy way to report a measure of student understanding. Whenever I do this, I find myself helping way too much, students eventually getting to "a right answer" and me not at all confident that that understanding belongs to the student.

It's a really thorny issue for me, and I appreciate this post giving me a needed different perspective.

I always appreciate your comments, John.

I think I fit into the same boat as you - I am always answering student questions with more questions, since that conversation is always really rich from a learning standpoint. I have almost exclusively been a 'no-questions' guy during tests and quizzes; in fact, I tell any new group of students before the first test I give that I tend to be a jerk when it comes to answering questions during the exam. If there is a legitimate question about formatting, instructions, etc., I will of course answer and inform the rest of the class. When I find them asking questions about content during a unit test, I pretty much repeat "I can't answer that right now" and walk away.

As for the binary measures for SBG, I think if your standards are defined narrowly in the way I defined them in the rubric, it's possible. But I'm pretty early on in the SBG learning curve, and I am guessing that if I define everything too narrowly so that skills are tested in complete independence of other skills, I'm going to end up with a list of skills that numbers in the hundreds by the end of the year. That's something I need to really work on for sure. Whether to go binary or a 1-4 scale is something I've thought a lot about, and I'm leaning toward the latter. This is just because I don't know a consistent way to locate when my assessment switches from "don't get it" to "get it" and I think I need to define that well if I'm going to go that way and make it fair for students. I'd love to get your input on how you make that distinction.

Finally, I completely agree on conversations - there is a lot you can get out of students talking about their thinking. The biggest issue is time. I can know what students are thinking after a 30 second conversation and have a good idea how to adjust my teaching, but it's never good enough to give a numerical measure of that understanding.

This is a great post. There are three points that really struck me.

The first was how you connect the rubric with standards through the binary statements and how the rubric guides the conversation with the student about his/her knowledge gaps.

By offering limited assistance on problems, you're taking to the next level the idea that students should be able to take formative assessments as many times as they like.

Finally, I think you're right on about how allowing students to retake formative assessments will help change their perceptions about their real purpose.

I'd love to learn more about how you use the data you collect from these assessments to drive your instruction. How are you analyzing it? How do you plan around the data you collect?

Hi Stew,

Thanks for reading and for your comments. I haven't always felt this way, and still am not totally sold on making it a regular thing for the reasons I (and others) have mentioned. It does, however, seem like a natural fit for showing students that learning is an iterative process and for using assessment for its intended purpose of measuring what students know/don't know, understand/don't understand, or can do/can't do.

You might look at a previous post of mine about my homework policies - I tend to collect all of it as an additional source of information on what students are thinking. These quizzes are just another form of it. It is really easy to get an idea of what mistakes/misconceptions students have from seeing their written work, and it's even easier when the students are in front of me. This is another push for why I like using quiz situations like this to have the conversation I described in this post. There's something more real about facing a problem on a quiz rather than homework, so I think I get a more realistic idea of what students can do in a quiz situation. That said, there might be measurement error that comes into play if I do this sort of thing too often and as the line between homework as feedback and quiz as feedback blurs.

I don't know - I'm still clearly in the experimental stages of figuring this out.