Choosing the Next Question
If a student can solve $latex 3x - 1 = 5 $ for x, how convinced are we of that student’s ability to solve two-step equations?
If that same student can also solve $latex 14 = 3x + 2 $, how does our assessment of their ability change, if at all?
What about $latex -2 - 3x = 5 $?
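For reference, here is one way each might be worked (just one possible path; students will take others):

$latex 3x - 1 = 5 \;\Rightarrow\; 3x = 6 \;\Rightarrow\; x = 2 $

$latex 14 = 3x + 2 \;\Rightarrow\; 12 = 3x \;\Rightarrow\; x = 4 $

$latex -2 - 3x = 5 \;\Rightarrow\; -3x = 7 \;\Rightarrow\; x = -\tfrac{7}{3} $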
Ideally, our class activities push students toward ever-increasing levels of generalization and robustness. If a student’s method for solving a problem is so algorithmic that it fails when a slight change is made to the original problem, that method is clearly not robust enough. We need sufficiently different problems for assessing students so that we know their method works in all the cases we might throw their way.
In solving $latex 3x - 1 = 5 $, for example, we might suggest that a student first add the constant to both sides and then divide both sides by the coefficient. If the student is not sure what ‘constant’ or ‘coefficient’ mean, he or she might conclude that the constant is the number to the right of the x and the coefficient is the number to the left. This student might do fine with $latex 10 = 2x - 4 $, but would run into trouble solving $latex -2 - 3x = 5 $. Each additional question gives more information.
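To see why, trace that literal rule through both equations. In $latex 10 = 2x - 4 $ the constant really does sit to the right of the x and the coefficient to the left, so the rule produces the right moves:

$latex 10 = 2x - 4 \;\Rightarrow\; 14 = 2x \;\Rightarrow\; x = 7 $

In $latex -2 - 3x = 5 $, though, there is nothing to the right of the x, and the number to its left carries a sign the rule never mentions. A student applying it literally might, for instance, add 2 to both sides and then divide by 3, landing on $latex x = \tfrac{7}{3} $ instead of $latex x = -\tfrac{7}{3} $.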
The three equations look different, yet the operation done as a first step in solving each is the same, even though the constant sits in a different position every time. Students who can solve all three are clearly proficient. But what does it mean if a student can solve the first and last equations, yet not the middle one? Or only the first two? If a student answers a given question correctly, what does that reveal about the student’s skills related to that question?
It’s the norm to consider these issues when choosing questions for an assessment. The more interesting question to me these days is: once we’ve seen what a student does on one question, what should the next question be? Adaptive learning software tries to answer this by drawing on a large data set that maps student abilities to right and wrong answers. I’m not sure it succeeds yet; I still think the human mind has the advantage in this task.
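To make the contrast concrete, here is a minimal sketch of the kind of loop such systems run: estimate an ability from right/wrong answers and pick the next item accordingly. Everything in it (the question pool, the difficulty numbers, the Elo-style update) is made up for illustration; it is not how any particular product works.

```python
import random

# Toy sketch of the "right/wrong -> ability estimate -> next item" loop.
# Hypothetical names and numbers throughout; illustration only.

QUESTIONS = [
    # (question text, difficulty on an arbitrary scale)
    ("3x - 1 = 5",   -1.0),
    ("14 = 3x + 2",   0.0),
    ("-2 - 3x = 5",   1.0),
    ("10 = 2x - 4",  -0.5),
]

def pick_next(ability, asked):
    """Choose the unasked question whose difficulty is closest to the
    current ability estimate, i.e. the item expected to be most informative."""
    candidates = [q for q in QUESTIONS if q[0] not in asked]
    return min(candidates, key=lambda q: abs(q[1] - ability))

def update(ability, difficulty, correct, k=0.5):
    """Nudge the estimate up after a correct answer and down after an
    incorrect one, weighted by how surprising the result was."""
    expected = 1 / (1 + 10 ** (difficulty - ability))  # chance of success
    return ability + k * ((1.0 if correct else 0.0) - expected)

ability, asked = 0.0, set()
for _ in range(3):
    text, difficulty = pick_next(ability, asked)
    asked.add(text)
    correct = random.random() < 0.6          # stand-in for the student's answer
    ability = update(ability, difficulty, correct)
    print(f"{text!r:20} correct={correct}  ability now {ability:+.2f}")
```

Notice how little the loop has to go on: a single bit per question. That is exactly the limitation discussed below.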
Often this next step involves scanning a textbook or making up a new question on the spot. We usually know the next question we want when we see it. The key, then, is having those questions readily available, or easy to generate, so we can get them in front of students.
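On the “easy to generate” side, a few lines of code can crank out variations on demand. The sketch below is a hypothetical helper (the names, ranges, and forms are all invented for illustration) that produces two-step equations with the constant and coefficient moved around, in the spirit of the examples above.

```python
import random

def two_step_variants(n=5, seed=0):
    """Generate small two-step equations with the constant in different
    positions, echoing the variations above. A hypothetical helper, not a
    polished question bank; adjust the ranges and forms to taste."""
    rng = random.Random(seed)
    # Each form pairs a template with a function that computes the value c
    # so the intended solution x actually satisfies the equation.
    forms = [
        ("{a}x - {b} = {c}",  lambda a, b, x: a * x - b),    # like 3x - 1 = 5
        ("{c} = {a}x + {b}",  lambda a, b, x: a * x + b),    # like 14 = 3x + 2
        ("-{b} - {a}x = {c}", lambda a, b, x: -b - a * x),   # like -2 - 3x = 5
        ("{c} = {a}x - {b}",  lambda a, b, x: a * x - b),    # like 10 = 2x - 4
    ]
    problems = []
    for _ in range(n):
        a, b, x = rng.randint(2, 5), rng.randint(1, 9), rng.randint(-5, 5)
        template, rhs = rng.choice(forms)
        problems.append((template.format(a=a, b=b, c=rhs(a, b, x)), x))
    return problems

for text, answer in two_step_variants():
    print(f"{text:>16}   (x = {answer})")
```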
I think that the other big advantage human teachers have over adaptive software is that we can ask, “how did you do this problem?” or “why did you do this first?” or “can you make up a problem similar to this one that would be easier/harder for you?” or a host of other questions that unpack students’ thinking and that give us information we can use to create follow-up questions and problems that will challenge the student or allow a misconception to be brought out more clearly. The software only knows that the answer was right or wrong, and that’s significantly less information (I’m not even counting the wealth of information the teacher likely has about the student’s prior work and thinking, disposition towards math, and how to motivate him or her).
I totally agree – the question that comes next doesn’t need to be another question on content. Having that dialogue with the students is likely more informative than another problem.