SBG and Leveling Up, Part 3: The Machine Thinks!
Read the first two posts in this series here:
SBG and Leveling Up, Part 1
SBG and Leveling Up, Part 2: Machine Learning
…or you can read this quick review of where I’ve been going with this:
- When a student asks to be reassessed on a learning standard, the most important inputs that contribute to the student’s new achievement level are the student’s previously assessed level, the difficulty of a given reassessment question, and the nature of any errors made during the reassessment.
- Machine learning offers a convenient way to find patterns in my grading decisions that I might not otherwise notice.
Rather than design a flow chart that arbitrarily maps these inputs to a new grade, my idea was to take different combinations of these inputs and use my experience to decide what new grade I would assign. Whatever patterns exist in those decisions (if any) would then be left for the machine learning algorithm to find.
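To make that concrete, here's a rough sketch of what one graded combination looks like as a training example. The field names, the particular error categories, and the scales are placeholders for illustration, not the exact ones in my program:

```python
# One graded combination, written as a training example: the inputs I weigh
# during a reassessment, plus the new level I decided to assign.
# Field names and scales here are illustrative, not the exact ones I used.
example = {
    "input": {
        "previous_level": 6,    # the student's previously assessed level
        "difficulty": 2,        # 1 = simplest question, 3 = most difficult
        "conceptual_error": 0,  # 1 if a conceptual error was made
        "arithmetic_error": 1,  # 1 if a minor arithmetic/algebraic slip was made
        "incomplete": 0,        # 1 if the solution was left unfinished
    },
    "output": {"new_level": 7},  # the grade I decided to assign for this case
}
```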
I trained the neural network methodically. These were the general parameters:
- I only did ten or twenty grades at any given time to avoid the effects of fatigue.
- I graded at different times of day: in the morning, before and after lunch, in the afternoon, and some at night.
- I spread this out over a few days to minimize the effects of any one particular day on the training.
- When I noticed there weren’t many grades at the upper end of the scale, I changed the program to generate instances of just those grades.
- The permutation-fanatics among you might be interested to know that there are 5*3*2*2*2 = 120 possible combinations of these inputs (see the quick sketch below). I ended up grading just over 200 cases. Why not just grade each possibility exactly once? Simple: I don't pretend to be consistent when I do this, and that's part of the problem. I want the algorithm to figure out what, on average, I tend to do in a number of different situations.
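Here's a quick way to enumerate those combinations. The specific ranges are stand-ins that reproduce the 5*3*2*2*2 = 120 count, not necessarily the exact values my program generates:

```python
from itertools import product

# Enumerate every combination of inputs the grading program can produce.
previous_levels = [6, 7, 8, 9, 10]   # 5 possible starting levels
difficulties = [1, 2, 3]             # 3 question difficulty levels
flags = [0, 1]                       # each error type is either absent or present

combinations = [
    {
        "previous_level": level,
        "difficulty": diff,
        "conceptual_error": conceptual,
        "arithmetic_error": arithmetic,
        "incomplete": incomplete,
    }
    for level, diff, conceptual, arithmetic, incomplete in product(
        previous_levels, difficulties, flags, flags, flags
    )
]

print(len(combinations))  # 120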
After training for a while, I was ready to have the network make some predictions. I made a little visualizer to help me see the results:
You can also see this in action by going to the CodePen, clicking on the ‘Load Trained Data’ button, and playing around with it yourself. There’s no limit to the values in the form, so some crazy results can occur.
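The training and prediction themselves happen in the CodePen, but if you want a feel for the general shape of the idea, here's a rough stand-in using scikit-learn's small multilayer perceptron. The network size and the sample rows below are placeholders for illustration, not my actual configuration or data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row: [previous_level, difficulty, conceptual_error, arithmetic_error, incomplete]
# Each target: the new level assigned when grading that combination by hand.
# These two rows are placeholders, not my actual training data.
X = np.array([
    [6, 1, 0, 0, 0],
    [8, 2, 1, 0, 0],
])
y = np.array([7, 7])

# A small network is plenty for a few hundred examples with five inputs.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# Ask for a prediction: a level-8 student requesting a middle-difficulty
# problem and making no errors.
predicted = model.predict([[8, 2, 0, 0, 0]])
print(round(float(predicted[0])))
```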
The thing that makes me happiest about the outcome is that there's nothing surprising in the results:
- Conceptual errors are the most important factor limiting students from making progress from one level to the next. This makes sense: once a student has made a conceptual error, I generally don't let them increase their proficiency level.
- Students with low scores who ask for the highest difficulty problems probably shouldn't.
- Students at an 8 can get to a 9 by doing a middle-difficulty problem, but can't get to a 10 in one reassessment without doing the highest-difficulty problem. On the other hand, a student at a 9 who makes a conceptual error on a middle-difficulty problem is brought back to a 7.
When I shared this with students, what they seemed most interested in using it for was deciding what sort of problem to ask for on a given reassessment. Some students with a 6 have come in asking for the simplest level question so they can be guaranteed a rise to a 7 if they answer correctly. A lot of level 8 students want to become a 10 in one go, but often make a conceptual error along the way and are limited to a 9. I still have the freedom to classify these different types of errors as I see fit when a student comes to meet with me. When I ask students what they think about having this tool available to them, the response is usually that it's a good way to be fair. I'm pretty happy about that.
I’ll continue playing with this. It was an interesting way to analyze my thinking around something that I consider to still be pretty fuzzy, even this long after getting involved with SBG in my classes.