
SBG and Leveling Up, Part 3: The Machine Thinks!

Read the first two posts in this series here:

SBG and Leveling Up, Part 1
SBG and Leveling Up, Part 2: Machine Learning

...or you can read this quick review of where I've been going with this:

  • When a student asks to be reassessed on a learning standard, the most important inputs that contribute to the student's new achievement level are the student's previously assessed level, the difficulty of a given reassessment question, and the nature of any errors made during the reassessment.
  • Machine learning offers a convenient way to find patterns in my grading that I might not otherwise notice.

Rather than design a flow chart that arbitrarily figures out the new grade from these inputs, my idea was simply to take different combinations of these inputs and use my experience to decide what new grade I would assign. Any patterns that exist there (if there are any) would be left for the machine learning algorithm to find.

I trained the neural network methodically. These were the general parameters:

  • I only did ten or twenty grades at any given time to avoid the effects of fatigue.
  • I graded in the morning, in the afternoon, before lunch, after lunch, and some at night.
  • I spread this out over a few days to minimize the effects of any one particular day on the training.
  • When I noticed there weren't many grades at the upper end of the scale, I changed the program to generate instances of just those grades.
  • The permutation-fanatics among you might be interested to know that there are 5*3*2*2*2 = 120 possible numerical combinations. I ended up grading just over 200. Why not just grade every single possibility? Simple: I don't pretend I'm really consistent when I'm doing this. That's part of the problem. I want the algorithm to figure out what, on average, I tend to do in a number of different situations.

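The CodePen source isn't reproduced here, but the training setup is simple enough to sketch. Here's a minimal, hypothetical version assuming the brain.js library - the library choice, the error category names, and the exact 0-1 encoding below are illustrative, not necessarily what the CodePen does:

    // Minimal sketch, not the actual CodePen source. Assumes brain.js and
    // a 0-1 encoding of the inputs: 5 previous levels, 3 difficulties, and
    // three binary error types (matching the 5*3*2*2*2 count above).
    var net = new brain.NeuralNetwork();

    var trainingData = [
      {
        input: {
          previousLevel: 8 / 10,  // previously assessed level, scaled to [0, 1]
          difficulty: 2 / 3,      // question difficulty 1-3, scaled
          conceptualError: 1,     // 1 if this error type was made, else 0
          arithmeticError: 0,     // these category names are hypothetical
          algebraicError: 0
        },
        output: { newLevel: 7 / 10 }  // the grade I would assign, scaled
      }
      // ...just over 200 graded combinations in all
    ];

    net.train(trainingData);

    // Making a prediction for a new combination of inputs:
    var result = net.run({
      previousLevel: 0.8,
      difficulty: 1,
      conceptualError: 1,
      arithmeticError: 0,
      algebraicError: 0
    });
    console.log('Predicted new level:', result.newLevel * 10);
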
After training for a while, I was ready to have the network make some predictions. I made a little visualizer to help me see the results:

You can also see this in action by going to the CodePen, clicking on the 'Load Trained Data' button, and playing around with it yourself. There's no limit to the values in the form, so some crazy results can occur.

The thing that makes me happiest is that there's nothing surprising in the results.

  • Conceptual errors are the most important factor limiting students from progressing from one level to the next. This makes sense: once a student has made a conceptual error, I generally don't let that student's proficiency level increase.
  • Students with low scores who ask for the highest difficulty problems probably shouldn't.
  • Students with an 8 can get to a 9 by doing a middle difficulty problem, but can't get to a 10 in one reassessment without doing the highest difficulty problem. On the other hand, a student at a 9 who makes a conceptual error on a middle difficulty problem is brought back to a 7.

When I shared this with students, what they seemed most interested in was using it to decide what sort of problem to ask for in a given reassessment. Some students with a 6 have come in asking for the simplest question so that they're guaranteed a rise to a 7 if they answer correctly. A lot of level 8 students want to become a 10 in one go, but often make a conceptual error along the way and are limited to a 9. I clearly have the freedom to classify these different types of errors as I see fit when a student comes to meet with me. When I ask students what they think about having this tool available to them, the response is usually that it's a good way to be fair. I'm pretty happy about that.

I'll continue playing with this. It was an interesting way to analyze my thinking around something that I consider to still be pretty fuzzy, even this long after getting involved with SBG in my classes.

Theory of Knowledge and the Thinking Machine

tl;dr

I created an interactive lesson called Thinking Machine for use with a talk I gave to the IB theory of knowledge class, which is currently on a unit studying mathematics.

[Screenshot of the Thinking Machine lesson]
The lesson made good use of the Meteor Blaze library as well as the Desmos Graphing Calculator API. Big thanks to Eli and Jason from Desmos for helping me with putting it together.


I was asked by a colleague if I was interested in speaking to the IB theory of knowledge class during the mathematics unit. I barely let him finish his request before I started talking about what I was interested in sharing with them.

If you read this blog, you know that I'm fascinated by the intersection of computers and mathematical thinking. If you don't, now you do. More specifically, I spend a great deal of time contemplating the connections between mathematics and programming. I believe that computers can serve as a stepping stone between students' understanding of arithmetic and the abstract idea of a variable.

The fact that computers do precisely what their programmers make them do is a good thing. We can forget this easily, however, in a world where computers do fairly sophisticated things behind the scenes. The fact that Siri can understand what we say, and then do what we ask, is impressive. The extent to which the computer knows what it is doing is up for debate. It's pretty hard to argue, though, that computers aren't carrying out reasoning processes similar to the ones humans use in going about their day.

Here's what I did with the class:

I began by talking about myself as a mathematical thinker. Contrary to what many of them might think, I don't spend my time going around the world looking for equations to solve. I don't seek out calculations for fun. In fact, I actively dislike making calculations. What I really enjoy is finding interesting problems to solve. I get a great deal of satisfaction and a greater understanding of the world through doing so.

What does this process involve? I make observations of the world. I look for situations, ideas, and images that interest me. I ask questions about what I see, and then use my understanding of the world, including knowledge in the realm of mathematics, to construct possible answers. As a mathematical and scientific thinker, this process of gathering evidence, making predictions using a model, testing them, and then adjusting those models is in my blood.

I then set the students loose to do an activity I created called Thinking Machine. I styled it after the amazing lessons that the Desmos team puts together, and used their tools to create it. More on that later. Check it out, and come back when you're done.

The activity begins with a first step that asks students to make a prediction of a mathematical rule created by the computer. The rule is never complicated - always a linear function. When the student enters the correct rule, the computer says to move on.
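The check compares values rather than symbols. Here's a plain JavaScript sketch of the idea - in the actual activity the Desmos API does the parsing and evaluating, so these stand-in functions are hypothetical:

    // Compare the student's guessed rule to the hidden rule at several
    // test inputs; if they agree everywhere, the guess is accepted.
    function rulesMatch(studentRule, computerRule) {
      var testValues = [-2, -1, 0, 1, 2, 10];
      return testValues.every(function (x) {
        return Math.abs(studentRule(x) - computerRule(x)) < 1e-9;
      });
    }

    // The hidden rule is always linear, e.g. f(x) = 3x - 1.
    var hidden = function (x) { return 3 * x - 1; };

    console.log(rulesMatch(function (x) { return 3 * x - 1; }, hidden)); // true
    console.log(rulesMatch(function (x) { return 2 * x + 1; }, hidden)); // false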

The next step is to turn the tables on the student - the computer will guess a rule (limited to linear, quadratic, cubic, or exponential functions) based on three sets of inputs and outputs that the student provides. Beyond those three inputs, the student should only answer 'yes' or 'no' to the guesses that the computer provides.

The computer learns by adjusting its model based on the responses. Once the certainty is above a certain level, the computer gives its guess of the rule, and shows the process it went through of using the student's feedback to make its decision. When I did this with the class, more than half of the class had their guesses correctly determined. I've since tweaked this to make it more reliable.

After this, we had a discussion about whether or not the computer was thinking. We talked about what it means for a computer to have knowledge of a problem at hand. Where did that knowledge come from? How does it know what is true, and what is not? How does this relate to learning mathematics? What elements of thinking are distinctly human? Creativity came up a couple times as being one of these elements.

This was a perfect segue to this video about the IBM computer Watson learning to be a chef:

Few were able to really explain this away as being uncreative, but they weren't willing to claim that Watson was thinking here.

Another example was this video from the Google DeepMind lab:

I finished by leading a conversation about data collection and what it signifies. We talked about some basic concepts of machine learning, training sets, and how this compares to human learning and thinking. One of my closing points was that one's experience is a data set that the brain uses to make decisions. If computers are able to use data in a similar way, it's hard to argue that they aren't thinking in some way.

Students had some great comments and questions along the way. One asked if I thought we were approaching the singularity. It was a lot of fun to get the students thinking this way, especially in a different context than in my IB Math and Physics classes. Building this also has me thinking about other projects for the future. There's no need to invent a graphing library on your own, especially for an activity used with students - Desmos definitely has it all covered.

Technical Details

I built Thinking Machine using Bootstrap, the Meteor Blaze template engine, jQuery, and the Desmos API. I'm especially thankful to Eli Luberoff and Jason Merrill from Desmos, who helped me with using the features. I used the API to do two things (sketched in code after the list):

  • Parse the user's rule and check it against the computer's rule using some test values
  • Graph the user's input and output data, perform regressions, and give the regression parameters
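Here's a rough, hypothetical sketch of the second item - the element id, expression ids, data values, and model choice below are mine, and the exact calls are worth checking against the Desmos API docs:

    // Attach a calculator (it can be hidden if only the math is needed).
    var calculator = Desmos.GraphingCalculator(document.getElementById('calculator'));

    // Push the student's three input/output pairs into a table.
    calculator.setExpression({
      type: 'table',
      columns: [
        { latex: 'x_1', values: ['1', '2', '3'] },
        { latex: 'y_1', values: ['2', '5', '10'] }
      ]
    });

    // Ask Desmos to fit one of the four models, e.g. the quadratic.
    calculator.setExpression({ id: 'quadFit', latex: 'y_1 \\sim a x_1^2 + b x_1 + c' });

    // Read a regression parameter back out with a helper expression.
    var aHelper = calculator.HelperExpression({ latex: 'a' });
    aHelper.observe('numericValue', function () {
      console.log('fitted a =', aHelper.numericValue);
    });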

The whole process of using Desmos here was pretty smooth, and is just one more reason why they rock.

The learning algorithm is fairly simple. As described (though much more briefly) in the activity, the algorithm starts by assuming that the four regressions of the data are equally likely, storing the weights in an array called isThisRight. When the user clicks 'yes' for a given input and output, the weighting factor in the associated element of the array is doubled, and then the array is normalized so that the probabilities add to 1.
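In code, that update is just a doubling followed by a renormalization. A minimal sketch (the array name is from the description above; the index order is my assumption):

    // Weights for the four regressions, initially equally likely.
    // Hypothetical index order: 0 = linear, 1 = quadratic, 2 = cubic, 3 = exponential.
    var isThisRight = [0.25, 0.25, 0.25, 0.25];

    function reinforce(modelIndex) {
      isThisRight[modelIndex] *= 2;  // double the weight on a 'yes'
      var total = isThisRight.reduce(function (sum, w) { return sum + w; }, 0);
      isThisRight = isThisRight.map(function (w) { return w / total; });  // re-normalize to 1
    }

    reinforce(1);  // a 'yes' on a quadratic guess
    // isThisRight is now [0.2, 0.4, 0.2, 0.2]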

The selected input/output is replaced by a prediction from a model that is chosen according to the weights of the four models - higher weights mean a model is more likely to be selected. For example, if the quadratic model's weight is higher than the other three, a prediction from the quadratic model is more likely to be added to the list of four. This is why the guesses from a given model appear more frequently once it has been given a 'yes' response.
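Choosing the model is then a weighted random draw, which might look like this (a self-contained sketch, not copied from the activity):

    // Pick a model index at random, weighted by the current values in the
    // weights array. Assumes the weights sum to 1.
    function pickModel(weights) {
      var r = Math.random();
      var cumulative = 0;
      for (var i = 0; i < weights.length; i++) {
        cumulative += weights[i];
        if (r < cumulative) return i;
      }
      return weights.length - 1;  // guard against floating-point round-off
    }

    var modelNames = ['linear', 'quadratic', 'cubic', 'exponential'];
    var weights = [0.2, 0.4, 0.2, 0.2];  // e.g. after one 'yes' on a quadratic guess
    console.log('next guess comes from:', modelNames[pickModel(weights)]);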

Initially I felt that asking the user for three inputs was a bit cheap. It only takes two points to define a line or an exponential regression, and three for a quadratic. I could have written a big switch statement to check whether the data was linear or exponential, then quadratic, and failing those, declare it cubic. Instead, I wanted to give a learning algorithm a try and see if it could figure out the regression without my programming in that logic directly. In the end, the algorithm works reasonably well, including in cases where you make a mistake or give two repeated inputs. With only two distinct points, the program is eventually able to figure out exponential and quadratic rules, though cubic rules give it trouble. The prediction of the rule is probability based, which is what I was looking for.

The progress bar is obviously fake, but I wanted something in there to make it look like the computer was thinking. I can't find the article now, but I recall reading somewhere that if a computer is able to respond too quickly to a person's query, there's a perception that the results aren't legitimate. Someone help me with this citation, please.