My ninth graders are working on building functions and modeling in the final unit of the year. There is plenty of good material out there for doing these tasks as a way to master the Common Core standards that describe these skills.

I had a sudden realization that a great source for these types of tasks might be my Calculus materials. Related rates, optimization, and applications of integrals in a Calculus course generally require students to write models of functions and then apply their differentiation or integration knowledge to arrive at a result. The first step in these questions usually involves writing a function, with subsequent question parts requiring Calculus methods to be applied to that function.

I dug into my resources for these topics and found that these questions might be excellent modeling tasks for the ninth grade students if I simply pull out the steps that require Calculus. Today's lesson using these adapted questions was really smooth, and felt good from a vertical planning standpoint.

I could be late to this party. My apologies if you realized this well before I did.

I created an interactive lesson called Thinking Machine for use with a talk I gave to the IB theory of knowledge class, which is currently on a unit studying mathematics.

The lesson made good use of the Meteor Blaze library as well as the Desmos Graphing Calculator API. Big thanks to Eli and Jason from Desmos for helping me with putting it together.

I was asked by a colleague if I was interested in speaking to the IB theory of knowledge class during the mathematics unit. I barely let him finish his request before I started talking about what I was interested in sharing with them.

If you read this blog, you know that I'm fascinated by the intersection of computers and mathematical thinking. If you don't, now you do. More specifically, I spend a great deal of time contemplating the connections between mathematics and programming. I believe that computers can serve as a stepping stone between students' understanding of arithmetic and the abstract idea of a variable.

The fact that computers do precisely what their programmers make them do is a good thing. We can forget this easily, however, in a world where computers do fairly sophisticated things behind the scenes. The fact that Siri can understand what we say, and then do what we ask, is impressive. The extent to which the computer knows what it is doing is up for debate. It's pretty hard to argue, though, that computers aren't going through reasoning processes similar to the ones humans use in going about their day.

Here's what I did with the class:

I began by talking about myself as a mathematical thinker. Contrary to what many of them might think, I don't spend my time going around the world looking for equations to solve. I don't seek out calculations for fun. In fact, I actively dislike making calculations. What I really enjoy is finding interesting problems to solve. I get a great deal of satisfaction and a greater understanding of the world through doing so.

What does this process involve? I make observations of the world. I look for situations, ideas, and images that interest me. I ask questions about what I see, and then use my understanding of the world, including knowledge in the realm of mathematics, to construct possible answers. As a mathematical and scientific thinker, this process of gathering evidence, making predictions using a model, testing them, and then adjusting those models is in my blood.

I then set the students loose to do an activity I created called Thinking Machine. I styled it after the amazing lessons that the Desmos team puts together, and used their tools to create it. More on that later. Check it out, and come back when you're done.

The activity begins with a step that asks students to predict a mathematical rule created by the computer. The rule is never complicated - always a linear function. When the student enters the correct rule, the computer says to move on.

The next step is to turn the tables on the student - the computer will guess a rule (limited to linear, quadratic, cubic, or exponential functions) based on three sets of inputs and outputs that the student provides. Beyond those three inputs, the student should only answer 'yes' or 'no' to the guesses that the computer provides.

The computer learns by adjusting its model based on the responses. Once the certainty is above a certain level, the computer gives its guess of the rule, and shows the process it went through of using the student's feedback to make its decision. When I did this with the class, more than half of the class had their rules correctly identified. I've since tweaked this to make it more reliable.

After this, we had a discussion about whether or not the computer was thinking. We talked about what it means for a computer to have knowledge of a problem at hand. Where did that knowledge come from? How does it know what is true, and what is not? How does this relate to learning mathematics? What elements of thinking are distinctly human? Creativity came up a couple times as being one of these elements.

This was a perfect segue to this video about the IBM computer Watson learning to be a chef:

Few were able to really explain this away as being uncreative, but they weren't willing to claim that Watson was thinking here.

Another example was this video from the Google Deep Thinking lab:

I finished by leading a conversation about data collection and what it signifies. We talked about some basic concepts of machine learning, learning sets, and some basic ideas about how this compared to humans learning and thinking. One of my closing points was that one's experience is a data set that the brain uses to make decisions. If computers are able to use data in a similar way, it's hard to argue that they aren't thinking in some way.

Students had some great comments and questions along the way. One asked if I thought we were approaching the singularity. It was a lot of fun to get the students thinking this way, especially in a different context than in my IB Math and Physics classes. Building this also has me thinking about other projects for the future. There is no need to invent a graphing library on your own, especially for an activity used with students - Desmos definitely has it all covered.

Technical Details

I built Thinking Machine using Bootstrap, the Meteor Blaze template engine, jQuery, and the Desmos API. I'm especially thankful to Eli Luberoff and Jason Merrill from Desmos, who helped me make use of its features. I used the API to do two things:

Parse the user's rule and check it against the computer's rule using some test values

Graph the user's input and output data, perform regressions, and give the regression parameters

The whole process of using Desmos here was pretty smooth, and is just one more reason why they rock.
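
The first of those two uses - checking the student's rule against the computer's - boils down to comparing two functions at a handful of test inputs. Here's a minimal sketch of that idea in Python (the real check runs through the Desmos API in the browser; the function names and test values here are my own):

```python
# Hypothetical sketch of the rule check: two single-variable rules are
# judged equivalent if they agree (within tolerance) at several test inputs.

def rules_match(user_rule, secret_rule, test_values=(-2, -1, 0, 1, 2, 3)):
    """Compare two functions of one variable at a handful of test inputs."""
    return all(abs(user_rule(x) - secret_rule(x)) < 1e-9 for x in test_values)

secret = lambda x: 3 * x + 1          # the computer's hidden linear rule
guess_right = lambda x: 3 * x + 1
guess_wrong = lambda x: 2 * x + 1     # agrees at x = 0, but nowhere else

print(rules_match(guess_right, secret))  # True
print(rules_match(guess_wrong, secret))  # False
```

Using a spread of test values (rather than one) matters: two different rules can agree at a single input, as `guess_wrong` does at x = 0.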

The learning algorithm is fairly simple. As described (though much more briefly) in the activity, the algorithm first assumes that the four regressions of the data are equally likely in an array called isThisRight. When the user clicks 'yes' for a given input and output, the weighting factor in the associated element of the array is doubled, and then the array is normalized so that the probabilities add to 1.

The selected input/output is replaced by a prediction from a model that is selected according to the weights of the four models - higher weights mean a model is more likely to be selected. For example, if the quadratic model's weight is higher than the other three, a prediction from the quadratic model is more likely to be added to the list of four. This is why the guesses for a given model appear more frequently when it has been given a 'yes' response.
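
The weighting scheme described above can be sketched in a few lines (the names below are mine, mirroring the isThisRight array; the handling of a 'no' response is left as a no-op in this sketch since only the 'yes' update is described):

```python
import random

# Four candidate models start equally likely. A 'yes' doubles the matching
# model's weight, and the weights are renormalized to sum to 1.
is_this_right = {'linear': 0.25, 'quadratic': 0.25,
                 'cubic': 0.25, 'exponential': 0.25}

def update(weights, model, answered_yes):
    """Reweight the candidate models after the student's yes/no response."""
    new = dict(weights)
    if answered_yes:
        new[model] *= 2                      # double the weight on a 'yes'
    total = sum(new.values())
    return {m: w / total for m, w in new.items()}   # renormalize

def pick_model(weights):
    """Pick the model whose prediction replaces the next guess,
    with probability proportional to the current weights."""
    models = list(weights)
    return random.choices(models, weights=[weights[m] for m in models])[0]

weights = update(is_this_right, 'quadratic', True)
print(weights['quadratic'])   # 0.4 after one 'yes' (0.5 out of a total 1.25)
```

After a few 'yes' responses for one model, its weight dominates and its predictions fill most of the guess list, which is exactly the behavior described above.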

Initially I felt that asking the user for three inputs was a bit cheap. It only takes two points to define a line or an exponential regression, and three for a quadratic regression. I could have written a big switch statement to check if the data was linear or exponential, then quadratic, and then say it had to be cubic. Instead, I wanted to give a learning algorithm a try and see if it could figure out the regression without my programming in that logic directly. In the end, the algorithm works reasonably well, including in cases where you make a mistake or give two repeated inputs. With only two distinct points, the program is able to eventually figure out the exponential and quadratic rules, though cubic rules give it trouble. Ultimately, the prediction of the rule is probability based, which is what I was looking for.

The progress bar is obviously fake, but I wanted something in there to make it look like the computer was thinking. I can't find the article now, but I recall reading somewhere that if a computer is able to respond too quickly to a person's query, there's a perception that the results aren't legitimate. Someone help me with this citation, please.

I wrote last spring about beginning my projectile motion unit with computational models for projectiles. Students focused on using the computer model alone to solve problems, which led into a discussion of a more efficient approach with less trial and error. The success of this approach made me wonder about introducing the much simpler particle model for constant acceleration (abbreviated CAPM) using a computational model first, and then extending the patterns we observed to more general situations.

We started the unit playing around with the Javascript model located here and the Geogebra data visualizer here.

The first activity was to take some position data for an object and model it using the CAPM model. I explained that the computational model was a mathematical tool that generated position and velocity data for a particle that traveled with constant acceleration. This was a tedious process of trial and error by design.
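
The classroom tool was a Javascript page, but the idea behind a CAPM data generator fits in a few lines. A sketch of the concept (parameter names and values are my own choices for illustration):

```python
# Generate (time, position, velocity) rows for a particle moving with
# constant acceleration. The position update below is exact when the
# acceleration is constant over the step.

def capm_data(x0, v0, a, dt=0.5, t_end=3.0):
    """Return (t, x, v) rows for constant-acceleration motion."""
    rows, t, x, v = [], 0.0, x0, v0
    while t <= t_end + 1e-9:
        rows.append((round(t, 2), round(x, 3), round(v, 3)))
        x += v * dt + 0.5 * a * dt**2   # exact update for constant a
        v += a * dt
        t += dt
    return rows

# A ball thrown straight up at 10 m/s:
for row in capm_data(x0=0.0, v0=10.0, a=-9.8):
    print(row)   # first rows: (0.0, 0.0, 10.0), (0.5, 3.775, 5.1), ...
```

The trial-and-error part of the activity comes from students adjusting x0, v0, and a by hand until the generated table matches the data they were given.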

The purpose here was to show that if position data for a moving object could be described using a CAPM model, then the object was moving with constant acceleration. The tedium drove home the fact that we needed a better way. We explored some different data sets for moving objects given as tables and graphs, and discussed the concepts of acceleration and using a linear model for velocity. We recalled how we can use a velocity vs. time graph to find displacement. That linear model for velocity, at this point, was the only algebraic concept in the unit.

In previous versions of my physics course, this was where I would nudge students through a derivation of the constant acceleration equations using what we already understood. Algebra heavy, with some reinforcement from the graphs.

This time around, my last few lessons have all started using the same basic structure:

Here's some graphical or numerical data for position versus time or a description of a moving object. Model it using the CAPM data generator.

Does the CAPM model apply? Have a reason for your answer.

If it does, tell me what you know about its movement. How far does it go? What is its acceleration? Initial velocity? Tell me everything that the data tells you.

For our lesson on free fall, we started with the modeling question of what we would measure to see if CAPM applies to a falling object. We then used a spark timer (which I had never used before, but found hidden in a cabinet in the lab) to measure the position of a falling object.

They took the position data, modeled it, and got something close to 9.8 m/s^2 downward. They were then prepared to say that the acceleration was constant and downward while the object was moving down, but different when it was moving up. They quickly figured out that they should verify this, so they made a video and used Logger Pro to analyze it and see that the acceleration was indeed constant.
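
The analysis of evenly spaced spark-timer data can be mirrored in a few lines: for constant acceleration, the second difference of position satisfies Δ²x = a·Δt², so a = Δ²x / Δt². A sketch with fabricated data (the numbers below come from x = 10t − 4.9t², not from the actual class measurements):

```python
# Recover acceleration from evenly spaced position data using second
# differences. Data fabricated to match a = -9.8 m/s^2 exactly.

dt = 0.1                                 # time between dots, s (assumed)
x = [0.0, 0.951, 1.804, 2.559, 3.216]    # positions in m for x = 10t - 4.9t^2

second_diffs = [x[i+1] - 2*x[i] + x[i-1] for i in range(1, len(x) - 1)]
a_estimates = [d / dt**2 for d in second_diffs]
print(a_estimates)   # each value is approximately -9.8
```

With real spark-timer data the estimates scatter around the true value, which makes for a nice conversation about measurement uncertainty.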

The part that ended up being different was the way we looked at 1-D kinematics problems. I still insisted that students use the computer program to model the problem and use the results to answer the questions. After some coaching, the students were able to do this, but found it unsatisfying. When I assigned a few of these for students to do on their own, they came back really grumpy. It took a long time to get everything in the model to work just right - never on the first try did they come up with an answer. Some figured out that they could directly calculate some quantities like acceleration, which reduced the iteration a bit, but it didn't feel right to them. There had to be a better way.

This was one of the problems I gave them. It took a lot of adjustment to get the model to match what the problem described, but eventually they got it:

Once the values were in the CAPM program and it gave us this data, we looked at it together to answer the question. Students started noticing things:

The maximum height is half of the acceleration.

The maximum height happens halfway through the flight.

The velocity goes to zero halfway through the flight.

Without any prompting, students saw from the data and the graph that we could model the ball's velocity algebraically and find a precise time when the ball was at maximum height. This then led to students realizing that the area of the triangle gave the displacement of the ball between being thrown and reaching maximum height.
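
Those two steps - solving v(t) = v0 − gt = 0 for the time at the top, then taking the area of the velocity triangle - amount to simple arithmetic. A sketch with an assumed initial speed (not the value from the actual class problem):

```python
# Model the velocity algebraically as v(t) = v0 - g*t, find when it hits
# zero, then use the triangle area under v(t) as the displacement.

v0 = 19.6   # initial upward speed in m/s (assumed example value)
g = 9.8     # magnitude of gravitational acceleration, m/s^2

t_top = v0 / g              # time when v(t) = v0 - g*t reaches zero
h_max = 0.5 * t_top * v0    # area of the triangle under the v(t) graph

print(t_top, h_max)         # 2.0 s to the top, 19.6 m maximum height
```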

This is exactly the sort of reasoning that students struggle to do when the entire treatment is algebraic. It's exactly the sort of reasoning we want students to be doing to solve these problems. The computer model doesn't do the work for students - it shows them what the model predicts, and leaves the analysis to them.

The need for more accuracy (which comes only from an algebraic treatment) then comes from students being uncomfortable with an answer that is between two values. The computation builds a need for the algebraic treatment and then provides some of the insight for a more generalized approach.

Let me also be clear about something - the students are not thrilled about this. I had a near mutiny during yesterday's class when I gave them a standards quiz on the constant acceleration model. They weren't confident during the quiz, most of them wearing gigantic frowns. They don't like the uncertainty in their answers, they don't like lacking a clear roadmap to a solution, they don't like being without a single formula they can plug into to find an answer. They said these things even after I graded the quizzes and they learned that the results weren't bad.

I'm fine with that. I'd rather that students figure out pathways to solutions through good reasoning than blindly plug into a formula. I'd rather that all of the students have a way in to solving a problem, including those who lack strong algebraic skills. Matching a model to a problem or situation is not a complete crapshoot. They find patterns, figure out ways to estimate initial velocity or calculate acceleration, and solidify one parameter of the model before adjusting another.

Computational models form one of the only ways I've found that successfully allows students of different skill levels to go from concrete to abstract reasoning in the context of problem solving in physics. Here's the way the progression goes up the ladder of abstraction for the example I showed above:

The maximum height of the ball occurred at that time. Student points to the graph.

The maximum height of the ball happened when the velocity of the ball went to zero in this situation. I'll need to adjust my model to find this time for different problems.

The maximum height of the ball always occurs when the velocity of the ball goes to zero. We can get this approximate time from the graph.

I can model the velocity algebraically and figure out when the ball velocity goes to zero exactly. Then we can use the area to find the maximum height.

I can use the algebraic model for velocity to find the time when the ball has zero velocity. I can then create an algebraic model for position to get the position of the ball at this time.

My old students had to launch themselves up to step five of that progression from the beginning with an algebraic treatment. They had to figure out how the algebraic models related to the problems I gave them. They eventually figured it out, but it was a rough slog through the process. This was my approach for the AP physics students, but I used a mathematical approach for the regular students as well because I thought they could handle it. They did handle it, but as a math problem first. At the end, they returned to physics land and figured out what their answers meant.

There's a lot going on here that I need to process, and it could be that I'm too tired to see the major flaws in this approach. I'm constantly asking myself 'why' algebraic derivations are important. I still do them in some way, which means I still see some value, but the question remains. Abstracting concepts to general cases in physics is important because it is what physicists do. It's the same reason we should be modeling the scientific method and the modeling process with students in both science and math classes - it's how professionals work within the field.

Is it, however, how we should be exposing students to content?

I am still reviewing algebra concepts in my Math 9 course with students. The whole unit is all about algebraic operations, but my students have seen this material at least once, in some cases two years running.

Not long ago, I made the assertion that the most harmful part of introducing students to the world of real-world algebra looks like this:

Let x = the number of ________

Why is this so harmful?

For practiced mathematicians, math teachers, and students that have endured school math for long enough, there are a couple of steps that actually happen internally before this step of defining variables. Establishing a context for the numbers and the operations that link them together is the most important part of producing a correct mathematical model for a given situation. A level of intuition and experience is necessary if one is going to successfully skip straight to this step, and most students don't have this intuition or experience.

We have to start with the concrete because most people (including our students) start their thinking in concrete terms. This is the reason I have raved previously about the CME Project and the effectiveness of using their guess-check-generalize concept in introducing word problems to students. It forms an effective bridge between the numbers that students understand and the abstract concept of a variable. It encourages experimentation and analysis of whether a given answer matches the constraints of a problem.
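
The guess-check-generalize idea is easy to mirror computationally. A sketch with a made-up word problem (not one from the CME text): "a number tripled, then increased by 4, gives 25."

```python
# Guess-check-generalize: apply the problem's operations to concrete
# guesses and check each result against the constraint.

def check(guess):
    """Triple the guess, then add 4 - the operations from the problem."""
    return 3 * guess + 4

for guess in [5, 10, 7]:
    print(guess, check(guess), check(guess) == 25)

# After a few concrete checks, the *procedure* itself - 3*guess + 4 == 25 -
# is what gets generalized into the equation 3x + 4 = 25.
```

The point is that students repeat a concrete procedure enough times that naming the procedure with a variable feels like a convenience rather than an abstraction imposed from outside.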

This method, however, screams for computers to take care of the arithmetic so that students can focus on manipulating the variables involved. Almost all of the Common Core Standards for Mathematical Practice point toward this being an important focus for our work with students. I haven't had a great point in my curriculum since I really started getting into computational thinking to try out my ideas from the beginning, but today gave me a chance to do just that.

Here's how I introduced students to what I wanted them to do:

I then had them open up this spreadsheet (02 - SPR - Translating Algebraic Expressions) and actually complete the missing elements of the spreadsheet on their own. Some students had learned to do similar tasks in a technology class, but some had not.

The students that needed to have conversations about tricky concepts like three less than a number had them with me and with other students when they came up. Students that didn't need those conversations quickly moved through the first set. I'd go and throw some different numbers in for 'a number' and see that the cells were all changing as expected. Then we moved to a more abstract task:

It's great to see that you know how to use different operations on the number in that cell. Now let's generalize. Pick a variable you like - x, or N, or W - it doesn't matter. What would each of these cells become then? Write those results together with the words in your notebook and show me when you're done.

The ease with which students moved to this next task was much greater than it has ever been for me in past lessons. We also had some really great conversations about x*2 compared with 2x, and the fact that both are correct from an arithmetic standpoint, but one is more 'traditional' than the other.

Once students got to this point, I pushed them toward a slightly higher level task that still began concrete. This is the second sheet from the spreadsheet above:

Here we had multiple variables going at once, but this was not a stretch for most students. The key that I needed to emphasize here for some students was that the red text was all calculated. I wanted to put information in the black boxes with black text only, and have the spreadsheet calculate the red values. This required students to identify what the relationship between the variables needed to be to obtain the answer they knew in their head had to be true. This is CCSS MP2, almost verbatim.

This is all solidifying into a coherent framework of using spreadsheet and programming tools to reinforce algebra instruction from the start. There's still plenty to figure out, but this is a start. I'll share what I come up with along the way.

I had everything in line to start the constant velocity model unit: stop watches, meter sticks, measuring tape. All I had to do was find the set of working battery-operated cars that I had used last year. I found one of them right where I left it. Upon finding another one, I remembered that it hadn't worked last year either, and I hadn't gotten a replacement. The two other cars were LEGO robot cars that I had designed specifically for this task; all I would need to do was build those cars, program them to run their motors forward, and I was ready to go.

Then I remembered that my computer had been swapped for a new model over the summer, so my old LEGO programming applications were gone. With the install software nowhere to be found, I went to the next option: buying new ones.

I made my way to a couple stores that sold toys and had sold me one of the cars from last year. They only had remote control ones, and I didn't want to add the variable of taping the controllers to the on position so they would run forward. Having a bunch of remote control cars in class is a recipe for distraction. In a last ditch effort to try to improve the one working car that I had, I ended up snapping the transmission off of the motor. I needed another option.

John Burk's post about using some programming in this lab and ending it in a virtual race had me thinking about how to address the hole I had dug myself into. I have learned that the challenge of running the Python IDE on a class of laptops in various states of OSX makes it tricky to have students use Visual Python or even the regular Python environment.

I have come to embrace the browser as the easiest portal for having students view and manipulate the results of a program for the purposes of modeling. Using Javascript, the Raphael drawing framework, Camtasia, and a bit of hurried coding, I was able to put together the following materials: Car 1 Part 1, Car-2-Model-, Constant Velocity model data generator (HTML)

When it came to actually running the class, I asked students to generate a table of time (in seconds) and position data (in meters) for the car from the video. The goal was to be able to figure out when the car would reach the white line. I found the following:

Students were using a number of different measuring tools to make their measurements. Some used rulers in centimeters or inches, others created their own ruler in units of car lengths. The fact that they were measuring a virtual car rather than a real one made no difference in terms of the modeling process of deciding what to measure, and then measuring it.

Students asked for the length of the car almost immediately. They realized that the scale was important, possibly as a consequence of some of the work we did with units during the preceding class.

By the time it came to start generating position data, we had a realization about the difficulty arising from groups lacking a common origin. Students tended to agree on velocity, as expected, but without a shared origin their position values didn't match up. This was especially the case when groups were transitioning to the data from Car 2.

Some students saw the benefit of a linear regression immediately when they worked with the constant velocity model data generator. They saw that they could use the information from their regression in the initial values for position, time, and velocity. I didn't have to say a thing here - they figured it out without requiring a bland introduction to the algebraic model in the beginning.
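
What the regression step amounts to, in a form students could eventually inspect: fit x = x0 + v·t to the (time, position) table, then extrapolate to the finish line. A sketch with invented data (the 4.0 m white-line position is an assumption):

```python
# Ordinary least-squares fit of x = x0 + v*t, done by hand so every step
# is visible, then extrapolation to the white line.

def fit_line(ts, xs):
    """Return (intercept x0, slope v) for the best-fit line x = x0 + v*t."""
    n = len(ts)
    t_mean = sum(ts) / n
    x_mean = sum(xs) / n
    v = (sum((t - t_mean) * (x - x_mean) for t, x in zip(ts, xs))
         / sum((t - t_mean) ** 2 for t in ts))
    return x_mean - v * t_mean, v

times = [0.0, 1.0, 2.0, 3.0]          # s
positions = [0.1, 0.6, 1.1, 1.6]      # m; a car moving at roughly 0.5 m/s
x0, v = fit_line(times, positions)

finish = 4.0                          # white-line position in m (assumed)
print((finish - x0) / v)              # predicted arrival time, about 7.8 s
```

The regression's x0 and v are exactly the initial values students fed back into the constant velocity model data generator.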

I gave students the freedom to sketch a graph of their work on a whiteboard, on paper, or using Geogebra. Some liked different tools. Our conversation about the details afterwards was the same.

I wish I had working cars for all of the groups, but that's water under the bridge. I've grown to appreciate the flexibility that computer programming has in providing full control over different aspects of a simulation. It would be really easy to generate and assign each group a different virtual car, have them analyze it, and then discuss among themselves who would win in a race. Then I hit play and we watch it happen. This does get away from some of the messiness inherent in real objects that don't drive straight, or slow down as the batteries die, but I don't think this is the end of the world when we are getting started. Ignoring that messiness forever would be a problem, but providing a simple atmosphere for starting exploration of modeling as a philosophy doesn't seem to be a bad way to introduce the concept.

One of my goals has always been to differentiate my job from that of a paid explainer. Good teaching is not explaining exclusively - though it can be part of the process. This is why many people seek a great video or activity that thoroughly explains a concept that puzzles them. The process of learning should be an interactive one. An explanation should lead into another question, or an activity that applies the concept.

For the past two years, I've done a demo activity to open my physics class that emphasizes the subtle difference between a mental model for a phenomenon and having just a good explanation for it. A mental model makes predictions and is therefore testable. An explanation is the end of a story.

The demo equipment involves a cylindrical neodymium magnet and an aluminum tube of diameter slightly larger than the magnet. It is the standard eddy current/Lenz's law/electromagnetic induction demo showing what happens when a magnet is dropped into a tube that is of a non-magnetic material. What I think I've been successful at doing is converting the demo into an experience that opens the course with the creation of a mental model and simultaneous testing of that model.

I walk into the back of the classroom with the tube and the magnet (though I don't tell them that it is one) and climb on top of a table. I stand with the tube above the desk and drop the magnet concentrically into the tube.

Students watch what happens. I ask for them to share their observations. A paraphrased sample:

The thing fell through the tube more slowly than it should have

It's magnetic and is slowing down because it sticks to the side

There's so much air in the tube that it slows down the falling object.

I could explain that one of them is correct. I don't. I first ask them to turn their observation into an assertion that should then be testable by some experiment. 'The object is a magnet' becomes 'if the object is a magnet, then it should stick to something made out of steel.' This is then an experiment we can do, and quickly.

When the magnet sticks strongly to the desk, or paper clips, or that something else happens that establishes that the object is magnetic, we can further develop our mental model for what is happening. Since the magnet sticks to steel, and the magnet seems to slow down when it falls, the tube must be made of some magnetic metal. How do we test this? See if the magnet sticks to the tube. The fact that it doesn't stick as it did to the steel means that our model is incomplete.

Students then typically abandon the magnet line of reasoning and go for air resistance. If they went for this first (as has happened before) I just reverse the order of these experiments with the above magnetic discussion. If the object is falling slowly, it must be because the air is slowing it down. How do we test this? From the students: drop another object that is the same size as the first and see if it falls at the same speed. I have a few different objects that I've used for this - usually an aluminum plug or part from the robotics kit works - but the students also insist on taping up the holes that these objects have so that it is as close to the original object as possible. It doesn't fall at the same speed though. When students ask to add mass to the object, I oblige with whatever materials I have on hand. No change.

The mental model is still incomplete.

We've tried changing the object - what about the tube? Assertion from the students: if the material for the tube matters, then the object should fall at a different speed with a plastic tube. We try the experiment with a PVC pipe and see that the magnet speeds along quite unlike it did in the aluminum tube. This confirms our assertion - this is moving us somewhere, though it isn't clear quite where yet.

Students also suggest that friction is involved - this can still be pushed along with the assertion-experiment process. What would you expect to observe if friction is a factor? Students will say they should hear it scraping along or see it in contact with the edges of the tube. I invited a student to stare down the end of the tube as I dropped the magnet. He was noticeably excited by seeing it hover lightly down the entire length of the tube, only touching its edges periodically.

Students this year asked to change the metal itself, but I unfortunately didn't have a copper tube on hand. That would have been awesome if I had. They asked if it would be different if the tube was a different shape. Instead of telling them, I asked them what observation they would expect to make if the tube shape mattered. After they made their assertion, I dropped the magnet into a square tube, and the result was very similar to with the circular tube.

All of these experiments make clear that the facts that (a) the object is a magnet and (b) the tube is made of metal are somehow related. I did at this point say that this was a result of a phenomenon called electromagnetic induction. For the first time during the class, I saw eyes glaze over. I wish I hadn't gone there. I should have just said that we will eventually develop some more insight into why this might happen, but for now, let's be happy that we've developed some understanding of what factors are involved.

All of these opportunities to get students making assertions and then testing them add up to the scientific method as we normally teach it. The process is a lot less formal than having them write a formal hypothesis, procedure, and conclusion in a lab report - appropriate given that it was the first day of the class - and it makes clear the concept of science as an iterative process. It isn't a straight line from a question to an answer; it is a cyclical process that very often gets hidden when we emphasize the formality of the scientific method in the form of a written lab report. Yes, scientists do publish their findings, but this isn't necessarily what gets them up in the morning.

Some other thoughts:

This process emphasizes the value of an experiment either refuting or supporting our hypothesis. There is a consequence to a mental model when an experiment shows what we expected it to show. It's equally instructive when it doesn't. I asked the students how many times we were wrong in our exploration of the demo. They counted more than five or six. How often do we provide opportunities for students to see how failure is helpful? We say it. Do we show how?

I finally get why some science museums drive me nuts. At their worst, they are nothing more than clusters of express buses from observation/experiment to explanation. Press the button/lift the flap/open the window/ask the explainer, get the answer. If there's no further step to the exhibit that involves an application of what was learned, the exhibit risks perpetuating science as a box of answers you don't know. I'm not saying there isn't value in tossing a bunch of interesting experiences at visitors and knowing that only some stuff will stick. I just think there should be a low floor AND a high ceiling for the activities at a good museum.

Mental models must be predictive within the realm in which they are used. If you give students a model for intangible phenomena - the lock and key model for enzymes in biology, for example - that model should be robust enough for students to make assertions and predictions based on their conception of the model, and then test them. The lock and key model works well to explain why enzymes lose effectiveness at high temperature because the changing shape of the active site (real world) matches our conception of a key of the wrong shape (model). Whenever possible, we should expose students to places where the model breaks down, if for no other reason than to show that it can. By definition, a model is an incomplete representation of the universe.

I was working on orbits and gravitation with my AP Physics B students, and as has always been the case (including for me in high school), they were having trouble visualizing exactly what it meant for something to be in orbit. They did well calculating orbital speeds and periods when I asked them to in problems, but they couldn't picture the orbit itself. What happens when the object speeds up from the speed they calculated? What if it slows down? How would it actually get into orbit in the first place?

Last year I made a Geogebra simulation that used Euler's method to generate the trajectory of a projectile using Newton's Law of Gravitation. While they were working on these problems, I was having trouble opening the simulation, and I realized it would be a simple task to write the simulation again using the Python knowledge I had developed since. I also used this to-scale diagram of the Earth-Moon system in Geogebra to help visualize the trajectory.
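For anyone curious what rewriting the simulation involved, here is a minimal sketch of the approach - not the actual classroom code. The physical constants are real; the step size, launch altitude, and variable names are my own illustration:

```python
# Minimal sketch of a projectile trajectory under Newton's law of
# gravitation, stepped forward with Euler's method. Constants are real;
# step size and launch conditions are illustrative.
import math

G = 6.674e-11       # gravitational constant (N m^2 / kg^2)
M_EARTH = 5.972e24  # mass of Earth (kg)
R_EARTH = 6.371e6   # radius of Earth (m)

def simulate(x, y, vx, vy, dt=1.0, steps=10000):
    """Return the trajectory of a projectile launched from (x, y)
    with velocity (vx, vy), measured from Earth's center."""
    path = [(x, y)]
    for _ in range(steps):
        r = math.hypot(x, y)
        if r < R_EARTH:          # crashed back into Earth
            break
        a = G * M_EARTH / r**2   # magnitude of gravitational acceleration
        # The direction cosines x/r and y/r point the acceleration
        # back toward Earth's center.
        ax, ay = -a * x / r, -a * y / r
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

# Launch from 500 km up, to the right of Earth, so the circular-orbit
# speed must point vertically.
v_circ = math.sqrt(G * M_EARTH / (R_EARTH + 500e3))
trajectory = simulate(R_EARTH + 500e3, 0.0, 0.0, v_circ, dt=1.0, steps=6000)
```

Updating the velocity before the position, as above, keeps the orbit from spiraling outward the way naive Euler does, which matters over thousands of steps.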

I quickly showed them what the trajectory looked like close to the surface of the Earth and then increased the launch velocity to show what would happen. I also showed them the line in the program that represented Newton's 2nd law - no big deal from their reaction, though my use of the directional cosines did take a bit of explanation as to why they needed to be there.

I offered to let students show their proficiency on my orbital characteristics standard by using the program to generate an orbit with a period or altitude of my choice. I insist that they derive the formulae for orbital velocity or period from Newton's 2nd law every time, but I really like how adding the simulation as an option turns this into an exercise requiring a much higher level of understanding. That said, no students gave it a shot until this afternoon. A student had correctly calculated the orbital speed for a circular orbit, but was having trouble configuring the initial components of velocity and position to make this happen. The student realized that the speed he calculated through Newton's 2nd law had to be vertical if the initial position was to the right of Earth, or horizontal if it was above it. Otherwise, the projectile would go in a straight line, reach a maximum position, and then crash right back into Earth.
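The derivation I insist on amounts to setting the gravitational force equal to the centripetal force needed for circular motion: G*M*m/r^2 = m*v^2/r, so v = sqrt(G*M/r) and T = 2*pi*r/v. As a quick check of what that gives (SI units throughout; the helper names are my own):

```python
# Circular-orbit speed and period from Newton's 2nd law:
#   G*M*m / r^2 = m*v^2 / r   =>   v = sqrt(G*M / r),  T = 2*pi*r / v
import math

G = 6.674e-11       # gravitational constant (N m^2 / kg^2)
M_EARTH = 5.972e24  # mass of Earth (kg)
R_EARTH = 6.371e6   # radius of Earth (m)

def orbital_speed(altitude_m):
    """Speed of a circular orbit at the given altitude above the surface."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

def orbital_period(altitude_m):
    """Period of that circular orbit, in seconds."""
    r = R_EARTH + altitude_m
    return 2 * math.pi * r / orbital_speed(altitude_m)
```

At an ISS-like altitude of 400 km this gives a speed around 7.7 km/s and a period of roughly 92 minutes, which is why guess-and-check is such a slow way to find a circular orbit compared to one derivation.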

The other part of why this numerical model served an interesting purpose in my class was inspired by Shawn Cornally's post about misconceptions surrounding gravitational potential and our friend mgh. I had also just watched an NBC Time Capsule episode about the moon landing and was wondering about the specifics of launching a rocket to the moon. I asked students how they thought it was done, and they really had no idea. They were working on another assignment during class, but while floating around looking at their work, I was also adjusting the initial conditions of my program to try to get an object that starts close to Earth to arrive in a lunar orbit.

Thinking about Shawn's post, I knew that getting an object away from Earth would require it to reach escape velocity, and that this would certainly be too fast to work for a circular orbit around the moon. Getting the students to see this theoretically was not going to happen, particularly since we hadn't discussed gravitational potential energy with the regular physics students, not to mention they had no intuition about things moving in orbit anyway.

I showed them the closest I could get without crashing:

One student immediately noticed that this did seem to be a case of moving too quickly. So we reduced the initial velocity in the x-direction by a bit, which resulted in this:

We talked about what this showed - the object was now moving too slowly and was falling back to Earth. After getting the object to dance just between the point of making it all the way to the moon (and then falling right past it) and slowing down before it ever got there, a student asked a key question:

Could you get it really close to the moon and then slow it down?

Bingo. I didn't get to adjust the model during the class period to do this, but by the next class, I had implemented a simple orbital insertion burn opposite to the object's velocity. You can see and try the code here at Github. The result? My first Earth - lunar orbit design. My mom was so proud.
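The actual implementation is in the Github repo linked above; the core idea of the burn - shaving speed off along the direction opposite the velocity - can be sketched in a few lines. The function name and interface here are my own illustration, not the repo's:

```python
# Toy sketch of a retrograde insertion burn: reduce the object's speed
# by delta_v without changing its direction of travel. Not the actual
# repo code; interface and names are illustrative.
import math

def insertion_burn(vx, vy, delta_v):
    """Return the velocity after an instantaneous retrograde burn of
    magnitude delta_v (the speed never drops below zero)."""
    speed = math.hypot(vx, vy)
    scale = max(speed - delta_v, 0.0) / speed
    return vx * scale, vy * scale
```

In the simulation, a burn like this would be triggered once per flight, when the object passes closest to the moon, so that it slows into a captured orbit instead of falling right past.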

The real power here is how quickly students developed intuition for some orbital mechanics concepts by seeing me play with this. Even better, they could play with the simulation themselves. They also saw that I was experimenting myself with this model and enjoying what I was figuring out along the way.

I think the idea that a program I design myself could produce surprising or unexpected output is a bit of a foreign concept to those who do not program. I think this helps establish for students that computation is a tool for modeling. It is a means to reaching a better understanding of our observations or ideas. It still requires a great amount of thought to construct the model and interpret the results, and does not eliminate the need for theoretical work. I could guess and check my way to a circular orbit around Earth. With some insight into how gravity and circular motion work, though, I can get the orbit right on the first try. Computation does not take away the opportunity for deep thinking. It is not about doing all the work for you. It instead broadens the possibilities for what we can do and explore in the comfort of our homes and classrooms.

After the elections last night, I found I was looking back at Nate Silver's blog at the New York Times, Five Thirty Eight.

Here was his predicted electoral college map:

...and here was what ended up happening (from CNN.com):

I've spent some time reading through Nate Silver's methodology throughout the election season. It's detailed enough to give a good idea of how far he and his team have gone to construct a good model for simulating the election results. There is plenty of description of how he has used available information to construct the models used to predict election results, and last night was an incredible validation of his model. His predicted popular vote percentage for Romney was 48.4%, with the actual at 48.3%. Considering all of the variables associated with human emotion and the complex factors involved in how individuals decide to vote, the fact that the Five Thirty Eight model worked so well is a testament to what a really good model can do with large amounts of data.

My fear is that the post-election analysis of such a tool overemphasizes the hand-waving and black box nature of what simulation can do. I see this as a real opportunity for us to pick up real world analyses like these, share them with students, and use them as an opportunity to get students involved in understanding what goes into a good model. How is it constructed? How does it accommodate new information? There is a lot of really smart thinking that went into this, but it isn't necessarily beyond our students to at a minimum understand aspects of it. At its best, this is a chance to model something that is truly complex and see how good such a model can be.
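As a sense of scale for what students could build themselves: even a toy Monte Carlo simulation - nowhere near the sophistication of the actual Five Thirty Eight methodology - shows the core idea of how state-level polling uncertainty aggregates into a win probability. Every number below is invented for illustration:

```python
# Toy Monte Carlo election simulation. A vastly simplified sketch of the
# idea behind models like Five Thirty Eight's, not the actual methodology.
# States, electoral votes, poll numbers, and uncertainties are invented.
import random

# state: (electoral votes, polled two-party share for candidate A, std dev)
states = {
    "A-leaning": (55, 0.54, 0.03),
    "toss-up":   (29, 0.50, 0.03),
    "B-leaning": (38, 0.46, 0.03),
}

def simulate_once():
    """Draw one simulated election; return candidate A's electoral votes."""
    votes = 0
    for ev, mean, sd in states.values():
        if random.gauss(mean, sd) > 0.5:  # A carries the state this draw
            votes += ev
    return votes

def win_probability(trials=20000, needed=62):
    """Fraction of simulated elections in which A reaches 'needed' votes."""
    wins = sum(simulate_once() >= needed for _ in range(trials))
    return wins / trials
```

Running many trials turns three uncertain state polls into a single win probability, and students can immediately experiment: shift a poll number, shrink an uncertainty, and watch the probability respond.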

I see this as another piece of evidence that computational thinking is a necessary skill for students to learn today. Seeing how to create a computational model of something in the real world, or minimally seeing it as a comprehensible process, gives them the power to understand how to ask and answer their own questions about the world. This is really interesting mathematics, and is just about the least contrived real world problem out there. It screams out to us to use it to get our students excited about what is possible with the tools we give them.

With Algebra 2 this week, I decided it was time to get on the Angry Birds bandwagon. I didn't even mention exactly what we were going to do with it - the day before, the students found the above image in the class directory on the school server, and were immediately intrigued. The intrigue was short-lived when they learned they wouldn't find out what the image was for until the next day.

To maximize the time spent actually doing mathematical modeling, I used the video Frank Noschese posted on his blog for all students. They could pick any of the three birds and do the following:

Part A:
Birds are launched at 6, 13, and 22 seconds in the video. Let's call each one Bird A, Bird B, and Bird C.
• Take a screenshot of any of the complete paths of birds A, B, or C.
• Import the picture into Geogebra. Create the most accurate model you can for the bird you selected. What is the equation that models the path? Does it match that of your neighbors?

Part B:
• Go back to the part of the video for the bird that you picked. Move forward to a frame shortly after the bird is launched, take a screenshot, and put it again into Geogebra. Can you create a model that hits the landing point you found before, using only the white dots from the beginning of the path?

If not, find the earliest possible time at which you can do this. Post a screenshot of your model and the equations for the models you came up with for both Part A and Part B.

My hope is not to just use the excitement of using Angry Birds in class to motivate knowing how to model using quadratic functions. That seems a bit too much like a gimmick. The most interesting and realistic use (and ultimately the most powerful capability of any model) of this source of data is to come up with as accurate of a prediction of the behavior of the trajectory as is possible using minimal information. It's easy to come up with a quadratic model that matches the entire path after the fact. Could they do this only twenty frames after launch? Ten?

The students quickly started seeing how wildly the parabola changes shape when the points being used to model the parabola are all close together. This made obvious the importance of collecting data over a range of values in creating a model - the students caught on pretty quickly to this fact.
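The sensitivity the students noticed is easy to reproduce numerically. In the sketch below - a hypothetical trajectory, not data from the actual video - the same small reading error on a middle point barely moves the landing prediction when the sample points are spread out, but throws it far off when they are clustered near the launch:

```python
# How a fitted parabola's landing point swings when the sample points
# cluster near the launch. Trajectory and reading error are invented.

def parabola_through(p1, p2, p3):
    """Coefficients (a, b, c) of y = a x^2 + b x + c through 3 points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

def landing_x(a, b, c):
    """Larger root of a x^2 + b x + c = 0 (where the bird lands)."""
    disc = (b * b - 4 * a * c) ** 0.5
    return max((-b + disc) / (2 * a), (-b - disc) / (2 * a))

true_y = lambda x: -0.05 * x**2 + 2.0 * x   # "true" path: lands at x = 40

# Same +0.1 reading error on the middle point in both cases.
spread = parabola_through((0, 0), (20, true_y(20) + 0.1), (40, 0))
early = parabola_through((0, 0), (2, true_y(2) + 0.1), (4, true_y(4)))
print(landing_x(*spread))   # stays close to the true landing at x = 40
print(landing_x(*early))    # lands far short of x = 40
```

The clustered fit misses the landing point by a wide margin, which is exactly the importance of collecting data over a range of values that the students saw in Geogebra.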

I think Angry Birds served as a cool "something different" for the class and has a lot of potential in a math class, as it does in physics. I am hoping to use this as a springboard to have students understand the power of models and ultimately choose something to model that allows them to predict a phenomenon that is of some importance to their own adolescent worlds. I don't exactly know what this might be, and I have some suggestions for students to make if they are unable to come up with anything, but this tends to be one of those ideas that eventually results in a few students doing some very original work. Given my interest in ultimately getting students to participate in the Google Science Fair, I think this is just the thing to push them in the right direction of making their own investigation.

Over the course of my vacation in New Zealand, I found myself rethinking many things about the subjects I teach. This wasn't really because I was actively seeing the course concepts in my interactions on a daily basis, but rather because the sensory overload of the new environment just seemed to shock me into doing so.

One of these ideas is the balance between abstraction and concrete ideas. Being able to physically interact with the world is probably the best way to learn. I've seen it myself over and over again in my own classes and in my own experience. There are many situations in which the easiest way to figure something out is to just go out and do it. I tried to do this the first time I wanted to learn to ride a bicycle - I knew there was one in the garage, so I decided one afternoon to go and try it out. I didn't need the theory first to ride a bicycle - the best way is just to go out and try it.

Of course, my method of trying it was pretty far off - as I understood the problem, riding a bicycle first required that you get the balancing down. So I sat for nearly an hour rocking from side to side, trying to balance.

My dad sneaked into the garage to see what I was up to, pretty quickly figured it out, and started laughing. He applauded my initiative in wanting to learn how to do it, but told me there was a better way to learn. In other words, initiative alone is not enough - a reliable source of feedback is also necessary for solving a problem by brute force. That said, with both of these in hand, this method will often beat out a more theoretical approach.

This also came to mind when I read a comment from a Calculus student's portfolio. I adjusted how I presented the applications of derivatives a bit this year to account for this issue, but it clearly wasn't good enough. This is what the student said:

Something I didn't like was optimisation. This might be because I wasn't there for most of
the chapter that dealt with it, so I didn't really understand optimisation. I realise that optimisation applies most to real life, but some of the examples made me think that, in real life, I would have just made the box big enough to fit whatever needed to fit inside or by the time I'd be done calculating where I had to swim to and where to walk to I could already be halfway there.

Why sing the praises of a mathematical idea when, in the real world, no logical person would choose to use it to solve a problem?

This idea appeared again when reading The Mathematical Experience by Philip J. Davis and Reuben Hersh during the vacation. On page 302, they make the distinction between analytical mathematics and analog mathematics. Analog math is what my Calculus student is talking about, using none of "the abstract symbolic structures of 'school' mathematics." The shortest distance between two points is a straight line - there is no need to prove this, it is obvious! Any mathematical rules you apply to this make the overall concept more complex. On the other hand, analytic mathematics is "hard to do...time consuming...fatiguing...[and] performed only by very few people" but often provides insight and efficiency in cases where there is no intuition or easy answer by brute force. The tension between these two approaches is what I'm always battling in my mind as I swing wildly from exploration to direct instruction to peer instruction to completely constructivist activities in my classroom.

Before I get too theoretical and edu-babbly, let's return to the big idea that inspired this post.

I went mountain biking for the first time. My wife and I love biking on the road, and we wanted to give it a shot, figuring that the unparalleled landscapes and natural beauty would be a great place to learn. It did result in some nasty scars (on me, not her, and mostly on account of the devilish effects of combining gravity, overconfidence, and a whole lot of jagged New Zealand mountainside) but it was an incredible experience. As our instructors told us, the best way to figure out how to ride a mountain bike down rocky trails is to try it, trust intuition, and to listen to advice whenever we could. There wasn't any way to really explain a lot of the details - we just had to feel it and figure it out.

As I was riding, I could feel the wind flowing past me and could almost visualize the energy I carried by virtue of my movement. I could look down and see the depth of the trail sinking below me, and could intuitively feel how the potential energy stored by the distance between me and the center of the Earth was decreasing as I descended. I had the upcoming unit on work and energy in physics in the back of my mind, and I knew there had to be some way to bring together what I was feeling on the trail to the topic we would be studying when we returned.

When I sat down to plan exactly how to do this, I turned to the great sources of modeling material that I am incredibly grateful to be able to access, namely from Kelly O'Shea and the Modeling center at Arizona State University. In looking at this material I have found ways this year to adapt what I have done in the past to make the most of the power of thinking and learning with models. I admittedly don't have it right, but I have really enjoyed thinking about how to go through this process with my students. As I sat and stared at everything in front of me, however, there was a conflict between the way I previously used the abstract mathematical models of work, kinetic energy, and potential energy in my lessons and the way I wanted students to intuitively feel and discover what the interaction of these ideas meant. How much of the sense of the energy changes I felt as I was riding was because of the mathematical model I have absorbed over the years of being exposed to it?

The primary issue that I struggle with at times is the relationship between the idea of the conceptual model as being distinctly different from mathematics itself, especially given the fact that one of the most fundamental ideas I teach in math is how it can be used to model the world. The philosophy of avoiding equations because they are abstractions of the real physics going on presumes that there is no physics in formulating or applying the equations. Mathematics is just one type of abstraction.

A system schema is another abstraction of the real world. It also happens to be a really effective one for getting students to successfully analyze scenarios and predict what will subsequently happen to the objects. Students can see the objects interacting and can put together a schema to represent what they see in front of them. Energy, however, is an abstract concept. It's something you know is present when observing explosions, objects glowing due to high temperature, baseballs whizzing by, or a rock loaded in a slingshot. You can't, however, observe or measure energy in the same way you can measure a tension force. It's hard to really explain what it is. Can a strong reliance on mathematics to bring sense to this concept work well enough to give students an intuition for what it means?

I do find that the way I have always presented energy is pretty consistent with what is described in some of the information on the modeling website - namely thinking about energy storage in different ways. Kinetic energy is "stored" in the movement of an object, and can be measured by measuring its speed. Potential energy is "stored" by the interaction of objects through a conservative force. Work is a way for one object to transfer energy to another through a force interaction, and is something that can be indicated from a system schema. I haven't used energy pie diagrams or bar charts or energy flow diagrams, but have used things like them in my more traditional approach.

The main difference in how I have typically taught this, however, is that mathematics is the model that I (and physicists) often use to make sense of what is going on with this abstract concept of energy. I presented the equation definition of work at the beginning of the unit as a tool. As the unit progressed, we explored how that tool can be used to describe the various interactions of objects through different types of forces, the movement of the objects, and the transfer of energy stored in movement or these interactions. I have never made students memorize equations - the bulk of what we do is talk about how observations lead to concepts, concepts lead to mathematical models, and then models can then be tested against what is observed. Equations are mathematical models. They approximate the real world the same way a schema does. This is the opposite of the modeling instruction method, and admittedly takes away a lot of the potential for students to do the investigating and experimentation themselves. I have not given this opportunity to students in the past primarily because I didn't know about modeling instruction until this past summer.
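This testability is what makes equations worthwhile as models. For example, the energy equations predict that for an object in free fall, kinetic plus potential energy stays fixed at m*g*h0, and a few lines of arithmetic let students check that prediction against the kinematics they already trust. The numbers here are purely illustrative:

```python
# Checking the energy model against kinematics: for an object in free
# fall (no air resistance), KE + PE should stay constant at m*g*h0.
# Mass, height, and times below are illustrative.
g = 9.8      # gravitational field strength (m/s^2)
m = 2.0      # mass of the object (kg)
h0 = 45.0    # initial height above the ground (m)

def energies(t):
    """Kinetic and potential energy of the object t seconds into the fall."""
    v = g * t                   # speed from kinematics
    h = h0 - 0.5 * g * t**2     # height from kinematics
    return 0.5 * m * v**2, m * g * h

for t in (0.0, 1.0, 2.0, 3.0):
    ke, pe = energies(t)
    print(t, ke, pe, ke + pe)   # total stays at m*g*h0 = 882 J
```

The kinetic term grows exactly as fast as the potential term shrinks, which is the kind of agreement between model and observation that the unit is built around.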

I have really enjoyed reading the discussions between teachers about the best ways to transition to a modeling approach, particularly in the face of the knowledge (or misinformation) students might already have. I was especially struck by a comment I read in one of the listserv archives by Clark Vangilder (25 Mar 2004) on this topic of the relationship between mathematical models and physics:

It is our duty to expose the boundaries between meaning, model, concept and representation. The Modeling Method is certainly rich enough to afford this expense, but the road is long, difficult and magnificent. The three basic modeling questions of "what do you see...what can you measure...and what can you change?" do not address "what do you mean?" when you write this equation or that equation...The basic question to ask is "what do you mean by that?," whatever "that" is.

Our job as teachers is to get students to learn to construct mental models for the world around them, help them test their ideas, and help them understand how these models do or do not work. Pushing our students to actively participate in this process is often difficult (both for them and for us), but is inevitably more successful in getting them to create meaning for themselves on the content of what we teach. Whether we are talking about equations, schema, energy flow diagrams, or discussing video of objects interacting with each other, we must always be reinforcing the relationship between any abstractions we use and what they represent. The abstraction we choose should be simple enough to correctly describe what we observe, but not so simple as to lead to misconception. There should be a reason to choose this abstraction or model over a simpler one. This reason should be plainly evident, or thoroughly and rigorously explored until the reason is well understood by our students.