# Same Skills, Virtual Car: Constant Velocity Particle Model

I had everything in line to start the constant velocity model unit: stopwatches, meter sticks, measuring tape. All I had to do was find the set of working battery-operated cars that I had used last year. I found one of them right where I left it. Upon finding another one, I remembered that it hadn't worked last year either, and that I had never gotten a replacement. The two other cars were LEGO robot cars that I had built specifically for this task; all I needed to do was rebuild them, program them to run their motors forward, and I was ready to go.

Then I remembered that my computer had been swapped for a new model over the summer, so my old LEGO programming applications were gone. With the installation software nowhere to be found, I went to the next option: buying new cars.

I made my way to a couple of stores that sold toys, including one that had sold me one of the cars from last year. They only had remote control cars, and I didn't want to add the variable of taping the controllers in the on position so the cars would run forward; a bunch of remote control cars in class is a recipe for distraction. In a last-ditch effort to improve the one working car that I had, I ended up snapping the transmission off of its motor. I needed another option.

John Burk's post about using some programming in this lab and ending it with a virtual race had me thinking about how to address the hole I had dug myself into. I have learned that the challenge of running the Python IDE on a class set of laptops in various states of OS X makes it tricky to have students use Visual Python or even the regular Python environment.

I have come to embrace the browser as the easiest portal for having students view and manipulate the results of a program for the purposes of modeling. Using Javascript, the Raphael drawing framework, Camtasia, and a bit of hurried coding, I was able to put together the following materials:

• Car 1, Part 1 (video)
• Car 2 model (video)
• Constant Velocity model data generator (HTML)

When it came to actually running the class, I asked students to generate a table of time (in seconds) and position data (in meters) for the car from the video. The goal was to be able to figure out when the car would reach the white line. I found the following:

• Students were using a number of different measuring tools to make their measurements. Some used rulers in centimeters or inches, others created their own ruler in units of car lengths. The fact that they were measuring a virtual car rather than a real one made no difference in terms of the modeling process of deciding what to measure, and then measuring it.
• Students asked for the length of the car almost immediately. They realized that the scale was important, possibly as a consequence of some of the work we did with units during the preceding class.
• By the time it came to start generating position data, we had a realization about the difficulty arising from groups lacking a common origin. Students tended to agree on velocity, as expected, but without a shared origin their position values didn't match up. This was especially the case when groups were transitioning to the data from Car 2.
• Some students saw the benefit of a linear regression immediately when they worked with the constant velocity model data generator. They saw that they could use the information from their regression in the initial values for position, time, and velocity. I didn't have to say a thing here - they figured it out without requiring a bland introduction to the algebraic model in the beginning.
• I gave students the freedom to sketch a graph of their work on a whiteboard, on paper, or using Geogebra. Different students preferred different tools, but our conversation about the details afterwards was the same.
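The data-generation side of this activity is only a few lines of code. The sketch below is not the actual generator linked above; it's a minimal stand-in where position follows x(t) = x0 + v·t, with invented values for the starting position, velocity, and sample interval.

```javascript
// Minimal stand-in for a constant velocity data generator: x(t) = x0 + v * t.
// All parameter values here are invented for illustration.
function generateCarData(x0, v, dt, steps) {
  const data = [];
  for (let i = 0; i <= steps; i++) {
    const t = i * dt;
    data.push({ t: t, x: x0 + v * t });
  }
  return data;
}

// A car starting at 0.5 m, moving at 0.25 m/s, sampled every 2 s:
const carData = generateCarData(0.5, 0.25, 2, 5);

// The question posed to students: when does the car reach the white line?
// For a line at x = 3.0 m, the model predicts t = (x - x0) / v.
const tAtLine = (3.0 - 0.5) / 0.25; // 10 s
```

A linear regression on the generated (t, x) pairs recovers exactly the x0 and v fed in, which is why students who ran a regression could read off the initial values directly.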

I wish I had working cars for all of the groups, but that's water under the bridge. I've grown to appreciate the flexibility that computer programming has in providing full control over different aspects of a simulation. It would be really easy to generate and assign each group a different virtual car, have them analyze it, and then discuss among themselves who would win in a race. Then I hit play and we watch it happen. This does get away from some of the messiness inherent in real objects that don't drive straight, or slow down as the batteries die, but I don't think this is the end of the world when we are getting started. Ignoring that messiness forever would be a problem, but providing a simple atmosphere for starting exploration of modeling as a philosophy doesn't seem to be a bad way to introduce the concept.
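The race scenario above is easy to sketch: give each group a virtual car with its own starting position and speed, and the constant velocity model predicts each finish time. The group names, positions, velocities, and track length below are all invented values.

```javascript
// Sketch of the virtual race: under the constant velocity model, a car
// starting at x0 with speed v finishes a track of length L at t = (L - x0) / v.
function raceWinner(cars, trackLength) {
  let winner = null;
  let bestTime = Infinity;
  for (const car of cars) {
    const t = (trackLength - car.x0) / car.v;
    if (t < bestTime) {
      bestTime = t;
      winner = car.name;
    }
  }
  return { winner: winner, time: bestTime };
}

const groups = [
  { name: "Group A", x0: 0.0, v: 0.30 }, // finishes 3.0 m in 10.0 s
  { name: "Group B", x0: 0.5, v: 0.20 }, // 12.5 s
  { name: "Group C", x0: 0.2, v: 0.35 }, // 8.0 s
];
const result = raceWinner(groups, 3.0);
```

The payoff in class would be having groups predict the ranking from their own analysis before hitting play.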

# Class-sourcing data generation through games

In my newly restructured first units for ninth and tenth grade math, we tackle sets, functions, and statistics. In the past, teaching these topics has always involved collecting some sort of data relevant to the class - shoe size, birthday, etc. Even though making students part of the data collection has always been part of my plan, it always seems slower and more forced than I want it to be. I think the big (and often incorrect) assumption is that because the data comes from students, they will find it relevant and enjoyable to collect and analyze.

This summer, I remembered a blog post from Dan Meyer describing a brilliantly simple game shared by Nico Rowinsky on Twitter. I had tried running it manually with pencil, paper, and students since hearing about it. It always required a lot of effort to collect and order the papers with student guesses, but student enthusiasm for the game usually compelled me to run a couple of rounds before getting tired of it. It screamed for a technology solution.

I spent some time this summer learning some of the features of the Meteor Javascript web framework after a recommendation from Dave Major. It has the real-time update capabilities that make it possible to collect numbers from students and reveal a scoreboard to all users simultaneously. You can see my (imperfect) implementation hosted at http://lownumber.meteor.com, and the code at Github here. Dave was, as always, a patient mentor during the coding process, eagerly sharing his knowledge and code prototypes to help me along.

If you want to start your own game with friends, go to lownumber.meteor.com/config/ and select 'Start a new game', then ask people to play. Wherever they are in the world, they will all see the results show up almost instantly when you hit the 'Show Results' button on that page. I hosted this locally on my laptop during class so that I could build a database of responses for analysis later by students.

The game was, as expected, a huge hit. The big payoff was that we could play five or six games in my class of twenty-two grade nine students in a matter of minutes and build some perplexity around the question of how one can increase his or her chances of winning. What information would you need to know about the people playing? What tools do we have to look at this data? Here comes statistics, kids.
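For reference, the scoring rule as I understand it from Dan's description: the winner is whoever submits the lowest number that nobody else submitted. That logic is only a few lines; the names and guesses below are invented.

```javascript
// Winner = the entry with the lowest guess submitted by exactly one player.
function lowestUniqueWinner(entries) {
  const counts = {};
  for (const e of entries) {
    counts[e.guess] = (counts[e.guess] || 0) + 1;
  }
  const unique = entries.filter(e => counts[e.guess] === 1);
  if (unique.length === 0) return null; // every guess was duplicated
  unique.sort((a, b) => a.guess - b.guess);
  return unique[0];
}

const round = [
  { name: "Ada", guess: 1 },
  { name: "Ben", guess: 1 }, // 1 is duplicated, so it can't win
  { name: "Cal", guess: 3 },
  { name: "Dee", guess: 2 },
];
const win = lowestUniqueWinner(round); // Dee wins with 2
```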

It also quickly led to a discussion with the class about the use of computers to manage larger sets of data. Only in a school classroom would one calculate measures of central tendency by hand for a set of data that looks like this:

This set also had students immediately recognizing that 5000 was an outlier. We had a fascinating discussion when some students said that out of the set {2,2,3,4,8}, 8 should be considered an outlier. It led us to demand a better definition for outlier than 'I know it when I see it'. This will come soon enough.
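One common candidate for that better definition is the 1.5 × IQR fence, though quartile conventions vary and can change the verdict on small sets. A sketch, using one reasonable quartile convention:

```javascript
// 1.5 * IQR rule: values below Q1 - 1.5*IQR or above Q3 + 1.5*IQR are outliers.
// Note: there are several quartile conventions; this one indexes into the
// sorted data and takes a midpoint, which is just one defensible choice.
function quartile(sorted, p) {
  const idx = p * (sorted.length - 1);
  const lo = Math.floor(idx);
  const hi = Math.ceil(idx);
  return (sorted[lo] + sorted[hi]) / 2;
}

function outliers(values) {
  const s = [...values].sort((a, b) => a - b);
  const q1 = quartile(s, 0.25);
  const q3 = quartile(s, 0.75);
  const fence = 1.5 * (q3 - q1);
  return s.filter(v => v < q1 - fence || v > q3 + fence);
}
```

Under this convention the set {2, 2, 3, 4, 8} gives Q1 = 2, Q3 = 4, and an upper fence at 7, so 8 does count as an outlier; with the convention that excludes the median from each half (Q3 = 6, upper fence 12), it doesn't. That the answer depends on the definition is itself a nice point for the class discussion.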

The game was also a fun way to introduce sets with the tenth graders by looking at the characteristics of a single set of responses. It was less directly related to the goal of the unit, but a compelling way to get students interacting with each other through numbers. Students who hadn't tended to speak out in the first days of class were on the receiving end of class-wide cheers when they won - an easy channel for low-pressure positive attention.

As you might also expect, students quickly figured out how to game the game. Some gave themselves entertaining names. Others figured out that they could enter multiple times, so they did, though still putting in their name each time. Some entered decimals which the program rounded to integers. All of these can be handled by code, but I'm happy with how things worked out as is.

If you want instructions on running this locally for your classroom, let me know. It won't be too hard to set up.

# Day 1 in Physics - Models vs. Explanations

One of my goals has always been to differentiate my job from that of a paid explainer. Good teaching is not exclusively explaining, though explaining can be part of the process. Many people seek out a great video or activity that thoroughly explains a concept that puzzles them, but the process of learning should be an interactive one: an explanation should lead into another question, or into an activity that applies the concept.

For the past two years, I've done a demo activity to open my physics class that emphasizes the subtle difference between a mental model for a phenomenon and having just a good explanation for it. A mental model makes predictions and is therefore testable. An explanation is the end of a story.

The demo equipment involves a cylindrical neodymium magnet and an aluminum tube of diameter slightly larger than the magnet. It is the standard eddy current/Lenz's law/electromagnetic induction demo showing what happens when a magnet is dropped into a tube that is of a non-magnetic material. What I think I've been successful at doing is converting the demo into an experience that opens the course with the creation of a mental model and simultaneous testing of that model.

I walk into the back of the classroom with the tube and the magnet (though I don't tell them that it is one) and climb on top of a table. I stand with the tube above the desk and drop the magnet concentrically into the tube.

Students watch what happens. I ask for them to share their observations. A paraphrased sample:

• The thing fell through the tube more slowly than it should have
• It's magnetic and is slowing down because it sticks to the side
• There's so much air in the tube that it slows down the falling object.

I could explain that one of them is correct. I don't. I first ask them to turn their observation into an assertion that should then be testable by some experiment. 'The object is a magnet' becomes 'if the object is a magnet, then it should stick to something made out of steel.' This is then an experiment we can do, and quickly.

When the magnet sticks strongly to the desk, or to paper clips, or something else happens that establishes that the object is magnetic, we can further develop our mental model for what is happening. Since the magnet sticks to steel, and the magnet seems to slow down when it falls, the tube must be made of some magnetic metal. How do we test this? See if the magnet sticks to the tube. The fact that it doesn't stick as it did to the steel means that our model is incomplete.

Students then typically abandon the magnet line of reasoning and go for air resistance. If they went for this first (as has happened before) I just reverse the order of these experiments with the above magnetic discussion. If the object is falling slowly, it must be because the air is slowing it down. How do we test this? From the students: drop another object that is the same size as the first and see if it falls at the same speed. I have a few different objects that I've used for this - usually an aluminum plug or part from the robotics kit works - but the students also insist on taping up the holes that these objects have so that it is as close to the original object as possible. It doesn't fall at the same speed though. When students ask to add mass to the object, I oblige with whatever materials I have on hand. No change.

The mental model is still incomplete.

We've tried changing the object - what about the tube? Assertion from the students: if the material for the tube matters, then the object should fall at a different speed with a plastic tube. We try the experiment with a PVC pipe and see that the magnet speeds along quite unlike it did in the aluminum tube. This confirms our assertion - this is moving us somewhere, though it isn't clear quite where yet.

Students also suggest that friction is involved - this can still be pushed along with the assertion-experiment process. What would you expect to observe if friction is a factor? Students will say they should hear it scraping along or see it in contact with the edges of the tube. I invited a student to stare down the end of the tube as I dropped the magnet. He was noticeably excited by seeing it hover lightly down the entire length of the tube, only touching its edges periodically.

Students this year asked to change the metal itself, but I unfortunately didn't have a copper tube on hand. That would have been awesome if I had. They asked if it would be different if the tube was a different shape. Instead of telling them, I asked them what observation they would expect to make if the tube shape mattered. After they made their assertion, I dropped the magnet into a square tube, and the result was very similar to that with the circular tube.

All of these experiments make clear that the facts that (a) the object is a magnet and (b) the tube is made of metal are somehow related. I did at this point say that this was a result of a phenomenon called electromagnetic induction. For the first time during the class, I saw eyes glaze over. I wish I hadn't gone there. I should have just said that we will eventually develop some more insight into why this might happen, but for now, let's be happy that we've developed some understanding of what factors are involved.
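As an aside for anyone curious about the physics (not something I would put in front of students on day one): the eddy-current braking force is, to a decent approximation, proportional to the magnet's speed, so the magnet quickly settles at a terminal velocity where braking balances gravity. A toy simulation under that linear-drag assumption, with invented parameter values:

```javascript
// Toy model: magnet falling in a conducting tube with drag force F = -b*v.
// The magnet approaches terminal velocity v_t = m*g/b. All values invented.
function simulateFall(m, b, dt, steps) {
  const g = 9.8;
  let v = 0;
  for (let i = 0; i < steps; i++) {
    const a = g - (b / m) * v; // net acceleration per unit mass
    v += a * dt;               // simple Euler step
  }
  return v;
}

const m = 0.01;                   // 10 g magnet
const b = 0.5;                    // made-up drag coefficient (kg/s)
const vTerminal = (m * 9.8) / b;  // predicted terminal velocity, ~0.196 m/s
const vSim = simulateFall(m, b, 0.001, 5000); // 5 s of simulated fall
```

The simulated speed converges to the predicted terminal velocity, consistent with the magnet drifting down the tube at a slow, steady pace.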

All of these opportunities to get students making assertions and then testing them are the scientific method as we normally teach it. The process is a lot less formal than having them write a hypothesis, procedure, and conclusion in a lab report - appropriate given that it was the first day of the class - and it makes clear the concept of science as an iterative process. It isn't a straight line from a question to an answer; it is a cyclical process that very often gets hidden when we emphasize the formality of the scientific method in the form of a written lab report. Yes, scientists do publish their findings, but this isn't necessarily what gets them up in the morning.

Some other thoughts:

• This process emphasizes the value of an experiment either refuting or supporting our hypothesis. There is a consequence to a mental model when an experiment shows what we expected it to show. It's equally instructive when it doesn't. I asked the students how many times we were wrong in our exploration of the demo. They counted more than five or six. How often do we provide opportunities for students to see how failure is helpful? We say it. Do we show how?
• I finally get why some science museums drive me nuts. At their worst, they are nothing more than clusters of express buses from observation/experiment to explanation. Press the button/lift the flap/open the window/ask the explainer, get the answer. If there's not another step to the exhibit that involves an application of what was learned, an exhibit runs the risk of continuing to perpetuate science as a box of answers you don't know. I'm not saying there isn't value in tossing a bunch of interesting experiences at visitors and knowing that only some stuff will stick. I just think there should be a low floor AND a high ceiling for the activities at a good museum.
• Mental models must be predictive within the realm in which they are used. If you give students a model for intangible phenomena - the lock and key model for enzymes in biology, for example - that model should be robust enough for students to make assertions and predictions based on their conception of it, and to test them. The lock and key model works well to explain why enzymes lose effectiveness at high temperature because the changing shape of the active site (real world) matches our conception of a key being the wrong shape (model). Whenever possible, we should expose students to places where the model breaks down, if for no other reason than to show that it can. By definition, a model is an incomplete representation of the universe.

# Standards Based Grading & Unit Tests

I am gearing up for another year, and am sitting in my new classroom deciding the little details that need to be figured out now that it is the "later" that I knew would come eventually. Last year was the first time I used SBG to assess my students. One year in, I understand things much better than when I first introduced the concept to my students. By the end of the year, they were pretty enthusiastic about the system and appreciated that I had made the change.

I wonder now about the role of unit tests. Students did not get an individual grade for a test at the end of a unit - instead just a series of adjustments to their proficiency levels for the different standards of the related unit, and other units if there were questions that assessed them. While there were times for students to reassess during class and before and after school, a full period devoted to this purpose helped in a few unique ways that I really appreciate:

• All students reassessing at the same time means no issues with scheduling time for retakes.
• Students that have already demonstrated their ability to work independently to apply content standards are given an opportunity to do so in the context of all of the standards of the unit. They need to decide which standards apply in a given situation, which is a higher rung of cognitive demand. This is why students that perform well on a unit exam usually move up to a 4 or 5 for the related standards.
• Students that miss a full period assessment due to illness, school trips, etc. know that they must find another time to assess on the standards in order to raise their mastery level. It changes the conversation from 'you missed the test, so here's a zero' to 'you missed an opportunity to raise your mastery level, so your mastery levels are staying right where they are while we move on to new topics.'

I also like the unintended connection to the software term unit testing in which the different components of a piece of software are checked to see that they function independently and in concert with each other. This is what we are interested in seeing through reassessment, no?

My question to the blogosphere is to fill in the holes of my understanding here. What are the other reasons to have unit exams? Or should I get rid of them altogether and just have more scheduled extended times to reassess consistently, regardless of progress throughout the content of the semester?