In piecing together States-n-Plates, I wanted to learn more about React, a web application library created by Facebook. In the process, I noticed parallels with how I go about learning anything.
Before I describe the details, I'll give a (hopefully brief) description of what React does.
An HTML page normally consists of HTML tags that tell the browser what to display, along with rules that describe how that HTML should look. React reframes that page as a series of components that each serve a different function. In States-n-Plates, the image components need the ability to be dragged onto the targets. The targets need to be able to accept a dragged image, and to indicate whether the image dragged onto them corresponds with the correct plate. The scoreboard needs to know how many states have been correctly matched at any given time. Components also have the ability to respond when a user clicks, types, or drags other components into them.
In a well designed React application, each component uses information from the component that contains it in order to behave (or, um, react) as the application is used. The pathways for how this information flows from one component to another are deliberately designed so that each component can act independently of the others.
When I first started creating States-n-Plates, I began with a fully formed webpage that looked much like the final product above. I followed the React documentation, which suggested breaking the page down into components, one by one. I did this without really understanding in detail what I was doing, but I was able to get the components to reproduce the appearance of the original web page, which felt like real progress. Eventually, my progress halted when I reached the limits of what I understood. I needed help.
It was at this point that I picked up a book on React and started working through the basics. I began to better understand the guiding philosophies of React - the design decisions, the behaviors one component has in response to another, and how to think through an application the React way. This was where it was helpful to read the perspectives of people much more experienced than me - I understood the vocabulary they used and could make the connections I needed to make progress.
With some of the basics figured out, I rethought the application from scratch. Rather than starting with the webpage as a whole, I started creating components and making sure each one worked as expected before moving on.
By the end, I felt comfortable thinking about my application both from a bird's eye view and on an individual component level. I needed the experience of breaking the idea down into individual pieces and seeing how they interacted with each other to produce the whole. I needed to take time seeing what rules guided the function of each component in order to understand the whole. If I had started by reading the documentation as step one, I would not have had the context that the big picture view of the application provided when I eventually did read it. Both views were important, and neither view was sufficient on its own to lead to full understanding or transfer.
We need to give our students opportunities to have both views of the content we teach. Insisting that student mastery of the basics is a necessary gatekeeper to higher levels of thought misses opportunities to understand the context of that basic knowledge. Student exploration of concepts through Desmos or Geogebra or problem solving is a great way to engage with the standards of mathematical practice, but without discussion, review of underlying concepts, or (gasp!) direct instruction where needed, opportunities for growth might be limited.
Let's make sure, as a team, that we are attacking this problem from both ends.
For the past two years, I've required my IB classes to draw some standard functions from memory as part of our function families unit. Creating quizzes for this has been a hassle, since I've had to build them manually in Word or LibreOffice. I greatly dislike formatting things in either software package.
I decided this week that creating these quizzes using HTML seemed like a perfect application of my developing React skills. Here's the result:
The order of the functions is randomized on each page load, which makes it easy to generate new versions. I've been able to export these as PDF files and send them straight to the printer.
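The shuffle itself is the simplest part of the page. Here's a minimal sketch of the same idea in Python (the actual quiz is built with React, and this list of functions is just a placeholder, not my real quiz content):

```python
import random

# Placeholder list of standard functions students must sketch from memory.
functions = ["y = x^2", "y = x^3", "y = 1/x", "y = e^x", "y = ln(x)", "y = |x|"]

def new_quiz_order(funcs):
    """Return a freshly shuffled copy of the list, as if the page reloaded."""
    order = list(funcs)       # copy so the original order is untouched
    random.shuffle(order)
    return order
```

Every call produces a new version of the quiz with the same functions in a different order.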
For two years in a row, I've hit a sweet spot of engagement, discussion, and really invigorating student interaction with one particular exercise in my web design course. I sit with a web browser console open, and just ask students to go through this cycle:
Make a prediction of what's going to appear when I hit enter.
See what actually appears.
Adjust your model and repeat.
Here was today's series:
I say almost nothing aside from "here's another one". The amount of laughter, head slapping, and students talking through their attempts to understand is a beautiful thing to witness. The fact that no student blurts out the answer speaks to the respect my students have for each other and for this model.
This is a simple type of activity that I do from time to time, and only from time to time, because I don't want it to lose its novelty. There's no engagement from a real world context. There's no lecture beforehand about what I'm about to do, and how I want them to respond. (Ok, I do ask that they not blurt out the answer or how it works once they know, but that's about it.)
I hope to establish an unspoken agreement with my students that goes something like this:
There is a pattern, and I am confident that you'll be able to figure it out.
If you can't get it right away, that's fine. You probably aren't the only one.
If you are the only one, then you have a lot of people around to nudge you in the right direction.
If you're wrong, you'll get another chance to be right in just a minute.
Once you know how it works, you might not care anymore. Enjoy the journey.
Getting this agreement across takes time and trust and is really difficult to force. It's remarkably satisfying when it happens. The important part is the consistent commitment to failure: Everyone will fail at least once. Everyone will also likely be wrong at least once after they are right.
I created an interactive lesson called Thinking Machine for use with a talk I gave to the IB theory of knowledge class, which is currently on a unit studying mathematics.
The lesson made good use of the Meteor Blaze library as well as the Desmos Graphing Calculator API. Big thanks to Eli and Jason from Desmos for helping me with putting it together.
I was asked by a colleague if I was interested in speaking to the IB theory of knowledge class during the mathematics unit. I barely let him finish his request before I started talking about what I was interested in sharing with them.
If you read this blog, you know that I'm fascinated by the intersection of computers and mathematical thinking. If you don't, now you do. More specifically, I spend a great deal of time contemplating the connections between mathematics and programming. I believe that computers can serve as a stepping stone between students' understanding of arithmetic and the abstract idea of a variable.
The fact that computers do precisely what their programmers make them do is a good thing. We can forget this easily, however, in a world where computers do fairly sophisticated things behind the scenes. The fact that Siri can understand what we say, and then do what we ask, is impressive. The extent to which the computer knows what it is doing is up for debate. It's pretty hard to argue, though, that computers aren't going through reasoning processes similar to those humans use in going about their day.
Here's what I did with the class:
I began by talking about myself as a mathematical thinker. Contrary to what many of them might think, I don't spend my time going around the world looking for equations to solve. I don't seek out calculations for fun. In fact, I actively dislike making calculations. What I really enjoy is finding interesting problems to solve. I get a great deal of satisfaction and a greater understanding of the world through doing so.
What does this process involve? I make observations of the world. I look for situations, ideas, and images that interest me. I ask questions about what I see, and then use my understanding of the world, including knowledge in the realm of mathematics, to construct possible answers. As a mathematical and scientific thinker, this process of gathering evidence, making predictions using a model, testing them, and then adjusting those models is in my blood.
I then set the students loose to do an activity I created called Thinking Machine. I styled it after the amazing lessons that the Desmos team puts together, and used their tools to create it. More on that later. Check it out, and come back when you're done.
The activity begins with a step that asks students to make a prediction of a mathematical rule created by the computer. The rule is never complicated - always a linear function. When the student enters the correct rule, the computer says to move on.
The next step is to turn the tables on the student - the computer will guess a rule (limited to linear, quadratic, cubic, or exponential functions) based on three sets of inputs and outputs that the student provides. Beyond those three inputs, the student should only answer 'yes' or 'no' to the guesses that the computer provides.
The computer learns by adjusting its model based on the responses. Once its certainty rises above a set level, the computer gives its guess of the rule, and shows the process it went through of using the student's feedback to make its decision. When I did this with the class, more than half of the class had their rules correctly determined. I've since tweaked the algorithm to make it more reliable.
After this, we had a discussion about whether or not the computer was thinking. We talked about what it means for a computer to have knowledge of a problem at hand. Where did that knowledge come from? How does it know what is true, and what is not? How does this relate to learning mathematics? What elements of thinking are distinctly human? Creativity came up a couple times as being one of these elements.
This was a perfect segue to this video about the IBM computer Watson learning to be a chef:
Few were able to explain this away as uncreative, but neither were they willing to claim that Watson was thinking here.
Another example was this video from the Google DeepMind lab:
I finished by leading a conversation about data collection and what it signifies. We talked about some basic concepts of machine learning, learning sets, and some basic ideas about how this compared to humans learning and thinking. One of my closing points was that one's experience is a data set that the brain uses to make decisions. If computers are able to use data in a similar way, it's hard to argue that they aren't thinking in some way.
Students had some great comments and questions along the way. One asked if I thought we were approaching the singularity. It was a lot of fun to get the students thinking this way, especially in a different context than in my IB Math and Physics classes. Building this also has me thinking about other projects for the future. There is no need to invent a graphing library on your own, especially for use in an activity with students - Desmos definitely has it all covered.
I built Thinking Machine using Bootstrap, the Meteor Blaze template engine, jQuery, and the Desmos API. I'm especially thankful to Eli Luberoff and Jason Merrill from Desmos, who helped me with using the features. I used the API to do two things:
Parse the user's rule and check it against the computer's rule using some test values
Graph the user's input and output data, perform regressions, and give the regression parameters
The whole process of using Desmos here was pretty smooth, and is just one more reason why they rock.
The learning algorithm is fairly simple. As described (though much more briefly) in the activity, the algorithm first assumes that the four regressions of the data are equally likely, storing the weights in an array called isThisRight. When the user clicks 'yes' for a given input and output, the weighting factor in the associated element of the array is doubled, and then the array is normalized so that the probabilities add to 1.
The selected input/output is replaced by a prediction from a model that is selected according to the weights of the four models - higher weights mean a model is more likely to be selected. For example, if the quadratic model is higher than the other three, a prediction from the quadratic model is more likely to be added to the list of four. This is why the guesses for a given model appear more frequently when it has been given a 'yes' response.
Initially I felt that asking the user for three inputs was a bit cheap. It only takes two points to define a line or an exponential regression, and three for a quadratic regression. I could have written a big switch statement to check whether the data was linear or exponential, then quadratic, and then conclude that it had to be cubic. Instead, I wanted to give a learning algorithm a try and see if it could figure out the regression without my programming in that logic directly. In the end, the algorithm works reasonably well, including in cases where you make a mistake or give two repeated inputs. With only two distinct points, the program is eventually able to figure out the exponential and quadratic rules, though cubic rules give it trouble. In the end, the prediction of the rule is probability based, which is what I was looking for.
The progress bar is obviously fake, but I wanted something in there to make it look like the computer was thinking. I can't find the article now, but I recall reading somewhere that if a computer is able to respond too quickly to a person's query, there's a perception that the results aren't legitimate. Someone help me with this citation, please.
I presented to some FIRST LEGO League teachers on the programming software for the LEGO Mindstorms EV3 last week. My goal was to present the basics of programming in the system so that these teachers could coach their students through the process of building a program.
The majority of programs that students create are the end product of a lot of iteration. Students generally go through this process to build a program to do a given task:
Make an estimate (or measurement) of how far the motors must rotate in order to move the robot to a given location.
Program the motors to run for this distance.
Run the program to see how close the robot gets to the desired location.
Adjust the number in Step 1. Repeat until the robot ends up in the right location.
Once the program gets the robot to the right location, this process is repeated for the next task that the robot must perform. I've also occasionally suggested a mathematical approach to calculating these distances, but the reality is that students would rather try again and again until the robot program works. It's a great way to introduce students to the idea of programming as a sequence of instructions, and to the idea that getting a program right on the first try is a rarity. It's how I've instructed students for years - a low bar for entry, given that this requires only a simple program, and a high ceiling, since the rest of programming instruction is an extension of this concept.
I now believe, however, that another common complaint coaches (including me) have had about student programs is a direct consequence of this approach. Most programs (excluding those of students with a lot of experience) require the robot to be aimed correctly at the beginning of the program. As a result, students spend substantial time aiming their robot, believing that this effort will result in a successful run. While repeatability is something we emphasize with students (I have a five-in-a-row success rule before calling a mission program completed), it's the method that is more at fault here.
The usual approach in this situation is to suggest that students use sensors in the program to help with repeatability. The reason they don't do so isn't that they don't know how to use sensors. It is that the aim and shoot method is, or seems, good enough. It is so much easier in the student's mind to continue the simpler approach than invest in a new method. It's like when I've asked my math students to add the numbers from 1 to 30, for example. Despite the fact that they have learned how to quickly calculate arithmetic series before, many of them pick up their calculators and enter the numbers into a sum, one at a time, and then hit enter. The human tendency is to stick to those patterns and ideas that are familiar until there is truly a need to expand beyond them. We stick with what works for us.
One of my main points to the teachers in my presentation was that I'm making a subtle change to how I coach my students through this process. I'm calling it 'sensors first'.
The tasks I give my students in the beginning to learn programming are going to require sensors in order to complete. Instead of telling students to program their robot to drive a given distance and stop, I'll ask them to drive their robot forward until a sensor on their robot sees a red line. I'll also require that I start the robot anywhere I want in the test of their program.
It's a subtle difference, and requires no difference in the programming. In the EV3 software, here's what it looks like in both cases, using wheels to control the distance, and a sensor:
What am I hoping will be different?
Students will approach the challenges I give them with the design requirement built in: aim-and-shoot isn't an option that will result in success. If they start off thinking that way, they might habitually consider how a sensor could be used to make the initial position of the robot irrelevant. FLL games always have a number of printed features on the mat that can be used to help with this sort of task.
When I do give tasks where the students can start the robot wherever they choose, students will (hopefully) first consider whether or not the starting position should matter. In cases where it doesn't, they might decide to still use a sensor to guide them (hopefully for a reason), or drop down to a distance-based approach when it makes sense to do so. This means students will be routinely thinking about which tool will best do the job, rather than trying to use one tool to do everything.
This philosophy might even prompt a more general need for ways to reduce the uncertainty and compound error effect associated with an aim and shoot approach. Using the side of the table as a way to guide straight line driving is a common and simple approach.
These sorts of problem solving approaches are exactly how a successful engineering design cycle works. Solutions should be found that maximize the effectiveness of a design while minimizing costs. I'm hoping this small change to the way I teach my students this year gets them spending more time using the tools built into the robot well, rather than trying to make a robot with high variability (caster wheels, anyone?) do the same thing two times in a row.
I previously wrote about something I tried at the beginning of last year with my students that probed this question a bit. My contention then was that writing expressions is something that occurs with students only in math class world, and that it is an inherently non-interactive process. The spirit of what variables do is something with which students have familiarity. It's the abstraction of the mathematical representation that pushes that familiarity away from them.
I'm going to use a different expression problem since the one in Dan's post doesn't do it for me.
Dan estimates that around 3/4 of any group of people drink soda.
I'd start with this activity that students would be able to answer:
Students could each click on the people and go through the process of figuring out how many in each group drink soda according to Dan's estimate, recording the number for each group. The third group serves to construct a bit of controversy for discussion purposes. In doing this four times, students are presumably going through a similar process each time.
Mathematics serves to create structure for this repetition, but on its own, is not necessarily in the realm of what our students would do to manage this repetition. Programming provides a way to bridge this gap using the same idea of variables that exists in the mathematical realm, and here is where the value sits for this discussion.
In the post I mentioned previously, I said that I briefly showed students how to type expressions into a spreadsheet and play around with inputs and outputs so that they match concrete values. In a non 1:1 laptop classroom, I might start with this:
A calculation links the outputs to the inputs in each of these tables. Students have concrete values sitting in front of them, so they will notice that each of these tables must be making the wrong calculations, even though they each have one correct value. Here, we have the computer making the same calculation each time, but these calculations do not work in each case. This is the wrong model to match our data. The computer is doing exactly what we are telling it to do, but the model is wrong.
How do we fix this, class? Obviously we use a different computational model. I might have students decide in a group what calculation I need to do to correctly reproduce the values from the exercise, and elicit those suggestions from them.
Once we establish this correct model, this calculation we are making is common to every set of data. We can show that this calculation makes an interesting prediction of 7.5 people liking soda in the group of 10. We can use this calculation to predict how many people in a group of 28 drink soda (and in a 1:1 classroom, I'd have them go through this entire programming process themselves.)
I might now generate a table hundreds of entries long and ask whether there is a better way to represent the set of all possible answers to this question. The table will work, but it is tedious. We need a better way. How do we do this? Here is where variables come in.
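In programming terms, the jump from a table of individual answers to a variable might look like the following Python sketch (the function name is my own placeholder; in class this could just as easily live in a spreadsheet cell):

```python
# The rule: Dan estimates that 3/4 of any group of people drink soda.
def soda_drinkers(group_size):
    """The one calculation that is common to every row of the table."""
    return 3 * group_size / 4

# The same rule covers every group size at once - including the
# controversial prediction of 7.5 people in a group of 10.
for n in [8, 10, 16, 28]:
    print(n, soda_drinkers(n))
```

The variable group_size plays exactly the role a mathematical variable plays: one symbol standing in for every possible input.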
Programmers use variables because they want to build a program that produces a correct output for every possible input that might be used to solve a given problem or design. Mathematicians also want to have the same level of universality, and have a syntax and structure that allows for efficient communication of that universality. Computers are really good at calculating. The human brain is really good at managing the abstraction of designing those calculations. This, ultimately, is what we want students to be able to do, but they often get lost in both the design stage and the calculation stage, especially because these get divorced from the actual problem students are trying to solve.
If we can have students spend more time in the design stage and get feedback on whether their calculations are correct, that's the sweet spot for making the jump to using mathematical variables.
I really don't like reviewing for exams. I'm far from the only one who thinks this.
If I create the review sheet, I'm the one going through all of the content of the unit and identifying what might be important. It would be much more valuable to have students do this. I've also been filling the school server with notes and handouts of what we do each day, so they could be the ones deciding which problems are representative of the unit.
Suppose I do make a new set of review problems available to students. If students work through this set during class, I spend my time circulating, answering questions, and giving feedback, which is the best use of my time with students. Better yet, students answer each other's questions and give each other feedback. The downside is that they lose the opportunity to see the scope of the entire semester themselves: outside of the set of problems I prepare for them, they don't actually take the time to survey that scope on their own. They only see my curated sample and interpret it according to their own understanding of the relationship between the review problems I select and the problems I select for an exam.
I've had students create review sheets themselves, but this always has its own set of issues. Is it on paper or online? If on paper, how does the sheet get shared efficiently with other students? The benefit of an online resource is the ease of sharing. The difficulty comes from (1) the difficulty of communicating mathematics on a computer and (2) compiling that resource in one place. It's a lot of work to scan student work and paste it into a document. Unless I am meticulous in making sure that all students use the same program (no small task for a class of twenty-four students, each with their own laptop), the work falls (again) to me. I'll do it if I really believe it is worth the effort for students, but I'm always looking to be efficient with that effort. I also don't want to put the job of compiling it all on the shoulders of a student. And before someone tells me to use Google Docs and its amazing collaborative tools, I'll bring up the governmental disruption of Google services and leave it to you to figure out why that isn't an option for me and my students.
In the end, I have to decide which is the most valuable for students relative to a review. Is it getting feedback on what a student does and does not understand? Is it going back over the entire semester's material and figuring out what is important relative to a cumulative final?
If I have to pick a theme of my online experiments this year, it has been the search for effective ways to leverage social pressure and student use of technology to improve the quality of the time we spend in the classroom together. In the past, I have been the one collecting student work and putting it in one place when I've tried doing things differently for exam review. That organization is precisely something computers do well if we design a scheme for them to use.
Here's what I have had students do this year:
Each student has a blog where they post their own review sheet for one standard. They submit the URL of their post and their standard number through the same site through which they sign up for SBG reassessments. They see a list of the pages submitted by other students:
This serves as a central portal through which students can access each other's pages. Each student controls their own page and URL information, which saves me the effort to collect it all.
Why am I really excited about this list?
I curate the list. I decide whether a page has met the requirements of the assignment, and students can see those pages marked with a checkmark and 'WB' for my initials. If a student needs to improve something, I can tell them specifically what isn't meeting the requirements and help them fix it. Nobody has to wait for everyone else to finish for the review process to begin. I don't generally decide what goes into each page, though I do help students decide what should be there. Beyond that, I don't have to do any compilation myself.
Students (ideally) vote on a page if they think it meets the requirements. Students can each vote once for each page, and see a checkmark once they have voted. This gets them thinking about the quality of what they see in the work of other students. I have been largely impressed with what students have put together for this project, and students are being fairly generous with this. I'm ok with that at this point because of the next point:
Students have an incentive to actually visit each other's pages. I have no idea how many students actually used the review sheets we've produced together in the past; I doubt it is very many. There's some game theory involved here: if a student sees that others are visiting his or her own pages, that student might feel more compelled to visit the pages of other students. Everyone benefits from seeing what everyone else is doing, and if some review happens as a result, that's a major bonus. They love seeing the numbers adjust in real time as votes come in. Each vote must include a code that is embedded in the post being voted for, just so someone isn't voting for them all without visiting the pages.
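The vote check itself is simple to sketch. Here's a hedged Python version of the idea - every name and data structure here is hypothetical, since the actual site's code isn't shown:

```python
# Hypothetical sketch: a vote only counts when the submitted code matches
# the code embedded in the page being voted for, and each student can
# vote at most once per page.
page_codes = {"post-12": "QK7", "post-13": "ZP2"}   # code shown on each page
votes = {}  # page id -> set of student ids who have voted for it

def cast_vote(student, page, code):
    if page_codes.get(page) != code:
        return False            # wrong code: voter hasn't visited the page
    voters = votes.setdefault(page, set())
    if student in voters:
        return False            # no double voting
    voters.add(student)
    return True
```

Requiring the embedded code is what turns a vote into evidence that the student actually opened the page.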
Students were actually using the pages to review today. They were answering each other's questions and getting feedback, sometimes from the authors themselves.
I get to have valuable conversations about citing resources online.
I really loved how engaged students were today in either developing their pages or working on each other's review problems. It was one of the most productive review days I've had, particularly in light of the fact that I didn't have to write a single problem of my own. I did have to write the code, of course, but that was a lot more interesting to me today than thinking of interesting assessment items that I'd rather just put on an exam.
Frequent readers likely know about my obsession with playing around the borders of computational thinking and mathematical reasoning. This question from James has some richness that I think brings out the strengths of considering both approaches quite nicely. For one of the few times I can remember since starting my teaching career, I went to a computational solution before working through the problem analytically.
A computational approach is pretty simple. In Python:
sum = 0
for i in range(1, 11):
    for j in range(1, 11):
        sum += i*j

And the same loop in JavaScript:

var sum = 0;
for (var i = 1; i <= 10; i++) {
    for (var j = 1; j <= 10; j++) {
        sum += i*j;
    }
}
The basic idea is the same in both languages. We iterate over every pair of numbers from the first row and column of the multiplication table, multiply them, and add up the products. From a first look, one could call this a brute force way to a solution, and therefore not elegant from a mathematical standpoint.
Taking this approach does, however, reveal some of the underlying mathematical structure that is needed to resolve this using other techniques. The sequence below is exactly how I analyzed the problem once I had written the program to solve it:
For a single row of the table, we are adding together the elements of that row. Instead of adding the individual elements together one by one, we could find the sum of a single row, and then add together all of the rows. For example, the first row sums to 1 + 2 + 3 + ... + 10 = 55. This is a simple arithmetic series.
Each row is the same as the first row, aside from each element being multiplied by the first element of that row. Every row's sum is therefore the first row's sum multiplied by the corresponding number in the first column of the table: 1(55) + 2(55) + 3(55) + ... + 10(55).
Taking this one step further, this is equivalent to the sum of that first row multiplied by the sum of the first column: (1 + 2 + ... + 10)(55). In other words, the answer to our problem is really the square of the sum of that first row (or column), or 55*55.
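The identity is easy to sanity-check by putting the brute force sum next to the algebraic shortcut:

```python
# Brute force: add every entry of the 10x10 multiplication table.
total = sum(i * j for i in range(1, 11) for j in range(1, 11))

# Shortcut: the total equals the square of the sum of the first row.
row_sum = sum(range(1, 11))   # 1 + 2 + ... + 10 = 55

print(total, row_sum ** 2)    # both are 3025
```

The two calculations agree, which is exactly the structure the program revealed.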
I bring up this problem because I think it suggests a useful connection between a practical method of solving a problem, and what we often expect in the world of classroom mathematics. This is clearly a great application of concepts behind a traditional presentation of arithmetic series, and a teacher might give this as part of such a unit to see if students are able to see the structure of the arithmetic series formulas within it.
My question is what a teacher does if he or she presents this problem and the students don't make that connection. Is the next step a whole class discussion about how to proceed? Is it a leading question asking how arithmetic series applies here? This, by the way, zaps the whole point of the activity if the goal was to see if students see that underlying structure based on what they already know. Once this happens, it becomes yet another 'example' presented to the class.
I wonder what happens if a computer/spreadsheet solution is consistently recognized throughout the class as a viable tool to investigate problems like this. A computer solution is really nothing more than an abstraction of the process of adding the numbers together one by one. If a student did actually do this by hand, we'd groan and ask if they thought there was a better way, and the response inevitably is 'yes, but I don't know a better way'. In the way I found myself thinking about this problem last night, I started from the computational method, discovered the structure from those computations, and then found a path toward a more elegant solution using algebraic techniques.
In other words, I made use of the structure of my program to identify an analytical approach. Contrast this with a more traditional approach where we start with an abstract definition of an arithmetic series (by hand), do practice problems (by hand) and once we understand how it works, use computational shortcuts.
Computation makes the process of finding a more elegant way seem much more natural - in the best situations, it builds intellectual need for an easier way. It is arbitrary to insist that a student be able to do a problem without a calculator. Computational tools demand a more compelling reason to solve problems by hand when computers can solve them rapidly once they have been programmed to do so. Showing that an easier way speeds up finding a solution by a factor of ten is a realistic motivation - it means less waiting for a web page to load or an image to post.
The language of mathematics is difficult enough without throwing in the additional complications of computer language syntax. I fully acknowledge that this is a hurdle. I also think, however, that this syntax is more closely related to the concepts we are trying to teach our students (3*x is three times x) than we sometimes think. The power of computer programming to serve as a bridge between the hand calculations our students do and the abstractions of the mathematical content we teach is too great to ignore.
I'm starting a new unit reviewing algebraic skills tomorrow. My students have most certainly moved through evaluating algebraic expressions, solving linear equations, and combining like terms before. Much of tomorrow's class will involve me drifting between students working on this to get an idea of their skill level - certainly not a developmental lesson on these ideas unless I really see the need.
My question is how to make these concepts feel new. The thing that comes to mind most immediately is using this as an opportunity to get students started on concepts of computational thinking. Students have seen the concepts of variables, substitution, and evaluation, but I think (and hope) that the idea of using a computer to do these things is new enough to whet their appetites to potentially learn more.
What does the computer do well? (Compute).
What must we do to get it to do so? (Communicate to the computer correctly what we want to compute.)
Now that I see I can increase the font size in Chrome for the console, or zoom in using Camtasia, I can make the code much more visible than it is now. Work for the morning.
We have to tell the computer explicitly that 2x is 2*x. This is a fact that often gets glossed over when students haven't seen it for a while.
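A quick console exchange makes the point (the particular expressions here are my own illustrations, not a prescribed exercise):

```python
x = 4

# The computer needs the multiplication to be explicit:
# typing 2x is a syntax error in Python, but 2*x evaluates fine.
print(2*x + 3)   # 11
print(3*x - x)   # 8
```

Substituting a new value for x and re-running is exactly the evaluation process students already know - just stated in a form the computer can act on.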
Selling programming as a fast and easily accessible calculator isn't a compelling pitch - I completely get that. At this point though, I'm not trying to sell the computer as the way to do things. My students all have computers with them in their classes. If making them unafraid to do something that feels a bit 'under the hood' might lead them to know what else is possible (which is a pitch that is coming really soon), I'm happy with this.
I had everything in line to start the constant velocity model unit: stop watches, meter sticks, measuring tape. All I had to do was find the set of working battery operated cars that I had used last year. I found one of them right where I left it. Upon finding another one, I remembered that it hadn't worked last year either, and I hadn't gotten a replacement. The two other cars were LEGO robot cars that I had built specifically for this task; all I needed to do was rebuild them, program them to run their motors forward, and I was ready to go.
Then I remembered that my computer had been swapped for a new model over the summer, so my old LEGO programming applications were gone. With the installation software nowhere to be found, I went to the next option: buying new cars.
I made my way to a couple of stores that sold toys, including one that had sold me one of the cars last year. They only had remote control cars, and I didn't want to add the variable of taping the controllers in the on position so the cars would run forward. Having a bunch of remote control cars in class is a recipe for distraction. In a last ditch effort to improve the one working car that I had, I ended up snapping the transmission off of the motor. I needed another option.
John Burk's post about using some programming in this lab and ending it with a virtual race had me thinking about how to address the hole I had dug myself into. I have learned that the challenge of running the Python IDE on a class set of laptops in various states of OSX makes it tricky to have students use Visual Python or even the regular Python environment.
When it came to actually running the class, I asked students to generate a table of time (in seconds) and position data (in meters) for the car from the video. The goal was to be able to figure out when the car would reach the white line. I found the following:
Students were using a number of different measuring tools to make their measurements. Some used rulers in centimeters or inches, others created their own ruler in units of car lengths. The fact that they were measuring a virtual car rather than a real one made no difference in terms of the modeling process of deciding what to measure, and then measuring it.
Students asked for the length of the car almost immediately. They realized that the scale was important, possibly as a consequence of some of the work we did with units during the preceding class.
By the time it came to start generating position data, we had a realization about the difficulty arising from groups lacking a common origin. Students tended to agree on velocity, as expected, but without a shared origin their position values didn't match up. This was especially the case when groups were transitioning to the data from Car 2.
Some students saw the benefit of a linear regression immediately when they worked with the constant velocity model data generator. They saw that they could use the information from their regression in the initial values for position, time, and velocity. I didn't have to say a thing here - they figured it out without requiring a bland introduction to the algebraic model in the beginning.
I gave students the freedom to sketch a graph of their work on a whiteboard, on paper, or using Geogebra. Different students preferred different tools, but our conversation about the details afterwards was the same.
I wish I had working cars for all of the groups, but that's water under the bridge. I've grown to appreciate the flexibility that computer programming has in providing full control over different aspects of a simulation. It would be really easy to generate and assign each group a different virtual car, have them analyze it, and then discuss among themselves who would win in a race. Then I hit play and we watch it happen. This does get away from some of the messiness inherent in real objects that don't drive straight, or slow down as the batteries die, but I don't think this is the end of the world when we are getting started. Ignoring that messiness forever would be a problem, but providing a simple atmosphere for starting exploration of modeling as a philosophy doesn't seem to be a bad way to introduce the concept.
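As a sketch of what that might look like (the names and the numbers here are made up for illustration - this is not the actual generator I used in class):

```python
import random

def make_virtual_car(car_id):
    """Assign a car a random starting position (m) and velocity (m/s)."""
    x0 = random.uniform(0.0, 2.0)
    v = random.uniform(0.2, 1.0)
    return {"id": car_id, "x0": x0, "v": v}

def position(car, t):
    """Constant velocity model: x(t) = x0 + v*t."""
    return car["x0"] + car["v"] * t

# One virtual car per group; who crosses the 10 m line first?
cars = [make_virtual_car(i) for i in range(4)]
finish = 10.0
times = {car["id"]: (finish - car["x0"]) / car["v"] for car in cars}
winner = min(times, key=times.get)
print(f"Car {winner} wins, crossing at t = {times[winner]:.1f} s")
```

Each group analyzes its own car's data, makes a prediction, and then we run the race - exactly the kind of full control over the simulation that the real cars never gave me.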