Assessing assessment over time - similar triangles & modeling

I've kept a question on my similar triangles unit exam over the past three years. While the spirit has generally been the same, I've tweaked it to address what seems most important about this kind of task:
[Screenshot: this year's version of the exam question]

My students are generally pretty solid when it comes to seeing a proportion in a triangle and solving for an unknown side. A picture of a tree with a shadow and a triangle already drawn on it is not a modeling task - it is a similar triangles task. The following two elements of the similar triangles modeling concept seem most important to me in the long run:

  • Certain conditions make it possible to use similar triangles to make measurements. These conditions are the same conditions that make two triangles similar. I want my students to be able to use their knowledge of similarity theorems and postulates to complete the statement: "These triangles in the diagram I drew are similar because..."
  • Seeing similar triangles in a situation is a learned skill. Dan Meyer presented on this a year ago, and emphasized that a traditional approach rushes the abstraction of this concept without building a need for it. The heavy lifting for students is seeing the triangles, not solving the proportions.

If I can train students to see triangles around them (difficult), wonder if they are similar (more difficult), and then have confidence in knowing whether they can or can't use them to find unknown measurements, I've done what I set out to do here. What still seems to be missing from this year's version is the question of whether the triangles actually are similar, or under what conditions they are similar. I assessed this elsewhere on the test, but it is so important to the concept of mathematical modeling as a lifestyle that I wish I had included it here.
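For reference, the routine version of the task that students already handle well looks something like this (numbers invented for illustration): a meter stick casts a 1.5 m shadow at the same time a tree casts a 12 m shadow, so the proportion h/12 = 1/1.5 gives a tree height of h = 8 m. The modeling work I care about is everything that has to happen before that proportion can be written.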

(Students) thinking like computer scientists

It generally isn't too difficult to program a computer to do exactly what you want it to do. This requires, however, that you know exactly what you want it to do. In the course of doing this, you make certain assumptions because you think you know beforehand what you want.

You set the thermostat to be 68° because you think that will be warm enough. Then when you realize that it isn't, you continue to turn it up, then down, and eventually settle on a temperature. This process requires you as a human to constantly sense your environment, evaluate the conditions, and change an input such as the heat turning on or off to improve them. This is a continuous process that requires constant input. While the computer can maintain room temperature pretty effectively, deciding whether the temperature is a good one or not is something that cannot be done without human input.
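The computer's side of that loop is straightforward to write down. Here is a minimal sketch of the idea in Python (the setpoint, read_temperature, and set_heater are hypothetical stand-ins, not a real thermostat's API):

    import random
    import time

    SETPOINT = 68.0  # degrees Fahrenheit - the value the human settled on

    def read_temperature():
        # Stand-in for a real sensor: a noisy room-temperature reading.
        return SETPOINT + random.uniform(-3, 3)

    def set_heater(on):
        # Stand-in for the actual heater switch.
        print("heater", "ON" if on else "OFF")

    while True:
        # Sense, evaluate, adjust - the part the computer handles well.
        set_heater(read_temperature() < SETPOINT)
        time.sleep(60)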

The difficulty is figuring out exactly what you want. I can't necessarily say what temperature I want the house to be. I can easily say 'I'm too warm' or 'I'm too cold' at any given time. A really smart house would be able to take those simple inputs and figure out what temperature I want.

I had an idea for a project for exploring this a couple of years ago. I could try to tell the computer using levels of red, green, and blue exactly what I thought would define something that looks 'green' to me. In reality, that's completely backwards. The way I recognize something as being green never has anything to do with RGB, or hue or saturation - I look at it and say 'yes' or 'no'. Given enough data points of what is and is not green, the computer should be able to find the pattern itself.

With the things I've learned recently programming in Python, I was finally able to make this happen last night: a page with a randomly selected color presented on each load:
[Screenshot: the page, presenting a randomly selected color with green/not-green buttons]

Sharing the site on Twitter, Facebook, and email last night, I was able to get friends, family, and students hammering the website with their own perceptions of what green does and does not look like. When I woke up this morning, there were 1,500 responses. By the time I left for school, there were more than 3,000, and tonight when my home router finally went offline (as it tends to do frequently here) there were more than 5,000. That's plenty of data points to use.

I decided this was a perfect opportunity to get students finding their own patterns and rules for a classification problem like this. There was a clearly defined problem that was easy to communicate, and I had lots of real data to check a theoretical rule against. I wrote a Python program that would take an arbitrary rule, apply it to the entire set of 3,000+ responses from the website, and compare its classifications of green/not green to those of the actual data set. A perfect rule would correctly predict the human data 100% of the time.
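The program was short; here is a minimal sketch of the idea (the tuple layout and the sample rule are my reconstruction, not the exact code):

    # Each response from the site: (red, green, blue, human_said_green).
    responses = [(30, 200, 50, True), (240, 10, 10, False)]  # ...thousands more

    def score(rule, responses):
        # Fraction of the human answers the rule reproduces.
        hits = sum(rule(r, g, b) == said_green
                   for r, g, b, said_green in responses)
        return hits / len(responses)

    def greenest_channel(r, g, b):
        # One candidate rule: call a color green when green is the largest channel.
        return g > r and g > b

    print("{:.1%} agreement".format(score(greenest_channel, responses)))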

I was really impressed with how quickly the students got into it. I first had them go to the website and classify a string of colors as green or not green - some of them were instantly entranced by the unexpected therapeutic effect of clicking the buttons in response to the colors. I soon convinced them to move on to the more active role of trying to figure out their own patterns. I pushed them to http://www.colorpicker.com to choose several colors that clearly were green, and others that were not, and to try to identify a rule that described the RGB values of the green ones.

When they were ready, they started categorizing their examples and being explicit about the patterns they wanted to try. As they came up with their rules (e.g. green has the greatest level), we talked about writing that mathematically and symbolically - suddenly the students were quite naturally thinking about inequalities and how to write them correctly. (How often does that happen?) I showed them where I typed it into my Python script, and soon they were telling me what to type.

[Image: students' work on the RGB rules]

In the end, they figured out that the difference between the green level and each of the other channels was the important element, something I hadn't tried when I was playing with it on my own earlier in the day. They really got into it. We had a spirited discussion about whether G+40>B or G>B+40 is correct for comparing the levels of green and blue.
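For the record, those two inequalities say different things: G + 40 > B lets green be as much as 40 below blue and still pass, while G > B + 40 demands that green beat blue by at least 40. Their difference idea, roughly reconstructed in the same style as the sketch above (the threshold of 40 is the one from our debate; I'd have to check the script for the final value):

    def students_rule(r, g, b):
        # Green must beat each of the other channels by a margin.
        return g > r + 40 and g > b + 40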

Their final rule agreed with 93.1% of the human responses from the website, beating my personal best of 92.66%. They clearly got a kick out of knowing not only that they had improved upon my answer, but that their logical thinking and mathematically defined rules did a good job of describing the thinking behind thousands of people's responses to this question. This was an abstract task, but they handled it beautifully - a tribute both to the simplicity of the task and to their own willingness to persist and figure it out. That's perplexity as it is supposed to be.

Other notes:

  • One of the most powerful applications of computers in the classroom is getting students' hands on real data - gobs of it. There is a visible level of satisfaction when students can talk about what they have done with thousands of data points whose meaning they understand.
  • I happened upon the perceptron learning algorithm on Wikipedia and was even more excited to find that the article included Python code for the algorithm. I tweaked it to work with my data and had it train on just the first 20 responses to the website. Applying the resulting rule to the checking script I used with the students, it correctly predicted 88% of the human responses. That impresses me to no end. (A minimal sketch of the idea appears after this list.)
  • A relative suggested that I should have included a field on the front page for gender. While it may have cut down on the volume of responses, I'm kicking myself for not thinking to collect that sort of thing, just for analysis.
  • A student also pointed out other interesting data that could be collected this way. First on her list was color-blindness: what does someone who is color blind see? Is it possible to use this concept to collect data that might help answer that question? I'm intrigued and excited by the genuine interest she expressed in this.
  • I plan to take a deeper look at this data soon enough - there are a lot of different aspects of it that interest me. Any suggestions?
  • Anyone who can help me apply other learning algorithms to this data gets a beer on me when we can meet in person.
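For the curious, here is the perceptron idea in miniature - a sketch in the spirit of the Wikipedia code rather than the exact script I ran, using the same assumed response format as the earlier sketch:

    def train_perceptron(data, epochs=100, rate=0.01):
        # Learn weights for a linear rule: 'green' when
        # w0 + w1*r + w2*g + w3*b > 0.
        w = [0.0, 0.0, 0.0, 0.0]
        for _ in range(epochs):
            for r, g, b, said_green in data:
                guess = (w[0] + w[1] * r + w[2] * g + w[3] * b) > 0
                error = int(said_green) - int(guess)
                # Nudge the weights toward the human answer.
                w[0] += rate * error
                w[1] += rate * error * r
                w[2] += rate * error * g
                w[3] += rate * error * b
        return w

    # 'responses' as in the earlier sketch; train on just the first 20.
    weights = train_perceptron(responses[:20])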

Building a need for math - similar polygons & mobile devices

The focus of some of my out-of-classroom obsessions right now is on building the need for mathematical tools. I'm digging into the fact that many people get along fine on a daily basis without doing what they consider mathematical thinking. That isn't just my claim - it's a fact, and it's why people claim math is irrelevant: what they see as math (school math) almost never enters their day-to-day interactions with the world.

The human brain is pretty darn good at estimating size or shape or eyeballing when it is safe to cross the street - there's no arithmetic computation there, so one could argue that there's no math either. The group of people feeling this way includes many adults, and a good number of my own students.

What interests me these days is spending time with them hovering around the boundary of the capabilities of the brain to do this sort of reasoning. What if the gut can't do a good enough job of answering a question? This is when measurement, arithmetic, and other skills usually deemed mathematical come into play.

We spend a lot of time looking at our electronic devices. I posed this question to my Geometry and Algebra 2 classes on Monday:
[Screenshot: the question posed to the classes]

The votes were 5 for A, 5 for B, and 14 for C. There was some pretty solid debate about why they felt one way or another. They made sure to note that the corners of the phone were not portrayed accurately, but aside from that, they immediately saw that additional information was needed.

Some students took the image and made measurements in Geogebra. Some measured an actual 4S. Others used the engineering drawing I posted on the class blog. I had them post a quick explanation of their answers on their personal math blogs as part of the homework. The results revealed their reasoning, which was often right on. They also showed some examples of flawed reasoning that I didn't expect - something I now know I need to address in a future class.
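To give a flavor of the reasoning I was hoping for: the actual iPhone 4S is about 115.2 mm by 58.6 mm, a length-to-width ratio of roughly 1.97. If a projected image measured, say, 9.8 cm by 5.0 cm (made-up numbers for illustration), its ratio of 1.96 would make the rectangles very nearly similar; an image measuring 9.8 cm by 4.4 cm (ratio about 2.23) could not be a scaled copy of the phone.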

At the end of class today when I had the Geometry class vote again, the results were a bit more consistent:
[Screenshot: the results of the second vote]

The students know these devices. Even those who don't have them know what they look like. It required measurements and some calculations to know which answer was correct. The need for the mathematics was built into the activity. It was simple to get them to make a guess in the beginning based on their intuition, and then figure out what they needed to do, measure, or calculate to confirm that intuition through the idea of similarity. As another chance at understanding this sort of task, I ended today's class with a similar challenge:

[Screenshot: the follow-up challenge]

My students spend much of their time staring at a MacBook screen whose aspect ratio (8:5) is slightly different from that of a standard television screen (4:3). They do see the Smartboard in the classroom, which has the 4:3 shape, so I know they have seen it before. I am curious to see what happens.

Volumes of Revolution - Using This Stuff.

As an activity before our spring break, the Calculus class put its knowledge of volumes of revolution to work finding, well, the volumes of things. It was easy to find different containers to use for this - a sample:
[Photos: two of the containers]

We used Geogebra to place points and model the profile of the containers using polynomials. There were many rich discussions about the wise placement of points and about which polynomials made more sense to use. One discussion involved the subtle differences between these two profiles and what they meant for the volume computed through calculus methods:

[Screenshot: the two Geogebra profiles]

The task was to predict the volume and then use flasks and graduated cylinders to accurately measure the actual volume. Lowest error wins. I was happy, though, that by the end nobody really cared about 'winning'. They were motivated on their own to theorize why their calculated answer was too high or too low, and then to adjust their model, test their theories, and see how their answers changed.
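For anyone who wants the computation outside of Geogebra, here is a minimal sketch of the disk method we were leaning on, with an invented profile polynomial rather than one of the actual flasks:

    import math

    def volume_of_revolution(f, a, b, n=1000):
        # Disk method: V = pi * integral of f(x)^2 dx on [a, b],
        # approximated with the midpoint rule over n slices.
        dx = (b - a) / n
        return math.pi * dx * sum(f(a + (i + 0.5) * dx) ** 2 for i in range(n))

    def profile(x):
        # Radius of the container (cm) at height x; coefficients invented.
        return 3.0 - 0.4 * x + 0.02 * x ** 2

    print("{:.1f} mL".format(volume_of_revolution(profile, 0, 10)))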

As usual, I have editorial reflections:

  • If I had students calculating the volumes by integrating by hand every time, they would have been much more reluctant to adjust their answers and figure out why the discrepancies existed. Integration within Geogebra was key to this being successful. Technology greases the rails of mathematical experimentation in a way that nothing else does.
  • There were many small lessons that needed to happen along the way as the students worked. They figured out that the images had to be scaled so the dimensions in Geogebra matched the actual dimensions of the object. They figured out that measurements were necessary to make this work. The task demanded that the mathematical tools be developed, so I showed them what they needed to do as the need arose. It would have been a lot more boring and algorithmic if I had done all of the presentation work up front and they had just followed steps.
  • There were many opportunities for reinforcing the fundamentals of the Calculus concepts through the activity. This is a tangible example of application - the actual volume is either close to the calculated volume or not - there's a great deal more meaning built up here that solidifies the abstraction of volume of revolution. There were several 'aha' moments and I saw them happen. That felt great.