A year ago, I wrote about my attempt to integrate Angry Birds into my quadratic modeling unit. I was certainly not the first, and many others have taken this idea and run with it. It is a great way to apply the concept of fitting parabolas to a realistic task that students can have fun completing.

As I said a year ago, however, the bigger-picture skill that makes modeling really powerful is making do with less information. Last year I incentivized my students to come up with a model that predicts the final impact location of a bird earlier than everyone else. In other words, if Thomas is able to predict the correct final location with ten seconds of data, while Nick is able to do so with only seven, Nick has done the better job of modeling. I did this by asking the students to make their prediction from the earliest possible frame in the video.

This time, I have found a better way to do this. Five videos, all of them cut short.
I'm asking the students to complete this table:

The impact ratio is defined as the ratio of the orange line to the yellow line, as shown in this image:

Each group of students will calculate the ratio for each video using Geogebra. Some videos reveal more about the path than others. I'll sum the errors, rank the student groups based on cumulative error, and then we'll have a great discussion about what made this difficult.
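The scoring itself is simple to sketch: sum the absolute errors across the five videos and rank groups by the total. Here is a minimal Python sketch; the group names and every number are invented for illustration, not real class data.

```python
# Hypothetical measured impact ratios for the five videos.
actual = [0.62, 0.55, 0.71, 0.48, 0.66]

# Hypothetical predictions from two groups.
predictions = {
    "Group A": [0.60, 0.58, 0.69, 0.50, 0.64],
    "Group B": [0.65, 0.50, 0.75, 0.45, 0.70],
}

def cumulative_error(pred):
    """Sum of absolute errors across all five videos."""
    return sum(abs(p, ) if False else abs(p - a) for p, a in zip(pred, actual))

# Rank groups from smallest to largest cumulative error.
ranked = sorted(predictions, key=lambda g: cumulative_error(predictions[g]))
for g in ranked:
    print(g, round(cumulative_error(predictions[g]), 3))
```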

What I'm targeting here is the sensitivity of a quadratic fit (or any fit) to data points that are close together. I've tried other techniques to flesh this out with students before - I still get students 'fitting' a table of data by choosing only the first two or three points. I'm hoping this will be a bit more interesting and successful than my previous attempts.
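That sensitivity is easy to demonstrate numerically. In the sketch below (the trajectory and the noise value are made up for illustration), the same small measurement error on the middle of three points wrecks the predicted landing spot when the points are clustered, but barely matters when they are spread out:

```python
def parabola_through(p1, p2, p3):
    """Exact coefficients (a, b, c) of y = a*x**2 + b*x + c through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

# Made-up "true" trajectory: y = -0.5x^2 + 4x, which lands at x = 8.
f = lambda x: -0.5 * x**2 + 4 * x

# Same +0.02 measurement error on the middle point in both samples.
close  = [(0.0, f(0.0)), (0.2, f(0.2) + 0.02), (0.4, f(0.4))]
spread = [(0.0, f(0.0)), (2.0, f(2.0) + 0.02), (4.0, f(4.0))]

for label, pts in [("close ", close), ("spread", spread)]:
    a, b, c = parabola_through(*pts)
    landing = -b / a  # nonzero root of a*x^2 + b*x = 0 (c = 0 here)
    print(f"{label}: predicted landing x = {landing:.2f} (true: 8.00)")
```

With these numbers the clustered sample predicts a landing near x = 4.2 while the spread sample stays within a few hundredths of the true x = 8.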

I wrapped up grading final exams today. Feels great, but it was also difficult seeing students coming in to get their final checklists signed before they either graduate or move on to a different school. Lots of bittersweet moments in the classroom today.

After trying my standards based grading (SBG) experiment, I decided I wanted to compare students' overall performances in each grading category to their quiz percentages. In a previous post, I wrote about my experiment with doing quiz assessments on very specific skill standards that the students were given. Since I plan to make my grading more SBG based in the fall, I figured it would be good to have some comparison data to support the many reasons why this is a good idea.

In my geometry and algebra two classes, there are 28 total students. I removed two students who came in the last month of school from the data set, and one outlier in terms of overall performance.

The table below shows the names of each grading category, as well as the overall grade weight used in calculating the final grade. Each number is the correlation coefficient between the variable in its row and the variable in its column. For example, 0.47 is the correlation between the HW data and the Quizzes/Standards data for each student.

              Homework (8%)   Quizzes/Standards (12%)   Test Average (48%)   Semester Exam (20%)   Final Grade (100%)
HW                 1                  0.47                    0.41                  0.36                  0.48
Quiz/Stndrds                          1                       0.89                  0.91                  0.95
Tests                                                         1                     0.88                  0.97
Sem. Exam                                                                           1                     0.95
Final Grade                                                                                               1

Some notes before we dive in:

The percentages do not add up to 100% because I am leaving out the portfolio grade (8%) and classwork grades (4%) which are not performance based.

Homework is graded on turning it in and showing work for answers. I collect and look at homework as a regular way to assess how well they are learning the material.

The empty cells are to save ink; they are just the mirror image of the results in the upper half of the matrix since correlation is not order dependent.

I know I only have 25 samples - certainly not ready for a research publication.
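For anyone wanting to reproduce this kind of analysis, each cell in the table is just a Pearson correlation coefficient, which takes only a few lines to compute by hand. The percentages below are made up for illustration; they are not my students' data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up HW and quiz percentages for five students (illustration only).
hw   = [95, 80, 100, 70, 90]
quiz = [72, 85, 90, 60, 88]
print(round(pearson(hw, quiz), 2))
```

(Python 3.10+ also ships `statistics.correlation`, which does the same thing.)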

So what does this mean? I don't know that this is very surprising.

Students doing HW is not what makes learning happen. I've always said that, and this data continues to support that hypothesis. Homework can help, and it is evidence that students are doing something with the material between classes, but simply completing it is not enough. I'm fine with this. I get enough information from looking at the homework to create activities that flesh out misunderstandings the next time we meet. The unique thing about homework is that it is often the first time students look at the material on their own rather than with their peers in class.

My tests include some direct assessments of skills, but also include a lot of new applications of concepts and questions requiring students to explain or show things to be true. It's very gratifying to see such a strong connection between the quiz scores and the test scores.

I always wonder about the students that say "I get it in class, but then on the tests I freeze up." If there's any major lesson that SBG has confirmed for me, it's that student self-awareness of proficiency is generally not great without some form of external feedback. If that self-assessment were accurate, there would be more students with high quiz scores and low exam scores. That isn't the case here. My students need real and correct feedback on how they are doing, and the skills quizzes are a formalized way to do this.

I find it really interesting how close the quiz average and the semester exam percentages are. The semester exam was cumulative and covered a lot of ground, but it didn't necessarily hit every single skill that was tested on quizzes. There were also not quizzes for every single skill, though I tried to hit a number of key ones.

This leads me to believe that it is possible to have several key standards to focus on for SBG purposes, and also to dedicate time to work on other concepts during class time through project based learning, explorations, or independent work. It's feasible to assess these other concepts as mathematical process standards evaluated throughout the semester. This strikes a good balance between developing skills according to curriculum and not making classes a repetitive process of students absorbing procedures for different types of problems. I want to have both. My flipping experiments have gone a long way toward that ideal, but I'm not quite there yet.

I'll have more to say about the details of what I will change as I think about it during the summer. I think a combination of using BlueHarvest for feedback, extending SBG to my Calculus and Physics classes, and less emphasis on grading and collecting homework will be part of it. Stay tuned.

In the Algebra 2 class, we started our unit on solving systems of equations. From a teaching perspective, this provides all sorts of opportunities for students to conceptualize what solutions to systems mean from a graphical, algebraic, and numerical perspective. Some students seem to like the topic because it tends to be fairly straightforward, is algorithmic, and has many ways to check and confirm whether it has been done correctly.

I used this as my warm-up activity today:

a) Estimate the solution of the system.

b) Write an equation for each line in standard form.

c) In Geogebra, select CAS view and type the following using your two equations: Solve[{7x+3y=6,3x-4y=12},{x,y}]

d) Use your calculator and convert these values to decimals. How close are these to your estimate?
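The CAS command in step (c) returns the exact solution. The same computation can be sketched in Python with exact fractions via Cramer's rule (the helper name here is mine, not something from class):

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve ax + by = e, cx + dy = f exactly via Cramer's rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("lines are parallel or identical")
    x = Fraction(e * d - b * f, det)
    y = Fraction(a * f - e * c, det)
    return x, y

# The warm-up system: 7x + 3y = 6 and 3x - 4y = 12.
x, y = solve_2x2(7, 3, 3, -4, 6, 12)
print(x, y)                 # -> 60/37 -66/37, matching the CAS output
print(float(x), float(y))   # decimal values for comparing to the estimate
```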

We had some great discussions about the positives and negatives of graphical solutions to equations. Weaker students got some much needed practice writing equations for lines. For all students, this led to some good conversations about choosing two points that the lines clearly pass through for writing equations (if possible) rather than guessing at the y-intercept. The students also got the idea of how Geogebra can solve a system of equations exactly as a quick check for their algebra, an improvement over substituting (which is at times more trouble than it's worth for students with poor arithmetic) and slightly faster than solving for y on a graphing calculator and finding the intersection.

I also like the unit, though I don't tend to like the word problems. It's hard to convince students of the large scale importance of coin problems (especially in an international school with everyone used to different currency) or of finding how many tickets were sold at the door or in advance, since anyone with a brain would just ask the person tallying the tickets.

I also found myself thinking about Dan Meyer's post over the summer about how many word problems are made up for the purposes of math, rather than using mathematics to analyze cool situations and create problems out of the situations. Getting students to figure out how to use the math to do this is ultimately what we want them to learn to do anyway. Figuring out when trains pass each other is not exciting to students, but I realized this morning while brushing my teeth that doing this problem with real robots either crashing into each other or racing adds a neat dimension to this problem. The question of figuring out both when they will crash or catch up to each other, and also where they will do so is a clear motivation for finding a solution to a system of equations describing their positions as functions of time.

So I gave the students the two robots (videos of them posted at http://bit.ly/vIs0lu and http://bit.ly/u9jSPB). I told them I was going to set them apart a certain distance that was tentatively 80 centimeters, but said I wanted the ability to change that at any time. I wanted them to predict when and where they would collide.

The rules:

No, you can't just run the experiment and see where they crash. That not only defeats the purpose of this exercise, but we will be doing this sort of activity in a couple different ways during the unit, so being able to do this analytically is important. You also can't run both robots at the same time - that's for those of you who are going to try to be lawyers and break that first rule.

You can measure anything you want using any units that you want using either robot individually.

At some point, you should be able to show me how you are modeling the position of each robot as a function of time.

And I set them off to figure things out. Despite the fact there were only two robots, the 12 kids naturally divided themselves up into a couple teams to characterize each robot, and there was some good sharing of data amidst some whining about how annoying it was to actually measure things. In the end, most students at least had some idea of how they were going to put together their models, and some had actually written out what they were. As one would hope for these types of activities, there were plenty of examples of students helping others to understand what they were doing. The engagement was clearly there, as confirmed by students visibly excited to run the robot and time how long it took for it to move around.
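The kind of model the groups were working toward can be sketched in a few lines. The speeds below are made up for illustration; each group measured their own robot's speed.

```python
# Two robots start 80 cm apart and drive toward each other at constant speed.
d = 80.0     # tentative starting separation in cm
v1 = 12.0    # assumed speed of robot 1, in cm/s (moving toward robot 2)
v2 = 8.0     # assumed speed of robot 2, in cm/s (moving toward robot 1)

# Position as a function of time:
#   x1(t) = v1 * t        (robot 1 starts at x = 0)
#   x2(t) = d - v2 * t    (robot 2 starts at x = d)
# They collide when x1(t) = x2(t):  v1*t = d - v2*t  =>  t = d / (v1 + v2)
t = d / (v1 + v2)
x = v1 * t
print(f"collision after {t:.1f} s, {x:.1f} cm from robot 1's start")
```

The same setup with the robots moving in the same direction (a chase instead of a crash) just changes the sign of v2, which is why the "when will they catch up" version falls out of the same system.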

It was a fun exercise that I plan to return to in a few ways during this unit - perhaps some interrobo-species interaction (my iCreate robot is charging up as we speak). Fun times.

UPDATE: This is the video of the next day's class when students solved their functions. I set the robots apart from each other and the students did the rest.

I've been making an effort to look for as much WCYDWT material as possible on a regular basis. This is not so much because I've had students asking 'when are we going to use this,' though that is always brewing under the surface. Instead, I've been making an effort this year to spend less time in class plodding through curriculum, and more time having students get their hands dirty with real data, real numbers, and using their brains to actually figure things out. By recording screencasts, doing demos, and using Geogebra, I've made some progress in getting the students to see the benefit of learning the routine skills-based stuff on their own for HW so we can use class time to do more interesting things. I've quizzed them along the way and am feeling pretty good about this so far, but we'll see.

During my trip with the ninth graders to Shandong and my week off due to the national holiday when my parents visited, I've kept my eyes open for reasonable, non-contrived problems that might serve as applications of linear functions. I've wanted some problems with non-trivial answers along with some low-hanging fruit that might give all of the students in the class a way in.

I'm pretty happy with how things have ended up with the top three contenders. There are some other things in the works, but I'm hoping to keep those under wraps for the moment. Click on the links to read the details.

This one I already started talking about in a previous post, but I spiced it up just a bit by putting images together and throwing in the head image I've now used in a few places to be cute.

Ms. Josie and the 180 Days
I like this one especially since it has a good story behind it. My students know my wife, and I defer to her awesomeness quite a bit in class. Students certainly love it when their teacher is willing to knock him/herself down a few pegs, especially when it's for their entertainment and for comedic effect in class. I think this challenge is a good combination of mathematical reasoning and drama - I don't think I can lose!

Moving on up at the Intercontinental Hotel
I was looking for a third one that really jumped out as kinda cool and visually stunning since the others, though cool, weren't particularly impressive visually. On the last day my parents were in town, we went to the Intercontinental hotel in Hangzhou and the problem smacked me in the face.

The videos aren't all up yet - in addition to the two outside videos, the more enlightening videos (which I will post tomorrow before class) have a view of the elevator doors and the digital floor display as the elevator moves up and down. In addition, there is a nice reflection of the view out the glass wall of the elevator, beautiful in its own right, but perhaps a wee bit distracting from the really useful stuff in this problem. If I wanted to go the full-eye-candy route, I suppose I could have gotten a reflection of the elevator doors and floor display in the glass wall of the elevator. Maybe next time.

My plan is to let students choose which of the three projects they want to work on, and then give them tomorrow's class (and finishing up for HW) to put something together. I plan to grade according to this rubric:

I think it gives them enough detail on what I want them to do, without being overly difficult to grade. I am even thinking of giving them a chance to grade each other since they will all be posting their work (from groups) on the wiki page.

I've had these things in my mind for a little while - I admit, after the impressive effort this particular class has made, I am really excited to see what happens next.