Monthly Archives: November 2013

Just shut up and work with us, Weinberg

I have an issue with talking too much in class. I think many of us do.

I've already done some focused work identifying what my students need me to show them for a given topic, and it's a lot less than I initially think. After a conversation with some smart educators, I decided to commit this week to not do whole class instruction unless it was absolutely necessary.

Sometimes I confuse necessity with convenience. The problem is that it's always convenient to do whole class instruction. You look out and see eyes staring at you, and it seems in the moment to be maximally efficient to communicate to the entire group at once. The quality of that attention is never what it seems.

In my biggest class, I've been continuing to put direct instruction into videos. As I've written previously, these are videos (three minutes or so) that have the information distilled down to small chunks. In doing this, I get around to every student and make sure they are somehow engaging with that video through writing down important information, trying the problem being demonstrated, or completing the challenge I usually put at the end. It's impossible for me to be instructing at the front of the class (or anywhere for that matter) and be aware of what every student is doing. With the video at every student's seat, I can be there. I can ask them questions one-on-one to see what they understand. I can make notes of the students that are struggling. I can assess every student at some point while I walk around, leave alone those that are doing just fine without my dictating their attention, and focus on those that need more guidance.

This increased time away from blabbing at the front of the room means more assessment time. The class starts with a quick quiz (1-2 questions) that I can get back to students during the period. I can give every student some bit of feedback, and it ensures that I have a conversation with every single student during the class. That is awesome. It means I can ask higher level questions of the stronger students and push them forward. It means I can see what students are writing down within seconds of doing so.

Though I occasionally think to myself that the reason this works is because my students are well behaved and will stay on task when I am not directly focused on them, I don't think this is why it has been successful. I'm in the middle of my students (rather than in one location) the whole time. I can see what they are all doing. If they do get off task, they know that I know it, because chances are I'll be there within a minute or so. The class is noticeably less structured, and I don't feel as productive as I think I would if I were marching through a lesson plan. That feeling is really a reflection of the fact that I now have a continuous, realistic awareness of how my students are doing with the material, rather than glimpses in ten minute chunks of independent work between stretches of lecture.

The students benefit most from interacting with each other. They do occasionally need help from me one-on-one, but the nature of that help varies greatly between students. I can give that help when I'm not spending so much time talking. The inverse is the more powerful statement: I can't give that help if I'm talking too much.

I decided to give students a quick exit survey on whether they liked the new format, whether they wanted to go back, or whether they wanted something different from both classroom structures. Here's what they said:

[Image: results of the exit survey]

I've gotten this sort of strong message before, but I unfortunately go back to the old ways, for the old reasons. It's easier to talk. It's easier to do a developmental lesson. It's easier to ask a question and conclude from a one or two student non-random sample that the class gets it. It just isn't necessarily what works best for students. I need to keep that in mind.

Proofs in Geometry - The Modification Continues...

Two statements of interest to me:

  • I get more consistent daily hits on my blog for teaching geometry proofs than anything else. Shiver.
  • Dan Meyer's recent post on proofs in Geometry gets to the heart of what bothers me about teaching proofs at all. Double shiver.

These statements have made me think about my approach to doing proofs with students in my 9th grade course, which has previously been a geometry course, but is morphing into something slightly different in anticipation of our move to the IB program. I like the concept of teaching proofs because it forces students to confront the idea that there's a difference between things they know must be true, things that might be true, and things that will never be true. I started the unit by asking the class the following questions:

  • Will the sun rise tomorrow?
  • Will student A always be older than her younger sister?
  • Will the boys' volleyball team win the tournament this weekend?

The difference between these questions was immediately clear to my students. The word 'obviously' came up at least once, as expected.

The idea of proving something that is obvious is certainly an exercise of questionable purpose, mostly because it confines student thinking in the mould of classroom mathematics. As geometry teachers, we do this as a scaffold to help students learn to write proofs of concepts that are not so obvious. The downside is the inherent lack of perplexity in this process, as Dan points out in his post. The rules of math that students routinely apply to solve textbook or routine problems already fit in this 'obvious' category either from tradition ('I've done this since, like, forever') or from obedience ('My teacher/textbook says this is true, and that's good enough for me.')

I usually go to Geogebra to have students discover certain properties to be true, or give a quick numerical example showing why two angles supplementary to the same angle are congruent. They get this, but feel a sense of detachment when I then ask them to prove it using the properties we reviewed in previous lessons. It seems to be very much related to what Kate Nowak pointed out in her comment on Dan's post. Geometry software or numerical examples show something to be so obvious that proof seems unnecessary, so why circle back and use the rules of mathematics to prove it to be true?

I had an idea this afternoon that I plan to try tomorrow to close this gap.
I wrote earlier about using spreadsheets with students to take some of the abstraction out of translating algebraic expressions. Making calculations with named cells, the way a spreadsheet does, makes the concept of a variable - and arithmetic with variables - very concrete. My idea here is to use a spreadsheet this way:

[Images: two screenshots of the spreadsheet, showing input angles in black cells and formula cells in red]

My students know that they should be able to change what is in the black cells, and enter formulas in the red cells so that they change based on what is in the black cells only. In doing this, they will be using their algebraic rules and geometric definitions to complete a formula. This hits the concrete examples I mentioned above - a 25 degree angle complementary to an angle will always be congruent to a 25 degree angle complementary to that same angle. It also uses the properties (definition of a complementary angle, subtraction property of equality, definition of congruence) to suggest the relationship between those angles using the language and structure of proof, which comes next in class.
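
To make that concrete outside of a spreadsheet, here is a minimal sketch of the same cell logic in Python; the function and variable names are my own invention, not anything from the actual file:

```python
# A sketch of the spreadsheet logic: one "black cell" input, two "red cell"
# formulas computed from it. Names here are mine, not from the actual file.

def complement(angle_deg):
    # Definition of complementary angles: the two measures sum to 90.
    return 90 - angle_deg

# "Black cell": the shared angle a student is free to change.
shared_angle = 65

# "Red cells": two angles, each complementary to the shared angle.
# Each formula applies the subtraction property of equality.
angle_1 = complement(shared_angle)
angle_2 = complement(shared_angle)

# Definition of congruence: the measures are equal no matter what the
# black cell contains.
print(angle_1, angle_2, angle_1 == angle_2)  # 25 25 True
```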

Here is the spreadsheet file I've put together:
02 - SPR - Congruent Angles

I plan to have them complete the empty cells in this spreadsheet and then move on to filling in some reasons for steps of more formal proofs of these theorems afterwards, as I have done previously. I'd like to think that doing this will make it a little more clear how the observations students have relate to the properties they then use to prove the theorems.

I'd love for you to hack away at my idea with feedback in the comments.

Reassessment Web-App Update

I wrote last May about the difficulties I was having with doing reassessments efficiently. The short story - collecting reassessment requests, printing them, and distributing them to students was not scaling well. I had shared my progress on making a web application using Python to do this and was really excited to continue working on it.

After a summer of work and putting it into action this fall, Phases 1 and 2 are pretty much complete. I'm really happy with how it has been working for me. I host it on my personal laptop in the classroom and share its IP address with students so they can access their quizzes.


You can check out a mildly sandboxed version here:
http://quiz.evanweinberg.org/main/

UPDATE Mar. 2016: I've taken down the application to save memory on my hosting server. Write me if you are interested in learning more.

and the code is posted at Github:
https://github.com/emwdx/reassess

I took out a number of my questions (since students do occasionally read my blog) and made it so images can't be uploaded. I hear that might be a security risk.

Some highlights:

  • Questions (with or without images) can be added, edited, and browsed all through a web interface.
  • Students can be assigned quizzes individually or through a page for the class they are in. They can also each be given different questions, which helps in my classroom, where students sit fairly close together.
  • Students each have their own URL that can be bookmarked for easy access later (see the sketch after this list).
  • The teacher view of the entire class has a link for each student that shows the quiz questions (and answers, if they are in the database) for easy grading.
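
For the curious, here's a toy sketch of the per-student URL idea. To be clear, this is not the actual reassess code from the repository above; the routes, names, and data here are all hypothetical, and I'm using Flask just to keep the sketch short:

```python
# Hypothetical sketch only - not the actual reassess implementation.
from flask import Flask

app = Flask(__name__)

# In the real app this lives in a database; here, a dictionary maps each
# student's id to the question ids assigned for today's quiz.
assignments = {
    "student-a1b2": [3, 17],
    "student-c3d4": [5, 17],
}

questions = {
    3: "Solve 2x + 5 = 17.",
    5: "Factor x^2 - 9.",
    17: "Graph y = |x - 2|.",
}

@app.route("/quiz/<student_id>")
def quiz(student_id):
    # Each student bookmarks their own URL and sees only their questions.
    assigned = assignments.get(student_id, [])
    items = "".join(f"<li>{questions[q]}</li>" for q in assigned)
    return f"<h1>Your quiz</h1><ol>{items}</ol>"

if __name__ == "__main__":
    # Served from the classroom laptop; students connect to its local IP.
    app.run(host="0.0.0.0", port=8080)
```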

What hasn't been done:

  • Authentication for students and the admin side. Right now it's all open, which bothers me a little, but my access log doesn't show that this is being abused.
  • A way to capture their work digitally for each retake. I still have a pile of half-size A4 papers on my desk, and have to grade them while also having the answer page open. That isn't the end of the world, but after my recent obsession with collecting as much student work as I can through a web interface, it's something I'd like to have as an option. Students tend to lose these papers, and these are the formative assessment moments I'd love for them to include in their portfolios. Digital is clearly the way to go.
  • Randomization (phase 3 of my master plan), which I want in two different ways. I'm still manually choosing questions for students. I kind of want to keep it that way, since there are some students I do want to give specific questions. But sometimes I don't - I'd rather have it choose questions from particular standards and let students get the luck of the draw. I need an option that lets me waffle on this; a sketch of one possible compromise follows this list.
  • Question history - i.e. knowing which questions a student has been assigned, and integrating this into the program smoothly. This function is built into the database already, and won't require a lot of work to make it happen, but I haven't done it. Sorry.
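
Here's a hypothetical sketch of what that waffle-friendly randomization might look like - manual picks when I want them, luck of the draw otherwise. None of these names come from the actual project:

```python
# Hypothetical sketch of phase-3 randomization, not code from the project.
import random

# Question ids in the bank, grouped by standard.
question_bank = {
    "angles.1": [3, 8, 21],
    "angles.2": [5, 9],
}

# Manual overrides for students I want to assign a specific question.
pinned = {"student-c3d4": 9}

def assign_question(student_id, standard):
    # Pinned question if one exists; otherwise luck of the draw.
    if student_id in pinned:
        return pinned[student_id]
    return random.choice(question_bank[standard])

for sid in ["student-a1b2", "student-c3d4"]:
    print(sid, assign_question(sid, "angles.1"))
```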

There are a number of bugs (er, features) that still need to be worked out, but I'm aware of them all and know how to work through them when I have a bunch of students taking quizzes.

The most powerful aspect of having this working is that I can easily assess the whole class at the same time, on different questions if I want them to be different. I've been doing this at the beginning of class this semester, and it increases the amount of time I spend talking to each student about their work regularly. Since student-initiated reassessment still isn't as widespread as I want it to be, I've started having students request the quiz they want in class the night before. They know it's coming, and can get help or prepare in advance, rather than using their valuable lunch or after school time. More on that later.

Let me know if you're interested in using this with your own class - it's pretty portable and can be adapted without too much of a headache to different situations.

Computation & CAPM - From Models to Understanding

I wrote last spring about beginning my projectile motion unit with computational models for projectiles. Students focused on using the computer model alone to solve problems, which led into a discussion of a more efficient approach with less trial and error. The success of this approach made me wonder about introducing the much simpler particle model for constant acceleration (abbreviated CAPM) using a computational model first, and then extending the patterns we observed to more general situations.

We started the unit playing around with the Javascript model located here and the Geogebra data visualizer here.

The first activity was to take some position data for an object and model it using the CAPM model. I explained that the computational model was a mathematical tool that generated position and velocity data for a particle that traveled with constant acceleration. This was a tedious process of trial and error by design.

The purpose here was to show that if position data for a moving object could be described using a CAPM model, then the object was moving with constant acceleration. The tedium drove home the fact that we needed a better way. We explored some different data sets for moving objects given as tables and graphs, and discussed the concepts of acceleration and using a linear model for velocity. We recalled how we can use a velocity vs. time graph to find displacement. That linear model for velocity, at this point, was the only algebraic concept in the unit.
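
For reference, here's a rough Python analogue of what that data generator does (the class used the Javascript model linked above; the parameter names below are mine):

```python
# A rough analogue of the CAPM data generator: position and velocity
# tables for a particle moving with constant acceleration.
def capm_table(x0, v0, a, t_max, dt=0.5):
    rows = []
    t = 0.0
    while t <= t_max:
        x = x0 + v0 * t + 0.5 * a * t ** 2  # position under constant a
        v = v0 + a * t                      # the linear velocity model
        rows.append((t, x, v))
        t += dt
    return rows

# Students guess-and-check x0, v0, and a until the table matches the data.
for t, x, v in capm_table(x0=0, v0=15, a=-9.8, t_max=3):
    print(f"t={t:4.1f}  x={x:7.2f}  v={v:6.2f}")
```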

In previous versions of my physics course, this was where I would nudge students through a derivation of the constant acceleration equations using what we already understood. Algebra heavy, with some reinforcement from the graphs.

This time around, my last few lessons have all started using the same basic structure:

  1. Here's some graphical or numerical data for position versus time or a description of a moving object. Model it using the CAPM data generator.
  2. Does the CAPM model apply? Have a reason for your answer.
  3. If it does, tell me what you know about its movement. How far does it go? What is its acceleration? Initial velocity? Tell me everything that the data tells you.

For our lesson discussing free fall, we started with the modeling question of what we would measure to see whether CAPM applies to a falling object. We then used a spark timer (which I had never used before, but found hidden in a cabinet in the lab) to measure the position of a falling object.

[Image: the spark timer measuring the position of the falling object]

They took the position data, modeled it, and got something close to 9.8 m/s² downward. They were then prepared to say that the acceleration was constant and downward while the object was moving down, but suspected it was different while it was moving up. They quickly figured out that they should verify this, so they made a video and used Logger Pro to analyze it and see that the acceleration was indeed constant.
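
If you're curious what the analysis looks like numerically, here's a sketch with made-up spark timer readings. The quadratic fit is my shortcut for the blog; in class, students matched the data by adjusting the CAPM generator rather than running a regression:

```python
# Sketch with invented spark timer data: fit position vs. time to a
# quadratic; twice the leading coefficient estimates the acceleration.
import numpy as np

t = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])        # time (s)
y = np.array([0.000, 0.013, 0.050, 0.111, 0.197, 0.307])  # distance fallen (m)

c2, c1, c0 = np.polyfit(t, y, 2)  # y = c2*t^2 + c1*t + c0
print(f"acceleration ~ {2 * c2:.1f} m/s^2")  # expect something near 9.8
```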

The part that ended up being different was the way we looked at 1-D kinematics problems. I still insisted that students use the computer program to model the problem and use the results to answer the questions. After some coaching, the students were able to do this, but found it unsatisfying. When I assigned a few of these for students to do on their own, they came back really grumpy. It took a long time to get everything in the model to work just right - never on the first try did they come up with an answer. Some figured out that they could directly calculate some quantities like acceleration, which reduced the iteration a bit, but it didn't feel right to them. There had to be a better way.

This was one of the problems I gave them. It took a lot of adjustment to get the model to match what the problem described, but eventually they got it:
[Image: the problem statement and the matching CAPM model output]

Once we put the values into the CAPM program and it gave us this data, we looked at it together to answer the question. Students started noticing things:

  • The maximum height is half of the acceleration.
  • The maximum height happens halfway through the flight.
  • The velocity goes to zero halfway through the flight.

Without any prompting, students saw from the data and the graph that we could model the ball's velocity algebraically and find a precise time when the ball was at maximum height. This then led to students realizing that the area of the triangle gave the displacement of the ball between being thrown and reaching maximum height.
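
In symbols (my notation, with v_0 for the launch speed and g for the magnitude of the acceleration), the chain of reasoning the students landed on looks like this:

```latex
v(t) = v_0 - g t = 0 \quad \Rightarrow \quad t^{*} = \frac{v_0}{g},
\qquad
h_{\max} = \underbrace{\tfrac{1}{2} \, v_0 \, t^{*}}_{\text{area of the triangle}}
         = \frac{v_0^{2}}{2g}
```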

This is exactly the sort of reasoning that students struggle to do when the entire treatment is algebraic. It's exactly the sort of reasoning we want students to be doing to solve these problems. The computer model doesn't do the work for students - it shows them what the model predicts, and leaves the analysis to them.

The need for more accuracy (which comes only from an algebraic treatment) then comes from students being uncomfortable with an answer that is between two values. The computation builds a need for the algebraic treatment and then provides some of the insight for a more generalized approach.

Let me also be clear about something - the students are not thrilled about this. I had a near mutiny during yesterday's class when I gave them a standards quiz on the constant acceleration model. They weren't confident during the quiz, most of them wearing gigantic frowns. They don't like the uncertainty in their answers, they don't like lacking a clear roadmap to a solution, they don't like being without a single formula they can plug into to find an answer. They said these things even after I graded the quizzes and they learned that the results weren't bad.

I'm fine with that. I'd rather that students are figuring out pathways to solutions through good reasoning than blindly plugging into a formula. I'd rather that all of the students have a way in to solving a problem, including those that lack strong algebraic skills. Matching a model to a problem or situation is not a complete crap shoot. They find patterns, figure out ways to estimate initial velocity or calculate acceleration and solidify one parameter to the model before adjusting another.

Computational models form one of the only ways I've found that successfully allows students of different skill levels to go from concrete to abstract reasoning in the context of problem solving in physics. Here's the way the progression goes up the ladder of abstraction for the example I showed above:

  1. The maximum height of the ball occurred at that time. Student points to the graph.
  2. The maximum height of the ball happened when the velocity of the ball went to zero in this situation. I'll need to adjust my model to find this time for different problems.
  3. The maximum height of the ball always occurs when the velocity of the ball goes to zero. We can get this approximate time from the graph.
  4. I can model the velocity algebraically and figure out when the ball velocity goes to zero exactly. Then we can use the area to find the maximum height.
  5. I can use the algebraic model for velocity to find the time when the ball has zero velocity. I can then create an algebraic model for position to get the position of the ball at this time.

My old students had to launch themselves up to step five of that progression from the beginning with an algebraic treatment. They had to figure out how the algebraic models related to the problems I gave them. They eventually figured it out, but it was a rough slog through the process. This was my approach for the AP physics students, but I used a mathematical approach for the regular students as well because I thought they could handle it. They did handle it, but as a math problem first. At the end, they returned to physics land and figured out what their answers meant.

There's a lot going on here that I need to process, and it could be that I'm too tired to see the major flaws in this approach. I'm constantly asking myself 'why' algebraic derivations are important. I still do them in some way, which means I still see some value, but the question remains. Abstracting concepts to general cases in physics is important because it is what physicists do. It's the same reason we should be modeling the scientific method and the modeling process with students in both science and math classes - it's how professionals work within the field.

Is it, however, how we should be exposing students to content?