Tag Archives: physics

Analyzing IB Physics Exam Language Programmatically

I just gave my IB physics students an exam consisting entirely of IB questions. On other exams and on homework, I've styled my own questions after IB ones. I've also looked at (and assigned) plenty of example questions from IB textbooks.

Just before the exam, students came to me with some questions on vocabulary that had never come up before. It could be that they hadn't looked at the problems this closely before this exam. What struck me was that their questions were not about physics words. They were about regular English words that, used in a physics context, can have a very different meaning than usual. For these students, who often use online translators to help in decoding problems, I suddenly saw this as a bigger problem than I had previously imagined. An example: a student asked what it meant for an object to be 'stationary'. This was easily explained, but the student shook her head and smiled because she had understood its other meaning. On the exam, I saw this same student making mistakes because she did not understand the word 'negligible', though we had talked about it before in the context of multiple ways to say that energy was conserved. Clearly, I need to do more, but I need more information about vocabulary.

It got me wondering - what non-content vocabulary occurs frequently enough on IB exams to warrant exposing students to it in some form?

I decided to use a computational solution because I didn't have time to go through multiple exams and circle words I thought students might not get. I wanted to know what words were most common across a number of recent exams.

Here's what I did:

  • I opened both paper 1 and paper 2 from May 2014, 2013, 2012 (two time zones for each) as well as both papers from November 2013. I cut and pasted the entire text from each test into a text file - over 25,000 words.
  • I wrote a Python script using the pandas library to do the heavy lifting. It was my first time using it, so no haters please. You can check out the code here; a minimal sketch of the idea appears after this list. The basic idea is that the pandas DataFrame object lets you count up the number of occurrences of each element in a list.
  • Part of this process was stripping out words that wouldn't be useful data. I took out the 100 most common words in English, as listed on Wikipedia. I also removed some other exam-specific words like instructions, names, and artifacts from cutting and pasting from a PDF file. Finally, I took out the command terms like 'define', 'analyze', 'state', and the like. This left the words I was looking for.
  • You can see the resulting data in this spreadsheet, the top 300 words sorted by frequency. On a quick run through, I marked the third column if a word was likely to appear in development of a topic. This list can then be sorted to identify words that might be worth including in my problem sets so that students have seen them before.
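
For the curious, here is a minimal sketch of the counting approach - not my exact script; the file name and the abbreviated word lists below are placeholders:

    import re
    import pandas as pd

    # Load the text cut and pasted from the exams (file name is a placeholder)
    with open('ib_exams.txt') as f:
        text = f.read().lower()

    words = re.findall(r"[a-z']+", text)

    # Strip out words that wouldn't be useful data (lists abbreviated here)
    common_english = {'the', 'of', 'and', 'a', 'to', 'in', 'is', 'that', 'it', 'was'}
    command_terms = {'define', 'analyze', 'state', 'describe', 'explain', 'outline'}
    exclude = common_english | command_terms

    filtered = [w for w in words if w not in exclude]

    # A pandas Series tallies the occurrences of each element for us
    counts = pd.Series(filtered).value_counts()
    print(counts.head(300))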

There are a number of words in this list that are mathematics terms. Luckily, I have most of these physics students for mathematics as well, so I'll be able to make sure those aren't surprises. The physics-related words (such as energy, which appeared 177 times) will be practiced through doing homework problems. Students tend to learn the content-specific vocabulary without too much trouble, as they learn those words in context. I also encourage students to create glossaries in their notebooks to help them remember these terms.

The bigger question - a much more difficult one - is what to do with the words that aren't as common. My preliminary ideas:

  • Make sure that I use this vocabulary repeatedly in my own practice problems. Insist that students write out the equivalent word in their own language once they understand the context in which it is used in physics.
  • Introduce and use vocabulary in the prerequisite courses as well, and share these words with colleagues, whether they are teaching the IB courses or not.
  • Share these words with the ESOL teachers as a list of general words students need to know. These (I think) cut across at least math and science courses, but I'm pretty sure many of them apply to language and social studies as well.

I wish I had thought to do this earlier in the year, but I wouldn't have had time to do it then, nor would I have thought it would be useful. As the semester draws to a close and I reflect, I'm finding the free time I'll have coming up to be really valuable moving forward.

I'm curious what you all think - help me out in the comments if you can.

Computation & CAPM - From Models to Understanding

I wrote last spring about beginning my projectile motion unit with computational models for projectiles. Students focused on using the computer model alone to solve problems, which led into a discussion of a more efficient approach with less trial and error. The success of this approach made me wonder about introducing the much simpler particle model for constant acceleration (abbreviated CAPM) using a computational model first, and then extending the patterns we observed to more general situations.

We started the unit playing around with the Javascript model located here and the Geogebra data visualizer here.

The first activity was to take some position data for an object and model it using the CAPM model. I explained that the computational model was a mathematical tool that generated position and velocity data for a particle that traveled with constant acceleration. This was a tedious process of trial and error by design.
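
A data generator along these lines is simple to sketch in Python. This is my rough equivalent, not the actual Javascript model linked above, and the parameter names are mine:

    def capm_data(x0, v0, a, dt=0.1, steps=20):
        """Generate (t, x, v) data for a particle with constant acceleration."""
        for i in range(steps + 1):
            t = i * dt
            x = x0 + v0 * t + 0.5 * a * t**2   # position under constant acceleration
            v = v0 + a * t                     # linear model for velocity
            yield t, x, v

    # Guess parameters, compare the table to the data, adjust, repeat
    for t, x, v in capm_data(x0=0, v0=5, a=-9.8):
        print(f"t={t:4.1f}  x={x:7.2f}  v={v:6.2f}")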

The purpose here was to show that if position data for a moving object could be described using a CAPM model, then the object was moving with constant acceleration. The tedium drove home the fact that we needed a better way. We explored some different data sets for moving objects given as tables and graphs and discussed the concepts of acceleration and using a linear model for velocity. We recalled how we can use a velocity vs. time graph to find displacement. That linear model for velocity, at this point, was the only algebraic concept in the unit.

In previous versions of my physics course, this was where I would nudge students through a derivation of the constant acceleration equations using what we already understood. Algebra heavy, with some reinforcement from the graphs.

This time around, my last few lessons have all started using the same basic structure:

  1. Here's some graphical or numerical data for position versus time or a description of a moving object. Model it using the CAPM data generator.
  2. Does the CAPM model apply? Have a reason for your answer.
  3. If it does, tell me what you know about its movement. How far does it go? What is its acceleration? Initial velocity? Tell me everything that the data tells you.

For our lesson on free fall, we started with the modeling question of what we would measure to see if CAPM applies to a falling object. We then used a spark timer (which I had never used before, but found hidden in a cabinet in the lab) to measure the position of a falling object.

They took the position data, modeled it, and got something similar to 9.8 m/s² downward. They were then prepared to say that the acceleration was constant and downwards while it was moving down, but different when it was moving up. They quickly figured out that they should verify this, so they made a video and used Logger Pro to analyze it and see that indeed the acceleration was constant.
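
If you want to see how the acceleration falls out of position data like this without the guess-and-check, a quadratic fit does it in a few lines. The data values below are made up for illustration, not the students' actual measurements:

    import numpy as np

    # Hypothetical spark-timer data: time (s) and fall distance (m)
    t = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
    y = np.array([0.000, 0.014, 0.052, 0.114, 0.200, 0.310])

    # Fit y = (a/2)t^2 + v0*t + y0; acceleration is twice the leading coefficient
    coeffs = np.polyfit(t, y, 2)
    print("acceleration ≈", 2 * coeffs[0], "m/s²")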

The part that ended up being different was the way we looked at 1-D kinematics problems. I still insisted that students use the computer program to model the problem and use the results to answer the questions. After some coaching, the students were able to do this, but found it unsatisfying. When I assigned a few of these for students to do on their own, they came back really grumpy. It took a long time to get everything in the model to work just right - never on the first try did they come up with an answer. Some figured out that they could directly calculate some quantities like acceleration, which reduced the iteration a bit, but it didn't feel right to them. There had to be a better way.

This was one of the problems I gave them. It took a lot of adjustment to get the model to match what the problem described, but eventually they got it.

Once the values were in the CAPM program and it gave us this data, we looked at it together to answer the question. Students started noticing things:

  • The maximum height is half of the acceleration.
  • The maximum height happens halfway through the flight.
  • The velocity goes to zero halfway through the flight.

Without any prompting, students saw from the data and the graph that we could model the ball's velocity algebraically and find a precise time when the ball was at maximum height. This then led to students realizing that the area of the triangle gave the displacement of the ball between being thrown and reaching maximum height.
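
That reasoning amounts to just a few lines once the velocity model is written down. The initial speed here is a stand-in, not the value from the actual problem:

    # Linear velocity model for a ball thrown straight up: v(t) = v0 - g*t
    v0 = 30.0                 # initial upward speed in m/s (made up)
    g = 9.8

    t_top = v0 / g            # maximum height occurs where v(t) = 0
    h_max = 0.5 * v0 * t_top  # area of the triangle under the v-t graph

    print(t_top, h_max)       # about 3.1 s and 45.9 m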

This is exactly the sort of reasoning that students struggle to do when the entire treatment is algebraic. It's exactly the sort of reasoning we want students to be doing to solve these problems. The computer model doesn't do the work for students - it shows them what the model predicts, and leaves the analysis to them.

The need for more accuracy (which comes only from an algebraic treatment) then comes from students being uncomfortable with an answer that is between two values. The computation builds a need for the algebraic treatment and then provides some of the insight for a more generalized approach.

Let me also be clear about something - the students are not thrilled about this. I had a near mutiny during yesterday's class when I gave them a standards quiz on the constant acceleration model. They weren't confident during the quiz, most of them wearing gigantic frowns. They don't like the uncertainty in their answers, they don't like lacking a clear roadmap to a solution, they don't like being without a single formula they can plug into to find an answer. They said these things even after I graded the quizzes and they learned that the results weren't bad.

I'm fine with that. I'd rather that students figure out pathways to solutions through good reasoning than blindly plug into a formula. I'd rather that all of the students have a way in to solving a problem, including those who lack strong algebraic skills. Matching a model to a problem or situation is not a complete crapshoot. They find patterns, figure out ways to estimate initial velocity or calculate acceleration, and solidify one parameter of the model before adjusting another.

Computational models form one of the only ways I've found that successfully allows students of different skill levels to go from concrete to abstract reasoning in the context of problem solving in physics. Here's the way the progression goes up the ladder of abstraction for the example I showed above:

  1. The maximum height of the ball occurred at that time. Student points to the graph.
  2. The maximum height of the ball happened when the velocity of the ball went to zero in this situation. I'll need to adjust my model to find this time for different problems.
  3. The maximum height of the ball always occurs when the velocity of the ball goes to zero. We can get this approximate time from the graph.
  4. I can model the velocity algebraically and figure out when the ball's velocity goes to zero exactly. Then we can use the area to find the maximum height.
  5. I can use the algebraic model for velocity to find the time when the ball has zero velocity. I can then create an algebraic model for position to get the position of the ball at this time.

My old students had to launch themselves up to step five of that progression from the beginning with an algebraic treatment. They had to figure out how the algebraic models related to the problems I gave them. They eventually figured it out, but it was a rough slog through the process. This was my approach for the AP physics students, but I used a mathematical approach for the regular students as well because I thought they could handle it. They did handle it, but as a math problem first. At the end, they returned to physics land and figured out what their answers meant.

There's a lot going on here that I need to process, and it could be that I'm too tired to see the major flaws in this approach. I'm constantly asking myself why algebraic derivations are important. I still do them in some way, which means I still see some value, but the question remains. Abstracting concepts to general cases in physics is important because it is what physicists do. It's the same reason we should be modeling the scientific method and the modeling process with students in both science and math classes - it's how professionals work within the field.

Is it, however, how we should be exposing students to content?

Electric Circuits - starting at the end.

We only have a couple weeks of class left, and there's not enough time to do the traditional Physics B sequence that I've used for electricity with my seniors, who asked for a non-AP physics course at the beginning of the year. Normally I do electrostatics for a couple of weeks, talk about electric fields and potential, and then use these concepts to motivate a treatment of electric circuits. I could have stretched that out, but given my freedom in pace and curriculum, I decided to switch everything around.

This year, I started at the end of my sequence to address a pretty big issue I've always seen with my students. As much as they talk about charging (mobile devices, laptops) and basic energy conservation such as turning lights off, they have a pretty fuzzy understanding of electricity and the origins of the energy they use every day. Some of the last topics in my traditional sequence involve real voltage sources, batteries, and internal resistance - the "real" electronics that you need to know if you want to actually build a circuit. You know, the actually interesting part.

There's nothing interesting in looking at a circuit and calculating the current through an arbitrary resistor. It took me a while to come to this realization because I still have some brain cells clinging to the "theory first, application second" philosophy, the same brain cells I've been working to silence this year. These are the sorts of things I want my students to learn to do:

  • Build a charger for an iPod using a solar panel and some circuit components. What is involved in charging a battery in a way that the battery will actually charge up without blowing Nickel and Cadmium all over the classroom?
  • Create a circuit that lights up an LED with the right current so it can outlast an incandescent bulb. (A sketch of the key calculation follows this list.)
  • Look at an AC adapter that isn't made for a given device, and modify it so that it does work. The fact that it only costs $5 to buy a new one is irrelevant when you compare it to the feeling you get when you realize this is not hard to do. (Thanks Dad!)
  • Generate electricity. Figure out how hard you have to physically work to run your laptop.
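
For the LED project, the core calculation is a single application of Ohm's law across the current-limiting resistor. Here's a minimal sketch; all of the values are assumptions, not measurements from my classroom:

    # Choosing a current-limiting resistor for an LED (all values assumed)
    v_supply = 5.0   # supply voltage, volts
    v_led = 2.1      # rough forward voltage drop of a green LED, volts
    i_led = 0.020    # target current, amps (20 mA)

    r = (v_supply - v_led) / i_led   # Ohm's law across the resistor
    print(f"use roughly a {r:.0f} ohm resistor")   # about 145 ohms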

This is what we did on day one:

I gave them a solar panel, some small DC motors and LEGO motors, a stripped down version of our FIRST Tech Challenge robot, some lemons, clip leads, and different kinds of wire, and said I wanted them to use these tools to generate the highest voltage they could. There was also a bag of green LEDs on the table there for them to play with. There was a flurry of activity among my five students as they remembered something vaguely from chemistry about sticking different metals into a lemon, and needing to connect one to another in a certain way. They did so and saw that there was a bit of a voltage from the lemons they had connected together, but that there wasn't much there.

I then showed them one of the LEGO motors and had them see what happened on a connected voltmeter when the axle was rotated. They were amazed that this also generated an electrical potential. This turned immediately into a contest of rotating the motor as quickly as possible and seeing the result on the voltmeter. One grabbed an LED and hooked it up and saw that it lit up.

They then turned to the robot and its big beefy motors. They found I had a set of LED lights in my parts box and asked to use it. Positive results:

The solar panel was also a big hit, as it resulted in us going outside. They were impressed with how "much" electricity was generated after seeing the voltmeter display over 15 volts - they were then surprised to see that it could light the LED display, but not run any of the motors they tried.

At this point it was the end of the class block, so we put everything away and went on with our day.

Some of the reasons I finished the day with a smile:

  • There was never a moment when I had to tell any of the students to pay attention and get involved in the activity.  The variety of objects on the table and the challenge were enough to get them playing and interacting with each other.
  • While I did show them how to play with one of the tools (i.e. the DC motor acting as a generator), they quickly figured out how they might transfer this idea to the other items I made available.
  • They made bits of progress toward the understanding that voltage alone was not what made things work. This is a big one.

The next day's class used the PhET circuit construction kit to explore these ideas further in the context of building and exploring circuits. We had some fantastic conversations about the voltage of batteries, conventional vs. electron current, and eventually connected the idea of Ohm's law (which was floating around in their heads from middle school science) to the observations they made.

I was struggling for a while with how to approach electricity because I have always followed the traditional sequence. In the end, I realized that I really didn't want to go through electrostatics - I wasn't excited to teach it this time around. I also realized that I didn't need to in order to teach my students what I really wanted them to learn about electricity.

I think this approach will help them realize that electricity is not magic. They can learn to control it. I admit that doing so can be dangerous and expensive if one doesn't know what he or she is doing. That said, a little basic knowledge goes a long way, even in today's world of nanometer-sized transistors.

Tomorrow we attempt the LED lighting assignment - feel free to share your comments or suggestions!

Geometric Optics - hitting complexity first

I started what may end up being the last unit in physics with the idea that I would do things differently compared to my usual approach. I taught optics as part of Physics B for a few years, and as many things tend to be in that rushed curriculum, it was fairly traditional. Plane mirrors, ray diagrams, equations. Snell's law, lenses, ray tracing, equations. This was followed by a summary lesson shamefully titled "Mirrors and lenses are both similar and different", a tribute to the unfortunate starter sentence for many students' answers to compare-and-contrast questions that always got my blood boiling.

This time, given the absence of any time pressure, there has been plenty more space to play. We played, with diagrams and debate, with the question of how big a plane mirror must be for someone to see his or her whole body. We messed with a quick reflection diagram of a circular mirror I threw together in Geogebra to show that light seems to be brought to a point under certain conditions. Granted, I did make suggestions on the three rays that could be used in a ray diagram to locate an image - that was a bit of direct instruction - but today, when the warm-up involved just drawing some diagrams, they had an entry point to start from.

After drawing diagrams for some convex and concave mirrors, I put a set of mirrors in front of them and asked them to set up the situation described by their diagrams. They made the connection to the terms convex and concave from the labels printed on the flimsy paper envelopes they were shipped in - no big introduction of the vocabulary was needed first, and it would have broken the natural flow of their work. They observed images getting magnified and minified, and forming inverted or upright. They gasped when I told them to hold a blank sheet of paper above a concave mirror pointed at one of the overhead lights and saw the clear edges of the fluorescent tubes projected on the paper surface. They poked and stared, mystified, while moving their faces forward and backward through the focal point to find the exact location where their face flipped upside down.

After a while with this, I took out some lenses. Each student got two to play with. They instantly started holding them up to their eyes and moving them away, noticing the connections to their observations with the mirrors. One immediately noticed that one lens flipped the room when held at arm's length but didn't when it was close, and that another always made everything smaller, as the convex mirror did. I asked them to use the terms virtual and real, and they were right on. They were again amazed when the view outside was clearly projected through a convex lens held in front of a student's notebook.

I hope I never take for granted how great this small group of students is - I appreciate their willingness to explore and humor me when I am clearly not telling them everything that they need to know to analyze a situation. That said, there is really something to the backwards model of presenting complexity up front, and using that complexity to motivate students to want to understand the basics that will help them explain what they observe. Now that my students see that the lenses are somehow acting like mirrors, it is so much easier to call upon their curiosity to motivate exploring why that is. Now there is a reason for Snell's law to be in our classroom.

Without planting a hint of why anyone aside from overexcited physics teachers would give a flying fish about normals and indices of refraction, it becomes yet one more fact to remember. There's no mystery. To demand that students go through the entire process of developing physics from basic principles betrays the reality that reverse engineering a finished product can be just as enlightening. I would wager that few people read an instruction manual anymore. Even the design of help in software has changed from a linear list of features in one menu after another to a web of wiki-style tidbits of information on how to do things. Our students are used to managing complexity to do things that are not school related, things that are a lot more real world to them. There is no reason school world has to be different from real world in how we explore and approach learning new things.

Relating modeling & abstraction on two wheels.

Over the course of my vacation in New Zealand, I found myself rethinking many things about the subjects I teach. This wasn't really because I was actively seeing the course concepts in my interactions on a daily basis, but rather because the sensory overload of the new environment just seemed to shock me into doing so.

One of these ideas is the balance between abstraction and concrete ideas. Being able to physically interact with the world is probably the best way to learn. I've seen it over and over again in my own classes and in my own experience. There are many situations in which the easiest way to figure something out is to just go out and do it. I tried this the first time I wanted to learn to ride a bicycle - I knew there was one in the garage, so I decided one afternoon to go and try it out. I didn't need the theory first to ride a bicycle - I just needed to try.

Of course, my method of trying it was pretty far off - as I understood the problem, riding a bicycle first required that you get the balancing down. So I sat for nearly an hour rocking from side to side, trying to balance.

My dad sneaked into the garage to see what I was up to, pretty quickly figured it out, and started laughing. He applauded my initiative in wanting to learn how to do it, but told me there was a better way to learn. In other words, initiative alone is not enough - a reliable source of feedback is also necessary for solving a problem by brute force. That said, with both of these in hand, this method will often beat out a more theoretical approach.

This also came to mind when I read a comment from a Calculus student's portfolio. I adjusted how I presented the applications of derivatives a bit this year to account for this issue, but it clearly wasn't good enough. This is what the student said:

Something I didn't like was optimisation. This might be because I wasn't there for most of the chapter that dealt with it, so I didn't really understand optimisation. I realise that optimisation applies most to real life, but some of the examples made me think that, in real life, I would have just made the box big enough to fit whatever needed to fit inside, or by the time I'd be done calculating where I had to swim to and where to walk to I could already be halfway there.

Why sing the praises of a mathematical idea when, in the real world, no logical person would choose to use it to solve a problem?

This idea appeared again when reading The Mathematical Experience by Philip J. Davis and Reuben Hersh during the vacation. On page 302, they make the distinction between analytical mathematics and analog mathematics. Analog math is what my Calculus student is talking about, using none of "the abstract symbolic structures of 'school' mathematics." The shortest distance between two points is a straight line - there is no need to prove this, it is obvious! Any mathematical rules you apply to this make the overall concept more complex. On the other hand, analytic mathematics is "hard to do...time consuming...fatiguing...[and] performed only by very few people," but it often provides insight and efficiency where there is no intuition or easy answer by brute force. The tension between these two approaches is what I'm always battling in my mind as I swing wildly from exploration to direct instruction to peer instruction to completely constructivist activities in my classroom.

Before I get too theoretical and edu-babbly, let's return to the big idea that inspired this post.

I went mountain biking for the first time. My wife and I love biking on the road, and we wanted to give it a shot, figuring that the unparalleled landscapes and natural beauty would be a great place to learn. It did result in some nasty scars (on me, not her, and mostly on account of the devilish effects of combining gravity, overconfidence, and a whole lot of jagged New Zealand mountainside) but it was an incredible experience. As our instructors told us, the best way to figure out how to ride a mountain bike down rocky trails is to try it, trust intuition, and to listen to advice whenever we could. There wasn't any way to really explain a lot of the details - we just had to feel it and figure it out.

As I was riding, I could feel the wind flowing past me and could almost visualize the energy I carried by virtue of my movement. I could look down and see the depth of the trail sinking below me, and could intuitively feel how the potential energy stored by the distance between me and the center of the Earth was decreasing as I descended. I had the upcoming unit on work and energy in physics in the back of my mind, and I knew there had to be some way to bring together what I was feeling on the trail to the topic we would be studying when we returned.

When I sat down to plan exactly how to do this, I turned to the great sources of modeling material that I so appreciate being able to access, namely from Kelly O'Shea and the Modeling center at Arizona State University. In looking at this material I have found ways this year to adapt what I have done in the past to make the most of the power of students thinking and learning with models. I admittedly don't have it right, but I have really enjoyed thinking about how to go through this process with my students. As I sat and stared at everything in front of me, however, there was conflict between the way I had previously used the abstract mathematical models of work, kinetic energy, and potential energy in my lessons and the way I wanted students to intuitively feel and discover what the interaction of these ideas meant. How much of the sense of the energy changes I felt as I was riding was because of the mathematical model I have absorbed over years of being exposed to it?

The primary issue that I struggle with at times is the relationship between the idea of the conceptual model as being distinctly different from mathematics itself, especially given the fact that one of the most fundamental ideas I teach in math is how it can be used to model the world. The philosophy of avoiding equations because they are abstractions of the real physics going on presumes that there is no physics in formulating or applying the equations. Mathematics is just one type of abstraction.

A system schema is another abstraction of the real world. It also happens to be a really effective one for getting students to successfully analyze scenarios and predict what will subsequently happen to the objects. Students can see the objects interacting and can put together a schema to represent what they see in front of them. Energy, however, is an abstract concept. It's something you know is present when observing explosions, objects glowing due to high temperature, baseballs whizzing by, or a rock loaded in a slingshot. You can't, however, observe or measure energy in the same way you can measure a tension force. It's hard to really explain what it is. Can a strong reliance on mathematics to bring sense to this concept work well enough to give students an intuition for what it means?

I do find that the way I have always presented energy is pretty consistent with what is described in some of the information on the modeling website - namely, thinking about energy storage in different ways. Kinetic energy is "stored" in the movement of an object, and can be measured by measuring its speed. Potential energy is "stored" by the interaction of objects through a conservative force. Work is a way for one object to transfer energy to another through a force interaction, and is something that can be indicated from a system schema. I haven't used energy pie diagrams or bar charts or energy flow diagrams, but I have used things like them in my more traditional approach.

The main difference in how I have typically taught this, however, is that mathematics is the model that I (and physicists) often use to make sense of what is going on with this abstract concept of energy. I presented the equation definition of work at the beginning of the unit as a tool. As the unit progressed, we explored how that tool can be used to describe the various interactions of objects through different types of forces, the movement of the objects, and the transfer of energy stored in movement or in these interactions. I have never made students memorize equations - the bulk of what we do is talk about how observations lead to concepts, concepts lead to mathematical models, and models can then be tested against what is observed. Equations are mathematical models. They approximate the real world the same way a schema does. This is the opposite of the modeling instruction method, and admittedly takes away a lot of the potential for students to do the investigating and experimentation themselves. I have not given this opportunity to students in the past primarily because I didn't know about modeling instruction until this past summer.
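
As a small example of what "testing the model against what is observed" can look like, the work definition and the kinetic energy expression should agree with each other for a constant net force. The numbers here are invented for illustration:

    # Work-energy consistency check for a particle pushed from rest
    m, F, d = 2.0, 10.0, 3.0         # kg, N (net force along the motion), m

    W = F * d                        # work done by the net force
    v = (2 * (F / m) * d) ** 0.5     # final speed from constant-acceleration kinematics
    delta_KE = 0.5 * m * v**2        # change in kinetic energy, starting from rest

    print(W, delta_KE)               # both 30.0 J - the equations hang together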

I have really enjoyed reading the discussions between teachers about the best ways to transition to a modeling approach, particularly in the face of the knowledge (or misinformation) students might already have. I was especially struck by a comment I read in one of the listserv articles by Clark Vangilder (25 Mar 2004) on this topic of the relationship between mathematical models and physics:

It is our duty to expose the boundaries between meaning, model, concept and representation. The Modeling Method is certainly rich enough to afford this expense, but the road is long, difficult and magnificent. The three basic modeling questions of "what do you see...what can you measure...and what can you change?" do not address "what do you mean?" when you write this equation or that equation...The basic question to ask is "what do you mean by that?," whatever "that" is.

Our job as teachers is to get students to learn to construct mental models for the world around them, help them test their ideas, and help them understand how these models do or do not work. Pushing our students to actively participate in this process is often difficult (both for them and for us), but is inevitably more successful in getting them to create meaning for themselves on the content of what we teach. Whether we are talking about equations, schema, energy flow diagrams, or discussing video of objects interacting with each other, we must always be reinforcing the relationship between any abstractions we use and what they represent. The abstraction we choose should be simple enough to correctly describe what we observe, but not so simple as to lead to misconception. There should be a reason to choose this abstraction or model over a simpler one. This reason should be plainly evident, or thoroughly and rigorously explored until the reason is well understood by our students.

Rubrics & skill standards - a rollercoaster case study.

I gave a quiz not long ago with the following question, adapted from the homework:

The value of 5 points for the problem came from the following rubric I had in my head while grading it:

  • +1 point for a correct free body diagram
  • +1 for writing the sum of forces in the y-direction and setting it equal to ma_y
  • +2 for recognizing that gravity was the only force acting at the minimum speed (see the sketch after this list)
  • +1 for the correct final answer with units
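
If the question was the classic rollercoaster-loop setup the rubric suggests, the 2-point step is the minimum-speed condition at the top of the loop: with the normal force gone, gravity alone supplies the centripetal force. A sketch, with a made-up radius:

    import math

    # At minimum speed, N = 0, so m*g = m*v**2 / r and the mass cancels
    g, r = 9.8, 12.0                    # m/s², m (hypothetical loop radius)
    v_min = math.sqrt(g * r)
    print(f"v_min ≈ {v_min:.1f} m/s")   # about 10.8 m/s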

Since learning to grade Regents exams back in New York, I have always needed some sort of rubric like this to grade anything. Taking off random quantities of points without being able to consistently justify a 1- versus 2-point deduction just doesn't seem fair or helpful in the long run for students trying to learn how to solve problems.

As I move ever more closely toward implementing a standards-based grading system, using a clearly defined rubric in this way makes even more sense, since, ideally, questions like this allow me to test student progress relative to standards. Each check mark on this rubric is really a binary statement about a student relative to the following standards-related questions:

  • Does the student know how to properly draw a free body diagram for a given problem?
  • Can a student properly apply Newton's 2nd law algebraically to solve for unknown quantities?
  • Can a student recognize conditions for minimum or maximum speeds for an object traveling in a circle?
  • Does a student provide answers to the question that are numerically consistent with the rest of the problem and include units?

This makes it easy to have a conversation with the student about what he/she does or does not understand about a problem. It becomes less of a conversation about 'not getting the problem' and more about not knowing how to draw a free body diagram in a particular situation.

The other thing I realize about doing things this way is that it changes the actual process of students taking quizzes when they are able to retake them. Normally during a quiz, I answer no questions at all - it is supposed to be time for a student to answer a question completely on their own in a test-like situation. In the context of a formative assessment, though, I can see how this philosophy can change. Today I had a student who had done the first two parts correctly but was stuck.


Him: I don't know how to find the normal force. There's not enough information.


Me: All the information you need is on the paper. [Clearly this was before I flip-flopped a bit.]


Him: I can't figure it out.

I decided, with this rubric in my head, that if I was really using this question to assess the student on these five points, I could give the student what was missing and still assess the remaining three points. After telling the student that the normal force was zero, the student proceeded to finish the rest of the problem correctly and received a score of 3/5 on this question. That seems to be a good representation of what the student knew in this particular case.

Why this seems slippery and slopey:

  • In the long term, he doesn't get this sort of help. On a real test in college, he isn't getting this help. Am I hurting him in the long run by doing this now?
  • Other students don't need this help. To what extent am I lowering my standards by giving him information that others don't need to ask for?
  • I always talk about the real problem of students not truly seeing material on their own until the test. This is why there are so many students who say they get it on the homework, but not on the test - in reality, when these students 'got it' while working on homework, they usually had friends, the teacher, example problems, and a recent treatment of the concept in class on their side.

Why this seems warm and fuzzy, and most importantly, a good idea in the battle to helping students learn:

  • Since the quizzes are formative assessments anyway, it's a chance to see where he needs help. This quiz question gave me that information and I know what sort of thing we need to go over. He doesn't need help with FBDs. He needs help knowing what happens in situations where an object is on the verge of leaving uniform circular motion. This is not a summative assessment, and there is still time for him to learn how to do problems like this on his own.
  • This is a perfect example of how a student can learn from his/her mistakes. It's also a perfect example of how targeted feedback helps a student improve.
  • For a student stressed about assessments anyway (as many tend to be) this is an example of how we might work to change that view. Assessments can be additional sources of feedback if they are carefully and deliberately designed. If we are to ever change attitudes about getting points, showing students how assessments are designed to help them learn instead of being a one-shot deal is a really important part of this process.

To be clear, my students are given one-shot tests at the end of units. It's how I test retention and the ability to apply individual skills when everything is on the table, which I think is a distinctly different animal from the small-scale skills quizzes I give and that students can retake. I think those tests are important because I want students to be able both to apply the skills I give them and to decide which skills are necessary for solving a particular problem.

That said, it seems like a move in the right direction to have tried this today. It is yet one more way to start a conversation with students to help them understand rather than to get them points. The more I think about it, the more I feel that this is how learning feels when you are an adult. You try things, get feedback and refine your understanding of the problem, and then use that information to improve. There's no reason learning has to be different for our students.

From projectile motion to orbits using Geogebra

Watching the launch of the Mars Science Laboratory last night inspired me to move on to investigating gravity instead of doing banked curve problems (which are cool, but take a considerable investment of algebra to get into).

The thing that took me a long time to wrap my head around when I first studied physics in high school was how a projectile really could end up orbiting the Earth. The famous Newton drawing of the cannon with successively higher launch velocities made sense. I just couldn't picture what the transition looked like. Parabolas and circles (and ellipses, for that matter) are fundamentally different shapes, and at the time the fact that they are all conic sections was too abstract a concept for me. Eventually I just accepted that if you shoot a projectile fast enough tangentially to the surface of the Earth, it will never land, but I wanted to see it.

Fast forward to this afternoon and my old friend Geogebra. There had to be a way to give my physics students a chance to play with this and perhaps discover the concept of orbits without my telling them about it first.

You can download the sketch I put together here.

The images below are the sorts of things I am hoping my students will figure out tomorrow. From projectile motion:

...to the idea that it is still projectile motion when viewed along with the curvature of the planet:

Continuing to adjust the values yields interesting results that suggest the possibility of how an object might orbit the Earth.


If you open the file, you can look at the spreadsheet view to see how this was put together. It uses Newton's law of gravitation and Euler's method to calculate the trajectory. You can also change values of the variable deltat to predict the movement of the projectile over longer time intervals. There is no meaning to the values of m, v0, or height - thankfully, the laws of nature don't care about units.
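
For anyone who prefers code to spreadsheet cells, here is a rough Python equivalent of what the sketch does. The constants are as arbitrary as the sketch's unitless values, and deltat plays the same role as the variable in the file:

    import math

    # Euler's method with Newton's law of gravitation (unitless toy values)
    G, M, R = 1.0, 1000.0, 10.0   # gravitational constant, planet mass, planet radius
    x, y = 0.0, R + 1.0           # launch from just above the surface
    vx, vy = 9.5, 0.0             # tangential launch speed - the value to play with
    deltat = 0.01

    for step in range(20000):
        r = math.hypot(x, y)
        if r < R:                 # the projectile has landed
            break
        a = -G * M / r**2         # acceleration, directed back toward the center
        vx += a * (x / r) * deltat
        vy += a * (y / r) * deltat
        x += vx * deltat
        y += vy * deltat

    print(step, x, y)

Crank the launch speed up gradually and the landing point recedes until the trajectory closes into an orbit - Newton's cannon, numerically.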

As is always the case, feel free to use and adjust this, as well as make it better. My only request - let me know what you do with it!

Testing physics models using videos & Tracker

I've gotten really jealous reading about how some great teachers have stepped up and used programming as a learning tool in their classes. John Burk's work on using vPython to do computational modeling with his students is a great way to put together a virtual lab for students to test their theories and understand the balanced force model. I also like Shawn Cornally's progression of tasks using programming in Calculus, which ultimately enables his students to really understand concepts and algorithms once they get the basic mechanics.

I've been looking for ways to integrate simple programming tasks into my Algebra 2 class, and I think I'm sold on Python. Many of my students run Chrome on their laptops, and the Python Shell app is easily installed on their computers through the app store. It would be easy enough to ask them to enter code I post on the wiki and then modify it as a challenge at the beginning or end of class. It's not a formal programming course at all, but the only way I really got interested in programming was when I was using it to do something with a clear application. I'm just learning Python now myself, so I'm going to need a bit more work on my own before I'll feel comfortable troubleshooting student programs. I want to do it, but I also need some more time to figure out exactly how I want to do it.

In short, I am not ready to make programming more than just a snack in my classes so far. I have, however, been a Tracker fan ever since I first saw it being used in a lab at the NASA Glenn Research Center ten years ago. Back then, it was a simple program that allowed you to import a video, click frame by frame on the location of objects, and export a table of the position values together with numerically differentiated velocity and acceleration. The built-in features have grown considerably since then, but numerical differentiation being what it is, it's really hard to get excellent velocity or acceleration data from position data. I had my students create their own investigations a month ago and was quite pleased with how they ran with it and made it their own. They came to this same conclusion, though - noisy data does not a happy physics student make.
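
Here is a quick illustration of that noise problem (the frame rate and click error are assumed): position errors of a couple of millimeters turn into much larger velocity errors once you divide by a small frame interval.

    import numpy as np

    dt = 1 / 30                               # a typical video frame interval, s
    t = np.arange(0, 1, dt)
    x_true = 0.5 * 9.8 * t**2                 # ideal free-fall positions, m
    x_meas = x_true + np.random.normal(0, 0.002, t.size)   # ~2 mm click error

    v = np.gradient(x_meas, dt)               # numerically differentiated velocity
    print(np.std(v - np.gradient(x_true, dt)))   # noise on the order of 0.05 m/s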

I wanted to take the virtual laboratory concept of John's vPython work (such as the activities described here) for my students, but not have to invest the time in developing my students' Python ability because, as I mentioned, I barely qualify myself as a Python novice. My students spent a fair amount of time with Tracker on the previous assignment and were comfortable with the interface. It was at this point that I really decided to look into one of the most powerful capabilities of the current version of Tracker: the dynamic particle model.

My students have been working with Newton's laws for the past month. After discovering the power of the dynamic model in Tracker, I thought about whether it could be something that would make sense to introduce earlier in the development of forces, but I now don't think it makes sense to do so. It does nothing for the notion of balanced forces. Additionally, some level of intuition about how a net force affects an object is important for adjusting a model to fit observations. I'm not saying you couldn't design an inquiry lab that would develop these ideas, but I think hands-on and actual "let me feel the physics happening in front of me" style investigation is important in developing the models - this is the whole point of modeling instruction. Once students have developed their own model for how unbalanced forces work, then handing them this powerful tool to apply their understanding might be more meaningful.

The idea behind using the dynamic particle model in Tracker is this: any object being analyzed in video can be reduced to analyzing the movement of a particle in response to forces. The free body diagram is the fundamental tool used to analyze these forces and relate them to Newton's laws. The dynamic particle model is just a mathematical way to combine the forces acting on the particle with Newton's second law. Numerical integration of acceleration then produces velocity and positions of the particle as functions of time. Tracker superimposes these calculated positions of the particle onto the video frames so the model and reality can be compared.
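
In spirit, the dynamic model does something like the following sketch. This is not Tracker's actual code, just the idea, with placeholder values:

    # Dynamic particle model: net force -> acceleration -> velocity -> position
    m = 0.05                # kg (placeholder)
    Fy = -0.49              # N - the force magnitude is one knob to adjust
    y, vy = 1.2, 0.0        # initial position (m) and velocity (m/s) - the other knobs
    dt = 1 / 30             # one video frame

    model_y = []
    for frame in range(30):
        ay = Fy / m         # Newton's second law
        vy += ay * dt       # numerically integrate acceleration to get velocity
        y += vy * dt        # ...and velocity to get position
        model_y.append(y)

    # Compare model_y frame by frame with the positions clicked in the video;
    # if they drift apart, adjust Fy or the initial conditions and run it again.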

This is such a powerful way for students to see whether their understanding of the physics of a situation is correct. Instead of asking students to check the order of magnitude or the vague question "is it reasonable?", you ask them whether the model stops at the same point in the video as the object being modeled. Today, I actually didn't even need to ask this question - the students knew not only that they had to change something, but they figured out which aspect of the model (initial velocity or force magnitude) they needed to change.

It's actually a pretty interesting progression of things to do and discuss with students.

  • Draw a system schema for the objects shown in the video.
  • Identify the object(s) that you want to model from the video. Draw a free body diagram.
  • Decide which forces from the diagram you CAN model. Forces you know are constant (even if you don't know the magnitude) are easy to model. If there are other forces, you don't have to say "ignore them" arbitrarily as the teacher because you know they aren't important. Instead, you encourage students to start with a simple model and adjust the parameters to match the video.
  • If the model cannot be made to match the video, no matter what the parameter values, then they understand why the model might need to be adjusted. If the simple model is a close enough match, the discussion is over. This way we can stop having our students say "my data is wrong because..." and instead have them really think about whether the fault is with the data collection or with the model they have constructed!
  • Repeat this process of comparing and adjusting the model to match the observations until the two agree within a reasonable amount.

Isn't the habit of comparing our mental models to reality the sort of thing we want our students to develop and possess long after they have left our gradebook?

It's so exciting to be able to hand students this new tool, give them a quick demo of how to make it work, and then set them off to model what they observe. The feedback is immediate. There's some frustration, but it's the kind of frustration that builds intuition for other situations. I was glad to be there to witness it so we could troubleshoot together, rather than over-plan and structure the activity too much.

Here is the lab I gave my students: Tracker Lab - Construction of Numerical Models. If you are interested in an editable version, let me know. I have also posted the other files at the wiki page. Feel free to use anything if you want to try it with your students.

I am curious about the falling tissue video and what students find - I purposely did not do that part myself. It took a lot of willpower not to even try. How often do we ask students to answer questions we don't know the answer to? Aren't those the most interesting ones?

I promise I won't break down and analyze it myself. I've got some Python to learn.

Dare to be silent.

I made a promise to myself today - I was going to force the physics class to speak. It isn't that they don't answer questions and participate, it's that usually they seem to do that to please me. Sometimes they will explain ideas to each other and compare answers, but it never works as beautifully as I want it to.

So today I told them I wasn't going to talk about a problem I gave them. They were. And then I sat on an empty table and waited. It was really difficult for me. Eventually someone asked someone else for an answer. I stayed quiet. Then another person nodded and agreed and then said nothing. I stayed quiet. Then someone disagreed.

Full disclosure - at this point I gestured wildly, but still stayed quiet.

After about five minutes of awkward silence punctuated with half explanations that trailed off, something happened - I don't know what the trigger was because if I did I would bottle it and sell it at educational conferences - a full discussion was suddenly underway. I was so amazed that I almost didn't think to capture it - thankfully I did get the following part:

Especially cool to see this knowing that English is not the first language of the students speaking.

I'm going to try to do this more often, though I again must point out that it was incredibly difficult working through the silence. The students in the end decided they had something to say, so they shared their thoughts with each other. I did nothing but wait for it to happen.

Physics #wcydwt - Indirect Measurement

While cleaning up after robotics class today, I noticed, poking out from under one of my many piles of papers, a statics problem involving an object hanging from a couple of wires. We had looked at this question earlier in the week in class. A couple of students were out for a volleyball tournament in Beijing, so I wanted to do something hands-on and multimedia-esque that the missing students wouldn't feel too upset about missing, but could somehow still be involved and connected with the class work from today.

I realized that we hadn't yet used the spring scales during our discussion of forces. My obsession with #wcydwt lately has centered on using the novelty of a minimal amount of information to get students to see a problem jump off the page or screen. I also wanted the students in class to get the joy of holding back information from their classmates to see if the others could figure out the missing info. Lastly, I wanted a simple physics problem that would serve to assess whether all of the students understood how to solve a 2D equilibrium problem.
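
For reference, the calculation behind a problem like this is a 2-D equilibrium analysis of the hanging weight. Here's a minimal sketch; the weight and wire angles are invented, not the values from our photos:

    import math

    # Weight hung from two wires at angles th1, th2 above the horizontal
    W = 4.9                                        # N, weight of the hanging object (invented)
    th1, th2 = math.radians(30), math.radians(50)  # wire angles (invented)

    # Equilibrium: T1*cos(th1) = T2*cos(th2) and T1*sin(th1) + T2*sin(th2) = W
    T2 = W / (math.cos(th2) * math.tan(th1) + math.sin(th2))
    T1 = T2 * math.cos(th2) / math.cos(th1)
    print(T1, T2)   # the tensions that the two spring scales should read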

So I grabbed the spring scales, some string, and some slotted weights, and told the students to put together a few pictures using these materials. We briefly discussed what information could be given and what they wanted to leave out for the athletes to figure out on their own. I admit it - I pushed them along, and given more time I would have given them more choice, but I don't think my selfishness and excitement in doing this was too much. The other factor: the vice principal had given us an extra pizza to share, so they were also really pushing for efficiency. It wasn't all me.

And thus the spring scale picture project was born, thanks to one student's iPhone and Geogebra:

The complete link of the assignment is at http://wiki.hischina.org/groups/gealgerobophysiculus/wiki/e495b/Unit_2__Spring_Scale_Challenge.html.

I'm sure I am not the first to do this, but it was so simple to execute that I had to give it a shot, and I am sharing it because I'm trying to share everything I can these days. We will see what happens when the results come in next week.