
Fail Early, Fail Often: Learning Names

Learning names was a bigger challenge this year than it has been for the past few years. The first reason is that my new school is substantially bigger than my previous school, as are the class sizes. Another major reason: I'm the new guy.

The students generally know each other, so I decided the first day wasn't actually about them learning each other's names. I still included activities that got them interacting with each other, but I was the one who needed to learn their names. I decided the quick forty-minute block on the first day was an opportunity to model my class credo: fail early, fail often.

When they walked in, I asked them their names, and what they wanted to be called. I've learned that these are not necessarily the same. These names were noted on my clipboard. I made a big show out of going around to each student, looking them in the eyes, and saying their name. Taking attendance then became my first opportunity to assess what I remembered. The order on the roster definitely didn't match the order that the students entered the classroom.

I then had them line up alphabetically along the back wall and say their names one at a time down the line. I had my reference material on the clipboard and went in reverse alphabetical order. I publicly made mistakes, lots of them. Then I had them say the name of the person immediately to their left. For me, this meant that the voice saying the name was different, but the name was the same. I narrated that I wasn't actually looking at the person saying the name - my attention was on the person whose name was being said.

I then had them get in line in order of birthday, but without any words. Once they figured out their order, I went down the line and tried to get names. I looked at my clipboard when I needed to - and I often did - but I also frequently had them just say their names back. I explained that I made them move around because I didn't want to learn names based on who each person was next to - I needed to connect the name to the face. This ensured I was learning the right information, not an arbitrary order.

Then I had them get into two or three random orders. If there was time, I had a student go down the line reciting names. Then I went again myself, now trying not to look at the clipboard unless it was absolutely necessary. The mistakes continued to come, but I was generally having more success at this stage. I then told them that I had quizzed myself enough - it was time to let my brain do the connecting behind the scenes. I emphasized that this is why cramming doesn't tend to work: the brain is really good at organizing information if it has the time to do so.

It was great putting myself in the position of not knowing answers and having to ask students for help. The students appeared to enjoy my genuine attempt to demonstrate how I learn information efficiently, and how essential failure is to being successful in the end.

Day 1 in Physics - Models vs. Explanations

One of my goals has always been to differentiate my job from that of a paid explainer. Good teaching is not exclusively explaining, though explaining can be part of the process; it's why many people seek out a great video or activity that thoroughly explains a concept that puzzles them. The process of learning should be an interactive one. An explanation should lead into another question, or into an activity that applies the concept.

For the past two years, I've done a demo activity to open my physics class that emphasizes the subtle difference between a mental model for a phenomenon and having just a good explanation for it. A mental model makes predictions and is therefore testable. An explanation is the end of a story.

The demo equipment involves a cylindrical neodymium magnet and an aluminum tube with a diameter slightly larger than that of the magnet. It is the standard eddy current/Lenz's law/electromagnetic induction demo showing what happens when a magnet is dropped into a tube made of a non-magnetic material. What I think I've been successful at doing is converting the demo into an experience that opens the course with the creation of a mental model and the simultaneous testing of that model.


I walk into the back of the classroom with the tube and the magnet (though I don't tell them that it is one) and climb on top of a table. I stand with the tube above the desk and drop the magnet concentrically into the tube.

Students watch what happens. I ask for them to share their observations. A paraphrased sample:

  • The thing fell through the tube more slowly than it should have
  • It's magnetic and is slowing down because it sticks to the side
  • There's so much air in the tube that it slows down the falling object.

I could explain that one of them is correct. I don't. I first ask them to turn their observation into an assertion that should then be testable by some experiment. 'The object is a magnet' becomes 'if the object is a magnet, then it should stick to something made out of steel.' This is then an experiment we can do, and quickly.

When the magnet sticks strongly to the desk, or to paper clips, or something else happens that establishes that the object is magnetic, we can further develop our mental model for what is happening. Since the magnet sticks to steel, and the magnet seems to slow down when it falls, the tube must be made of some magnetic metal. How do we test this? See if the magnet sticks to the tube. The fact that it doesn't stick as it did to the steel means that our model is incomplete.

Students then typically abandon the magnet line of reasoning and go for air resistance. If they go for this first (as has happened before), I just reverse the order of these experiments relative to the magnetic discussion above. If the object is falling slowly, it must be because the air is slowing it down. How do we test this? From the students: drop another object that is the same size as the first and see if it falls at the same speed. I have a few different objects that I've used for this - usually an aluminum plug or a part from the robotics kit works - but the students also insist on taping up the holes in these objects so that they are as close to the original object as possible. It doesn't fall at the same speed, though. When students ask to add mass to the object, I oblige with whatever materials I have on hand. No change.

The mental model is still incomplete.

We've tried changing the object - what about the tube? Assertion from the students: if the material for the tube matters, then the object should fall at a different speed with a plastic tube. We try the experiment with a PVC pipe and see that the magnet speeds along quite unlike it did in the aluminum tube. This confirms our assertion - this is moving us somewhere, though it isn't clear quite where yet.

Students also suggest that friction is involved - this can still be pushed along with the assertion-experiment process. What would you expect to observe if friction is a factor? Students will say they should hear it scraping along or see it in contact with the edges of the tube. I invited a student to stare down the end of the tube as I dropped the magnet. He was noticeably excited by seeing it hover lightly down the entire length of the tube, only touching its edges periodically.

Students this year asked to change the metal itself, but I unfortunately didn't have a copper tube on hand. That would have been awesome if I had. They also asked if it would be different if the tube were a different shape. Instead of telling them, I asked them what observation they would expect to make if the tube shape mattered. After they made their assertion, I dropped the magnet into a square tube, and the result was very similar to the result with the circular tube.

All of these experiments make it clear that the two facts - (a) the object is a magnet and (b) the tube is made of metal - are somehow related. I did at this point say that this was a result of a phenomenon called electromagnetic induction. For the first time during the class, I saw eyes glaze over. I wish I hadn't gone there. I should have just said that we will eventually develop some more insight into why this might happen, but for now, let's be happy that we've developed some understanding of what factors are involved.
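For readers who want to poke at the model quantitatively, here is a minimal sketch (my own illustration, not something from the class) of the simplest mental model that fits the observations: a braking force proportional to the magnet's speed. The constants are made up; the point is that the model predicts a nearly constant, slow descent in the aluminum tube and nearly free fall in the plastic one - exactly the kind of testable prediction the students were making.

```python
# A rough sketch, not the analysis from the class: model the falling magnet with
# a braking force proportional to speed (F = -b*v), which is what a simple
# eddy-current picture predicts. All constants below are made up for illustration.

def simulate_fall(mass=0.01, b=0.5, g=9.8, tube_length=0.3, dt=0.0005):
    """Return (time to exit the tube in s, exit speed in m/s) for a dropped magnet."""
    v = y = t = 0.0
    while y < tube_length:
        a = g - (b / mass) * v   # net acceleration: gravity minus magnetic braking
        v += a * dt
        y += v * dt
        t += dt
    return t, v

# The model predicts a terminal speed v_t = m*g/b, reached almost immediately,
# so the magnet should drift down the tube at a nearly constant slow speed.
print(simulate_fall())           # aluminum-like tube: strong braking, slow descent
print(simulate_fall(b=1e-4))     # plastic-like tube: negligible braking, ~free fall
```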

All of these opportunities to get students making assertions and then testing them add up to the scientific method as we normally teach it. The process is a lot less formal than having them write a hypothesis, procedure, and conclusion in a lab report - appropriate given that it was the first day of the class - and it makes clear the concept of science as an iterative process. It isn't a straight line from a question to an answer; it is a cyclical process that very often gets hidden when we emphasize the formality of the scientific method in the form of a written lab report. Yes, scientists do publish their findings, but this isn't necessarily what gets them up in the morning.

Some other thoughts:

  • This process emphasizes the value of an experiment either refuting or supporting our hypothesis. There is a consequence to a mental model when an experiment shows what we expected it to show. It's equally instructive when it doesn't. I asked the students how many times we were wrong in our exploration of the demo. They counted more than five or six. How often do we provide opportunities for students to see how failure is helpful? We say it. Do we show how?
  • I finally get why some science museums drive me nuts. At their worst, they are nothing more than clusters of express buses from observation/experiment to explanation. Press the button/lift the flap/open the window/ask the explainer, get the answer. If there isn't another step to the exhibit that involves applying what was learned, the exhibit runs the risk of perpetuating science as a box of answers you don't yet have. I'm not saying there isn't value in tossing a bunch of interesting experiences at visitors and knowing that only some stuff will stick. I just think there should be a low floor AND a high ceiling for the activities at a good museum.
  • Mental models must be predictive within the realm in which they are used. If you give students a model for intangible phenomena - the lock and key model for enzymes in biology, for example - that model should be robust enough for students to make assertions and predictions based on their conception of it, and to test them. The lock and key model works well to explain why enzymes lose effectiveness at high temperature because the changing shape of the active site (real world) matches our conception of a key being the wrong shape (model). Whenever possible, we should expose students to places where the model breaks down, if for no other reason than to show that it can. By definition, it is an incomplete representation of the universe.

Standards Based Grading - All in, for the new year

I've written previously about wanting to be part of the Standards Based Grading crowd. My quiz policy was rooted in the idea: my quizzes cover skills only, and in isolation, so that if students could show proficiency on the quizzes, I would know for sure that they had really developed those skills. If they demonstrated proficiency on quizzes but then failed to perform on tests, it was an indication that the problem was seeing all the skills in one place. This is the "I get it in class, but on tests I mess it up" mantra that I've heard ever since I first started teaching. My belief has always been that the first clause of that sentence is never as true as the student thinks it is. The quiz grades have typically shown that to be the case.

The thing I haven't been able to figure out is why I couldn't get my students to retake quizzes the way I thought the policy compelled them to. I told them they could get 100%. I reminded them that they just needed to look at each quiz, recognize what they got wrong, and work with me on those specific skills to improve. Then, when they were ready, they could retake and get a better score. Sometimes they did it, but they were always missing one of those three steps. They would retake without looking at the quiz. They would take it knowing what they got wrong, but never asked me to go over the things they didn't get. There were exceptions, but curiously not enough to impress me.

After really committing to reshaping the quiz grade as a real SBG grade for one unit last year, I saw pretty clearly the differences in how the students went about this aspect of their grade. The standards I expected students to demonstrate were clearly listed in the grade book (fine, Powerschool). The students knew what they needed to work on, and each standard was directly linked to examples and short videos I had created to help them with those specific skills. Class time was spent developing those skills, along with some bigger-picture ideas to explore separately from the routine skills the standards were centered on; the unit was on exponential and logarithmic functions. Even in that short time, I was impressed with how changing this small (15%) portion of the grade changed the overall attitude my students had while they were working with me. It was one step closer to the Montessori-style classroom I have always wanted to have while working within the structure of a more traditional program - students walk in knowing what they need to work on, and they get to work. My role becomes more about pushing them in the way I think they can and need to be pushed. Some need to work on skills; others need to attack context problems and the challenging 'why is this so' threads that are usually all teacher-driven, but don't need to be in many cases.

I did some thinking over the last couple of weeks about how I wanted to do things differently, so I wrote up a new grading policy and posted it online. I renamed my quiz grade 'Learning Standards', bumped its weight up by ten percentage points (to 25%), and reduced the homework and classwork components to 5% each, with the portfolio at 10% and tests at 55%. In sharing my new grading policy with people through Twitter, there were some key comments that really guided my thinking.

Kelly O'Shea pointed out that even with the change, the standards were not a huge part of the grade. Even after cutting classwork and homework into the standards, it still wasn't good enough.

A few other people made similar suggestions. John Burk probably put the final nail in the coffin of the SBG-lite version I thought was safe with this comment:

One problem for getting buy in on SBG is that if it isn't a big part of the grade, and there are still so many non-sbg things, they might not really understand the rationale for SBG.

If I really believe in the power of Standards Based Grading to transform how learning happens in my classroom, I need to demonstrate its importance and commit to it.

The final result? My grades for Algebra 2/Advanced Algebra, Geometry, Calculus 12, and Physics are going to be 90% Learning Standards, 10% portfolio. I am still going to give unit tests, but they are opportunities to demonstrate proficiency on the learning standards. In the case of my AP Calculus students, the grades are still 60% unit tests, 30% standards, and 10% portfolio, primarily because I will still be giving tests similar to the AP exam, with multiple-choice and free-response sections. I also had my first class last year with 100% fives, and am admittedly a bit nervous about tweaking what worked last year. That said, I accept that this, too, could become a thing of the past.
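For concreteness, here's a quick sketch (my own illustration, with hypothetical category scores) of how those weightings combine into an overall grade.

```python
# A small illustration of the weighted-grade arithmetic described above.
# The category scores below are hypothetical.

def weighted_grade(scores, weights):
    """Combine category percentages using weights that sum to 1."""
    return sum(scores[cat] * weights[cat] for cat in weights)

# Most courses this year: 90% learning standards, 10% portfolio.
print(weighted_grade({"standards": 82, "portfolio": 95},
                     {"standards": 0.90, "portfolio": 0.10}))   # roughly 83.3

# AP Calculus: 60% unit tests, 30% standards, 10% portfolio.
print(weighted_grade({"tests": 88, "standards": 82, "portfolio": 95},
                     {"tests": 0.60, "standards": 0.30, "portfolio": 0.10}))  # roughly 86.9
```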

I am a bit nervous, but that's mostly because change isn't always easy. From a teaching perspective, the idea feels right, but it's not what I'm used to doing. The students sounded pretty cool with it on the first days of class when I introduced the idea though, and that is a major positive. I'll keep writing as things proceed and my implementation develops - it feels great to know I'm not alone.

I really appreciate all of the kind words and honest feedback from the people that challenged me to think this through and go all in. If I can do nothing else, I'll pay that advice forward. Cool?