
Getting started with LEGO Robotics - the book and the real thing.

This week I got a special early holiday present in my mailbox from my friend Mark Gura. A couple of years ago, Mark invited me to contribute to a book meant to help teachers new to LEGO robotics get started with their students. We had a great phone conversation one evening after school about the educational goldmine that building with LEGO is for students.

Mark did this with a number of people with a range of LEGO robotics experience, wrote up our conversations, and then combined them in the book with a set of resources that novices can put to use immediately.

The book, Getting Started with LEGO Robotics, is published by the International Society for Technology in Education. If you, or anyone you know, is just getting started in this exciting field, you will find plenty of great material here to help you work with students and get organized.

It was a perfect time for the book to arrive, because we have a new group of students at my school getting started themselves with building and programming using the NXT. My colleague Doug Brunner teaches fifth graders across the hall. He volunteered (or, more realistically, his students volunteered him) to coach a group of students in the FIRST LEGO League program for the first time. After building the field for this year's challenge, the fifth graders got their hands on the robots and the programming software today. I had the robots built by the middle school exploratory class available, so the students could start immediately on some programming tasks.

We started with my fall-back activity for a first session with the software: program the robot to drive the length of the table and stop before falling over the edge. The students were into it from the start. I stepped back to take pictures, and Doug took over directing the rest of the two hours. He is a natural; he came up with a number of really great challenges of increasing difficulty and wasn't afraid to sit down with students to figure out how the software worked. By the end of the session, the students were programming their robots to grab, push, and navigate around obstacles by dead reckoning. It was probably the most impressively productive single session I've ever seen.
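For anyone trying the same first activity, the arithmetic behind "drive a set distance and stop" is simple enough to sketch in a few lines. This is just an illustration of the dead-reckoning idea, not the NXT-G blocks the students actually used; the wheel diameter and table length below are assumed values.

```python
import math

# Dead-reckoning sketch: turn a target distance into motor degrees using the
# wheel size. These numbers are assumptions for illustration, not the actual
# classroom setup; the students built this with graphical move blocks instead.
WHEEL_DIAMETER_CM = 5.6                       # assumed NXT kit wheel
WHEEL_CIRCUMFERENCE_CM = math.pi * WHEEL_DIAMETER_CM

def degrees_for_distance(distance_cm):
    """Motor degrees needed to roll the robot a given distance, assuming no slip."""
    return distance_cm / WHEEL_CIRCUMFERENCE_CM * 360

TABLE_LENGTH_CM = 120                         # assumed table length
SAFETY_MARGIN_CM = 10                         # stop short of the edge
print(round(degrees_for_distance(TABLE_LENGTH_CM - SAFETY_MARGIN_CM)))
```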

It's always interesting to see how different people manage groups of students and LEGO. Some want to structure things within tight guidelines and teach step by step how to do everything. Others do a mini lesson on one piece of the challenge and then send the students off to figure out the rest. Some show by example, working alongside students, that it's perfectly fine to get something wrong in the process of solving a challenge. It was impressive to see Doug think on his feet and create opportunities for his students to work at different paces but all feel accomplished by the end of the day. It is also a rare thing to have to tell a bunch of students at 5:15 PM on a Friday to go home from school, and yet this has been the norm in the classroom for a couple of weeks straight.

Having robotics in my teaching load means I am thinking about these ideas in between planning activities for my regular content classes. There's no reason the philosophies behind them can't be the same, aside from the very substantial fact that you don't have to tell students how to play with LEGO, but you often do have to tell them how to play with mathematics or physics concepts. It's easy for these robotics students to fail at a challenge twenty times and keep trying, because they are having fun figuring it out. The holy grail of education is posing other content and challenge problems in a way that is equally compelling and motivating.

Note that I am not talking about making content relevant to the real world. One of my favorite education bloggers, Jason Buell, has already made the point that teaching for "preparation for the real world" is a flawed pitch to many students, who have a better idea than we do of what their reality actually is. I never tell people asking about the benefits of robotics that students are learning to make a robot push a LEGO brick across a table now because later on they will build bigger robots that push a real brick across the floor. Instead, I point out that the engagement I see when students solve these problems is the strongest I have ever seen. The skills they develop in the process are applicable to any subject. The self-esteem (and humility) they develop by comparing their solutions to others' is incredible.

This sort of learning needs to be going on in every classroom. I used to believe that students need to learn the simple stuff before they are even exposed to the complex. I used to think that the skills come first, then learning the applications. Then I realized how incongruous this was with my robotics experiences and with the success stories I've had working with students.

Since this realization, I've been working to figure out how to bridge the gap. I am really appreciative that in my current teaching home, my administrators support my efforts to experiment. My students thankfully indulge my attempts to do things differently. I appreciate that while they don't always enthusiastically endorse my methods, they are willing to try.

Rubrics & skill standards - a rollercoaster case study.

I gave a quiz not long ago with the following question, adapted from the homework:

The value of 5 points for the problem came from the following rubric I had in my head while grading it:

  • +1 point for a correct free body diagram
  • +1 for writing the sum of forces in the y-direction and setting it equal to ma_y
  • +2 for recognizing that gravity was the only force acting at the minimum speed (the key physics step; see the short derivation after this list)
  • +1 for the correct final answer with units
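For context, here is the physics behind that two-point item. The quiz question itself isn't reproduced above, so take the geometry as an assumption: for an object at the top of a vertical circle of radius r, the minimum speed is where the normal force drops to zero and gravity alone supplies the centripetal force.

$$\sum F_y = N + mg = \frac{mv^2}{r}, \qquad N = 0 \;\Rightarrow\; mg = \frac{m v_{\min}^2}{r} \;\Rightarrow\; v_{\min} = \sqrt{gr}$$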

Since learning to grade Regents exams back in New York, I have always needed some sort of rubric like this to grade anything. Taking off random quantities of points without being able to consistently justify a one-point versus a two-point deduction just doesn't seem fair, or helpful in the long run, to students trying to learn how to solve problems.

As I move ever closer toward implementing a standards-based grading system, using a clearly defined rubric this way makes even more sense, since, ideally, questions like this let me test student progress relative to standards. Each check mark on this rubric is really a binary statement about a student relative to the following standards-related questions (sketched as simple binary checks in the code after the list):

  • Does the student know how to properly draw a free body diagram for a given problem?
  • Can a student properly apply Newton's 2nd law algebraically to solve for unknown quantities?
  • Can a student recognize conditions for minimum or maximum speeds for an object traveling in a circle?
  • Does a student provide answers that are numerically consistent with the rest of the problem and include units?
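Here is a minimal sketch, purely illustrative and not my actual gradebook, of what treating each rubric line as a binary check against a standard could look like in code. The standard names and point weights just mirror the five-point rubric above.

```python
# Each rubric line is a (points, demonstrated?) pair keyed by the standard it
# checks. A single missing check tells you exactly what to reteach.
rubric = {
    "draws a correct free body diagram": (1, True),
    "applies Newton's 2nd law in the y-direction": (1, True),
    "recognizes the minimum-speed condition (N = 0)": (2, False),
    "gives a consistent numerical answer with units": (1, True),
}

earned = sum(points for points, demonstrated in rubric.values() if demonstrated)
total = sum(points for points, _ in rubric.values())
print(f"{earned}/{total}")  # 3/5 here; the False entry names the next conversation
```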

It makes it easy to have the conversation with the student about what he/she does or does not understand about a problem. It becomes less of a conversation about 'not getting the problem' and more about not knowing how to draw a free body diagram in a particular situation.

The other thing I realize about doing things this way is that it changes the actual process of taking quizzes when students are able to retake them. Normally during a quiz, I answer no questions at all; it is supposed to be time for a student to answer a question completely on his or her own, in a test-like situation. In the context of a formative assessment, though, I can see how this philosophy can change. Today I had a student who had done the first two parts correctly but was stuck.


Him: I don't know how to find the normal force. There's not enough information.

Me: All the information you need is on the paper. [Clearly this was before I flip-flopped a bit.]

Him: I can't figure it out.

I decided, with this rubric in my head, that if I was really using this question to assess the student on these five points, I could give him the piece he was missing and still assess the remaining three points. After I told him that the normal force was zero, he finished the rest of the problem correctly, so he received a score of 3/5 on this question. That seems like a good representation of what he knew in this particular case.

Why this seems slippery and slopey:

  • In the long term, he doesn't get this sort of help; on a real test in college, no one will hand him the missing piece. Am I hurting him in the long run by doing this now?
  • Other students don't need this help. To what extent am I lowering my standards by giving him information that others don't need to ask for?
  • I always talk about the real problem of students not truly facing the material on their own until the test. This is why so many students say they get it on the homework but not on the test: when they "got it" on the homework, they usually had friends, the teacher, example problems, and a recent class discussion of the concept on their side.

Why this seems warm and fuzzy, and most importantly, a good idea in the battle to help students learn:

  • Since the quizzes are formative assessments anyway, it's a chance to see where he needs help. This quiz question gave me that information and I know what sort of thing we need to go over. He doesn't need help with FBDs. He needs help knowing what happens in situations where an object is on the verge of leaving uniform circular motion. This is not a summative assessment, and there is still time for him to learn how to do problems like this on his own.
  • This is a perfect example of how a student can learn from his/her mistakes, and of how targeted feedback helps a student improve.
  • For a student stressed about assessments anyway (as many tend to be) this is an example of how we might work to change that view. Assessments can be additional sources of feedback if they are carefully and deliberately designed. If we are to ever change attitudes about getting points, showing students how assessments are designed to help them learn instead of being a one-shot deal is a really important part of this process.

To be clear, my students are given one-shot tests at the end of units. That's how I test retention and the ability to apply individual skills when everything is on the table, which I think is a distinctly different animal from the small-scale skills quizzes that students can retake. I think those end-of-unit tests are important because I want students both to apply the skills I give them and to decide which skills a particular problem requires.

That said, it seems like a move in the right direction to have tried this today. It is yet one more way to start a conversation with students to help them understand rather than to get them points. The more I think about it, the more I feel that this is how learning feels when you are an adult. You try things, get feedback and refine your understanding of the problem, and then use that information to improve. There's no reason learning has to be different for our students.

Geogebra for Triangle Congruence Postulates

It has been busy-ville in gealgerobophysicsulus-town, so I have barely had time to catch my breath over the last few days of music performances, school events, and preparations for the end of the semester.

My efforts over the past couple of days in Geometry have focused on building a bit of understanding of congruent triangles. We have used some Geogebra sketches I designed to have students build triangles with specific requirements. With feedback from some Twitter folks (thanks a_mcsquared!) and from students after doing the activities, I've got these the way I want them.

Constructing a 7-8-9 triangle: Download here. (For discovering SSS)

Constructing a 3-4-45 degree triangle: Download here. (For discovering SAS)

Looking for an ASA postulate. Download here. (Clearly for ASA explorations.) For this one, I made a quick change before class so that the initial coordinates of the base of the triangle are randomized when the sketch loads. This almost guarantees that every student will have a differently oriented triangle, which makes for GREAT conversations in class. Here are three of the ones students created this afternoon:

I'm doing a lot of thinking about making these sorts of activities clearly driven by simple, short instructions, particularly in light of the few students in my class with limited English proficiency. Creating these simple activities is also a lot more fun than asking students to draw the triangles by hand, guess, or just listen to me state the postulates and theorems. Having a room full of different examples of clearly congruent triangles calls on the social aspect of the classroom. Today the students completed the activity, showed each other their triangles, and had good conversations about why they knew the triangles had to be congruent.

Last year I had them construct the triangles themselves, but the power of the final message was weakened by the written steps I included in the activity. This time, giving them short, clear instructions made the final product, a slew of congruent (or, in the case of 7-8-9, at least approximately congruent) triangles, a nice "coincidence" that leads naturally to generalizing the idea.
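As a quick numerical companion to the 7-8-9 sketch (this is not the Geogebra file itself, just a check of the underlying idea), the law of cosines shows why every correctly built 7-8-9 triangle has to come out with the same angles, no matter where its base happens to sit:

```python
import math

# Once three side lengths are fixed, the law of cosines fixes all three angles,
# so any two 7-8-9 triangles are congruent regardless of position or orientation.
def angles_from_sides(a, b, c):
    """Interior angles (in degrees) of a triangle with side lengths a, b, c."""
    A = math.degrees(math.acos((b**2 + c**2 - a**2) / (2 * b * c)))
    B = math.degrees(math.acos((a**2 + c**2 - b**2) / (2 * a * c)))
    return A, B, 180 - A - B

print(angles_from_sides(7, 8, 9))  # the same three angles every time
```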