## Clicking Useless Buttons and Exponential Models

Last fall, when I was teaching my web design students about jQuery events, I included an example page that counted the number of times a button was clicked and displayed the total. As a clear indicator of their strong engagement in what I asked them to do next, my students competed with each other to see who could rack up the most clicks in a given time period. With the popularity of pointless games like Cookie Clicker, I knew there had to be something there that I could use toward an end that served my teaching.

Shortly afterwards, I made a three-act video activity that used this concept – you can get it yourself here.

This was how I started a new unit on exponential functions with my Math 10 class this week. The previous unit was about polynomials, and had polynomial regression for modeling through Geogebra as a major component. One group went straight to Geogebra to figure out the number of clicks. For the rest, the solutions were analog. Here’s a sample:

When we watched the answer video, there was a lot of discouragement about how nobody had the correct answer. I used this as an opportunity to revisit the idea of mathematics as a set of different models. Polynomial models, no matter what we do to them, just don’t account for everything out there in the universe. There was a really neat interchange between two students sitting next to each other, one who added 20 each time, and another who multiplied by 20 each time. Without having to push too much, these students reasoned that the multiplication case resulted in a very different looking set of data.
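The two students’ rules diverge quickly, and a few lines of Python make the comparison concrete (the starting value of 1 and the use of 20 are just illustrative here, matching the number those students happened to use):

```python
def add_each_time(start, step, n):
    """Linear model: add a fixed amount each period."""
    values = [start]
    for _ in range(n):
        values.append(values[-1] + step)
    return values

def multiply_each_time(start, factor, n):
    """Exponential model: multiply by a fixed factor each period."""
    values = [start]
    for _ in range(n):
        values.append(values[-1] * factor)
    return values

print(add_each_time(1, 20, 5))       # [1, 21, 41, 61, 81, 101]
print(multiply_each_time(1, 20, 5))  # [1, 20, 400, 8000, 160000, 3200000]
```

Five steps in, the multiplication rule has already left the addition rule behind by four orders of magnitude, which is exactly the “very different looking set of data” the students noticed.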

This activity was a perfect segue into exponential functions, the most uncontrived I think I’ve set up in years. It was based, however, on a useless game with no real world connections or applications aside from other also useless games. No multiplying bacteria or rabbits, no schemes of getting double the number of pennies for a month.

I put this down as another example of how relevance and real world don’t necessarily go hand in hand when it comes to classroom engagement.

## Students and Working With Big Data

I happened upon this tweet today:

I hadn’t heard of the Oceans of Data Institute before, but a quick look at its website revealed some interesting areas of focus:

• Designing interfaces to let students interact with large sets of data
• Defining the skills profile of big data scientists explicitly

As an example of their projects, the page includes a link to http://oceantracks.org, which allows students to visualize the movement of different animals in the ocean. In the image below, red is the track of an elephant seal, yellow is a bluefin tuna, and turquoise is a white shark.

I like the idea of students getting large data sets and learning to play with them. I agree with the idea that students need to understand the role of data in the world given how frequently it is used to guide decisions. Having students collect, manage, model, and understand data is key to the scientific method and the learning process. Feeling comfortable drawing conclusions from data is crucial to being considered quantitatively literate today. I really like that ODI is putting in the effort to make this sort of exploration possible, while also acknowledging that there is a lot of work to be done.

Here is an example of the curation they are doing to share best practices:
http://oceansofdata.org/instructional-sequences-are-thought-scaffold-students-exploration-data

All that being said, here’s one quote from an executive summary about the skills profile for big data specialists that surprised me:

> Unexpectedly, “soft skills” such as analytical thinking, critical thinking, and problem solving dominated the 20+ big data skill and knowledge requirements identified by the panel and endorsed by experts who completed the validation survey.

As a teacher, I find that this isn’t unexpected. The profile (which can be downloaded here) includes skills that I’m interested in cultivating in my students. These soft skills are the key to students being successful in any field, not just big data. These are the truly transportable skills that I hope my students carry long after they have left my classroom. The executive summary also identifies “defining problems and articulating questions” as one of the key tasks essential to the work of data scientists. I believe this to be a focus of my time with students, and a focus of the work of most K-12 teachers.

The site also links to this article, which suggests that the conclusions drawn in the executive summary are more declarative and alarmist than I interpret them to be:

> The skills necessary for the data analytics jobs of tomorrow aren’t being taught in K–12 schools today, according to a new report released by the Education Development Center, Inc.’s (EDC) Oceans of Data Institute.

I’m not sure how the Oceans of Data Institute feels about the comparison, but they do link to the article in their page about the project. I’m a big believer in teaching computational thinking skills. I acknowledge that getting more data scientists is an obvious goal for an organization with ‘data’ in their name. I think that using data is a nice way to tick off the ‘real-world relevance’ box along the way to the bigger picture skills that students need to develop.

I just don’t think we need another bold statement about a skill set that is missing from today’s curriculum. I want more tools that get students interacting with data; ODI states that creating these tools is its goal, and it has demonstrated as much. That’s certainly a better way to get educators on board.

## Coding For The Classroom: SubmitMe

For more than a year now, my process for sharing student work has been to go around the class, snap pictures on my phone, and upload the results through a web page to my laptop. It’s a lot smoother than using a document camera, and it also lets students upload pictures of their own work if they want to, or if I ask them to. Because everything runs through a web page hosted locally on my laptop in the classroom, it’s also faster than using a native iOS or Android application.

I’ve written about my use of this tool before, so this is more of an update than anything else. I have cleaned up the code to make it easier for anyone to run this on their own computers. You can download a ZIP file of the code and program here:
submitMe

Unzip the file somewhere convenient on your computer, and make a note of where it is. You need to have a Python interpreter installed for this to run, so make sure you get that downloaded and running first. If you have a Mac, you already have it on your computer.

Here’s what you need to do:

1. Edit the submit.py file in the directory containing the uncompressed files using a text editor.
2. Change the address in the line with HOST to match the IP address of your computer. You can obtain this in Network Preferences.
3. Change the root_path line to match the directory containing the uncompressed files. In the zip file, the line refers to where I have these files on my own computer. These files are located in the /Users/weinbergmath/Sites/submitMePortable directory. This needs to be the absolute address on your file system.
4. Run the submit.py file using Python. If you are on a Mac, you can do this by opening a terminal using Spotlight, navigating to the directory containing these files, and typing `python submit.py`. Depending on your firewall settings, you might need to select ‘Allow’ if a window pops up asking for permission for the Python application.
5. In a web browser, enter the IP address you typed in Step 2, followed by port 9000 (example: http://192.168.0.172:9000). This is how students will access the page on their computers, phones, or tablets. Anyone on the same WiFi network should be able to access the page.
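For reference, the edits in Steps 2 and 3 amount to changing lines that look something like the following (the exact variable names and layout in your copy of submit.py may differ slightly; the IP address is the example from Step 5, and the path is where the files live on my machine):

```python
# submit.py -- the configuration lines to edit before running
HOST = "192.168.0.172"  # Step 2: your computer's IP address from Network Preferences
PORT = 9000             # the port students connect to in their browsers
root_path = "/Users/weinbergmath/Sites/submitMePortable"  # Step 3: absolute path to the unzipped folder
```

Once those match your machine, students browse to http://HOST:PORT from any device on the same network.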

That should be it. As students upload images, they will be located in the /images directory where you unzipped the files. You can browse these using Finder or the File Browser. I paste these into my class notes for use and discussion with students.

Let me know if you need any help making this work for you. If needed, I can throw together a screen cast at some point to make it more obvious how to set this up.

## Choose Your Own Adventure and Twine

If you haven’t seen Dan Meyer’s talk on using the structures of video games to make math class resemble things students like, you need to do so now. You could wait until after Christmas, I guess, but not too much longer.

There’s an interesting mix of comments on that blog post. The thread that interests me most is that on the relationship between the story telling aspect of video games, and the equivalent story telling that happens in good math problems. I’m not convinced that there needs to be a good backstory for a game to be compelling, just as a real world context doesn’t tend to be sufficient to get most students enthusiastic about a particular problem.

One comment from Kevin Hall, however, tapped into an idea I’ve also been mulling over since finding one of my own Choose-Your-Own-Adventure books during a trip back home in November. Here’s Kevin writing in a comment on Dan’s post:

> I’ve thought about embedding videos in a Google Form so students can choose their own adventure and see the consequence of their choices. For example, if each pizza is $6 and delivery is $1.50, you could ask how much it would cost to get 2 pizzas delivered. If a student selected $13.50, you’d take them to a video of a single delivery guy bringing 2 pizzas. If the student said $15, you’d show a guy bringing 1 pizza, driving back to the pizza place, and bringing the other pizza separately. But it’s a lot of work and, I think, a critical aspect of making math more like video games.

The work of putting together such a task is not to be ignored. I do think though that getting students thinking about their thinking in a way that doesn’t require whole class discussion is worth investigating. Some carefully crafted questions, ones that we might ask the entire class based on student responses, might also have some power for individual students to go through before sharing thoughts with others.

I also recently learned about an online tool called Twine that takes away some of the difficulty of putting these together. You can edit your adventure in the browser, link pages together without too much hassle, and add links to pictures or videos online using standard HTML. If you know Javascript, you can use it to add even more interactivity to the story. The tool allows you to piece together a truly individualized path.

I’m interested in piecing together some activities using Twine as a starting point for some explorations next semester. I’ve done things like this on paper before, but the limitations of paper are such that it’s impossible to progressively reveal questions based on student responses. The way that Twine reduces the friction for doing this seems just enough to make this an option to explore. I’m writing this out now as a way to get some of you to push me to actually do it.

I’d love to see what happens when the math-twitter-blog-o-sphere gives Choose Your Own Adventure a try. Give it a go, and let me know what you create. I’ll be here.

## Releasing my IB Physics & IB Mathematics Standards

Our school is in its first year of official IB DP accreditation. This happened after a year of intense preparation and a school visit last March. In preparation for this, all of us planning to teach IB courses the next year had to create a full course outline with details of how we would work through the full curriculum over the two years prior to students taking IB exams.

One of the difficulties I had in piecing together my official course outlines for IB mathematics and IB physics was a lack of examples. There are outlines out there, but they are either for the old version of the course (pre-2012) or from before the new style of IB visitation. The IB course documents do have a good amount of detail on what will be assessed, but not the extent to which it will be assessed. The math outline includes example problems, which are helpful, but these do not exist for every course objective. The physics outline also has some helpful details, but it is incomplete.

The only way I’ve found to fill in the missing elements is to communicate directly with other teachers with more experience and understanding of IB assessment items. While some of this has been through official channels (i.e. the OCC forums), most has been through my email and Twitter contacts. Their help has been incredible, and I appreciate it immensely.

At the end of the first semester for Mathematics SL, Mathematics HL (one combined class for both), and Physics SL/HL (currently only SL topics for the first semester), I now have the full set of standards that I’ve used for these courses in my standards based grading (SBG) implementation. I hope these get shared and accessed as a starting point for other teachers that might find them useful.

For my combined Mathematics SL/HL class:
Topics 1 – 2, IB Mathematics SL/HL

For my combined Physics SL/HL class:
Topics 1 – 2, IB Physics SL/HL

The third column in these spreadsheets has the heading ‘IB XXXX Learning Objective’ – these entries connect each unit standard (e.g. Standard 3.1 is standard 1 of unit 3) to the IB curriculum standard (e.g. 2.3 is Topic 2, content item #3). Some have sub-indices that correspond to items in the list of understandings in the IB document; for example, IB Mathematics SL objective 1.3.2 refers to IB Topic 1, content item #3, sub-topic item #2.

If you need more guidance there, please let me know.

## If you are a new IB Mathematics/Physics teacher accessing these…

…please understand that this is my first year doing the IB curriculum. There will be mistakes here. In some cases, I also know that I’ll be doing things differently in the future. If these are helpful, great. If not, check the OCC forums or teacher provided resources for more materials that might be helpful.

## If you are an experienced IB Mathematics/Physics teacher accessing these…

…I’d love to get your feedback given your experience. What am I missing? What do I emphasize that I shouldn’t? What are the unspoken elements of the curriculum that I might not be aware of as a first year? Let me know. I’d love it if you could give me the information you wish you had (or may have had) to be maximally successful.

I’ve benefited quite a bit from sharing my materials and getting feedback from people around the world. I’ve also gotten some great help from other teachers that have shared their resources. Consider this instance of sharing to be another attempt to pay that assistance forward.

## Direct Instruction Videos – What’s your Workflow?

I’ve written before about my experience recording my direct instruction into short, Udacity style videos and having students watch them during class. This enables me to circulate and have a lot more conversations with students as they are learning than when I’m talking at the front of the room. It also puts me in a position to see how my students are engaging with this material since I’m walking around and see what they are writing down, where they are stopping the videos, and can listen to their conversations. The quality of my interactions (and the student-to-student interactions) is so much higher with this approach.

The main obstacle to my doing this more, however, is the investment of time in creating the videos. With a consultant meeting with us this week and asking us to examine our technology practices, I’m wondering whether others have cracked the code and found ways to be efficient.

Most of my time is spent editing. I do one video at a time for each piece of what I want my students to watch before they try something on their own. I also want my videos to be short (ideally less than 3 minutes each), so I find I’m editing out spoken flubs, unclear descriptions, extra pauses, and time spent writing by hand to reach that ideal. Camtasia is my tool of choice. I know there are videos out there that I could assign rather than recording my own, but I’m convinced I can still work on my efficiency with some good advice.

I wonder if one of the following would work better:

• Record all of the writing with no narration first. Add voiceover second to match the text.
• Record all of the direct instruction for an entire class. Edit out flubs, writing, then split into multiple videos for a lesson.
• Write out all of the written parts before recording. Cut and paste them in the video frame one by one as I speak on top of the video. Gesture and highlight as needed.

I’ve sacrificed perfection for getting my ratio of recording time to video time down to about four to one. That’s still a sizable investment of time, and it certainly benefits my students, but as is, I’m leaving the classroom after 5 PM pretty regularly.

Any experienced flipped classroom folks care to weigh in on this?

## Computational Thinking in Teaching and Learning (Re-post)

A modified version of this post appeared on the Techsmith Blog here and in their quarterly newsletter, the Learning lounge. I appreciate their interest in my perspective. I hope to continue this important discussion here with my readers.

The idea of computational thinking has radically changed my approach to teaching over the past few years. The term, coined by Jeannette Wing, a professor of computer science at Carnegie Mellon University, refers to several key habits of thinking that are essential to computer science. Her paper identifies the reality that there are some tasks that computers do extremely well, and others that are better suited to the human brain. Traditionally, computer scientists have worked to outsource the calculating, organizing, searching, and processing work of a task to a computer so that they can focus on the more complex, challenging, and engaging aspects of the same task. According to Wing, one of the most essential skills we should develop in students is sorting tasks into these two groups.

My classroom, at its best, is a place where maximum time is spent with students wrestling with an engaging task. They should be working together to develop both intuition and understanding for required content. I can read the smiles or frowns and know whether I should step in. I can use my skills to nudge students in the right direction when I think they need it. Knowing precisely when they need it can’t easily be determined by an algorithm. For some students, this moment comes early on after encountering a new concept. Others require just one more minute of struggle before the idea clicks and it’s in their brains for good. Knowing the difference comes from the very human experience of time in classrooms with learners.

This is the human side of teaching. It is easy to imitate and approximate using technology, but difficult to produce authentically. Ideally, we want to maximize these personal opportunities for learning, and minimize the obstacles. For me, the computer has been essential to doing both, specifically, identifying the characteristics of tasks that a computer does better. If a computer can perform a task better than me or my students alone, I’m willing to explore that potential.

The most consistent application of this principle has been in the reduction of what I call ‘dead time’. I define this as time spent on tasks that are required for learning to be possible, but that are not learning tasks themselves. Displaying information on the board, collecting student answers, figuring out maximum and minimum guesses for an estimation problem – these take time. These sorts of tasks – displaying, collecting, processing – also happen to be the sort at which computers excel. I wrote a small web application that runs from my classroom computer and allows students to snap a picture of their work and upload it to my computer, anonymously if they choose. We can then browse student answers as a class and have discussions about what we see. The end result is equivalent to students writing their work on the board. The increased efficiency of sharing this work, archiving it, and freeing up class time to build richer activities on top of it makes it that much more valuable to let the computer step in.

I’ve also dabbled in making videos of direct instruction, but I have students watch and interact with them while they are in the classroom. During whole class instruction, I can’t really keep track of what each student is and isn’t writing down because I am typically in a static location in the classroom. With videos simultaneously going throughout the classroom, I can see what students write down, or what they might be breezing through too quickly. I get a much better sense of what students are not understanding because I can read their faces. I can ask individualized questions of students to assess comprehension. The computer distributes and displays what I’ve put together or curated for my students – one of its strengths. My own processing power and observation skills are free to scan the room and figure out what the next step should be.

Letting the computer manage calculation (another of its strengths) enables students to focus on the significance of calculations, not the details of the calculations themselves. This means that students can truly explore and gain intuition on a concept through use of software such as Geogebra or a spreadsheet before they are required to manage the calculations themselves. For students that struggle with arithmetic operations, this enables them to still make observations of mathematical objects, and observe how one quantity affects another. This involvement has the potential to inspire these same students to then make the connections that underlie their skill deficiencies.

Full disclosure though: I don’t have a 100% success rate in doing this correctly. I’ve invested time in programming applications that required much more effort than an analog solution. For instance, I spent a week writing all of my class handouts in HTML because the web browser seemed like a solution that was more platform independent than a PDF. That ended when I realized the technology was getting in the way of my students making notes on paper, a process I respect for its role in helping students make their own learning tools. There are some tasks that work much more smoothly (or are just more fun) using paper and a marker.

I value my students’ time. I value their thoughts. I want to spend as much class time as is possible building a community that values them as well. Where technology gets in the way of this, or adds too much friction to the process, I set it aside. I sit with students and tell stories. I push them to see how unique it is to be in a room for no other reason but to learn from each other. When I can write a program to randomize groups or roll a pair of dice a thousand times to prove a point about probability, I do so.
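That last kind of program really is only a few lines; here’s a sketch of the idea in Python (not my exact classroom code):

```python
import random
from collections import Counter

def roll_distribution(n_rolls):
    """Roll a pair of dice n_rolls times and tally the sums."""
    sums = (random.randint(1, 6) + random.randint(1, 6) for _ in range(n_rolls))
    return Counter(sums)

counts = roll_distribution(1000)
# A sum of 7 shows up far more often than 2 or 12 -- the point about
# probability, made in seconds instead of a class period of hand-rolling.
```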

Knowing which choice is better is the one I wish I could write an algorithm to solve. That would take a lot of the fun out of figuring it out for myself.

## Revising My Thinking: Repetition

Traveling with students has always been one of the most rewarding parts of the teaching job. Seeing students out of their normal classroom setting draws out their character much more than content alone can. One particular experience on a trip last week forced me to rethink aspects of my classroom as I never could have predicted it would.

On our second day of the trip, students experienced the lives of Chinese farmers. For breakfast, we passed a series of stalls cooking sizzling noodles, Chinese pancakes, and tea eggs; students could spend no more than ten Yuan on their breakfast. After leaving the market and driving for an hour, we arrived at a village surrounded by tea hills. Here, the farmer experience began. The students were divided into three groups and set out to compete for first, second, and third place in a series of tasks; their place determined how much the group would be paid in order to purchase dinner that night.

Students tilled the ground with hand tools to plant vegetables, with a seasoned farmer showing them what to do, and then judging them on their efforts. The farmer’s wife gave a dumpling making lesson, and then had students make their lunch of dumplings according to her example. The third task involved collecting corn from a nearby field and putting it into a woven sack. Teams were judged by both quantity and quality. Many students tossed out corn that had shriveled kernels and silk from beetle larvae around the stalks. Students at this point guarded their yellow post-it notes (where the guide recorded their earnings) carefully, chasing them down when they flew away in the wind.

In the final task, students were to earn money by assembling plastic pens. For every one hundred pens put together, the group would earn 1 Yuan, or about 16 cents. Our guide said we would work on this for three hours. I prepared her for the likelihood that the students might not last that long. Such a simple task would surely result in disinterest, especially in a group that was already distressed by our insistence that their mobile devices stay put away for the majority of the day. To myself, I questioned whether an investment of three hours into the task was really necessary to get students to appreciate the meaning of a day of hard work or to understand the required input of human energy to create a cheap plastic item. They were already exhibiting signs of fatigue before this, and a repetitive task like this couldn’t make things any better, right?

The first pattern I noticed was that students quickly saw the need for cooperation. Each student felt the inefficiency of building one complete pen, one at a time. Without any input from adults, the students organized themselves into an assembly line. They helped each other with the tricks they discovered to shave off seconds of the process. They defined their own vocabulary for the different parts and stages of assembly. Out of the tedium, they saw a need for innovation, and then proceeded to find better ways on their own. While they worked, they sang songs, told jokes, and made the most of the fact that they could socialize while they worked.

The students were brutally honest with our guide about the value of the work they were doing. They expressed disbelief that they couldn’t be paid more for their time. The guide responded by reminding the students of the real costs of things: 17 Yuan for a chicken, 2 Yuan for bottles of clean water at dinner. The students responded by asking for the price of the pens at the market (“0.8 yuan each” said our guide) and said that without the people working, the pens wouldn’t be made. By the end, students had assembled 3,880 pens, and had smiles on their faces even at that point.

The other outcome of this activity was that each student was permitted to keep one pen as a keepsake of the day. For a group of students that routinely leaves things everywhere, these pens were guarded and treasured as closely as their mobile devices. A couple of them were so attached that they insisted on bringing their pens with them for pre-dinner free time at the creek.

There were so many lessons that came out of the repetitive nature of this task. As I said, I underestimated the level to which students would be engaged by this activity. They took pride in their work. They tested their pens carefully before counting and bundling them together with a rubber band. They took time to understand what they were doing in order to find better ways.

I routinely look for students to have similar discoveries in my class. There is repetition. There is a need for careful reflection on the quality of an answer or clarity of explanation.

I do, however, try to hasten this process because I underestimate the value of repetition during my class period. I’ve argued before that class time should be spent making the most of the social aspect of the classroom for learning. Repetitive drills don’t tend to make the cut by that standard. This is, after all, one of the points I frequently make about the role of computers and computational thinking. I do introduce students to tedious processes, but I usually cut out the middle part of students feeling that tedium themselves, because I figure they get it without needing to actually experience it. I do this to save time, but I now think I might be spoiling the punchline of every lesson in which I take this approach.

After seeing the students themselves invent and create on their own and as a group (and with no adult intervention), I now feel the need to rethink this. Perhaps I’m undervaluing the social aspect of repetitive tasks and their potential for building student buy-in. Maybe class time with meaningful repetition is valuable if it results in the community seeking what I have to share from my mathematical bag of tricks. Maybe the students don’t fully believe that my methods are worth their time because I tell them what they should feel instead of let them feel it themselves.

Perhaps I’m also reading too much into what I observed on the trip. I am, however, quite surprised at how off the mark I was in predicting the level of engagement and enjoyment the students would find in spending three hours assembling pens. I’m willing to admit my intuition could be off on the rest as well.

## Sensors First – A Changed Approach

I presented to some FIRST LEGO League teachers on the programming software for the LEGO Mindstorms EV3 last week. My goal was to present the basics of programming in the system so that these teachers could coach their students through the process of building a program.

The majority of programs that students create are the end product of a lot of iteration. Students generally go through this process to build a program to do a given task:

1. Make an estimate (or measurement) of how far the motors must rotate in order to move the robot to a given location.
2. Program the motors to run for this distance.
3. Run the program to see how close the robot gets to the desired location.
4. Adjust the number in Step 1. Repeat until the robot ends up in the right location.

Once the program gets the robot to the right location, this process is repeated for the next task that the robot must perform. I’ve also occasionally suggested a mathematical approach to calculate these distances, but the reality is that students would rather just try again and again until the robot program works. It’s a great way to introduce students to the idea of programming as a sequence of instructions, as well as familiarity with the idea that getting a program right on the first try is a rarity. It’s how I’ve instructed students for years – a low bar for entry given that this requires a simple program, and a high ceiling since the rest of programming instructions are extensions of this concept.

I now believe, however, that another common complaint coaches (including me) have had about student programs is a direct consequence of this approach. Most programs (excluding those of students with a lot of experience) require the robot to be aimed correctly at the beginning of the program. As a result, students spend substantial time aiming their robot, believing that this effort will result in a successful run. While repeatability is something that we emphasize with students (I have a five-in-a-row success rule before calling a mission program completed), it’s the method that is more at fault here.

The usual approach in this situation is to suggest that students use sensors in the program to help with repeatability. The reason they don’t do so isn’t that they don’t know how to use sensors. It is that the aim and shoot method is, or seems, good enough. It is so much easier in the student’s mind to continue the simpler approach than invest in a new method. It’s like when I’ve asked my math students to add the numbers from 1 to 30, for example. Despite the fact that they have learned how to quickly calculate arithmetic series before, many of them pick up their calculators and enter the numbers into a sum, one at a time, and then hit enter. The human tendency is to stick to those patterns and ideas that are familiar until there is truly a need to expand beyond them. We stick with what works for us.
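For the record, the shortcut those calculator-button-pressers are skipping is a one-liner:

```python
n = 30
series_sum = n * (n + 1) // 2  # Gauss's formula for 1 + 2 + ... + n
print(series_sum)  # 465, the same total as entering the thirty numbers one at a time
```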

One of my main points to the teachers in my presentation was that I’m making a subtle change to how I coach my students through this process. I’m calling it ‘sensors first’.

The first tasks I give my students for learning programming will require sensors to complete. Instead of telling students to program their robot to drive a given distance and stop, I’ll ask them to drive their robot forward until a sensor on the robot sees a red line. I’ll also reserve the right to start the robot anywhere I choose when testing their program.

It’s a subtle difference, and the programs themselves are nearly identical. In the EV3 software, here’s what it looks like in both cases, using the wheels to control the distance in one and a sensor in the other:
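The contrast is easy to sketch outside the EV3 software as well. Here is a toy Python simulation (not real robot code – the line position, distances, and step size are all made up) of why only the sensor version shrugs off a bad starting position:

```python
def drive_fixed_distance(start, distance=50):
    """Aim-and-shoot: drive a hard-coded distance and stop."""
    return start + distance

def drive_until_line(start, line_at=50, step=1, limit=1000):
    """Sensors first: step forward until the simulated color sensor sees the line."""
    position = start
    while position < line_at and limit > 0:  # sensor check stands in for seeing red
        position += step
        limit -= 1
    return position

# From a perfect starting position, both programs stop at the line...
print(drive_fixed_distance(0), drive_until_line(0))  # 50 50

# ...but nudge the start by 7 units and only the sensor version
# still ends up on the line.
print(drive_fixed_distance(7), drive_until_line(7))  # 57 50
```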

What am I hoping will be different?

• Students will approach the challenges I give them knowing that, by design, aim-and-shoot isn’t an option that will result in success. If they start off thinking that way, they might always consider how a sensor could be used to make the initial position of the robot irrelevant. FLL games always have a number of printed features on the mat that can help with this sort of task.
• When I do give tasks where the students can start the robot wherever they choose, students will (hopefully) first think about whether the starting position should matter. In cases where it doesn’t, they might decide to still use a sensor to guide them (hopefully for a reason), or drop down to a distance-based approach when it makes sense to do so. This means students will routinely be thinking about which tool best does the job, rather than trying to use one tool for everything.
• This philosophy might even prompt a more general search for ways to reduce the uncertainty and compound-error effect of an aim-and-shoot approach. Using the side of the table to guide straight-line driving is a common and simple example.
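To put a rough number on that compound-error effect: even a small aiming error grows with distance, and each aim-and-shoot leg adds its own error. The angle and leg length below are hypothetical, chosen only to illustrate the scale:

```python
import math

# Suppose a 2-degree aiming error over a 100 cm leg.
angle_deg = 2
leg_cm = 100

# Lateral drift from that one leg.
lateral_offset = leg_cm * math.sin(math.radians(angle_deg))
print(round(lateral_offset, 1))  # about 3.5 cm off target

# In the worst case, the offsets from consecutive legs simply add up.
legs = 3
print(round(legs * lateral_offset, 1))  # about 10.5 cm after three legs
```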

These sorts of problem-solving approaches are exactly how a successful engineering design cycle works: solutions should maximize the effectiveness of a design while minimizing costs. I’m hoping this small change to the way I teach my students this year gets them spending more time using the tools built into the robot well, rather than trying to make a robot with high variability (caster wheels, anyone?) do the same thing twice in a row.

## Uncertainty about Uncertainty in IB Science

I have a student that is taking both IB Physics with me and IB Chemistry with another science teacher. The first units in both courses have touched on managing uncertainty in data and calculations, so she has had the pleasure (horror) of seeing how we both handle it. For the most part, our references and procedures have been the same.

Today we worked on propagating error through the calculation $$\Delta x = \frac{1}{2}at^2$$ with uncertainties given for acceleration and time. The procedure I’ve been following (which comes from my experiences in college and my IB textbooks) is to determine the relative error like this:

$$\frac{\delta (\Delta x)}{\Delta x} = \frac{\delta a}{a} + 2 \cdot \frac{\delta t}{t}$$

In chemistry, they are apparently multiplying uncertainty by 0.5 since it is a constant multiplying quantities with uncertainty. On a quick search, I found this site from the Columbia University physics department that seems to agree with this approach.

My student is struggling to know exactly what she should do in each case. I told her that everything I’ve seen in the IB physics resources I have supports my approach. A direct application of the formula suggests that an exact number (like 1/2) has zero uncertainty, so it shouldn’t be involved in the calculation of relative error. That said, the different books I’ve used to plan my lessons agree with each other to around 95%. There is uncertainty about uncertainty within the textbooks discussing how to manage uncertainty. Theory of knowledge teachers would love the fact that we teachers of a generally objective field (such as science) occasionally have to acknowledge to our students that textbooks don’t tell the entire story.
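With made-up numbers (the measurements and uncertainties below are purely for illustration, not from our class data), the procedure from my physics resources works out like this:

```python
# Hypothetical measurements: a = 9.8 ± 0.2 m/s^2, t = 1.5 ± 0.1 s.
a, da = 9.8, 0.2
t, dt = 1.5, 0.1

# The calculated displacement; the exact 1/2 carries no uncertainty.
dx = 0.5 * a * t**2  # 11.025 m

# Relative error: add the relative errors, counting t's twice
# because it is squared.
rel = da / a + 2 * dt / t

print(round(rel, 3))       # 0.154, i.e. about 15% relative error
print(round(dx * rel, 1))  # about 1.7 m of absolute uncertainty
```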

The reality is that there are a number of ways to handle uncertainty out in the world. Professionals do not always agree on the best approach – this conversation on the Physics Stack Exchange has a number of options and the mathematical basis behind them. For students who are used to having one correct answer, this is a major change in philosophy.
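One common alternative discussed in threads like that one is to add relative errors in quadrature rather than linearly, on the assumption that the errors are independent and random. With the same hypothetical numbers as above (a = 9.8 ± 0.2 m/s², t = 1.5 ± 0.1 s, both made up), the two conventions give noticeably different answers:

```python
import math

a, da = 9.8, 0.2
t, dt = 1.5, 0.1

# Linear (worst-case) sum of relative errors, the IB-style rule.
rel_linear = da / a + 2 * dt / t

# Quadrature sum, common when errors are treated as independent and random.
rel_quad = math.sqrt((da / a)**2 + (2 * dt / t)**2)

print(round(rel_linear, 3))  # 0.154
print(round(rel_quad, 3))    # 0.135 -- a smaller, less pessimistic estimate
```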

Thus far in my teaching career, I haven’t delved this deeply into uncertainty. The AP Physics curriculum doesn’t require a deep treatment of the concepts, and largely ignores significant figures as well. I talked about some of the issues with students, but I never felt it necessary to get our hands really dirty with uncertainty because it wasn’t being assessed. We also covered error analysis in my experimental design courses in college, and it was part of the discussion there, but never the focus of it. It’s really interesting to think about these issues with students, but it’s also really difficult.

It seems that the questions that have resulted, both in class and in my own understanding, are exactly the style of conflict the IB organization hopes its programs will provoke. When this student throws her hands up in the air and asks ‘so what do I do?’, managing the frustration that results is the same difficulty we face as adults in resolving daily problems that are real and complex.

The philosophy that I shared with the students was to be aware of these issues, but not to fear them. It should be part of the conversation, but not its entirety, especially at the level of students that are new to physics. I’m confident that some of the discomfort will melt away as we do more experimentation and explore physics models that tend to describe the world with some level of accuracy. The frustration will yield to the fact that managing uncertainty is an important element of describing how our universe works.