All posts by Evan Weinberg

From the Archives - Notes Before A Move

It has been quiet over here on the blog. You haven’t missed anything - things are fine. I’ve been running on many cylinders and focusing on some big projects in the works.

I’ve been thinking a lot about the concept of school and why we do what we do. I was reminded while looking through some old files that I had written a long-form article back in the summer of 2010 before moving overseas to teach for the first time. It was an attempt to make sense of the many lines of thought I had about the public school system in general after teaching in the Bronx for six years, and one year at the KIPP NYC College Prep high school. I was not blogging at the time, and did not have a good place to put this beyond emailing it out to some close friends.

I still think it represents my long-held belief that we are at our best when we choose to talk and listen to one another. Demonizing one another does little to make progress.


We need a reliable and effective educational system. This fact is obvious to most people. Politicians know that making sound bites that state this fact is necessary to win elections. The difficult part comes when individuals attempt to define precisely what they mean by 'reliable' and 'effective'.

When I tell people that I am a teacher, the responses are always delivered with a healthy mixture of three main themes: acknowledgment of the difficulty of the job, its importance, and a statement on how teachers are not valued in today's society. What follows is also like clockwork: a sip of a drink, then a statement about the “other hand” - at least teachers get summers off and can go home at three. Sometimes I explain other realities of the job (such as grading or the intricacies of lesson planning). Other times I just nod and accept that most people lack an understanding of how much work is involved in good teaching (both inside and outside of the classroom) or in developing into a better teacher after each day's set of experiences.

As has been said many times before, good teaching is both an art and a science. Teachers will also admit (often when out of hearing range of administrative judgment) that good teaching is an iterative process. There are good days and rough ones, engaging lessons and unintentionally boring activities, and even times when a potentially good lesson fizzles because it meets a particularly fickle developing mind too soon after lunch. While principles of psychology, child development, and cognition can offer significant insight into what should work well in the classroom, teachers are also expected to use a great deal of intuition and experience to figure out what will work best to help students achieve their learning goals and meet standards. Students are, after all, people, not machines.

It is also fairly obvious that the concept of accountability in the educational system is here to stay. This is not in itself a bad idea – given that most people agree about the importance of education, distinguishing an effective educational system from a less effective one is necessary to iteratively reach a system that works well for its students. The devil is again in the details. Teachers, administrators, political leaders, professors, statisticians – their approaches can be as varied as the students in the New York City educational system.

I will now admit one of my own mistakes as a teacher: I have punished an entire class of students for the actions of a few. It never gets me the results I want, and when I have thought about it afterward, it never makes sense. Many students do the right thing on a regular basis – why yield control of the class to the few that are least able to handle it? These individuals often need to be managed in a different way, time, or setting. When I do handle things in this individualized way, as difficult as it can be with a larger class, it always works out more positively for both me and the students involved.

The logic of a one-size-fits-all solution does not make sense in education. So why is it so common? Every day, our community grapples with the difficulties of reconciling the practical side of accountability with the ultimate goal of educating youngsters to become informed and responsible citizens. And yet, we frequently see solutions or policies that attempt to reduce complexity to the singular innovation, classroom structure, or educational program that will fix all of the system's problems.

Furthermore, many people in our field strive on a regular basis to paint a picture of other players as being woefully inadequate, incompetent, or immoral, even though such people may be a small fraction of the whole. Principals complain about veteran teachers that refuse to try new things and are difficult to fire because of union rules. Teachers that join the profession through alternative routes cry foul when some principals seem concerned only with pass percentages or when a veteran teacher does not take the time to grade nightly homework. A public school parent wonders why his son's new science teacher, who cannot control a class, replaced one with more experience who was fired because he refused to write a whole new curriculum without being paid for the time. A community member might see charter schools as elitist and unfairly funded, but a student attending a charter might just as easily wonder why she could not get the personal attention she needed from her old neighborhood school.

The fact is that the entire spectrum of humanity, from crooks to tragic idealists, is present in our system. There is also a substantial population on the other side of the coin. There are parents that want to help their children with homework but do not know how. There are new teachers that are willing to work long hours to write lesson plans, but do not know that the secret of teaching addition of fractions could be revealed in a minute-long conversation with a veteran. Furthermore, there are veteran teachers that have legitimate concerns about policy changes based on their past experience, but their voices are drowned out by others labeling them 'naysayers'.

To frame the debate by assertions from one group on how much another group cares (or does not care) about children and their education is completely unproductive – all of us want the best for the children in our system. There are many innovative, talented, and passionate people that want to work hard in a system to help children make meaningful progress in developing skills for future success. To also claim, however, that moving forward is impossible because of a minority is just as illogical. There are ways to include everyone in the process and discuss how to lead students to develop good character traits and be prepared for their own academic goals.

The primary flaw in the current administrative efforts is in looking for the system that will work for everyone, rather than the people that will make the system work for all. Teachers are often told to differentiate instruction, which means to optimize classroom activities to help and support students of all skill levels to reach specific learning goals. All students are not the same, and neither the paths they follow to reach their academic and character goals, nor the support they need along the way, will be the same. Why do we look for solutions that are not differentiated in this manner?

The energy crisis will not be solved by just solar power, or wind power, or biofuels. It will be solved by solar power and wind power and biofuels and conservation and the development of new technologies and the adaptation of some old ones. There is no silver bullet; there is, however, a combination of different energy sources that will together bring us a more stable climate, a more stable economy, and a more sustainable lifestyle for people around the world.

Along the same line of reasoning, just creating a system with more charter schools will not solve our problems if the human capital needed to run them is not developed concurrently. Changing the system to one that closes failing schools and replaces them with the same administrators, teachers, and conditions that led to their downfall is not the answer. More testing is not the answer, but eliminating testing will not work either.
We need to better support neighborhood schools and the people that work within them, while also having a system that supports charter schools. We need a union that works to support and protect teachers that might agree with the goals of the administration, but not the methods they use to reach them. We need to study both effective charter schools and effective traditional schools for the successful elements they share. We need to innovate to find ways to emulate the positive aspects of both and invest in people to do the extra work necessary to adapt and run these systems in effective ways. We need to find ways to unite the experience of veterans with the energy of new teachers and alternative certification programs like Teach for America. New teachers should not have to spend late nights reinventing the wheel while veteran teachers are eager to be heard.

An earnest effort to invest in and support the people that make these systems work is the other crucial piece that is necessary for the system to improve. Systems that depend on human talent and ingenuity (as education does) cannot be duplicated by simply copying the structures produced by effective educators. One educator may use a word wall effectively to improve his or her students' understanding, but simply insisting that every classroom have a word wall does not make every educator effective.

A podcast made earlier this year from This American Life, a production of WBEZ in Chicago, described an interesting collaboration between Toyota and GM called New United Motor Manufacturing Incorporated, or NUMMI. Workers from GM traveled to Japan to tour the Toyota plants and explore the structures in place. Many of the workers that traveled were experienced machinists that had spent years doing the same thing over and over again in the plant, and were disillusioned by the mechanical nature of their job. The success of the Japanese system relied on the observations and ideas of individual workers along the assembly line working together and alerting each other when there was a problem. When the assembly line stopped because of a problem, a playful tone would play throughout the factory so that individual workers would know which station needed assistance. Floor managers would work to divert workers to assist the troubled station until the issue was resolved. The end result of this system was a higher productivity, higher quality product, and increased pride on the part of the workers constructing the cars. The American workers were energized by the visit and left Japan inspired and ready to apply the lessons learned to the factory floor back in the United States.
Initially, management was excited to call upon the new energy of the returning workers. These managers attempted to copy the exact structure of the plants in Japan, down to the alignment and arrangement of the individual machines on the shop floor. They opted not to invest the same energy and money in establishing the systems that supported the people in the assembly line. Penalties were instituted for halting the assembly line, and workers would snitch on each other to enhance their reputations with managers. In the end, the same problems experienced by American plants before the collaboration still occurred. What remained was an unhappy workforce, and a factory that looked just like the plant in Japan, but with none of the productivity.

Cookie cutter solutions appeal to the preference for simplicity built into the human mind. They are shortcuts, which are dangerous if used without understanding what is cut out for the sake of speed. Charter schools do a lot of good for the students that attend them, but that does not mean we need many more charter schools - there are both effective and ineffective ones. Research shows that Teach for America corps members can help students make progress on a level comparable to that of teachers with more experience, but that does not mean we need more Teach for America corps members and fewer traditional teachers.

These represent individual pieces of the solution to the difficulty facing us in reforming the educational system to work better for its students. The key is to figure out how to make the most of all of the different talents and capabilities of all of the people in our system to do this. This is not about stating which group is the greatest obstacle to progress. The system will only improve if we figure out how best to inspire and support the people working within it. Each of us knows what is at stake.

New Moves: Discretized Grades

Two of the courses I teach, AP Calculus AB and IB Mathematics SL year two, have clear curricula to follow, which is both a blessing and a curse. While I primarily report standards based grades in these courses, I have also included a unit exam component that measures comprehensive performance. These are old-fashioned summative assessments that I haven't felt comfortable expelling from these particular courses. Both courses end with a comprehensive exam in May. The scores on these exams will be scaled either to a 1 - 5 (AP) or a 1 - 7 (IB). The longer I have taught, the more I have grown to like the idea of reporting grades as one of a limited set of discrete scores.

Over my entire teaching career I have worked within systems that report grades as a percentage, usually to two-digit precision. Sometimes these grades are mapped to an A-F scale, but students and parents tend not to pay attention to those. One downside to the percentage reporting system is that it implies that we have measured learning to within a single percentage point. Let's set aside for the moment the question of whether we should be measuring learning numerically at all, and talk about why discrete grades are a better choice.

As a teacher, I need to make sure that I grade assignments consistently across a course, or across a section at a minimum. I'm not sure I can be consistent within a percentage point when you consider the number of my students multiplied by the number of assessment items I give them. I'm likely consistent within five percent, and very likely consistent within ten. I am also confident in my ability to have a conversation with any student about what he or she can do to improve because of the standards based component of my grading system.

One big problem I see with grading scales that map to letter grades is the arbitrary mapping between multiples of ten and the letter grades themselves. As I mentioned before, many don't pay attention to the letter at all when the number is next to it. Students that see a score of 79 wonder what one thing they should have done on the assessment to be bumped up by a percentage point to get an 80, resulting in a letter grade of a B. That one point also becomes that much more consequential than a single point raising a 75 to a 76.

Another issue comes from the imprecise definition of the points for each question. Is that single-point increase the result of a sign error, or of a more significant conceptual issue? Reporting at this level of precision suggests that we can talk about things this accurately, but it is not common to plan assessments in such a way that these differences are clearly identified. I know I don't have psychometricians on staff.

For all of these reasons and more, I've been experimenting with grading exams in a way that acknowledges this imprecision and attempts to deal with it appropriately.

The simplest way I did this was with final exams for my Precalculus course last year. In this case, all scores were reported after being rounded to the nearest multiple of three percentage points. This meant that student scores were rounded roughly to the divisions of the letter grades for plus, regular, or minus (e.g. B-/B/B+).
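To make that concrete, here is a minimal sketch of the rounding step, assuming straightforward rounding to the nearest multiple of three (the function name is just for illustration, not the exact formula I used):

    def round_to_nearest_three(percent):
        """Round a raw percentage score to the nearest multiple of three."""
        return int(round(percent / 3) * 3)

    # A few sample raw scores and the values that would be reported:
    for raw in [79, 80, 86, 92]:
        print(raw, "->", round_to_nearest_three(raw))
    # 79 -> 78, 80 -> 81, 86 -> 87, 92 -> 93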

In the AP and IB courses, this process was more involved. I decided that exam scores would be 97, 93, 85, 75, and 65, which would map to 5-4-3-2-1 for AP and 7-6-5-4-3 for IB. I entered student performance on each question into a spreadsheet. Sometimes before entering those scores, and sometimes after, I would also go through each question and decide what sort of representative mistakes I would expect a 5 student to make, a 4 student to make, and so on. I would also work through a couple of different scoring scenarios at each level to see how much variation in points might result in a given score. That led me to decide which cut scores should apply, or at least suggested what they might be for this particular exam. Here is an example of what this looks like:

At this point I would also look at individual papers again, identify holistically which score I thought the student should earn, and then compare their raw scores to those of the representative papers. If there was any clear discrepancy, this would lead to a change in the cut scores. Once I thought most students were graded appropriately, I fed the raw scores into a Google script that scaled them all to the discrete scores.
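Here is a minimal sketch of what that final scaling step might look like, written in Python for illustration rather than as the actual Google script; the cut scores and student totals below are hypothetical:

    # Hypothetical cut scores for one exam: the minimum raw point total needed
    # for each discrete reported score (mapping to AP 5-4-3-2-1 or IB 7-6-5-4-3).
    CUTS = [
        (42, 97),  # raw points >= 42 -> reported 97 (AP 5 / IB 7)
        (36, 93),  # AP 4 / IB 6
        (28, 85),  # AP 3 / IB 5
        (20, 75),  # AP 2 / IB 4
        (0, 65),   # AP 1 / IB 3
    ]

    def discretize(raw_points):
        """Scale a raw point total to one of the discrete reported scores."""
        for cut, reported in CUTS:
            if raw_points >= cut:
                return reported
        return CUTS[-1][1]

    raw_totals = {"Student A": 44, "Student B": 31, "Student C": 19}
    reported = {name: discretize(points) for name, points in raw_totals.items()}
    print(reported)  # {'Student A': 97, 'Student B': 85, 'Student C': 65}

Adjusting the cut scores after norming against the representative papers just means editing that table and rerunning the script.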

This process of norming the papers took time, but it always felt worth it in the end. I felt comfortable talking to students about their scores and the work that qualified them for that score. The independence of these totals from the standard 90/80/70/60 mapping between percentages and letter grades meant that the scores were appropriate indicators of how students did, regardless of the raw percentage of points earned. Students weren't excited that they could no longer compute their total point percentage and immediately know their score, but this was not a major issue for them. Going through this process felt much more appropriate than applying a 10*sqrt(score) type of mapping to the raw scores.

In my end of semester feedback, some students reported their frustration that they would receive the same score as other students that earned fewer points. I understand this frustration in principle, but not in practice. The scores 92.44% and 91.56% also receive the same score under the standard system by rounding to the nearest percentage. I think in the big picture, the grades students received were fair, and students have also reported a feeling of fairness with respect to the grades I give them.

I'm in favor of eliminating the plus and minus designations from letter grades. They are communication marks and nothing more, and I would rather communicate those distinctions through written comments or in person than by a symbol. These marks are more numerical consequences of the percentage grade scale than they are intentional comments on student learning, and they do more harm than good.

New Moves: Reassessment

I’ve been a bit swamped over the course of the semester and unfortunately haven’t made the time to write regularly. There were lots of factors converging, and nothing negative, so I accepted that it might be one of the things to slip. This is something I will adjust for semester two.

I’ve written in the past about my reassessment systems and use of WeinbergCloud to manage them. I knew something had to change and thought a lot about what I was going to do to make my system more reasonable, something the old system was not.

At the beginning of the year, I sat down and started to reprogram the site...and then stopped. As much as I enjoyed the process of tweaking its features and solving problems that arose with its use, it was not where I wanted to spend my time. I also knew that I was going to teach a course with a colleague who also was planning to do reassessment, but I was not ready to build my system to manage multiple teachers.

I made an executive decision and stepped away from the WeinbergCloud project. It served me well, but it was time to come up with a different solution. We use Google for Education at my school, and the students are well-versed in the use of calendars for school events. I decided to make this the main platform for all sorts of reasons. Putting my full class and meeting schedule into Google Calendar meant that I could schedule student reassessments while actually seeing what my schedule looked like in a given week. Students last year would sign up to reassess at times when I had lunch duty or an after-school meeting because my site didn't have any way to block out times. This was a major improvement.

I also limited students to one reassessment per week. They needed to email me before the beginning of any given week and tell me what standard they wanted to reassess over. I would then send them an invite to a time they would show up to do their reassessment. This improved both student preparation and my ability to plan ahead for reassessments knowing what my schedule looked like for the day. Students liked it up until the final week of the semester, when they really wanted to reassess multiple times. I think this is a feature, not a bug, and will incentivize planning ahead.

I recorded student reassessments in PowerSchool in the comment tab. Grades with comments appear with a small flag next to them. This meant I could scan across horizontally to see what an individual student had reassessed on. I could also look vertically to see which standards were being assessed most frequently. The visual record was much more effective for qualitative views of the system than what I had previously with WeinbergCloud.

The system above was for my IB and AP classes. For Algebra 2 (for which I teach two sections and which I share with the other teacher) we had a simpler system. Students would be quizzed on standards, usually two at a time. Exams would be reassessments on all of the standards. Students would then have a third opportunity to be quizzed on up to three of the standards of each unit later in the semester. Students that had less than an 8 were required to reassess. This system worked well for the most part. Some students thought that the types of questions on the quizzes and exams were different enough that they were not equivalent assessments of the standards. My colleague and I spent a lot of time talking through the questions, identifying the types of mistakes on individual questions that were indicators of a 6 versus an 8 versus a 10, and unifying the feedback we gave students after assessments. The system isn't perfect, but all students were given up to three opportunities to be assessed on every standard. This equity is not something I've achieved before in my previous implementations of SBG.

On the whole, both flavors of reassessment systems were much more reasonable and manageable, and I think they are here to stay. I’ll spend some time during the winter break thinking about what tweaks might be needed, if any, for the second half of the year.

A Note on Vertical Planning

Many teachers justify including topic X and skill Y on a high school syllabus because colleges and universities expect students to have mastered topic X and skill Y for their courses. Not because topic X is interesting or skill Y is necessary for success at the high school level, but because the next step expects it.

I wonder if the set of X and Y for high school teachers matches the set of X and Y for universities. I wonder how often university professors and high school teachers (and middle school or elementary teachers for that matter) get together to discuss this.

I wonder which of our assumptions about what the other thinks matches reality.


New Moves: Design Principles and Generosity

During the summer, I attended the academy for the new class of Apple Distinguished Educators in Melbourne, Australia. Among the workshops I attended was one from Stephen Hider on design principles.

Given the obsession with design I've developed over the past few years, much of this was nothing new. Alignment, proximity, repetition, and contrast were all old friends. The one that seemed new, perhaps because of a new name, was generosity. This principle means that an element of a design has been given enough space around it such that it is, in Stephen's words, "able to breathe." Removing distracting elements around the focus allows a person to think about it in isolation, and with more clarity than would be possible without the added space.

The idea is something that I've been thinking about for a while, inspired principally by Dan Meyer's exploration of ways that digital media provides ample opportunities to do things much differently than when confined by the economic costs of paper. (For more on this, see his talk titled 'Delete Your Textbook', linked here.)  I wrote in my previous post about changing the organization of my course away from a daily handout and toward individual tasks, each a separate linked PDF file. Individual problems or questions are presented on their own with space around them, when appropriate.

Here's an example of the contrast between a handout from last year's Algebra 2 course, and a page from a task this year.

The old:

...and the new:

The amount of paper I use in my classroom is reduced, and is much more deliberate. I still will print out individual pages when I really want to do so. The fact that I have freed myself from the demand that there be a handout for every class means I can be much more thoughtful about this. I can focus more on how I visually present ideas that are connected to each other rather than trying to make sure that everything fits in a manageable area of a page. The intention was not to be paperless, but I am finding that this small change has led to students being more likely to take time to pause between tasks and reflect on the work they have done before moving on. Nothing I have done previously has had such an effect.

New Moves: Course Organization

Ever since switching to standards based grading, many components of my courses and classroom organization have come into alignment with my philosophy of teaching. Ideally, these align perfectly, but the realities of time and professional responsibilities can shift this alignment. My beliefs on assessment, on effective learning activities, and on using the classroom social space effectively have all come into sharp focus when my grade book aligns more closely with the learning that goes on.

There is one notable exception to this alignment.

My class notes and handouts, and therefore much of my courses, have always been organized around days of class within a unit: Unit 1, Day 2 handout; Unit 3, Day 5 handout; Unit 5 review; and so on. This has made it easy for someone that misses day three of unit two to know precisely what was missed that day. It also makes it easy for me to see how I organize the days within a unit. This is how I've done things for the past fourteen years.

In courses organized around standards, like mine, a student should be able to see the development of content related to a standard from start to finish. The progression of content within a standard allows students to see ideas grow from simple to complex. Under the old organization, a student that wants to review standard 1.1 needs to know which days covered material related to that standard. While identifying this is an important high-level task, it doesn't help struggling students know where to look for the ideas that relate to a given standard.

This is the main reason I have organized all of my course materials this year by standard. Here's a screenshot of a portion of my IB Mathematics SL Year 2 page on Moodle:

Each problem set or activity is organized under the learning standard to which it applies. When I post notes about a given problem or activity, they go underneath the problem set to which they apply. Some days we work on content related to multiple standards, but I parse that material into parts and organize it accordingly. When we do work that genuinely spans multiple standards, that work is posted above the standards and identified as such.

In the past, students have consistently asked to know the details of a given standard - now they can see for themselves what types of problems relate. The materials are also generally organized in increasing level of difficulty or abstraction, so students know that the more challenging content is listed further down below the standard. I've also found that the types of activities I have students do are more diverse. I might send students to watch a video, do a curated list of Khan Academy exercises, or write a response to a prompt. Previously, the class handout was the one source of truth for what students should be doing at any one time. Now the materials have been expanded.

There is still a preferred order or menu of activities that I prescribe for each class. I post this as an agenda and refer students to it when it looks like they need some direction:

Students have reported that they have more freedom to do things at their own pace under this system.  We may not finish all of the material from Unit 2, Day 3 - that just means that the material can be moved to the next day's agenda. Naming the tasks in this different way makes it easy for a student to move ahead or work independently. I can spend my time during the class helping those who need it and challenging those that are making good progress.

I really like how this has transformed the spirit of my classroom. I admit that the organization of the course into standards is artificial - the real world is not organized this way. Being deliberate and communicating how class activities serve the learning standards, and what relates to big picture unit-wide challenges, helps students understand the balance between the two. I know this isn't the final answer, but it does seem to be a step in the right direction for my students.

2016 - 2017 Year In Review: Standards Based Grading

Overview

I've used Standards Based Grading, or SBG, with most of my classes for the past five years. It transformed the way I think about planning, assessment, classroom activities...and pretty much everything else around my teaching practice. I have a difficult time imagining what would happen if I had to go back. I've written a lot about it this year - here are some of the posts:

Scaling up SBG for the New Year
Standards Based Grading and Leveling Up
The How and Why of Standards Based Grading @ Learning 2.0
Too Many Reassessments, Just in Time for Summer

As I wrote in that last post, I still wrestle with the details, but I'm fully invested in the philosophy. I am glad to have administrators who support me in adapting it to work within the more traditional system. I've also had some great conversations with colleagues who are excited by the concept but wonder how to make it work in their courses.

Here's the rundown of how it went this year.

What worked:

  • Students really bought into the system. The most common survey responses about what I needed to keep were the grade being defined by standards and the reassessment system. I found students were often the system's best advocates when other teachers and parents had questions, which made communication much easier.
  • The system was the gateway to many very positive conversations with students around learning, improvement, and the role of feedback. Conversations were around understanding concepts and applying them, not asking for points. Many students would finish a reassessment and tell me that their grade should stay the same, but that they would keep trying. Other students would try to argue their way to a higher score, but using the vocabulary I use to define my standard descriptors (linked here). They understood that mistakes are informative, not punitive. Transplanting this understanding to students in my new school was a major success of the year.
  • I developed a better understanding of what I'm looking for at each level on my 5 - 10 scale. Part of this came from being at a new school and needing to articulate this to students, parents, and administrators. The SBG and Leveling up project (linked above) helped refine my definitions of what distinguishes a 9 from a 10, or a 6 from a 7.

What needs work:

  • I had way too many reassessments. Full stop. I wrote about this in my post Too Many Reassessments, Just in Time for Summer and am exhausted just thinking about doing it again. There are a couple elements of this to unpack. One is that my credit system allows for reassessments to occur more frequently than I believe deep learning can really take place. I'm thinking about making it so students are locked out of reassessing on a standard for a set period of time, at least when going for a score of 8 or above where the goal is transfer of skills and flexibility of application. The other thing I am considering is limiting students to a single reassessment per week, or day, or some other interval. I have some time to decide on this, which is good, because both require a rewrite of my online signup tool, WeinbergCloud.
  • Long-term retention was still not where it needs to be. I wrote about this already in my post about my IB Mathematics Year 1 course. As I have taught more and more in this system, I have come to believe ever more strongly that clear communication about what grades signify about a student matters. A lot. Moving from quarters to semester grades is one part of improving this, a change that my administrative team made for this coming year, but a lot of it still sits with me. I need to spiral, I need to reassess on old standards, and I need to hold students accountable for older material.
  • Communicating the role of semester exams was a major challenge for me this year. In a small school, I found it was easy to communicate with individual students and parents about the role of semester exams. I based much of my outreach on what I understood about these exams and the role of learning standards grades throughout the year. A standards based grade book breaks down the entire topic into bite sized pieces, which makes it easier both to communicate strengths and weaknesses, and for students and teachers to decide what is the best next step. Semester exams are opportunities to put all of these pieces together and assess a student's ability to decide which standards apply in a given problem. Another way of looking at it is a soccer practice versus a soccer game mentality.

    Ultimately, I do want students to be successful across the breadth of the content on which a course is based. Semester exams serve as one way to measure that progress in the bigger picture of an entire course, rather than a unit. This also serves as a third scale on which to consider assessment in my course. Quizzes assess a standard, exams assess a unit of standards (with a few older standards thrown in), and semester exams assess mastery of a portion of the course. That different scale is why the 80% quarter grade / 20% exam grade split that I've followed for seven years is entirely reasonable.

    A student that aces all of the standards with a 100 but gets a 50 on the final ends up with a 90. This student receives the same semester grade as someone that has a 90 up until the final and gets a 90 on the final. I'm fine with this parity in grades. I would have very different conversations with those two students before the next semester of mathematics in their plans.

    The main challenge I found was that students and parents often looked at that final exam grade in isolation from, not together with, the rest of the scores in the grade book. The parent of the first student (100, then 50) that asks me to explain that disparity is certainly justified in doing so. Where I fell short was communicating the reality that in a standards based system, grades usually drop after a semester exam. It's a fundamentally different brand of assessment.

    I'll also point out that the report card presented a semester of assessment in table form as quarter 1 grade, quarter 2 grade, exam grade, and then semester grade. This artificially shows the exam grade as perhaps being more consequential to the grade than it actually is. This isn't in my realm of influence, so I'll stop talking about it. The bottom line is that I need to do a better job of communicating these realities to everyone involved.

Conclusion

I'm glad to be starting another year soon and to continue to make this system do good things for students. Cycle forward.

2016 - 2017 Year In Review: IB Mathematics SL

Overview

This was my third time around teaching the first year of the IB mathematics SL sequence. It was different from my previous two iterations given that this was not an SL/HL combined class. This meant that I had more time available to do explorations, problem solving sessions, and in-class discussions of the internal assessment (also called the exploration). I had two sections of the class with fourteen and twenty students respectively.

I continued to use standards based grading for this course. You can find my standards (which define the curricular content for my year one course) at this link:

IB Mathematics SL Year 1 - Course Standards

What worked:

  • My model of splitting the 80 - 85 minute block into twenty-minute sub-blocks works well. I can plan what happens in those sub-blocks and try to keep students actively doing something for as much of that time as possible. The first block is a warm-up, some discussion, a check-in about homework or whatever, and then usually some quick instruction before the second block, which often involves an exploration activity. The third is a summary of the explorations or preceding activities along with example problems, and the fourth is me circulating and helping students work.
  • Buffer days, which I threw in as opportunities for students to work on problems, ask questions, and play catch up, were a big hit. I did little more on these days than give optional sets of problems and float around to groups of students. Whenever I tried to just go over something quick on these days, those lessons quickly expanded to fill more time than intended. It took a lot of discipline to instead address issues as they came up.
  • I successfully did three writing assignments in preparation for the internal assessment, which students will begin writing officially at the beginning of year two. Each one focused on a different one of the criteria, and was given at the end of a unit. Giving students opportunities to write, and get feedback on their writing, was useful both for planning purposes and for starting the conversation around bad habits now.

    I had rolling deadlines for these assignments, which students submitted as Google Docs. I would go through a set of submissions for a class, give feedback to those that made progress, and gentle reminders to those that hadn't. The final grade that went into PowerSchool was whatever grade students had earned by the end of the quarter.

    The principle I applied here (and one to which I have subscribed more fervently with each year of teaching) is that my most valuable currency in the classroom is feedback. Those that waited to get started in earnest on these assignments didn't get the same amount of feedback as students that started early, and the quality of their work suffered dramatically. I'm glad I could have these conversations with students now so that I might have a chance of changing their behavior before their actual IA is due.

    An important point: although I did comment on different elements of the rubric, most of my feedback was on the criterion that titled the assignment. For example, in the communication assignment I occasionally referenced reflection and mathematical presentation, but I gave the most detailed feedback on communication and graded solely on that criterion.

    These were the assignments:

  • I budgeted some of my additional instruction time for explicit calculator instruction. I've argued previously about the limitations of graphing calculators compared to Geogebra, Desmos, and other tools that have substantially better user experiences. The reality, however, is that these calculators are what students can access during exams. Without some level of fluency accessing the features, they would be unable to solve some problems. I wrote about this in my review of the course last year. This time was well spent, as students were not tripped up by questions that could only be solved numerically or graphically.
  • Students saw many past paper questions, and seem to have some familiarity with the style of questions that are asked.

What needs work:

  • I've come to the conclusion that preemptive advice is ineffective. "Don't forget to [...]" or "You need to be extremely careful when you [...]" is what I'm talking about. It isn't useful for students that don't need the reminder. It doesn't help the students that lack a context for what you are telling them not to do because they haven't yet solved problems on their own. I have found it to be much more effective to address those mistakes after students get burned by them. Some of my success here comes from my students subscribing to a growth mindset, which is something I push pretty hard from the beginning. Standards based grading helps a lot here too.
  • I desperately need a better way to encourage longer retention of knowledge, particularly in the context of a two-year IB course. I'll comment more on this in a later post, but standards based grading and the quarter system combined were factors working against this effort. I did some haphazard spaced repetition of topics on assessments in the form of longer-form section two questions. The fact that I was doing this did not incentivize enough students to review regularly. I also wonder if my conflicted beliefs on fluency versus understanding of process play a role as well.
  • Students consistently have a lot of questions about rounding, reporting answers, and follow-through in using those answers in the context of IB grading. The rules are explicitly stated in the mark schemes for questions - answers should be reported exactly or to three significant figures unless otherwise noted. The questions students repeatedly have relate to multiple part questions. For example, if a student does a calculation in part (a), reports it to three significant figures, and then uses the exact answer to answer part (b), might that result in a wrong answer according to the mark scheme? What if the student uses the three significant figure reported answer in a subsequent part?

    I did a lot of research in the OCC forum and reading past papers to try to fully understand the spirit of what IB tries to do. I'd like to believe that IB sides with students that are doing the mathematics correctly. I am not confident in my ability to explain what the IB believes on this, which means my students are uncertain too. This bothers me a lot.

  • Students still struggle to remember the nuances of the different command terms during assessments. They will also do large amounts of complex calculation and algebraic work in spite of seeing that a question is worth only two or three marks. There is clearly more work to do on that, though I expect that will improve as we move into year two material because, well, it usually does. I wish there were a way to start the self-reflection process earlier.
  • Students struggle to write about mathematics. They also struggle with the reality that there is no way to make it go faster or do it at the last minute without the quality suffering. I still believe that the way you get better is by writing more and getting feedback, and that's the main reason I'm glad I made the changes I did regarding the exploration components. That said, students know how to write filler paragraphs, and I call them out on filler every single time.
  • We spent a full day brainstorming and thinking about possible topics for individual explorations. When I surveyed the students, only four of them were certain about their topics. The rest have asked for additional guidance, which I am still figuring out how to provide over the summer. I think this process of finding viable topics remains difficult for students.

Conclusion

I'll be following these students to year two. We have the rest of probability to do first thing when we get back, which I'll combine with some dedicated class time devoted to the exploration. I like pushing probability and calculus to year two, as these topics are, by definition, plagued by uncertainty. It's an interesting context in which to work with students in their final year of high school.

2016 - 2017 Year In Review: PreCalculus

Overview

This was the first time I taught a true PreCalculus course in six years. At my current school, the course serves the following functions:

  • Preparing tenth grade students for IB mathematics SL or HL in their 11th grade year. Many of these students were strong 9th grade students that were not yet eligible to enter the IB program since this must begin in grade eleven.
  • Giving students the skills they need to be successful in Advanced Placement Calculus in their junior or senior year.
  • Providing students interested in taking the SAT II in mathematics some guidance in the topics that are covered by that exam.

For some students, this is also the final mathematics course taken in high school. I decided to design the course to extend knowledge in Algebra 2, continue developing problem solving skills, do a bit more movement into abstraction of mathematical ideas, and provide a baseline for further work in mathematics. I cut some topics that I used to think were essential to the course, but that did not properly serve the many different pathways that students can follow in our school. Like Algebra 2, this course can be the Swiss Army knife course that "covers" a lot so that students have been exposed to topics before they really need to learn them in higher-level math courses. I always think that approach waters down much of the content and the potential for a course like this. What tools are going to be the most useful to the broadest group of students for developing their fluency, understanding, and communication of mathematical ideas? I designed my course to answer that question.

I also found that this course tended to be the one in which I experimented the most with pedagogy, class structure, new tools, and assessment.

The learning standards I used for the course can be found here:
PreCalculus 2016-2017 Learning Standards

What worked:

  • I did some assessments using Numbas, Google Forms, and the Moodle built-in quizzes to aid with grading and question generation. I liked the concept, but some of the execution is still rough around the edges. None of these did exactly what I was looking for, though I think they could each be hacked into a form that does. I might be too much of a perfectionist to ever be happy here.
  • For the trigonometry units, I offered computer programming challenges that were associated with each learning standard. Some students chose to use their spreadsheet or Python skills to write small programs to solve these challenges. It was not a large number of students, but those that decided to take these on reported that they liked the opportunity to think differently about what they were learning. (There's a sketch of one such challenge after this list.)
  • I also explicitly taught the use of spreadsheet functions to develop students' computational thinking skills. This required designing some problems that were just too tedious to solve by hand. This was fun.
  • Differentiation in this course was a challenge, but I was happy with some of the systems I used to manage it. As I have found to be common since moving abroad, many students are computationally well developed, but not conceptually so. Students would learn tricks in after-school academy that they would try to use in my course, often in inappropriate situations. I found a nice balance between problems that started low on the ladder of abstraction, and those that worked higher. All homework assignments for the course in Semester 2 were divided into Level 1, Level 2, and Level 3 questions so that students could decide what would be most useful for them.
  • I did some self-paced lessons with students in groups using a range of resources, from Khan Academy to OpenStax. Students reported that they generally liked when I structured class this way, though there were requests for more direct instruction among some of the students, as I described in my previous post about the survey results.
  • There was really no time rush in this course after my decision to cut out vectors, polar equations, linear systems, and some other assorted topics that don't really show up again except in Mathematics HL or Calculus BC, where it's worth seeing them again anyway. Some students also gave very positive feedback regarding the final unit on probability. I took my time with things there. Some of this was out of necessity when I was out sick for ten days, but there were many times when I thought about stepping up the challenge faster than I really needed to.
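To give a flavor of the programming challenges mentioned above, here is a hypothetical example of the kind of task a student might take on for a law of cosines standard; the prompt and code are illustrative rather than one of the actual challenges I assigned:

    import math

    # Challenge: given three side lengths, decide whether they form a valid
    # triangle and, if so, report all three angles in degrees.
    def triangle_angles(a, b, c):
        """Return the angles (in degrees) opposite sides a, b, c, or None if invalid."""
        if a + b <= c or b + c <= a or a + c <= b:
            return None
        angle_a = math.degrees(math.acos((b**2 + c**2 - a**2) / (2 * b * c)))
        angle_b = math.degrees(math.acos((a**2 + c**2 - b**2) / (2 * a * c)))
        angle_c = 180 - angle_a - angle_b
        return angle_a, angle_b, angle_c

    print(triangle_angles(3, 4, 5))  # roughly (36.87, 53.13, 90.0)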

What needs work:

  • I wrote about how I did the conic sections unit with no numerical grades - just comments in the grade book. The decision to do that was based on a number of factors. The downside was that when I switched back to numerical grades for the final unit, the grade calculation for the entire quarter was based only on those grades, and not on the conic sections unit at all. The conic sections unit did appear on the final exam, but for the most part, there wasn't any other consequence for students that did not reassess on the unit.
  • Students did not generally like when I used Trello. They liked the concept of breaking up lessons into pieces and tasks. They did not like the forced timelines and the extra step of the virtual Trello board for keeping track of things. This Medium article makes me wonder about doing this in an analog form if I try it in the future. I also could make an effort to instill the spirit of Scrum early on so that it's less novel, and more the way things are in my classroom.
  • I should have done a lot more assessment at the beginning of units to see what students knew and didn't know. It sounds like the student experiences in the different Algebra 2 courses leading to PreCalculus were quite different, which led to a range of success levels throughout. Actually, I should probably be doing this more often for all my courses.
  • Students could create their own small reference sheet for every exam. I did this because I didn't want students memorizing things like double angle identities and formulas for series. The reason this needs work is that some students are still too reliant on having this resource available to ever reach any level of procedural fluency. I know what students need to be fluent later on in the more advanced courses, sure, but I am not convinced that memorization is the way to get there. Timed drills don't seem to do it either. This challenge is compounded by the fact that not all students need that level of fluency for future courses, so what role does memorization play here? I have struggled with this in every year of my fourteen-year career, and I don't think it's getting resolved anytime soon. This is especially the case when Daniel Willingham, who generally makes great points that I agree with, writes articles like this one.

Conclusion

This course was fun on many levels. I like being there to push students to think more abstractly as they form the foundation of skills that will lead to success in higher levels of mathematics. I also like crafting exercises and explorations that engage and equip the students that are finishing their mathematics careers. We should be able to meet the needs of both groups in one classroom at this stage.

I frequently reminded myself of the big picture by reading through Jonathan Claydon's posts on his own Precalc course development over the years. If you haven't checked him out, you should. It's also entertaining to pester him about a resource he posted a few years ago and hear him explain how much things have changed since then.

2016 - 2017 Year In Review: Surveys

Overview

Last year I took Julie Reubach's survey and used it for the students in my final set of classes at my previous school. This year I gave essentially the same survey. Probably the most important thing for me was to compare some of the results to make sure the essential elements of my teaching identity made the transition intact.

The positives:

  • Students responded that the reassessments and the quizzing system were important elements to keep for next year. I'll share more about my reflection on the reassessment system in a later post.
  • Students liked having plenty of time during class to work and get help if they needed it. I tried to strike a balance between this, exploration, and direct instruction. More on that last point below.
  • Students appreciated the structures of class and the materials. They liked having warm-up activities for each class, the organization of documents on Google Drive, and the use of PearDeck for assessment of their ideas during class.
  • The stories, personal anecdotes, and jokes at the start of class apparently go over well with students. I don't think I could stop this completely anyway, so I'm glad students don't necessarily see this as being unfocused or as a waste of class time.
  • Students like structured opportunities to work together and solve problems that are not just sets from the handouts. Explorations got strong reviews, which is good because I think they are good uses of class time too.

What needs work:

  • Students want more example problems. I consistently did some in each class, but I always struggled with the balance between doing more problems and addressing issues as they came up individually. Some students want a bit more guidance that doesn't necessarily require whole group instruction, but say that the individual group explanations or suggestions aren't meeting their needs completely. This might mean I record some videos or present worked problems as part of the class resources in case students want them.

  • Related to the previous point is the use of homework. Some students want more help on homework, but again don't necessarily want to spend whole-class instruction time on it. I admit that I still struggle with the usefulness of going over homework, particularly as a whole class, and collecting information on what students struggled with is not a smooth process. The classroom notebook doesn't solve that problem to my satisfaction either. Short, focused presentations of how to get started on certain problems (and not full solutions) might be all that is needed to address this shortcoming that many students mentioned in their surveys.
  • Despite my efforts to make learning the unit circle easier, students continue to report their dislike for learning it. I present students with a series of approaches to understanding how to evaluate functions around the unit circle. This is also one of the few topics where I encourage both understanding (through creative assessment questions) and accuracy in evaluating functions correctly using whatever means students find necessary. Memorization, if that is what students choose to do, is one way to approach this. I think part of the issue is that proficiency in this topic requires more genuine effort than others. There are no shortcuts here, and facility with evaluating trigonometric functions goes a long way in making other topics easier. I'm not sure what the solution is here. This is one area where I think procedural fluency has no valid replacement, particularly in the context of IB or of preparation in Precalculus.
  • The other topic that students reported finding the most difficult was the binomial theorem, again surprising given that it is one of the more procedurally straightforward topics in these courses. Do I need to consider teaching these topics in a more formulaic way so that students are more successful? I wonder if I have swung too far in the wrong direction with respect to avoiding activities that demand fluency or practice.
  • Students want more summary of what we've done each class and where we are going. I think this is a completely valid request, and is perhaps made easier to do with each course defined in terms of learning standards.
Conclusion

I appreciate how consistently students are willing to give feedback about my classes. There were some really useful individual comments that will help me think about how the decisions I make might affect the spectrum of students in each course. I promised students that I wouldn't look at the results until after grades were in, just in case that might encourage more honesty. This was an anonymous survey, and with the larger class sizes this year, I think there was a greater degree of anonymity with respect to individual responses. There is a lot to sift through here, which is why I'm glad I still have the better part of the summer to do so.