Two of the courses I teach, AP Calculus AB and IB Mathematics SL year two, have clear curricula to follow, which is both a blessing and a curse. While I primarily report standards based grades in these courses, I have also included a unit exam component that measures comprehensive performance. These are old fashioned summative assessments that I haven't felt comfortable expelling from these particular courses. Both courses end with a comprehensive exam in May, with scores scaled either to a 1 - 5 (AP) or a 1 - 7 (IB). The longer I have taught, the more I have grown to like the idea of reporting grades as one of a limited set of discrete scores.

Over my entire teaching career I have worked within systems that report grades as a percentage, usually to two-digit precision. Sometimes these grades are mapped to an A-F scale, but students and parents tend not to pay attention to those. One downside to the percentage reporting system is that it implies we have measured learning to within a single percentage point. Let's leave aside, for the moment, whether we should be measuring learning numerically at all, and talk about why discrete grades are a better choice.

As a teacher, I need to make sure that I grade assignments consistently across a course, or across a section at a minimum. I'm not sure I can be consistent within a percentage point when you consider the number of my students multiplied by the number of assessment items I give them. I'm likely consistent within five percent, and very likely consistent within ten. I am also confident in my ability to have a conversation with any student about what he or she can do to improve because of the standards based component of my grading system.

One big problem I see with grading scales that map to letter grades is the arbitrary mapping between multiples of ten and the letter grades themselves. As I mentioned before, many don't pay attention to the letter at all when the number is next to it. Students that see a score of 79 wonder what one thing they should have done on the assessment to be bumped up a percentage point to an 80 and a letter grade of B. That single point is far more consequential than the one that raises a 75 to a 76.

Another issue comes from the imprecise definition of the points for each question. Is that single point increase the result of a sign error or of a more significant conceptual issue? Single-digit precision suggests that we can talk about things this accurately, but it is not common to plan assessments in such a way that these differences are clearly identified. I know I don't have psychometricians on staff.

For all of these reasons and more, I've been experimenting with grading exams in a way that acknowledges this imprecision and attempts to deal with it appropriately.

The simplest way I did this was with final exams for my Precalculus course last year. In this case, all scores were reported after being rounded to the nearest multiple of three percentage points. This meant that student scores were rounded roughly to the divisions of the letter grades for plus, regular, or minus (e.g. B-/B/B+).
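As a rough sketch, that rounding step amounts to something like the following (a hypothetical Python snippet for illustration, not the actual formula I used in my grade book):

```python
def round_to_step(score, step=3):
    """Round a raw percentage score to the nearest multiple of `step`."""
    return step * round(score / step)

# A raw 79 lands on 78, while an 80 lands on 81.
```

The point of the step size is that scores differing by less than the grading noise collapse to the same reported value.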

In the AP and IB courses, this process was more involved. I decided that exam scores would be 97, 93, 85, 75, and 65, which would map to 5-4-3-2-1 for AP and 7-6-5-4-3 for IB. I entered student performance on each question into a spreadsheet. Sometimes before entering scores, and sometimes after, I would go through each question and decide what sort of representative mistakes I would expect a 5 student to make, a 4 student, and so on. I would also run a couple of different scoring scenarios at each level to find how much variation in points might result in a given score. That led me to decide which cut scores should apply, or at least suggested what they might be for this particular exam. Here is an example of what this looks like:

At this point I would also look at individual papers again, identify holistically which score I thought the student should earn, and then compare their raw scores to the scores of the representative papers. If there was any clear discrepancy, this would lead to a change in the cut scores. Once I thought most students were graded appropriately, I fed the scores into a Google script that scaled all of them to the discrete scores.
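The scaling step itself is simple once the cut scores are set. Here is a minimal sketch in Python; the raw-point cutoffs below are invented for illustration, while the reported scores are the 97/93/85/75/65 values described above:

```python
# (minimum raw points, reported score) pairs, highest first.
# These cutoffs are hypothetical; in practice they come out of the
# norming process described above and differ for every exam.
CUT_SCORES = [
    (52, 97),  # AP 5 / IB 7
    (45, 93),  # AP 4 / IB 6
    (36, 85),  # AP 3 / IB 5
    (28, 75),  # AP 2 / IB 4
    (0, 65),   # AP 1 / IB 3
]

def discrete_score(raw_points):
    """Map a raw point total to its discrete reported score."""
    for cutoff, reported in CUT_SCORES:
        if raw_points >= cutoff:
            return reported
    return CUT_SCORES[-1][1]  # fallback for anything below all cutoffs
```

Because the lookup depends only on the cut scores, adjusting them after re-reading a few papers rescales every student's result consistently.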

This process of norming the papers took time, but it always felt worth it in the end. I felt comfortable talking to students about their scores and the work that qualified them for that score. The independence of these totals from the standard 90/80/70/60 mapping between percentages and letter grades meant that the scores were appropriate indicators of how students did, regardless of the percentages of points. Students weren't thrilled that they could no longer compute their point percentage and immediately know their score, but this was not a major issue for them. Going through this process felt much more appropriate than applying a 10*sqrt(score) type of mapping to the raw scores.

In my end of semester feedback, some students reported their frustration that they would receive the same score as other students that earned fewer points. I understand this frustration in principle, but not in practice. The scores 92.44% and 91.56% also receive the same score under the standard system by rounding to the nearest percentage. I think in the big picture, the grades students received were fair, and students have also reported a feeling of fairness with respect to the grades I give them.

I'm in favor of eliminating the plus and minus designations from letter grades. They are communication marks and nothing more, and I would rather communicate those distinctions through written comments or in person rather than by a symbol. These marks are more numerical consequences of the percentage grade scale than they are intentional comments on student learning, and they do more harm than good.

## New Moves: Reassessment

I’ve been a bit swamped over the course of the semester and unfortunately haven’t made the time to write regularly. There were lots of factors converging, and nothing negative, so I accepted that it might be one of the things to slip. This is something I will adjust for semester two.

I’ve written in the past about my reassessment systems and use of WeinbergCloud to manage them. I knew something had to change and thought a lot about what I was going to do to make my system more reasonable, something the old system was not.

At the beginning of the year, I sat down and started to reprogram the site...and then stopped. As much as I enjoyed the process of tweaking its features and solving problems that arose with its use, it was not where I wanted to spend my time. I also knew that I was going to teach a course with a colleague who also was planning to do reassessment, but I was not ready to build my system to manage multiple teachers.

I made an executive decision and stepped away from the WeinbergCloud project. It served me well, but it was time to come up with a different solution. We use Google for Education at my school, and the students are well versed in the use of calendars for school events. I decided to make this the main platform for all sorts of reasons. Putting my full class and meeting schedule into Google Calendar meant that I could schedule student reassessments while actually seeing what my schedule looked like in a given week. Students last year would sign up to reassess at times when I had lunch duty or an after school meeting because my site didn’t have any way to block out times. This was a major improvement.

I also limited students to one reassessment per week. They needed to email me before the beginning of any given week and tell me what standard they wanted to reassess over. I would then send them an invite to a time they would show up to do their reassessment. This improved both student preparation and my ability to plan ahead for reassessments knowing what my schedule looked like for the day. Students liked it up until the final week of the semester, when they really wanted to reassess multiple times. I think this is a feature, not a bug, and will incentivize planning ahead.

I recorded student reassessments in PowerSchool in the comment tab. Grades with comments appear with a small flag next to them. This meant I could scan across horizontally to see what an individual student had reassessed on. I could also look vertically to see which standards were being assessed most frequently. The visual record was much more effective for qualitative views of the system than what I had previously with WeinbergCloud.

The system above was for my IB and AP classes. For Algebra 2 (for which I teach two sections and share with the other teacher) we had a simpler system. Students would be quizzed on standards, usually two at a time. Exams would be reassessments on all of the standards. Students would then have a third opportunity to be quizzed on up to three of the standards of each unit later in the semester. Students that had less than an 8 were required to reassess. This system worked well for the most part. Some students thought that the types of questions on the quiz and the exam were different enough that they were not equivalent assessments of the standards. My colleague and I spent a lot of time talking through the questions, identifying the types of mistakes on individual questions that were indicators of 6 versus 8 versus 10, and unifying the feedback we gave students after assessments. The system isn’t perfect, but every student was given up to three opportunities to be assessed on every standard. This equity is not something I achieved in previous iterations of SBG.

On the whole, both flavors of reassessment systems were much more reasonable and manageable, and I think they are here to stay. I’ll spend some time during the winter break thinking about what tweaks might be needed, if any, for the second half of the year.

## A Note on Vertical Planning

Many teachers justify including topic X and skill Y on a high school syllabus because colleges and universities expect students to have mastered topic X and skill Y for their courses. Not because topic X is interesting or skill Y is necessary for success at the high school level, but because the next step expects it.

I wonder if the set of X and Y for high school teachers matches the set of X and Y for universities. I wonder how often university professors and high school teachers (and middle school or elementary teachers for that matter) get together to discuss this.

I wonder which of our assumptions about what the other thinks matches reality.

## New Moves: Design Principles and Generosity

During the summer, I attended the academy for the new class of Apple Distinguished Educators in Melbourne, Australia. Among the workshops I attended was one from Stephen Hider on design principles.

Given the obsession with design I've developed over the past few years, much of this was nothing new. Alignment, proximity, repetition, and contrast were all old friends. The one that seemed new, perhaps because of a new name, was generosity. This principle means that an element of a design has enough space around it such that it is, in Stephen's words, "able to breathe." Removing distracting elements around the focus allows a person to think about it in isolation, and with more clarity than would otherwise be permitted without the added space.

The idea is something that I've been thinking about for a while, inspired principally by Dan Meyer's exploration of ways that digital media provides ample opportunities to do things much differently than when confined by the economic costs of paper. (For more on this, see his talk titled 'Delete Your Textbook', linked here.)  I wrote in my previous post about changing the organization of my course away from a daily handout and toward individual tasks, each a separate linked PDF file. Individual problems or questions are presented on their own with space around them, when appropriate.

Here's an example of the contrast between a handout from last year's Algebra 2 course, and a page from a task this year.

The old:

...and the new:

The amount of paper I use in my classroom is reduced, and much more deliberate. I still print out individual pages when I really want to. Freeing myself from the demand that there be a handout for every class means I can be much more thoughtful about this. I can focus more on how I visually present ideas that are connected to each other rather than trying to make sure that everything fits in a manageable area of a page. The intention was not to be paperless, but I am finding that this small change has led to students being more likely to pause between tasks and reflect on the work they have done before moving on. Nothing I have done previously has had such an effect.

## New Moves: Course Organization

Ever since switching to standards based grading, many components of my courses and classroom organization have come into alignment with my philosophy of teaching. Ideally, these align perfectly, but the realities of time and professional responsibilities can shift this alignment. My beliefs on assessment, on effective learning activities, and on using the classroom social space effectively have all come into sharp focus when my grade book aligns more closely with the learning that goes on.

There is one notable exception to this alignment.

My class notes and handouts, and therefore much of my courses, have always been organized around days of class within a unit: Unit 1, Day 2 handout; Unit 3, Day 5 handout; Unit 5 review; and so on. This has made it easy for someone that misses day three of unit two to know precisely what was missed that day. It makes it easy for me to see how I organize the days within a unit. This is how I've done things for the past fourteen years.

In courses organized around standards like mine, a student should be able to see the development of content related to a standard from start to finish. The progression of content within a standard allows students to see ideas grow from simple to complex. A student that wants to review standard 1.1 needs to know which days covered material related to that standard. While identifying this is an important high level task, it doesn't help struggling students know where they should look to know what ideas relate to a given standard.

This was the main reason I have organized all of my course materials this year by standard. Here's a screenshot of a portion of my IB Mathematics SL Year 2 page on Moodle:

Each problem set or activity is organized under the learning standard to which it applies. When I post notes about a given problem or activity, they go underneath the problem set to which they apply. Some days we work on content related to multiple standards, but I parse that information into different parts and organize it accordingly. When we do work that spans multiple standards, that work is posted above the standards and identified as such.

In the past, students have consistently asked to know the details of a given standard - now they can look for themselves to see what types of problems relate. The materials are also generally organized in increasing level of difficulty or abstraction, so students know that the more challenging content is listed further down below the standard. I've also found that the types of activities I have students do are more diverse. I might send students to watch a video, do a curated list of Khan Academy exercises, or write a response to a prompt. Previously, the class handout was the one source of truth for what students should be doing at any one time. Now the materials have been expanded.

There is still a preferred order or menu of activities that I prescribe for each class. I post this as an agenda and refer students to it when it looks like they need some direction:

Students have reported that they have more freedom to do things at their own pace under this system. We may not finish all of the material from Unit 2, Day 3 - that just means that the material can be moved to the next day's agenda. Naming the tasks in this different way makes it easy for a student to move ahead or work independently. I can spend my time during the class helping those who need it and challenging those that are making good progress.

I really like how this has transformed the spirit of my classroom. I admit that the organization of the course into standards is artificial - the real world is not organized this way. Being deliberate and communicating how class activities serve the learning standards, and what relates to big picture unit-wide challenges, helps students understand the balance between the two. I know this isn't the final answer, but it does seem to be a step in the right direction for my students.

## Overview

I've used Standards Based Grading, or SBG, with most of my classes for the past five years. It transformed the way I think about planning, assessment, classroom activities...and pretty much everything else around my teaching practice. I have a difficult time imagining what would happen if I had to go back. I've written a lot about it this year - here are some of the posts:

As I wrote in that last post, I still wrestle with the details, but I'm fully invested in the philosophy. I'm glad my administrators are supportive of my adapting it to work within the more traditional system. I've also had some great conversations with colleagues who are excited by the concept but wonder how to make it work in their courses.

Here's the rundown of how it went this year.

## What worked:

• Students really bought into the system. The most common responses on student surveys about what I needed to keep involved the grade being defined by standards and the reassessment system. I found students were often the system's best advocates when other teachers and parents had questions, which made communication much easier.
• The system was the gateway to many very positive conversations with students around learning, improvement, and the role of feedback. Conversations were about understanding concepts and applying them, not asking for points. Many students would finish a reassessment and tell me that their grade should stay the same, but that they would keep trying. Other students would try to argue their way to a higher score, but using the vocabulary I use to define my standard descriptors (linked here). They understood that mistakes are informative, not punitive. Transplanting this understanding to students in my new school was a major success of the year.
• I developed a better understanding of what I'm looking for at each level on my 5 - 10 scale. Part of this came from being at a new school and needing to articulate this to students, parents, and administrators. The SBG and Leveling up project (linked above) helped refine my definitions of what distinguishes a 9 from a 10, or a 6 from a 7.

## What needs work:

• I had way too many reassessments. Full stop. I wrote about this in my post Too Many Reassessments, Just in Time for Summer and am exhausted just thinking about doing it again. There are a couple elements of this to unpack. One is that my credit system allows for reassessments to occur more frequently than I believe deep learning can really take place. I'm thinking about making it so students are locked out of reassessing on a standard for a set period of time, at least when going for a score of 8 or above where the goal is transfer of skills and flexibility of application. The other thing I am considering is limiting students to a single reassessment per week, or day, or some other interval. I have some time to decide on this, which is good, because both require a rewrite of my online signup tool, WeinbergCloud.
• Communicating the role of semester exams was a major challenge for me this year. In a small school, I found it was easy to communicate with individual students and parents about the role of semester exams. I based much of my outreach on what I understood about these exams and the role of learning standards grades throughout the year. A standards based grade book breaks down the entire topic into bite sized pieces, which makes it easier both to communicate strengths and weaknesses, and for students and teachers to decide what is the best next step. Semester exams are opportunities to put all of these pieces together and assess a student's ability to decide which standards apply in a given problem. Another way of looking at it is a soccer practice versus a soccer game mentality.

Ultimately, I do want students to be successful across the breadth of the content on which a course is based. Semester exams serve as one way to measure that progress in the bigger picture of an entire course, rather than a unit. This also serves as a third scale on which to consider assessment in my course. Quizzes assess a standard, exams assess a unit of standards (with a few older standards thrown in), and semester exams assess mastery of a portion of the course. That different scale is why the 80% quarter grade, 20% exam grade proportion that I've followed for seven years is entirely reasonable.

A student that aces all of the standards with a 100 but gets a 50 on the final ends up with a 90. This student receives the same semester grade as someone that has a 90 up until the final and gets a 90 on the final. I'm fine with this parity in grades. I would have very different conversations with those two students before the next semester of mathematics in their plans.
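The arithmetic behind those two scenarios is just the weighted average (a minimal sketch, assuming the 80/20 split described above; the function name is my own):

```python
def semester_grade(quarter_avg, exam):
    """Combine the quarter average (80%) with the semester exam (20%)."""
    return round(0.8 * quarter_avg + 0.2 * exam, 1)

# Both students described above land on the same semester grade of 90:
# one with a 100 quarter average and a 50 exam, one with a 90 and a 90.
```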

The main challenge I found was that students and parents often looked at that final exam grade in isolation from, not together with, the rest of the scores in the grade book. The parent of the first student (100, then 50) that asks me to explain that disparity is certainly justified in doing so. Where I fell short was communicating the reality that in a standards based system, grades usually drop after a semester exam. It's a fundamentally different brand of assessment.

I'll also point out that the report card presented a semester of assessment in table form as quarter 1 grade, quarter 2 grade, exam grade, and then semester grade. This artificially shows the exam grade as perhaps being more consequential to the grade than it actually is. This isn't in my realm of influence, so I'll stop talking about it. The bottom line is that I need to do a better job of communicating these realities to everyone involved.

## Conclusion

I'm glad to be starting another year soon and to continue to make this system do good things for students. Cycle forward.

## Overview

This was my third time around teaching the first year of the IB mathematics SL sequence. It was different from my previous two iterations given that this was not an SL/HL combined class. This meant that I had more time available to do explorations, problem solving sessions, and in-class discussions of the internal assessment (also called the exploration). I had two sections of the class with fourteen and twenty students respectively.

I continued to use standards based grading for this course. You can find my standards (which define the curricular content for my year one course) at this link:

IB Mathematics SL Year 1 - Course Standards

## What worked:

• My model of splitting the 80 - 85 minute block into twenty minute sub-blocks works well. I plan what happens in each sub-block and try to keep students actively working for as much of each one as I can. The first block is a warm-up, some discussion, and a check-in about homework, followed by some quick instruction before the second block, which often involves an exploration activity. The third is a summary of the explorations or preceding activities along with example problems, and the fourth is me circulating and helping students work.
• Buffer days, which I threw in as opportunities for students to work on problems, ask questions, and play catch up, were a big hit. I did little more on these days than give optional sets of problems and float around to groups of students. Whenever I tried to just go over something quick on these days, those lessons quickly expanded to fill more time than intended. It took a lot of discipline to instead address issues as they came up.
• I successfully did three writing assignments in preparation for the internal assessment, which students will begin writing officially at the beginning of year two. Each one focused on a different one of the criteria, and was given at the end of a unit. Giving students opportunities to write, and get feedback on their writing, was useful both for planning purposes and for starting the conversation around bad habits now.

I had rolling deadlines for these assignments, which students submitted as Google Docs. I would go through a set of submissions for a class, give feedback to those that made progress, and gentle reminders to those that hadn't. The final grade that went into PowerSchool was whatever grade students had earned by the end of the quarter.

The principle I applied here (and one to which I have subscribed more fervently with each year of teaching) is that my most valuable currency in the classroom is feedback. Those that waited to get started in earnest with these didn't get the same amount of feedback as students that started early, and the quality of their work suffered dramatically. I'm glad I could have the conversations I had with students now so that I might have a chance of changing their behavior before their actual IA is due.

An important point - although I did comment on different elements of the rubric, most of my feedback was on the criterion that titled the assignment. For example, in my feedback I occasionally referenced reflection and mathematical presentation in the communication assignment. I gave the most detailed feedback for communication, and graded solely on that criterion.

These were the assignments:

• I budgeted some of my additional instruction time for explicit calculator instruction. I've argued previously about the limitations of graphing calculators compared to Geogebra, Desmos, and other tools that have substantially better user experiences. The reality, however, is that these calculators are what students can access during exams. Without some level of fluency accessing the features, they would be unable to solve some problems. I wrote about this in my review of the course last year. This time was well spent, as students were not tripped up by questions that could only be solved numerically or graphically.
• Students saw many past paper questions, and seem to have some familiarity with the style of questions that are asked.

## What needs work:

• I've come to the conclusion that preemptive advice is ineffective. "Don't forget to [...]" or "You need to be extremely careful when you [...]" is what I'm talking about. It isn't useful for students that don't need the reminder, and it doesn't help students who have no context for what you are telling them not to do because they haven't yet solved problems on their own. I have found it much more effective to address those mistakes after students get burned by them. Some of my success here comes from my students subscribing to a growth mindset, which is something I push pretty hard from the beginning. Standards based grading helps a lot here too.
• I desperately need a better way to encourage longer retention of knowledge, particularly in the context of a two year IB course. I'll comment more on this in a later post, but standards based grading and the quarter system combined were factors working against this effort. I did some haphazard spaced repetition of topics on assessments in the form of longer form section two questions. The fact that I was doing this did not incentivize enough students to regularly review. I also wonder if my conflicted beliefs on fluency versus understanding of process play a role as well.
• Students consistently have a lot of questions about rounding, reporting answers, and follow through in using those answers in the context of IB grading. The rules are explicitly stated in the mark schemes for questions - answers should be reported exactly or to three significant figures unless otherwise noted. The questions students repeatedly have relate to multiple part questions. For example, if a student does a calculation in part (a), reports it to three significant figures, and then uses the exact answer to answer part (b), might that result in a wrong answer according to the mark scheme? What if the student uses the three significant figure reported answer in a subsequent part?

I did a lot of research in the OCC forum and reading past papers to try to fully understand the spirit of what IB tries to do. I'd like to believe that IB sides with students that are doing the mathematics correctly. I am not confident in my ability to explain what the IB believes on this, which means my students are uncertain too. This bothers me a lot.
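To make the follow-through issue concrete, here's a made-up numerical example (the numbers are mine, not from any past paper) of how carrying a three significant figure intermediate into a later part can shift the final answer:

```python
import math

# Part (a): compute a value and report it to three significant figures.
exact = math.sqrt(7)            # 2.6457513...
reported = round(exact, 2)      # 2.65 (three significant figures here)

# Part (b): square the answer from part (a).
from_exact = exact ** 2         # effectively 7
from_reported = reported ** 2   # 7.0225, which reports as 7.02
```

Whether the mark scheme accepts 7.02 in part (b), or only an answer carried through from the exact value, is exactly the ambiguity students keep asking about.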

• Students still struggle to remember the nuances of the different command terms during assessments. They also will do large amounts of complex calculations and algebraic work in spite of seeing that a question is only worth two or three marks. There is clearly more work to do on that, though I expect that will improve as we move into year two material because, well, it usually does. I wish there was a way to start the self-reflection process earlier.
• Students struggle to write about mathematics. They also struggle with the reality that there is no way to make it go faster or do it at the last minute without the quality suffering. I still believe that the way you get better is by writing more and getting feedback, and that's the main reason I'm glad I made the changes I did regarding the exploration components. That said, students know how to write filler paragraphs, and I call them out on filler every single time.
• We spent a full day brainstorming and thinking about possible topics for individual explorations. Surveying the students, only four of them are certain about their topics. The rest have asked for additional guidance, which I am still figuring out how to provide over the summer. I think this process of finding viable topics remains difficult for students.

## Conclusion

I'll be following these students to year two. We have the rest of probability to do first thing when we get back, which I'll combine with some dedicated class time devoted to the exploration. I like pushing probability and calculus to year two, as these topics are, by definition, plagued by uncertainty. It's an interesting context in which to work with students in their final year of high school.

## Overview

This was the first time I taught a true PreCalculus course in six years. At my current school, the course serves the following functions:

• Preparing tenth grade students for IB mathematics SL or HL in their 11th grade year. Many of these students were strong 9th grade students that were not yet eligible to enter the IB program since this must begin in grade eleven.
• Giving students the skills they need to be successful in Advanced Placement Calculus in their junior or senior year.
• Providing students interested in taking the SAT II in mathematics some guidance in the topics that are covered by that exam.

For some students, this is also the final mathematics course taken in high school. I decided to design the course to extend knowledge from Algebra 2, continue developing problem solving skills, move a bit further into abstraction of mathematical ideas, and provide a baseline for further work in mathematics. I cut some topics that I used to think were essential to the course, but that did not properly serve the many different pathways students can follow in our school. Like Algebra 2, this course can be the Swiss Army knife course that "covers" a lot so that students have been exposed to topics before they really need to learn them in higher level math courses. I always think that approach waters down much of the content and the potential of a course like this. What tools are going to be the most useful to the broadest group of students for developing their fluency, understanding, and communication of mathematical ideas? I designed my course to answer that question.

I also found that this course tended to be the one in which I experimented the most with pedagogy, class structure, new tools, and assessment.

The learning standards I used for the course can be found here:
PreCalculus 2016-2017 Learning Standards

## What worked:

• I did some assessments using Numbas, Google Forms, and the Moodle built-in quizzes to aid with grading and question generation. I liked the concept, but some of the execution is still rough around the edges. None of these did exactly what I was looking for, though I think they could each be hacked into a form that does. I might be too much of a perfectionist to ever be happy here.
• For the trigonometry units, I offered computer programming challenges that were associated with each learning standard. Some students chose to use their spreadsheet or Python skills to write small programs to solve these challenges. It was not a large number of students, but those that decided to take these on reported that they liked the opportunity to think differently about what they were learning.
• I also explicitly taught spreadsheet functions to develop students' computational thinking skills. This required designing some problems that were just too tedious to solve by hand. This was fun.
• Differentiation in this course was a challenge, but I was happy with some of the systems I used to manage it. As I have found is common since moving abroad, many students are computationally well developed, but not conceptually so. Students would learn tricks in after-school academy that they would try to use in my course, often in inappropriate situations. I found a nice balance between problems that started low on the ladder of abstraction and those that worked higher. All homework assignments for the course in Semester 2 were divided into Level 1, Level 2, and Level 3 questions so that students could decide what would be most useful for them.
• I did some self-paced lessons with students in groups using a range of resources, from Khan Academy to OpenStax. Students reported that they generally liked when I structured class this way, though there were requests for more direct instruction among some of the students, as I described in my previous post about the survey results.
• There was really no time rush in this course after my decision to cut out vectors, polar equations, linear systems, and some other assorted topics that really don't show up again except in Mathematics HL or Calculus BC, where it's worth seeing the topic again anyway. Some students also gave very positive feedback regarding the final unit on probability. I took my time with things there. Some of this was out of necessity when I was out sick for ten days, but there were many times when I thought about stepping up the challenge faster than I really needed to.
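To give a sense of the trigonometry programming challenges mentioned above, here is a minimal Python sketch of the sort of small program a student might write. The function names and the specific task (finding reference angles) are my own illustration, not the actual challenge prompts from the course.

```python
import math

def reference_angle(theta_deg):
    """Reference angle (in degrees) for an angle in standard position."""
    theta = theta_deg % 360  # reduce to one rotation, 0 <= theta < 360
    if theta <= 90:
        return theta
    elif theta <= 180:
        return 180 - theta
    elif theta <= 270:
        return theta - 180
    return 360 - theta

def sine_matches_reference(theta_deg):
    """Check that |sin(theta)| equals the sine of its reference angle."""
    return math.isclose(
        abs(math.sin(math.radians(theta_deg))),
        math.sin(math.radians(reference_angle(theta_deg))),
        abs_tol=1e-9,
    )
```

A challenge like this asks students to encode their quadrant reasoning explicitly rather than just recall it, which is the kind of different thinking those students reported enjoying.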

## What needs work:

• I wrote about how I did the conic sections unit with non-numerical grades - just comments in the grade book. The decision to do that was based on a number of factors. The downside was that when I switched back to numerical grades for the final unit, the grade calculation for the entire quarter was based only on those grades, and not on the conic sections unit at all. The conic sections unit did appear on the final exam, but for the most part, there wasn't any other consequence for students that did not reassess on the unit.
• Students did not generally like when I used Trello. They liked the concept of breaking up lessons into pieces and tasks. They did not like the forced timelines and the extra step of the virtual Trello board for keeping track of things. This Medium article makes me wonder about doing this in an analog form if I try it in the future. I could also make an effort to instill the spirit of Scrum early on so that it's less novel, and more the way things are in my classroom.
• I should have done a lot more assessment at the beginning of units to see what students knew and didn't know. It sounds like the student experiences in the different Algebra 2 courses leading to PreCalculus were quite different, which led to a range of success levels throughout. Actually, I should probably be doing this more often for all my courses.
• Students could create their own small reference sheet for every exam. I did this because I didn't want students memorizing things like double angle identities and formulas for series. The reason this needs work is that some students are still too reliant on having this resource available to ever reach any level of procedural fluency. I know what students need to be fluent later on in the more advanced courses, sure, but I am not convinced that memorization is the way to get there. Timed drills don't seem to do it either. This challenge is compounded by the fact that not all students need that level of fluency for future courses, so what role does memorization play here? I have struggled with this in every year of my fourteen-year career, and I don't think it's getting resolved anytime soon. This is especially the case when Daniel Willingham, who generally makes great points that I agree with, writes articles like this one.

## Conclusion

This course was fun on many levels. I like being there to push students to think more abstractly as they form the foundation of skills that will lead to success in higher levels of mathematics. I also like crafting exercises and explorations that engage and equip the students that are finishing their mathematics careers. We should be able to meet the needs of both groups in one classroom at this stage.

I frequently reminded myself of the big picture by reading through Jonathan Claydon's posts on his own Precalc course development over the years. If you haven't checked him out, you should. It's also entertaining to pester him about a resource he posted a few years ago and hear him explain how much things have changed since then.

## Overview

Last year I took Julie Reubach's survey and used it for the students in my final set of classes at my previous school. This year I gave essentially the same survey. Probably the most important thing for me was to compare some of the results to make sure the essential elements of my teaching identity made the transition intact.

## The positives:

• Students responded that the reassessments and the quizzing system were important elements to keep for next year. I'll share more about my reflection on the reassessment system in a later post.
• Students liked having plenty of time during class to work and get help if they needed it. I tried to strike a balance between this, exploration, and direct instruction. More on that last point below.
• Students appreciated the structures of class and the materials. They liked having warm-up activities for each class, the organization of documents on Google Drive, and the use of PearDeck for assessment of their ideas during class.
• The stories, personal anecdotes, and jokes at the start of class apparently go over well with students. I don't think I could stop this completely anyway, so I'm glad students don't necessarily see this as being unfocused or as a waste of class time.
• Students like structured opportunities to work together and solve problems that are not just sets from the handouts. Explorations got strong reviews, which is good because I think they are good uses of class time too.

## What needs work:

• Students want more example problems. I consistently did some in each class, but I always struggled with the balance between doing more problems and addressing issues as they came up individually. Some students want a bit more guidance that doesn't necessarily require whole group instruction, but say that the individual group explanations or suggestions aren't meeting their needs completely. This might mean I record some videos or present worked problems as part of the class resources in case students want them.

• Related to the previous point is the use of homework. Some students want more help on homework, but again don't necessarily want to spend whole class time going over it. I admit that I still struggle with the usefulness of going over homework as a whole class, and collecting information on what students struggled with is not a smooth process. The classroom notebook doesn't solve that problem to my satisfaction either. Short, focused presentations of how to get started on certain problems (and not full solutions) might be all that is needed to address this shortcoming that many students mentioned in their surveys.
• Despite my efforts to make learning the unit circle easier, students continue to report their dislike for learning it. I present students with a series of approaches to understanding how to evaluate functions around the unit circle. This is also one of the few topics where I encourage both understanding (through creative assessment questions) and accuracy in evaluating functions correctly using whatever means students find necessary. Memorization, if that is what students choose to do, is one way that students could approach this. I think part of the issue is that proficiency in this topic requires more genuine effort than others. There are no shortcuts here, and facility with evaluating trigonometric functions goes a long way in making other topics easier. I'm not sure what the solution is here. This is one area where I think procedural fluency has no valid replacement, particularly in the context of IB or Precalculus preparation.
• The other topic that students reported finding most difficult was the binomial theorem, which again surprised me given that it is one of the more procedurally straightforward topics in the course. Do I need to consider teaching these topics in a more formulaic way so that students are more successful? I wonder if I have swung too far in the wrong direction with respect to avoiding activities that demand fluency or practice.
• Students want more summary of what we've done each class and where we are going. I think this is a completely valid request, and is perhaps made easier to do with each course defined in terms of learning standards.
## Conclusion

I appreciate how consistently students are willing to give feedback about my classes. There were some really useful individual comments that will help me think about how the decisions I make might affect the spectrum of students in each course. I promised students that I wouldn't look at the results until after grades were in, just in case that might encourage more honesty. This was an anonymous survey, and with the larger class sizes this year, I think there was a greater degree of anonymity for individual responses. There is a lot to sift through here, which is why I'm glad I still have the better part of the summer to do so.

## Overview

I've always been a pretty heavy user of technology, though in the past few years I've been more careful to use it for a reason, not for its own sake. I balance that restraint with a healthy desire to try new things in a way that I would actually use them in the classroom.

This is also the first year I've been able to take advantage of the Google Tools suite, since Vietnam is not subject to the limitations of China's Great Firewall. Though there have been times when the internet connection to the entire country has been subject to shark attacks, connections in general have been smooth. Seeing how effectively some folks use Google in the classroom after being unable to use it for six years makes me feel seriously behind the times. Luckily, my colleagues are really eager to share what they do. I might even be caught up by now.

Here's the technology I used this year:

1. A Macbook Pro where I did most of my lesson planning. I connected an external widescreen monitor that mirrored all projected content. The second screen was sent through AirPlay to an Apple TV, which was then connected to the projector.
2. Class worksheets electronically created and stored in Google Docs. These are printed out on A5 size sheets for students to tape into their notebooks for a physical record of what we did.
3. For IB Mathematics SL and PreCalculus, I had two students per class make an additional Google Doc that was a copy of the handout. In this document, students would paste solutions to class work, homework, and whatever else they thought might be important to their classmates. The student responsibility for doing this was on a rotating schedule, similar to what I've used in my previous classes.
4. Notability app for class notes, with a Wacom Tablet for input. I used the wireless accessory kit for around two days, because it disconnected too frequently.
5. iPhone as a document camera for capturing student work for sharing answers or for conversation during the class. I would take pictures of student work and use AirDrop to upload them for inclusion in the notes.
6. Moodle as a repository for all of the above documents and links. I also used it occasionally for distributing quizzes and automatic grading.
7. My WeinbergCloud website for managing, assigning, and recording reassessments throughout the semester.
8. PearDeck on a trial basis for first semester, and then regularly during second. I sometimes used an iPad to manage the class, but every time I regretted it, and just used my computer.
9. Desmos Calculator, usually at least once per lesson.
10. Desmos Activity Builder, about once per unit per course.
11. EdPuzzle for self paced lessons, videos, and quizzes in Algebra 2. Most of the videos were produced by my colleague, Scott Hsu.
12. Spreadsheets for building useful calculators (like discriminants for quadratics, arithmetic/geometric series sums, etc.)
13. Khan Academy for practice exercises and monitoring of student effort in reviewing material.
14. Geogebra for checking exam questions and demonstrating its use as a work-checking tool for students.
15. Camtasia for recording videos of problem solutions from time to time.
16. Quizster as a way to have students submit specific homework problems for feedback.
17. Wireless keyboard and trackpad, though these lasted about a week and a half.
18. I dabbled with GoFormative, primarily when I was on sick leave for a while. Connection issues that were inconsistent across the class led to my abandoning it for regular use.
19. For two units in PreCalculus, I used Trello as a way to organize units and help students organize their work for each day of class.

## What worked:

• The process of cutting and pasting images of problems or student work into Notability, and then annotating them was great for recording important information during class. These notes were then either pasted into the document created by students for each class, or exported as PDF for posting on Moodle. This felt like a good way to have a record of what went on during a given class block in case students missed a block.
• I liked automated grading of quizzes through Google Forms and Moodle. This definitely saved time, but the process of getting feedback to students in response is still awkward. When student work is analog, but answer checking is digital, where should that feedback go? Quizster offers some way of making all of this occur in the same tool, but the workflow was never smooth enough to fully commit to it.
• The combination of PearDeck and Desmos Activity Builder, along with photos of student work, made for great sources of understanding (and misunderstanding) that helped me decide how or whether to proceed with other material. These also made for great motivating elements for direct instruction when it needed to happen. The students really liked using these tools, and said they looked forward to them, according to the end of year survey results.
• I don't think Khan Academy exercises work well for assessing students beyond a basic level. I think they can provide the practice some students need on procedural skills like factoring or evaluating trigonometric functions. It's just one tool among many to serve the needs of my students.
• When I provided guidance on how spreadsheets could be used for more than just making charts, students appreciated it. One student went so far as to say that this instruction was "actually useful". I decided not to ask what this student thought about the rest of the class.

## What needs work:

• I posted homework problems on the printed class handout, the digital handout, and a dedicated document for assignments, which is an expectation across classes in our high school division. Consistently updating all three documents was a challenge, despite my best efforts.
• The student notebook entries for each class were among the least favorite elements of the class, as reported by students. I've written previously about the need to have some record of what happens during class, and about my frustration with students who do not produce their own record through regular use of a dedicated notebook. This isn't the best solution, but I think it's the closest I've come to something that actually strikes the right balance. I just wish I could figure out how to get students to buy into its usefulness.
• I still have not figured out the best way to bring the class back together after letting them work at their own pace through lessons. The only times it makes a lot of sense to do this are at the very beginning of class and at the very end.
• PearDeck, Desmos Activity Builder, and GoFormative each offer features that I really like. None of them do everything. I'm ok with this, but I wonder whether the fragmentation of activities is good for students, or a problem since their work is distributed across these tools.
• While I liked using Trello, and some students reported that they also appreciated it, many students did not. I'm not sure if it actually is the self-paced lesson tool I'm looking for, but it was better than a static Google document.
• At the end of the year, despite my own research and attempts to improve this, the Apple TV disconnected at least once every class period, if not more frequently.

## Conclusion

My focus continues to be on using technology to free up time for the ways that I can best add value in the classroom. Many students don't need my help in making progress. Some do, and some like having me explain ideas to them. It's hard to simultaneously meet these different needs without technology, which enables me to be in multiple places at once.

Having the range of tools I describe above, and not fully committing to one, is both a blessing and a curse. The fragmentation means the residue of learning is distributed across many web addresses. The variety helps keep students (and me) from getting into a rut. I don't know if this balance is appropriately tuned yet.