
2016-2017 Year in Review: Being New

Overview

This was my first year since 2010 being the new kid in school. Developing a reputation takes time, so I was deliberate about establishing who I am as a teacher from the beginning. I wrote about learning names at the beginning of the year, for example. My school, which surpassed 1,000 students this year, is the second largest at which I have worked. The high school division is just over 320 students. Many systems are in place to manage the reality of this group of ninth through twelfth graders, who have a tremendous diversity of interests, programs (IB and AP), extracurricular organizations, and obligations outside the school walls. I walked in admittedly intimidated by the scope of this place and what it aims to accomplish.

After one of our meetings before student orientation, a lot of information had been shared. I asked our high school principal what the priority needed to be in the first quarter in terms of processing all of it. He put me at ease: the focus should be on figuring out how this place works. He promised (and certainly delivered on) a pledge to remind us of what was important throughout the year, with the understanding that there would be a learning curve for our group of newbies. The faculty is passionate about teaching and creative in designing classroom experiences. They were intensely committed to sharing what they do and to helping those of us who were new figure out how to prioritize at any given time.

What worked:

  • The beginning of school was a mix of content and getting-to-know-them/me activities that were deliberately designed for those purposes. This sort of thing is important at the beginning of any year if the composition of a class is new. It's essential if the teacher is new too. Each group is unique and has a chemistry that not only is important to suss out in the beginning, but must be regularly assessed as the year proceeds. I thought this series of activities worked really well. I will modify these for next year's student groups, but for variety more than for improvement.
  • I was able to get most of my lesson preparation done at school during my prep periods. Exceptions were after exams, at the end of the year, and near reporting deadlines. This required serious prioritization and disciplined decisions about what I could actually accomplish in those blocks of time. While I maintained to-do lists, a major component of my success came from block-scheduling those tasks and sticking to the schedule. This left time after school and at home for designing the explorations, experiments, and bigger-picture puzzles that were nice, but not necessary.
  • I streamlined many of the administrative procedures I had created in my previous schools. I rebuilt spreadsheets that had been unchanged for several years rather than hacking old ones to work. Part of this was required to address the fact that my class sizes were substantially larger, but I also decided it was time.
  • As I had hoped to do, I spent much of the year watching. I did not want to come in, identify everything this school lacked that I had helped organize at previous schools, and then add it all myself. That is how I came to feel burnt out every time June rolled around. I was quite picky about what I involved myself in. I said no to things. When I was ready to design a VEX robotics sprint (more on that later) at the end of the year, however, this meant I had the energy and drive to do so.
  • The level of support I have felt from administrators and colleagues this year has been incredible. Nothing makes you feel so effective as a team that has your back, and that is realistic about what should, what can, and what cannot be accomplished with a given set of resources.

What needs work:

  • I did not get out and visit my colleagues anywhere near as frequently as I wanted. This is a seriously impressive group of teachers trying different things. Part of the problem was my commitment to getting things done during my prep periods, so I do take responsibility for that. It would not have been too devastating to that structure, however, if I had also planned blocks of time to visit specific colleagues. I ate in the lunchroom with colleagues fairly regularly, and that was great for learning what everyone was doing. It was not enough. More of that next year.
  • I originally planned on doing outreach with parents more regularly this year. They are incredibly trusting of what we as teachers design for students, and this was evident at parent-teacher conference nights during both semesters. I want more than that, though. I want them to understand my philosophy for why learning mathematics right now is important. I don't think the parents understand standards-based grading, and although the students made solid attempts to explain it during conferences, these conversations don't happen nearly as frequently as they should. I need to think more about what that communication looks like, and why I feel it is important, because I don't think I can fully articulate it here. I do know that there is a lost opportunity when parents don't really understand what we do in the classroom on a regular basis.
  • I now believe that the ease of establishing community and connections with others is inversely related to the ease of living in a place. I often tell the story of how easy it was, in my early days in China, to rally a group of colleagues to go find cheese, which was difficult to find there. Many of my closest bonds were formed during those adventures in first-world adversity. Here in District 7 of HCMC, there is no such difficulty. Life is really good and easy here. This means, however, that one must work a little harder to leave the comfortable bubble of daily life to find adventure and make new friends. This is especially the case for me as the father of a now-toddling ball of energy. It takes effort and time to build those relationships. That's something I need to deliberately plan for more frequently next year.

Conclusion

The second year anywhere is always less scattered than the first. The next few weeks are all about figuring out how to use the time not spent learning the ropes.

Building Models: Drone Intercept and Desmos

I put together an activity using Desmos Activity Builder that was a variation on an older air traffic control task as part of my unit on parametric equations and vectors.

Here's some of the copy for the activity:

Students could only see six seconds of the drone animation before the drones disappeared from the screen. I had students detail their process of finding the intersection point and intersection time as part of the follow-up for this activity.

My favorite product of this activity though came with the superposition of everyone's models on top of the drone footage. Here's that result (click to see the animation):

We had some really productive discussions as part of evaluating this result. The students noticed how most people had the correct initial location, but dramatically different results based on the velocity vectors used to generate the parametric expressions. Some students saw it as cheating to use Desmos to gather data, make calculations to create an approximate solution, and then tweak that solution. I shared that I saw that as a natural role of feedback in the modeling process.
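To make the model concrete, here is a minimal sketch of the parametric setup the students were working with. The numbers are invented for illustration and are not from the actual activity: each drone's position is start plus velocity times time, so the intercept comes from solving the x-equations for t and then checking that the y-coordinates agree.

```javascript
// Invented values, not the activity's. Position model: p(t) = p0 + v*t.
const droneA = { x0: 0, y0: 2, vx: 3, vy: 1 };
const droneB = { x0: 12, y0: -1, vx: -1, vy: 2 };

// x-coordinates agree when x0A + vxA*t = x0B + vxB*t
const t = (droneB.x0 - droneA.x0) / (droneA.vx - droneB.vx); // 12 / 4 = 3

const yA = droneA.y0 + droneA.vy * t; // 2 + 1*3 = 5
const yB = droneB.y0 + droneB.vy * t; // -1 + 2*3 = 5

// yA === yB, so the drones really are at the same point at the same time.
console.log(`intercept at t = ${t}, point (${droneA.x0 + droneA.vx * t}, ${yA})`);
// -> intercept at t = 3, point (9, 5)
```

In the activity itself, students had to estimate the starting points and velocity vectors from only six seconds of footage, which is where the interesting variation in their models came from.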

The activity has one slide with some behind-the-scenes Activity Builder features, and I'm not sure I should release that at this point. If you are interested in using this activity with your students, let me know, and I can create a version without that slide.

What Does the Desmos for Probability Look Like?

Desmos, the online graphing calculator, activity builder, and general favorite of the MTBoS, does phenomenal work.

I found myself wondering over the past few days about the statistics and probability world and how there isn't a Desmos parallel in that realm for easy experimentation. You can put an algebraic expression in Desmos and graph it in a few keystrokes. You can solve a problem algebraically, and then graph to see if you are correct. There are multiple ways to confirm graphically or numerically what you have solved algebraically, or any other permutation of the three.

It's also easy to put a bunch of marbles in a jar, pick one, replace it, and repeat, though this becomes tedious beyond a few trials. Such a small data set often isn't enough to really see long-term patterns, particularly when you are trying to test whether the theoretical probability you have calculated is correct. For subtle cases involving replacement versus no replacement, the differences between the theoretical probabilities are small if there are enough marbles in the jar. With 10 red and 10 blue marbles, for example, the probability of drawing two reds is (10/20)(10/20) = 0.25 with replacement and (10/20)(9/19) ≈ 0.237 without, a difference that a handful of hand-drawn trials will never reveal.

Creating a simulation and running it half a million times is possible in a spreadsheet or any number of computer languages, but the barrier to entry there is not trivial. I've written simulations of various problems myself, usually making predictions for what I think will happen, and then working out the theoretical probability by hand.
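As a minimal sketch of what I mean (illustrative only, not one of my actual simulations), here is the marble scenario above run half a million times each way in JavaScript:

```javascript
// Draw two marbles from a jar of `red` red and `blue` blue marbles,
// with or without replacement; return true if both draws are red.
function drawTwoBothRed(red, blue, replace) {
  const first = Math.random() < red / (red + blue) ? "red" : "blue";
  if (!replace && first === "red") red -= 1;
  if (!replace && first === "blue") blue -= 1;
  const second = Math.random() < red / (red + blue) ? "red" : "blue";
  return first === "red" && second === "red";
}

function proportionBothRed(replace, trials) {
  let hits = 0;
  for (let i = 0; i < trials; i++) {
    if (drawTwoBothRed(10, 10, replace)) hits += 1;
  }
  return hits / trials;
}

console.log(proportionBothRed(true, 500000));  // ≈ 0.25  (theoretical: 1/4)
console.log(proportionBothRed(false, 500000)); // ≈ 0.237 (theoretical: 9/38)
```

Twenty lines of code to check one question is exactly the barrier to entry I'm talking about.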

So what would this sort of probability playground look like? There are some examples out there already. Here's one from CPM for small numbers of trials. I haven't done an exhaustive search, but I haven't seen anything that truly allows full experimentation at the level I'm hoping to achieve. Here are some ideas for what I would love to see exist:

  • Natural language definitions for sources of possible outcomes. By this, I mean being able to define outcomes verbally. This might mean "rain" and "no rain", with the assumption that having only two labels means the events are complementary. It might mean defining numbers of items for each possible outcome, or simply entering the probability of each as a decimal. The key thing is that I do not want to require labeling events as A or B and throwing notation around. Let's see if we can make this as visual and easy to explore as possible (see the sketch after this list).
  • Ease of setting up conditional outcomes for compound events. If event A (I know, I'm breaking the previous rule here) happens, only B and C are possible, and event D is only possible if event A does not occur.
  • Sinks that easily allow for large numbers of trials. I might want a single trial generated a million times - tell me the proportion of each of the different outcomes. Make it easy for me to count up instances of binomial success and see how many times, out of ten, I get three or more successes. Tell me when I'm not looking at all of the possibilities. For example, give me some visual indication that when I'm picking two marbles from a jar and have only both-red and both-blue in my list of possible outcomes, I'm missing the outcomes in which there is one of each.
  • Make it easy to tap into existing complex data sets for exploration purposes. Include some data sets that are timely and relevant. The US election comes to mind.
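To make the first and third items concrete, here is a minimal runnable sketch of what defining outcomes with natural labels and sending them to a large-trial sink might feel like. Every function name here is my own invention; no such tool exists yet:

```javascript
// Define a source of outcomes by natural labels and weights (counts or
// probabilities both work, since only the ratios matter).
function outcomes(weights) {
  const labels = Object.keys(weights);
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  return function sample() {
    let r = Math.random() * total;
    for (const label of labels) {
      if ((r -= weights[label]) < 0) return label;
    }
    return labels[labels.length - 1]; // guard against float round-off
  };
}

// A "sink": run many trials and report the proportion of each outcome.
function simulate(sample, trials) {
  const counts = {};
  for (let i = 0; i < trials; i++) {
    const label = sample();
    counts[label] = (counts[label] || 0) + 1;
  }
  const proportions = {};
  for (const label of Object.keys(counts)) {
    proportions[label] = counts[label] / trials;
  }
  return proportions;
}

const weather = outcomes({ rain: 0.3, "no rain": 0.7 });
console.log(simulate(weather, 1000000)); // ≈ { rain: 0.3, "no rain": 0.7 }
```

The second item, conditional outcomes for compound events, is the genuinely hard design problem, and I don't pretend this sketch addresses it.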

I realize also that this is a tall order, but I've seen how far the Desmos team has explored the algebraic/numerical space. Now that they have expanded into the Geometry space through their beta, I wonder if they (or someone else for that matter) has something like this probability exploration tool on their roadmap.

Building Arguments with Probability and the Clips App

I don't like projects for assessment. I do like in-class projects for fostering discussion and other forms of interaction. I decided to put together something fun to build time into the unit while students developed their skills in applying binomial probability. Based on their feedback, students actually found it fun, so this wasn't just hopeful thinking (this time). It also had the added value of giving students a chance to work on Common Core mathematical practice standard 3: construct viable arguments and critique the reasoning of others.

I gave pairs of student groups a statement. The center paragraph was the same for both: a statement about probabilities. The paragraphs preceding and following it were different, providing conflicting contexts for the same statement. Here's an example.

I ended up writing four sets of situations to make sure that each class had at least two groups working on the same probability statement, but different arguments.

I asked students to do calculations and write a 100-word abstract stating their argument. After learning that the Clips app, recently released by Apple, made for a really easy way for students to creatively describe and document their thinking, I also asked students to create a two-minute video documenting the situation and their argument. You can see a selection of the video results below.

Students were really challenged to search for the calculations and results that supported their arguments. Some reported that they felt dishonest doing so.
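For reference, the calculations at the heart of the arguments were binomial probabilities. Here is a minimal sketch of that math in code; the numbers are illustrative, not from the actual prompts:

```javascript
// Binomial coefficient C(n, k), computed multiplicatively to stay exact
// for small n.
function choose(n, k) {
  let result = 1;
  for (let i = 1; i <= k; i++) result *= (n - k + i) / i;
  return result;
}

// P(X = k) for n independent trials with success probability p.
function binomialPmf(n, k, p) {
  return choose(n, k) * Math.pow(p, k) * Math.pow(1 - p, n - k);
}

// P(X >= k), the kind of tail probability the arguments leaned on.
function binomialAtLeast(n, k, p) {
  let total = 0;
  for (let i = k; i <= n; i++) total += binomialPmf(n, i, p);
  return total;
}

console.log(binomialAtLeast(10, 3, 0.5)); // ≈ 0.9453
```

Part of the fun was that both groups in a pair were running the same numbers and arriving at opposite conclusions about what those numbers meant.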

You can check out all four sets of scenarios and the rubric I used here. The students said that working in teams through this task was enjoyable and actually reinforced their understanding of how to use binomial probability. As with a previous unit, this project counted for completion, not for a grade, a fact I stated up front. So far, the students haven't said this was a problem for them, and the quality of what they produced didn't seem to suffer much.

An Experiment: Swapping Numerical Grades for Skill-Levels and Emoji

I decided to try something different in my pre-Calculus class over the past three weeks. A mix of factors led me to do this when I did:

  • The quarter ended one week, and spring break began at the end of the next. Not a great time to start a full unit.
  • I knew I wanted to include some conic sections content in the course since it appears on the SAT II, and since the graphs appear in IB and AP questions. Some familiarity might be useful. In addition, conic sections also appear as plus standards within CCSS.
  • The topic provides a really interesting opportunity to connect the worlds of geometry and algebra. Much of this connection, historically, is wrapped up in algebraic derivations. I wanted to use technology to do much of the heavy lifting here.
  • Students were exhibiting pretty high levels of stress around school in general, and I wanted to provide a bit of a break from that.
  • We are not in a hurry in this class.

Before I share the details of what I did, I have to share the other side of this. A long time ago, I was intrigued by the conversation started around the Twitter hashtag #emojigrading, a conversational fire stoked by Jon Smith, among many others. I like the idea of using emoji to communicate, particularly given my frustrations over the past year with how communicating grades as numbers distorts their meaning and implies precision that doesn't exist. Emoji can be used to communicate quickly, but can't be averaged.

I was also very pleased to find out that PowerSchool comments can contain emoji, and will display them correctly based on the operating system being used.

So here's the idea I pitched to students:

  • Unit 7 standards on conic sections would not be assessed with numerical grades, ever. As a result, these grades would not affect their numerical average.
  • We would still have standards quizzes and a unit exam, but instead of grades of 6, 8, and 10, there would be some other designation that students could help select. I would grade the quizzes and give feedback during the class, as with the rest of the units this year.
  • Questions related to Unit 7 would still appear on the final exam for the semester, where scores will be point based.

I also let students submit some examples of an appropriate scale. Here's what I settled on based on their recommendations:

I also asked them for their feedback before this all began. Here's what they said:

  • Positive Feedback:
    • Fourteen students made some mention of a reduction in stress or pressure. Some also mentioned that the grade being less specific was a good thing.
    • Three students talked about being able to focus more on learning as a result. Note that since I already use a standards-based grading system, my students are pretty aware of how much I value learning being reflected in the grade book.
  • Constructive Feedback:
    • Students were concerned about their own motivation to study or reassess, knowing that the grades would not be part of the numerical average.
    • Some students were concerned about not knowing where they stood relative to the boundaries between grades. Note: I don't see this by itself as a bad thing, but perhaps as the start of a different conversation. Instead of "how do I raise my grade," the question becomes "how do I develop the skills needed to reach a higher level."
    • There were also mentions of 'objectivity' and how I would measure their performance relative to standards. I explained during class that I would probably do what I always do: calculate scores on individual standards, and use those scores to inform my decisions on standards levels. I was careful to explain that I wasn't going to change how I generate the standards scores (which students have previously agreed are fair) but how I communicate them.

I asked an additional question about what their parents would think about the change. My plan was to send out an email to all parents informing them of the specifics of the change, and I wanted students to think proactively about how their parents would respond. Their response in general: "They won't care much." This was surprising to me.

So I proceeded with the unit. I used a mix of direct instruction, some Trello-style lists of tasks from textbooks, websites, and Desmos, and lots of circulating and helping students individually where they needed it. I tried to keep the only major change in this unit to be the communication of scores through the grade book using the emoji and the verbal designations of beginner, intermediate, and expert. As I said earlier, I gave skills quizzes throughout.

The unit exam was a series of medium-level questions that I wanted to use to gauge where students were once everything was put together. As with my other units, I gave a review class after spring break where students could work on their own and in groups, asking questions where they needed to. Anecdotally, the class was as focused and productive as for any other unit this year.

I was able to ask one group some questions about this after their unit test, and here's how they responded:

The fact that the stress level was the same, if not less, was good to see. The effort level did drop in the case of a couple of students here, but for the most part, there isn't any major change. This class as a whole values working independently, so I'm not surprised that none reported working harder during this unit.

I also asked them to give me general feedback about the no-numerical-grades policy. Some of them deleted their responses before I could take a look, but here's some of what they shared:

    • Three students confirmed a lower stress level. One student explained that since there was no numerical grade, she "...couldn't force/motivate [her]self to study."
    • Five students said the change made little to no difference to them. One student summed it up nicely: "It wasn't much different than the numerical grades, but it definitely wasn't worse."
    • One student said this: "The emojis seemed abstract so I wasn't as sure of where I was within the unit compared to numbers." This is one of a couple of students who had concerns about knowing how to move from one level to the next, so the unit didn't change this particular student's mind.


This was a really thought-provoking exercise. A move away from numerical grades is a compelling proposition, but a frequent argument against it is that grades motivate students. My small case study by no means disproves that claim. If a move like this can have a minimal effect on motivation, though, while students still get the feedback they need to improve, it opens the door to similar experiments in my other classes.

There are a couple of questions I still have about this. Will students choose to reassess the learning standards from unit 7, given that those scores won't change the numerical average once we return to numerical grades for unit 8? The second involves longer-term retention: how will students do on these questions when they appear on the final exam?

I'll return to this when I have more answers.


SBG and Leveling Up - Part 2: Machine Learning

In my 100-point scale series last June, I wrote about how our system does a pretty cruddy job of classifying students based on raw point percentages. In a later post in that series, I proposed that machine learning might serve as a way to make sense of our intuition around student achievement levels and help provide insight into refining a rubric to better reflect a student's ability.

In my last post, I wrote about my desire to become more methodical in deciding how a student moves from one standard level to the next. I typically know what I'm looking for when I see it. Observing students and their skill levels relative to a given set of tasks is often required to identify where a student is. Defining the characteristics of the different levels is crucial to communicating those levels to students and parents, and to being consistent across different groups. This is precisely what we intend to do when we define a rubric or grading scale.

I need help relating my observations of different factors to a numerical scale. I want students to know clearly what they might expect to get in a given session. I want them to understand my expectations of what is necessary to go from a level 6 to a level 8. I don't believe I have the ability to design a simple grid rubric that describes all of this to them though. I could try, sure, but why not use some computational thinking to do the pattern finding for me?

In my last post, I detailed the elements I typically consider in assigning a level to a student: previously recorded level, question difficulty, number of conceptual errors, and numbers of algebraic and arithmetic errors. My goal was to create a system that lets me go through the following process:

  • I am presented with a series of scenarios with different initial scores, arithmetic errors, conceptual errors, and so on.
  • I decide what new numerical level I think is appropriate given this information. I enter that into the system.
  • The system uses these examples to make predictions for what score it thinks I will give a different set of parameters. I can choose to agree, or assign a different level.
  • With sufficient training, the computer should be able to agree with my assessment a majority of the time.

After a lot of trial and error, more learning about React, and figuring out how to use a different machine learning library than I used previously, I was able to piece together a working prototype.
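To give a flavor of the training setup, here is a simplified sketch. It is not the prototype's actual code: brain.js stands in for whichever library the real version uses, the training examples are invented, and every input is scaled into the 0-1 range the network expects (levels out of 10, error counts out of an assumed maximum of 5).

```javascript
// Simplified sketch only; brain.js is a stand-in library and these
// training examples are invented.
const brain = require("brain.js");

const net = new brain.NeuralNetwork();

net.train([
  // previous level 6, easier question, no conceptual errors, one
  // arithmetic error -> I assigned a 7
  { input: { prev: 0.6, difficulty: 0.3, conceptual: 0.0, arithmetic: 0.2 },
    output: { newLevel: 0.7 } },
  // previous level 8, hard question, one conceptual error -> stays an 8
  { input: { prev: 0.8, difficulty: 0.9, conceptual: 0.2, arithmetic: 0.0 },
    output: { newLevel: 0.8 } },
  // previous level 8, hard question, clean work -> moves to a 9
  { input: { prev: 0.8, difficulty: 0.9, conceptual: 0.0, arithmetic: 0.0 },
    output: { newLevel: 0.9 } },
  // ...many more examples accumulated by responding to generated scenarios
]);

const suggestion = net.run({ prev: 0.6, difficulty: 0.5, conceptual: 0.0, arithmetic: 0.0 });
console.log(suggestion.newLevel * 10); // suggested level on the 10-point scale
```

The idea, as in the process above, is that the examples accumulate as I respond to generated scenarios, and the network's suggestions should converge toward my own judgments.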

You can play with my implementation yourself by visiting the CodePen that I used to write this. The first ten suggested scores are generated by increasing the input score by one, but the next ten use the neural network to generate the suggested scores.

In my next post in this series, I'll discuss the methodology I followed for training this neural network and how I've been sharing the results with my students.

Standards Based Grading and Leveling Up

I've been really happy since joining the SBG fan club a few years ago.

As I've gained experience, I've been able to hone my definitions of what it means to be a six, an eight, or a ten. Much of what happens when students sign up to reassess is based on applying my experience to evaluating individual students against these definitions. I give a student a problem or two, ask him or her to talk to me about it, and based on the overall interaction, I decide where that student is on the scale.

And yet, with all of that experience, I still sometimes fear that I might not be as consistent as I think I am. I've wondered whether my mood, fatigue level, or the time of day affects my assessment of that level. From a more cynical perspective, I also really, really hope that past experiences with a given student, gender, nationality, and other characteristics don't enter into the process. I don't know how I would measure these effects to confirm they are insignificant, if they exist at all. I don't fully trust myself to be truly unbiased, as well intentioned as I might try to be or think I am.

Before the winter break, I came up with a new way to look at the problem. If I can define what demonstrated characteristics should matter for assessing a student's level, and test myself to decide how I would respond to different arrangements of those characteristics, I might have a way to better define this for myself, and more importantly, communicate those to my students.

I determined the following to be the parameters I use to decide where a student is on my scale based on a given reassessment session:

  1. A student's previously assessed level. This is an indicator of past performance. With measurement error and a whole host of other factors affecting the connection between this level and where a student actually is at any given time, I don't think this is necessarily the most important. It is, in reality, information that I use to decide what type of question to give a student, and as such, is usually my starting point.
  2. The difficulty of the question(s). A student that really struggled on the first assessment is not going to get a high level synthesis question. A student at the upper end of the scale is going to get a question that requires transfer and understanding. I think this is probably the most obvious out of the factors I'm listing here.
  3. Conceptual errors made by the student during the reassessment. In the context of the previous two, this is key in whether a student should (or should not) advance. Is a conceptual error in the context of basic skills the same as one of application of those skills? These apply differently at a level six versus a level eight. I know this effect when I see it and feel pretty confident in my ability to identify one or more of these errors.
  4. Arithmetic/sign errors and algebraic errors. I consider these separately when I look at a student's work. Using a calculator appropriately to check arithmetic is something students should be able to do. Deciding to do this when calculations don't make sense is the sign of a more skilled student compared to one who does not. I routinely identify these errors as a barrier to advancement, but not necessarily as grounds for decreasing a student's level.

There are, of course, other factors to consider. I decided to settle on the ones mentioned above for the next steps of my winter break project.

I'll share how I moved forward on this in my next post in the series.

Unit Circle Practice (#TeachersCoding)

I've always wanted a simple interface to help my students practice the unit circle. I've found Quizlet sites that help with this, as well as the occasional Khan Academy exercise that approaches what I want. The big issue I find with most of these is that the interface and the questions ask much more than what I'm looking for. I want a simple flashcard-like situation with no bells and whistles that gets my students the repetition and opportunity to think through the functions with feedback.

Over the winter break, I decided I needed to build the resource I had in mind. Here's the result:

The live site can be accessed here: http://codepen.io/emwdx/full/bgEJYK/

This is essentially a digital version of a set of flash cards, but they never stop. The angles rotate around the unit circle and the trigonometric function used is randomized. Since I am holding my PreCalculus students responsible for the reciprocal functions, but my IB students don't need them, I added the ability to flip those on and off.
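For a flavor of the core logic, here is a simplified sketch of the question generator, including the reciprocal-function toggle described above. The real code lives in the CodePen linked below:

```javascript
// Simplified sketch of the generator, not the actual CodePen code.
// Special angles around the unit circle, in degrees.
const angles = [0, 30, 45, 60, 90, 120, 135, 150, 180, 210, 225, 240, 270, 300, 315, 330];
const basicFunctions = ["sin", "cos", "tan"];
const reciprocalFunctions = ["csc", "sec", "cot"];

function nextQuestion(includeReciprocals) {
  const pool = includeReciprocals
    ? basicFunctions.concat(reciprocalFunctions)
    : basicFunctions;
  const angle = angles[Math.floor(Math.random() * angles.length)];
  const fn = pool[Math.floor(Math.random() * pool.length)];
  return { prompt: `${fn}(${angle}°) = ?`, angle, fn };
}

console.log(nextQuestion(false)); // e.g. { prompt: "cos(135°) = ?", ... }
```

Everything else is presentation: showing the prompt, waiting, then revealing the exact value for feedback.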

I decided to do this on CodePen in case you want to look under the hood to see how it works. The editor view that contains my code is here. Let me know if you use it for something useful.

Releasing Today: States-n-Plates

I'm excited to share States-n-Plates, a project I built with Dan Meyer.

Dan proposed the idea for this activity a while ago with his typically high level of excitement about activities that provoke interesting and productive classroom conversation. This time, however, it wasn't about mathematics. I was looking for a bigger scale project to help me develop my ReactJS skills, so I took it on. Dan was patient enough to let me hack away at the project in this context. Though I could have certainly done it more quickly using jQuery or another framework, I wanted to try building this project in a particular way.

Specifically:

  • I wanted to be able to play the game myself when I was done. Hard-coding everything into a series of HTML pages would likely have meant seeing each plate and its answer over the many times I reloaded during development. By abstracting the game's behavior to be driven by data for each group of license plates, I saw most of the plates for the first time during testing (see the sketch after this list).
  • I wanted to experiment with a drag and drop library for React as an exercise for use in future experiments.
  • I also wanted to have a slightly different UI behavior for the desktop and mobile versions. This functionality came from Bootstrap. This led to a bit of wonkiness on small phone displays, but larger tablets work great using touch, and the desktop version works well using drag and drop.
  • I also wanted to experiment with modularity of both files and React component JSX files. I used Webpack. I don't understand Webpack.
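To illustrate the data-driven idea in the first point, here is a hypothetical sketch; none of these names or data come from the actual repository:

```javascript
import React from "react";

// Hypothetical illustration only. Rounds live in plain data, and one
// component renders whichever round is active, so adding plates means
// adding data rather than pages (and the developer never hard-codes the
// answers into the markup).
const rounds = [
  { id: 1, plateImages: ["plate-a.png", "plate-b.png"], answer: "Alaska" },
  { id: 2, plateImages: ["plate-c.png", "plate-d.png"], answer: "Ohio" },
];

function Round({ round, revealed, onReveal }) {
  return (
    <div>
      {round.plateImages.map((src) => (
        <img key={src} src={src} alt="license plate" />
      ))}
      <button onClick={onReveal}>Reveal</button>
      {revealed && <p>{round.answer}</p>}
    </div>
  );
}
```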

As in my past collaborations with Dan, I learned to do a number of things I didn't think I could do. For example, I told Dan 'no' on the fading effect at one point, and then subsequently figured out how to make it happen through lots of searches, StackOverflow, and careful reading of the React documentation.

If you want to play with the code, the Github repository is at https://github.com/emwdx/states-n-plates/. You don't need the big node_modules directory for this to work locally, but it is required if you want to change the bundle.js file.

I have more thoughts on the learning process I went through, but that will be shared soon. Have fun and share with your friends.

Computational Thinking and Spreadsheets, Teacher Edition (#TeachersCoding)

I ran a workshop last week giving some teachers ideas on how to use computational thinking to improve their workflow. I've written in the past about how spreadsheets can serve as a way to get students thinking like programmers, without the intimidation of a text-based development environment. I don't find teachers any different in this regard.

I spent the beginning of this workshop sharing a bit about my views on why teachers should develop their computational thinking skills. I then set them off to work through answering the following questions about each task in the video below:

  • What is the spreadsheet being programmed to do?
  • What commands are being used?
  • How would I use this in my own practice?

I'm reasonably sure that a majority of teachers have a spreadsheet somewhere that contains student data like the one in the video. My hope is that teachers that watch the video and see what I've done with this spreadsheet will have one of a few possible responses:

  • Wow, I do that by hand right now. Now I know there's an easier way that will save me time.
  • That isn't useful to me, but it does give me an idea of how to do some other task that involves iteration, sorting, or another task best suited for a computer.
  • I do that already. Is that computational thinking?

If I elicit any of these responses, and then get someone to build a tool that is useful to him or her, I think I've done my job. Learning to code for its own sake isn't necessarily worth a teacher's valuable time. Outsourcing tasks that computers do best can free a teacher to spend more time on the tasks that require the expertise, experience, and personal touch that only a person can provide. If learning a bit of computational thinking can do that, it might be worth the time.
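As one concrete example of the kind of spreadsheet task in the video (with invented data), here is the same logic expressed in code; each line mirrors a formula like COUNTIF or AVERAGE that could live in the sheet itself:

```javascript
// Invented gradebook rows of (name, score), standing in for the kind of
// student data most teachers already have in a spreadsheet.
const gradebook = [
  ["Ana", 88],
  ["Ben", 64],
  ["Chi", 92],
  ["Dev", 58],
];

const scores = gradebook.map(([, score]) => score);
const belowSeventy = scores.filter((s) => s < 70).length;          // like =COUNTIF(B:B,"<70")
const average = scores.reduce((a, b) => a + b, 0) / scores.length; // like =AVERAGE(B:B)

console.log(belowSeventy, average); // 2 75.5
```

Whether this lives in a formula bar or a script, the thinking is the same: describe the task once and let the computer repeat it.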

Please comment on the video or below to let me know what you think.