2015-2016 Year in Review: IB Mathematics SL/HL

This was my second year working in the IB program for mathematics. For those that don't know, it is a two-year program culminating in an exam at the end of year two. The content of the standard level (SL) and higher level (HL) courses covers algebra, functions, trigonometry, vectors, calculus, statistics, and probability. The HL course goes into more depth on all of these topics and includes an option topic that is assessed on a third, one-hour exam paper after the first two.

An individualized mathematics exploration serves as an internally assessed component of the final grade. This began with two blocks at the end of year one so that students could work on it over the summer. Students then had four class blocks, spread out over the first month of year two, to work and ask questions related to the exploration during class.

I taught year one again, and made my first attempt at year two. As I have written about previously, this was run as a combined block of SL and HL students together, with two out of every five blocks as HL-focused classes.

What worked:

  • I was able to streamline the year one course to better meet the needs of the students. Most of my ability to do this came from knowing the scope of the entire course. Certain topics didn't need the emphasis I gave them in my first attempt last year. It also helped that the students were much better aware of the demands of higher level vs. standard level from day one.
  • I did a lot more work using IB questions both in class and on assessments. I've become more experienced with the style and expectations of the questions and was better able to speak to questions about those from students.
  • The two blocks on HL in this combined class were really useful from the beginning of year one, and continued to be an important tool for year two. I don't know how I would have done this otherwise.
  • I spent more time in HL on induction than last year, both on sums and series and on divisibility rules, and the extra practice seemed to stick better than it did last year in year one.
  • For students that were self-starters, my internal assessment (IA) schedule worked well. The official draft submitted for feedback was turned in before a break so that I had time to go through them. Seeing students' writing was quite instructive in knowing what they did and did not understand.
  • I made time for open-ended, "what-if" situations that mathematics could be used to analyze and predict. I usually have a lot of this in my courses anyway, but I did a number of activities in year one specifically to hint at the exploration and what it was all about. I'm confident that students finished the year having seen me model this process, and having gone through mini explorations themselves.
  • After student feedback in the HL course, I gave many more HL-level questions for practice throughout the year. There was a major disconnect between the textbook-level questions and what students saw on the HL assessments, which were usually composed of past exam questions. With that extra practice, students became more comfortable floundering for a bit before mapping out a path to a solution.
  • For year two, the exam review was nothing more than extended class time for students to work past papers. I did some curation of question collections around specific topics as students requested, but nearly every student had different needs. The best way to address this was to float between students as needed rather than do a review of individual topics from start to finish.
  • The SL students in year two learned modeling and regression over the Chinese New Year break. This worked really well.
  • Students that had marginally more experience doing probability and statistics in previous courses (AP Statistics in particular) rocked the conditional probability, normal distribution, and distribution characteristics questions. This applied even to students who were exposed to that material but did poorly on it in those courses. This is definitely a nod to the idea that earlier exposure to (not mastery of) some concepts is useful later on.
  • Furthermore, regarding distributions, my handwaving to students about finding area under the curve using the calculator didn't seem to hurt the approach later on when we did integration by hand.
  • This is no surprise, but being self-sufficient and persevering through difficult mathematics needs to be a requirement for being in HL mathematics. Students that are sharp but refuse to put in the effort will be stuck in the 1-3 score range throughout. A level of algebraic and conceptual fluency is assumed for this course, and struggling with those aspects in year one is a sign of bigger issues later on. Many of the students I advised along these lines in year one were happier and more successful throughout the second year.
  • I successfully had students smiling at the slick way that the parts of Section B questions on the IB exam are all connected to each other.

What needs work:

    For year one:

  • I lean far too heavily on computer-based tools (GeoGebra, Desmos) rather than the graphing calculator during class. The ease of doing things this way leads to students being unsure of how to use the graphing calculator for the same tasks (finding intersections and solutions numerically) during an assessment. I definitely need to emphasize the calculator as a diagnostic tool for knowing, before really digging into a problem, whether an integer or algebraic solution is possible.
  • Understanding the IB rounding rules needs to be something we discuss throughout. I did more of this in year one on my second attempt, but it still didn't seem to be enough.
    For year two:

  • Writing about mathematics needs to be part of the courses leading up to IB. Students liked the mini explorations (mentioned above) but really hated the writing part. I'm sure some of this is because students haven't caught the writing bug. Writing is one of those things that improves by doing more of it with feedback though, so I need to do much more of this in the future.
  • I hate to say it, but the engagement grade of the IA isn't big enough to compel me to encourage students to do work that matters to them. This element of the exploration was what made many students struggle to find a topic within their interests. I think engagement needs to be broadened in my presentation of the IA into something bigger: find something that compels you to puzzle (and then un-puzzle) yourself. A topic with a low floor and a high ceiling serves much more effectively than picking an area of interest and then finding the math within it. Sounds a lot like the arguments against real-world math, no?
  • I taught the Calculus option topics of the HL course interspersed with the core material, and this may have been a mistake. Part of my reason for doing this was that the topic seemed to fit most easily in the context of a combined SL/HL class. Some of the option topics, like continuity and differentiability, I taught alongside the definition of the derivative, which is in the core content for both SL and HL. The reason I regret this decision is that the HL students didn't know which topics were part of the option, which appears only on the third exam paper, Paper 3. Studying was consequently difficult.
  • If for no other reason, the reason not to do a combined SL/HL course is that neither HL nor SL students get the time they deserve. There is much more potential for great explorations and inquiry in SL, and much more depth required for success in HL. There is too much material to do both courses justice and meet the needs of the students in a combined block. That said, if I had to run it combined again, I would go to three HL classes per two-week rotation for the second semester, rather than the two that I used throughout year one.
  • The HL students in year two were assigned the series convergence tests to learn independently over a vacation. The option book we used (Haese and Harris) had some great development of these topics, with full worked solutions in the back. This ended up being a miserable failure due to the difficulty of the content and the challenge of pushing second-semester seniors to work independently during a break. We made up some of this through a weekend session, but I don't like to depend on out-of-school instruction time to get through material.

Overall, I think the SL course is a very reasonable exercise in developing mathematical thinking over two years. The HL course is an exercise in speed and fluency. Even highly motivated students of mathematics might be more satisfied with the SL course if they are not driven to meet the demands of HL. I also think that HL students must enjoy being puzzled and should be prepared to use tricks from their preceding years of mathematics education without being explicitly taught to do so.

Filed under IB, reflection, Uncategorized, year-in-review

QuestionBuilder: Create and Share Randomized Questions

I've written previously about my desire to write randomized questions for the purpose of assessment. The goal was never to make a worksheet generator - those exist on the web already. Instead, I wanted to make it easy to create assessment questions that are similar in form, but different enough from each other that the answers or procedures to solve them are not necessarily identical.

Since January, I've been working on a project called QuestionBuilder. It's a web application that does the following:

  • Allows the creation of assessment questions that contain randomized elements, values, and structures.
  • Uses regular JavaScript, HTML, and the KaTeX math rendering library to create and display the questions.
  • Makes it easy to share questions you create with community members and build upon the work of others to make questions that work for you.

Here's a video in which I convert a question from the June 2016 New York State Regents exam for Algebra 2 Common Core into a randomized question. Without all of my talking, this is a quick process.
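
To give a sense of what building a randomized question involves, here is a minimal sketch in plain JavaScript with KaTeX. This is illustrative only: the structure and function names are my own and are not QuestionBuilder's actual format.

```javascript
// A rough, hypothetical sketch of a randomized question in plain JavaScript + KaTeX.
// Not QuestionBuilder's internal format; just the general idea.
const katex = require('katex'); // KaTeX math rendering library

// Pick a random integer in [min, max]
function randInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Build one instance of a "solve the quadratic" question with randomized integer roots
function buildQuestion() {
  const r1 = randInt(-5, 5);
  const r2 = randInt(-5, 5);
  // Expanding (x - r1)(x - r2) gives x^2 - (r1 + r2)x + r1*r2
  const b = -(r1 + r2);
  const c = r1 * r2;
  const latex = `x^2 ${b < 0 ? '-' : '+'} ${Math.abs(b)}x ${c < 0 ? '-' : '+'} ${Math.abs(c)} = 0`;
  return {
    promptHtml: 'Solve for x: ' + katex.renderToString(latex),
    answers: [r1, r2],
  };
}

console.log(buildQuestion());
```

Each call to buildQuestion() produces a structurally similar but numerically different question, which is exactly the property I want for reassessment: same form, different enough that the answers aren't identical.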

I've put a number of questions on the site already to demonstrate what I've been using this to do. These range from simple algebra to physics questions. Some other folks I appreciate and respect have also added questions in their spare time.

For now, you'll need to create an account and log in to see these questions in action. Go to http://question-builder.evanweinberg.org, make an account, and check out the project as it exists at this point.

My hope is to use some time this summer to continue working on it to make it more useful for the fall. I'll also be making some other videos to show how to use the features I've added thus far. Feel free to contact me here, through Twitter (@emwdx), or by email (evan at evanweinberg.com) if you have questions or suggestions.

Filed under computational-thinking, Uncategorized

Generality vs. Specificity

We want our students to have problem solving methods that are general enough to work in any situation. If we assign a series of exercises that are too similar to each other, it becomes easy for students to lock onto the wrong pattern, or to use a 'trick' that works just frequently enough to seem worth the effort to learn it.

One thing I tried this year was to prompt students to make themselves aware of the spectrum from generality to specificity. What works for solving specifically this question? What general ideas apply to answering all of the problems on the page?

I used my randomized question generator to help create problems that worked this way. Here's an example:

[Image: example problem]

I only started a deliberate effort to prompt these conversations in the middle of the second semester. I wish I had been doing it all year.

Filed under teaching philosophy

Endings and Beginnings

Today, I bid farewell to my home away from home for the past six years.

When I first moved away from New York, I had shed all doubts that the teaching career was for me. I knew that learning and exploring were important elements of a meaningful existence on this planet, both for me and my students. I knew that few things were more satisfying than spending time with good people around plates of food. I knew that not knowing the local language or the location of the nearest supermarket was a cause for excitement, not fear. Purposely putting oneself into situations with unknown outcomes is not a reckless act. It is precisely these challenges that define and refine who we are so that we are better prepared for those events that we do not expect.

I knew these things already. And yet, I leave China today as a changed teacher. I met students from all around the world. I made connections not just with new people in the same building as me, but with teachers in many distributed time zones. People that I respected and admired for their ideas humbled me as they invited me into their conversations and explored ideas with me. I found opportunities to present at conferences and get to know others that had also fallen in love with the international teaching lifestyle. I started this blog, and surprisingly, had people read it with thoughts of their own to share.

I also learned to accept the reality that life continues in twenty-four time zones. News from home made it seem more foreign and, paradoxically, more connected to my own experiences here. When opening my eyes and my various devices in the morning to see what had happened while I slept, I again never knew what to expect. I lost family members both suddenly and over stretches of time. Kids grew up. Our parents sold their houses and apartments. Friends put prestigious letters at the end of their names.

Our world changed as well. We added new countries to our passports and got lost in cities that refused to abide by a grid system. We fell in love with our dog and his aggressive sneezing at harmless bystanders. We tried to address the life and work balance through weeknight dinners and mini vacations. We repeatedly overcommitted to traveling during our summers off and time went too quickly. We became parents.

I write this not because anything I'm saying is especially new. The 'time marches on' canon is well established. That does not invalidate the reality that we're all experiencing life and its passage for the first time ourselves. This is the magic that we, as teachers, witness between the end of one year and the beginning of the next. We tweak our lessons from the previous year with the hope that they prompt more questions and productive confusion on the next iteration. Our students do experience some of the ideas we introduce for the first time in our classrooms, and it is unique that we get to design those experiences ourselves. 

The best way to understand the rich range of emotions that our students experience while in our care is to live deeply and richly in our own lives. We need to learn to know and love others, explore and make mistakes, and be ready to move forward even when the future is uncertain. My time abroad thus far has given me numerous journeys through these human experiences. I would not give them up for the world, and luckily, I do not have to do so.

I'll write more about my next move in a future post. 
Until then, I wish you all a summer full of good times with good people. 

Filed under Uncategorized

Hacking the 100-point Scale - Part 4: Playing with Neural Networks

First, a review of where we've been in the series:

  • The 100-point scale suffers from issues related to its historical use and the difficulty of communicating what it means.
  • It might be beneficial to have a solid link between the 100-point scale (since it likely isn't going anywhere) and the idea of achievement levels. This link does not need to be rigidly defined as 90-100 = A, 80-89 = B, and so on.
  • I asked for your help collecting some data. I gave you a made-up rubric with three categories and three descriptors for each, and asked you to categorize combinations of scores as achievement levels 1-4. Thank you to everyone who participated!

This brings us to today's post, where I try to bring these ideas together.

In case you only have time for a quick overview, here's the tl;dr:

I used the rubric scores you all sent me after the previous post to train a neural network. I then used that network to grade all possible rubric score combinations and generate achievement levels of 1, 2, 3, or 4.

Scroll down to the image to see the results.

Now to the meat of the matter.

Rubric design is not easy. It takes quite a bit of careful thought to decide on descriptors and point values, and much of the time we don't have a team of experts on the payroll to do this for us.

On the other hand, we're asked to make judgments about students all the time. These judgments are difficult and subjective at times. Mathematical tools like averages help reduce the workload, but they do this at the expense of reducing the information available.

The data you all gave me was the result of educational judgment, and that judgment comes from what you prioritize. In the final step of my Desmos activity, I asked what you typically use to relate a rubric score to a numerical grade. Here are some of the responses.

From @aknauft:

I need to see a consistent pattern of top rubric scores before I assign the top numerical grade. Similarly, if the student does *not* have a consistent set of low rubric scores, I will *not* give them the low numerical grade.
Here specifically, I was looking for:
3 scores of 1 --> skill level 1
2 scores of 2 or 1 score of 3 --> skill level 2 or more
2 scores of 3 --> skill level 3 or more
3 scores of 3 --> skill level 4

From Will:

Sum 'points'
3 or 4 points= 1
5 or 6 points = 2
7 points= 3
8 or 9 points = 4

From Clara:

1 is 60-70
2 is 70-80
3 is 80-90
4 is 90-100
However, 4 is not achievable based on your image.
Also to finely split each point into 10 gradients feels too subjective.
Equivalency to 100 (proportion) would leave everyone except those scoring 3 on the 4 or scale, failing.

Participant Paul also shared some helpful percentages that directly relate the 1-4 scale to percentages, perhaps drawn from his school's grading policy. I'd love to know more. Dennis (on the previous post) commented that a multi-component analysis should be done to set the relative weights of the different categories. I agree with his point that this is important and that it can easily be done in a spreadsheet. The difficulty is setting the weights.
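
As an aside, a rule like Will's is concrete enough to express directly in code. Here is a quick sketch of his sum-based mapping; the function name is mine, just for illustration:

```javascript
// A sketch of Will's mapping: sum three rubric scores (1-3 each, so sums of 3-9)
// and translate the sum into an achievement level of 1-4.
function willLevel(scores) {
  const sum = scores.reduce((a, b) => a + b, 0);
  if (sum <= 4) return 1;  // 3 or 4 points
  if (sum <= 6) return 2;  // 5 or 6 points
  if (sum === 7) return 3; // 7 points
  return 4;                // 8 or 9 points
}

console.log(willLevel([1, 2, 3])); // sum of 6 -> level 2
```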

Assigning grades using percentages saves time and is easy because of its historical use. Generating scales the way the contributors above did is helpful for relating how a student did on a task to a level. My suggestion is that the percentages we use for achievement levels should be an output of the rubric design process, not an input. In other words, we've got it all backwards.

I fed the data you all gave me into a neural network, which is a way of teaching a computer to make decisions based on a set of example data. I wanted the network to learn how you all thought a particular set of rubric scores should relate to an achievement level, and then see how it would score a different set of rubric scores.
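
For anyone curious about the mechanics, here is a minimal sketch of that training-and-scoring step, assuming a brain.js-style neural network library (the actual library I used is linked in the technical details at the end of this post, and the real training data came from your responses rather than the made-up examples below):

```javascript
// Minimal sketch: train a small network on (rubric scores -> achievement level)
// examples, then score every possible combination of rubric scores.
const brain = require('brain.js'); // assumed brain.js-style API

// Rubric scores are 1-3 per category; achievement levels are 1-4.
// Normalize both into the 0-1 range the network expects.
const normIn = scores => scores.map(s => (s - 1) / 2);
const normOut = level => [(level - 1) / 3];

// Hypothetical training examples standing in for the reader-submitted data.
const trainingData = [
  { input: normIn([1, 1, 1]), output: normOut(1) },
  { input: normIn([2, 2, 1]), output: normOut(2) },
  { input: normIn([3, 2, 2]), output: normOut(3) },
  { input: normIn([3, 3, 3]), output: normOut(4) },
];

const net = new brain.NeuralNetwork({ hiddenLayers: [4] });
net.train(trainingData);

// Score all 27 permutations of rubric scores and round back to a 1-4 level.
for (let a = 1; a <= 3; a++) {
  for (let b = 1; b <= 3; b++) {
    for (let c = 1; c <= 3; c++) {
      const [raw] = net.run(normIn([a, b, c]));
      console.log(`${a},${b},${c} -> level ${Math.round(raw * 3) + 1}`);
    }
  }
}
```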

Based solely on the six example grades I asked you to give, here are the achievement levels the neural network spit out:

[Image: neural network output for all rubric score combinations]

I was impressed with how the network scored the twenty-one combinations (out of 27 possible permutations) that you didn't score. It might not be perfect, and you might not agree with every result. The amazing part of this process, however, is that any results you disagree with could be tagged with the score you prefer, and the network could then retrain on that additional training data. You (or a department of teachers) could go through this process and train your own rubric fairly quickly.

I was also curious about the sums of the scores that led to a given achievement level. This is after all what we usually do with these rubrics and record in the grade book. I graphed the rounded results in Desmos. Achievement level is on the vertical axis, and sum is on the horizontal.

One thing that struck me is the fuzziness around certain sum values. A sum of 6, for example, leads to a 1, 2, or 3. I thought there might be some clear sum values that could serve as good thresholds for the different levels, but this isn't the case. This means that simply taking the percentage of points earned and scaling it into the ten-point ranges for A, B, C, and D removes some important information about what a student actually did on the rubric.

A better way to translate these rubric scores might be to simply give numerical grades that indicate the levels, and communicate the levels that way as part of the score in the grade book. "A score of 75 indicates the student was a level 2."

Where do we go from here? I'm not sure. I'm not advocating that a computer do our grading for us. Along the lines of many of my posts here, I think the computer can help alleviate some of the busy work and increase our efficiency. We're the ones saying what's important. I did another data set where I went through the same process, but acted like the third category was less important than the other two. Here's the result of using that modified training data:

[Image: neural network output with the third category weighted less]

It's interesting how this changed the results, but I haven't dug into them very deeply.

I just know that something needs to change. I had students come to me after final exam grades were entered last week (which, by the way, were raw percentage grades) who were confused by what their grades meant. The floor for failing grades is a 50, and some students interpreted this to mean that they started with a 50, and then any additional points they earned were added on to that grade. I actually use the 50 as a floor, meaning that a 30% raw score is listed as a 50% in the final exam grade. We need to improve our communication, and there's a lot of work to do if the scale isn't going away.
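
To make the miscommunication concrete, here is the floor policy as I intend it, in a quick sketch (the exact rescaling the students imagined isn't something I know, so the comment about it is hypothetical):

```javascript
// The floor policy as I use it: the reported grade is the raw percentage,
// but never lower than 50.
const reportedGrade = rawPercent => Math.max(50, rawPercent);

console.log(reportedGrade(30)); // 50 -- a 30% raw score is listed as a 50%
console.log(reportedGrade(85)); // 85 -- scores above the floor are unchanged

// Some students instead read the 50 as a starting value that earned points are
// added onto, i.e. something like 50 + (some fraction of rawPercent), which is
// a different mapping entirely.
```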

I'm interested in the idea of a page that would let you train any rubric of any size through a series of clicks. What thoughts do you have at the end of this exploration?


Technical Details:

I used the JavaScript implementation of a neural network here to do the training. The visualizations were all made using the Raphael JS library.

Filed under programming, teaching philosophy

Rubrics and Numerical Grades - Hacking the 100-Point Scale, Part 3

As part of thinking through my 100-point scale redesign, I'd like you to share some of your thoughts on a rubric scenario.

Rubrics are great for how they clearly classify different components of assessment for a given task. They also use language that, ideally, gives students the feedback to know what they did well, and where they fell short on that assessment. Here's an example rubric with three performance levels and three categories for a generic assignment:

[Image: example rubric with a student's scores of 1, 2, and 3]

I realize some of you might be craving some details of the task and associated descriptors for each level. I'm looking for something here that I think might be independent of the task details.

The student shown above has scores of 1, 2, and 3 respectively for the three categories on this assignment, and all three categories are equally important. Suppose also that in my assessment system, I need to identify a student as being a 1, 2, 3, or 4 in the associated skills based on this assessment.

More generally, I want to be able to take a set of three scores on the rubric and generate a performance level of the student that earned them. I'd like to get your sense of classifying students into the four levels this way.

Here are the rubrics I'd like your help with:
[Image: rubric score combinations to classify]

I've created a Desmos Activity using Activity Builder to collect your thoughts. I chose Activity Builder because (a) Desmos is awesome, and (b) the internet is keeping me from Google Docs.

You can access that activity here.

I'll be using the results as an input for a prototype idea I have to make this process a bit easier for all involved. Thanks in advance!

Filed under teaching philosophy

Hacking the 100-Point Scale - Part 2

My previous post focused on the main weakness of the 100-point scale, which is the imprecision with which it is defined. Is it the percentage of material mastered? The homework completion percentage? Total points earned? It might be all of these things, or none of them, depending on the details of one person's grade book.

Individual departments or schools might try to define uniform grading policies, give common final assessments, or spread the grading of final exams among all teachers to ensure fairness. This might make it easier to compare two students across sections of a course, but it still does not clearly define what the grade means. What, for instance, does it signify when a student in an AP course has an 80 while a student in a regular section of the same course has a 90?

Part of the answer here is based in curriculum. Understanding what students are learning, and in what order, would add some needed information for comparing the AP and regular students just mentioned. The other part is assessment: a well-crafted assessment policy based on learning objectives and communicated to students helps them understand their progress during the school year. I hope it goes without saying that both components must be present for a teacher to craft and communicate a measure of student learning that students, teachers, parents, and administrators can understand.

At this point, I think the elementary teachers have the right idea. I've been in two different school systems now that use a 1-4 scale for different skills, with clear descriptors that signify the meaning of each level. Together with detailed written comments, these can paint a picture of what knowledge, skills, and understanding a student has developed during a block of the school year. The levels might describe understanding of grade-level benchmarks using labels such as limited, basic, good, and thorough, or they might classify the state of a student's progress with terms like novice, beginner, intermediate, and advanced. The point is that these descriptors are attached to a student, and ideally are assigned after reviewing the learning that the student has done over a period of time. I grant that the language can be vague, but this also demands that a teacher put time into understanding the criteria at his or her school in order to assign grades to a particular student.

When it comes to the 100 point scale, it's all too easy to avoid this deliberate process. I can report assignments as a series of total point values, and then report a student's grade as a percentage of the total using grade book software. Why is a student failing? He didn't earn enough points. How can he do better? Earn more points. How can he do that? Bonus assignments, improving test scores, or by developing better work habits. The ease of generating grades cheapens the deliberate process that may (or may not) have been involved in generating them. Some of the imprecision of the meaning of this grade comes, ironically, from an assumption that the precision of a numerical grade makes it a better indicator. It actually requires more on the part of the teacher to define components of the grade clearly using numerical indicators, and defining these in a way that avoids unintended consequences requires a lot of work to get right.

Numerical grades indicate a student's progress, but don't tell the whole story. The A-B-C-D-F grading system hasn't been in use in any of the schools where I've taught, but it escapes some of the baggage of the numerical grade in that it requires the school to report somehow what each letter grade represents. An A might be mapped from a 90-100% average in the class, or 85-100, depending on the school. As with a verbal description, there needs to be some deliberate conversation and communication about the meaning of those grades, and this process opens the door for descriptors of what grades might represent. Numerical grades on the 100-point scale lack this specificity because grades on this scale can be generated with nothing more than a calculation. That isn't to say that a teacher can't put in the time to make that calculation meaningful, but it does mean it's easy to give the impression of precision that isn't there.

Compounding the challenge of its imprecision is the reality that we use this scale for many purposes. Honor roll or merit roll are often based in having a minimum average over courses taken in a given semester. Students on probation, often measured by having a grade below a cut-off score, might not be able to participate in sports or activities. Students with a given GPA have automatic admission to some universities.

I'm not proposing breaking away from grading, and I don't think the 100-point scale is going away. I want to hack the 100-point scale to do a better job of what it is supposed to do. While technology makes it easier to generate a grade than it used to be, I believe it also provides opportunities to do some things that weren't feasible for a teacher in the past. We can improve the process of generating the grade so that it is a measure of learning, and improve how we communicate that measure to all stakeholders.

Some ideas on this have been brewing as I've started grading finals and packing for the end of the year. Summer is a great time to reflect on what we do, isn't it?

Filed under teaching philosophy

Hacking The 100-Point Scale - Part 1

One highlight of teaching at an international school is the intersection of many different philosophies in one place. As you might expect, the most striking of these intersections comes from students comparing their experiences. It's impressive how quickly experienced students that have moved around learn the system of the school they are currently attending and adjust accordingly. What unites these particularly successful students is their awareness that they must understand the system they are in if they are to thrive there.

This is the case with teachers as well, since we share with each other just as much. We discuss different school systems and structures, traditions, and assessment methods. Identifying the similarities and differences is an engaging exercise. In general, these conversations lead to a better understanding of why we do what we do in the classroom, and they end with specific ideas for what we might do differently at the next meeting with students.

There is one important exception. No single conversation topic has caused more argument, debate, and unresolved conflict at the end of a staff meeting than the use of the 100-point scale.

The reason it's so prevalent is that it's easy to use. Multiply the total points earned by 100, and then divide by the total possible points. What could go wrong with this system that has been used for so long by so many?

There are a number of conversation threads that have been particularly troublesome in our international context, and I'd like to share one here.

"A 75 isn't a bad score."

For a course that is difficult, this might be true. Depending on the Advanced Placement course, you can earn the top score of 5 on the exam by earning anywhere between around 65% and 100% of the possible points. The International Baccalaureate exams work the same way. I took a modern physics exam during university on which I earned a 75 right on the nose. The professor said that considering the content, that was excellent, and that I would probably end up with an A in the course. 

The difference between these courses and typical school report cards is that the International Baccalaureate Organization (IBO), the College Board, and the college professor all did some sort of scaling to map their raw percentages to what shows up on the report card. They have specific criteria for setting up the scaling that goes from a raw score to the 1-5 or 1-7 scores for AP or IB grades respectively.

What are these criteria? The IBO, to its credit, has a document that describes what each score indicates about a student with remarkable specificity. Here is their description of a student who receives a score of 3 in mathematics:

Demonstrates some knowledge and understanding of the subject; a basic sense of structure that is not sustained throughout the answers; a basic use of terminology appropriate to the subject; some ability to establish links between facts or ideas; some ability to comprehend data or to solve problems.

Compare this to their description of a score of 7:

Demonstrates conceptual awareness, insight, and knowledge and understanding which are evident in the skills of critical thinking; a high level of ability to provide answers which are fully developed, structured in a logical and coherent manner and illustrated with appropriate examples; a precise use of terminology which is specific to the subject; familiarity with the literature of the subject; the ability to analyse and evaluate evidence and to synthesize knowledge and concepts; awareness of alternative points of view and subjective and ideological biases, and the ability to come to reasonable, albeit tentative, conclusions; consistent evidence of critical reflective thinking; a high level of proficiency in analysing and evaluating data or problem solving.

I believe the IBO uses statistical and norm-referenced methods to determine the cut scores between certain score bands. I'm also reasonably sure the College Board has a similar process. The point, however, is that these bands are determined so that a given score matches the published description of what a student earning it can do.

The college professor used his professional judgement (or a bell curve, I don't actually know) to make his scaling. This connects the raw score to the 'A' on my report card that indicated I knew what I was doing in physics.

The reason this causes trouble in discussions of grades at our school, and I imagine at other schools as well, is the much more ill-defined meaning of percentage grades on the report card. Put quite simply, does a 90% on the report card mean the student has mastered 90% of the material? Completed 90% of the assignments? Behaved appropriately 90% of the time? If there are different weights assigned to categories of assignments in the grade book, what does an average of 90% mean?

This is obviously an important discussion for a school to have. The meaning of individual percentage grades and what they indicate about student learning should be clear to administrators, teachers, parents, and most importantly, the students themselves. This is a tough conversation.

Who decided that 60% is the percentage of the knowledge I need in order to get credit? On a quiz on tool safety in the maker space, is 60% an appropriate cut score for someone to know enough? I say no. On the report card, I'd indicate that a student has a 50 as their grade until they demonstrate they can get 100% of the safety questions correct. Here, however, I've redefined the grade in the grade book as being different from the percentage of points earned. In other words, I've done the work of relating a performance measure to a grade indicator. These should not be assumed to be the same thing, but being explicit about this requires a conversation defining it to be the case, and communication of this definition to students and to teachers sharing sections of the same course.

Most of the time, I don't think there is time for this conversation to happen, which is the first reason I believe this issue exists. The second is the fact that a percentage calculation is mathematically simple and understood as a concept by students, teachers, and parents alike. Grades have been done this way for so long that a grade on the 100-point scale is generally assumed to represent this percentage-mastered-or-completed concept.

This is too important to be left to assumption. I'll share more about the dangers of this assumption in a future post.

Filed under teaching philosophy, Uncategorized

Building Functions - Thinking Ahead to Calculus

My ninth graders are working on building functions and modeling in the final unit of the year. There is plenty of good material out there for doing these tasks as a way to master the Common Core standards that describe these skills.

I had a sudden realization that a great source for these types of tasks might be my Calculus materials. Related rates, optimization, and applications of integrals in a Calculus course generally require students to write models of functions and then apply their differentiation or integration knowledge to arrive at a result. The first step in these questions usually involves writing a function, with subsequent question parts requiring Calculus methods to be applied to that function.

I dug into my resources for these topics and found that these questions might be excellent modeling tasks for the ninth grade students if I simply pull out the steps that require Calculus. Today's lesson using these adapted questions was really smooth, and felt good from a vertical planning standpoint.

I could be late to this party. My apologies if you realized this well before I did.

Filed under calculus, teaching philosophy

Problems vs. Exercises

My high school mathematics teacher, Mr. Davis, classified all learning tasks in our classroom into two categories: problems and exercises. The distinction between the two is pretty simple. A problem sets up a non-routine mathematical conflict. Once that conflict is resolved, the problem ceases to be a problem - it becomes an exercise. Exercises tend to develop content skills or application of knowledge; problems serve to develop one's habits of mathematical practice and understanding.

I tend to give a mixture of the two types to my students. The immediate question in an assessment context is whether my students have a particular skill or can apply concepts. Sometimes this can be established by doing several questions of the same or similar type. This is usually the situation when students sign up for a reassessment on a learning standard. In cases where I believe my students have overfit their understanding to a particular question type, I might throw them a problem - a new task that requires a higher level of understanding. I might also give them a task that I know is similar to a question they got wrong last time, with a twist. What I have found over time is that there needs to be a difference between what I give them on a subsequent assessment and what they saw before, or I won't get a good reading on their mastery level.

The difficulty I've encountered over the past few years of learning to use SBG (standards-based grading) has been curating my own set of problems and exercises for assessment. I have textbooks, both electronic and hard copy, and I've noted down the locations of good problems in analog and digital forms. I've always felt the need to guard these and not share them with students so that they don't become exercises. My sense is that good problems are hard to find. Good exercises, on the other hand, are all over the place. This also means that if I've given Student A a particular problem, I have to find an entirely different one for Student B in case the two pool their resources. In other words, Student A's problem becomes Student B's exercise. I haven't found that students end up thinking that way, but I still feel weird about using the same problem multiple times.

What I've always wanted was a source of problems that somehow straddled the two categories. I want to be able to give Student A a specific problem that I carefully designed for assessing a particular standard, and student B a different manifestation of that same problem. This might mean different numbers, or a slight variation that still assesses the same thing. I don't want to have to reinvent the problem every single time - there must be a way to avoid repeating that effort. By carefully designing a problem once, and letting, say, a computer make randomized changes to different instances of that problem, I've created a task I can use with different students. Even if I'm in the market for exercises, it would be nice to be able to create those quickly and efficiently too. Being able to share that initial effort with other teachers who also share a need would be a bonus.

I think I've made an initial stab at creating something to fit that need.

Filed under computational-thinking, teaching philosophy