
Too Many Reassessments, Just in Time for Summer

I posted this graph of cumulative reassessments versus the day of the semester on Twitter:

That, my friends, is a reassessment system gone wild. The appropriate title for that image, as one person pointed out, is Too Many Reassessments. The grand total for this semester was 711. There are obvious bunches of reassessments close to the ends of the quarters when the grade book closes.

Here is a histogram of the reassessment data for the semester. The total here doesn't quite match the cumulative data above, and I haven't figured out exactly where the discrepancy comes from.
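For the curious, a plot like this takes only a few lines of Python. Here's a minimal sketch (not necessarily how I built the actual graphs), assuming the sign-up log is a CSV with one row per reassessment and a 'day' column for the day of the semester - the file name and column name are placeholders, not my actual data:

    # Minimal sketch: cumulative reassessments and a histogram of the
    # same data, from a hypothetical CSV log of sign-ups.
    import pandas as pd
    import matplotlib.pyplot as plt

    signups = pd.read_csv("reassessments.csv")  # placeholder file name
    days = signups["day"].sort_values()

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    # Cumulative count: sorted days plotted against a running total
    ax1.plot(days.values, range(1, len(days) + 1))
    ax1.set_xlabel("Day of semester")
    ax1.set_ylabel("Cumulative reassessments")

    # Histogram of reassessments by day
    ax2.hist(days, bins=30)
    ax2.set_xlabel("Day of semester")
    ax2.set_ylabel("Reassessments")

    plt.tight_layout()
    plt.show()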

I committed to transplanting the system I have used in the past to my new school this year, and I didn't want to make a full change without seeing how it would play out. This semester I was much more consistent in the types of questions I gave reassessing students, in how I changed grades based on a reassessment, and in the choice I offered them for the level of reassessment. Some of this I wrote about at the beginning of the semester.

The most important observation I can make at this point is that this system is not sustainable as is. Making my sign-up and credit system more efficient won't solve the volume problem - efficiency isn't the issue. I'm satisfied with the quality of the questions I give students. I've developed a pretty nice bank of questions that span the spectrum of application, understanding, and transfer. The bigger issue is my capacity to give the sort of feedback I want to give to students throughout the semester. I have many conversations about learning, and many of them are great, but I cannot multiply myself to have as many of those conversations as I want.

Here's a graph of the average learning standards grade for a sample of students compared with the number of reassessments:

This doesn't support the expectation that more reassessments implies a higher grade. Students are not necessarily doing machine-gun style reassessment. They are working on specific skills and showing me what they are doing. They are responding in a positive way to my feedback. Credits, which students earn by doing work and reviewing concepts, are still required for students to reassess. Students are for the most part using their credits. Expiring credits, as much as I expected it to matter, has not made much of a difference in behavior (i.e. signing up for reassessments) or in course grade. I need to dig into the data more to be able to explain why.

In terms of moving forward, I have many things to think about.

  • The past three or four years have been an exercise in exploring a system that centers on student-initiated reassessment. I'm not sure it's time for that to completely go away, but I wonder about shifting my focus to an assessment structure centered on teacher-initiated assessment. I already do this on unit exams, but I wouldn't say it is the focus of where I spend my time.
  • I wonder if reducing the permitted number of reassessments to one per student per week would improve their effectiveness. That improvement could come from higher-quality feedback from me, more focused effort on the part of the student to improve understanding of a given learning standard, or something else entirely. It would also reduce students' options to learn on their own timeline, which isn't a good thing. While we're being honest though, that exponential curve at the end of the assessment period is all the evidence I need to accept that the timeline is driven by the grading-period structure, not by learning.
  • How do I most efficiently help the struggling student who reassesses on the same standard multiple times and makes limited progress on each attempt?
  • How do I give meaningful guidance to the student who aces everything on the first try? How do I get them more involved in finding learning that is meaningful, rather than waiting for me to tell them what to learn?
  • What do the students think? I've collected all sorts of anecdotal evidence that students appreciate the opportunities to reassess, and not just in a superficial way related to their course grade. I've given students an end-of-year survey to complete, and those results are rolling in slowly as students finish their final exams.

These are the big-picture questions that add one more reason to be thankful that summer is ahead. Getting back to my main point: quality feedback is the main way we as teachers add value. This, like many things in education, is not easy to scale. The need to improve and scale the delivery of feedback is really the only basis for innovation in the ed-tech realm that interests me these days. So far, despite the best intentions of many who are trying, machine learning is not yet the answer. Make it easy for me to organize and collect student thinking, respond to that thinking, and give helpful nudges toward the resources needed to make progress, and then I'll consider your product.

Final exam marking is ahead. Stay tuned.

Rethinking the headache of reassessments with Python

One of the challenges I've faced in doing reassessments since starting Standards Based Grading (SBG) is dealing with the mechanics of delivering those reassessments. Though others have come up with brilliant ways of making these happen, the design problem I see is this:

  • The printer is a walk down the hall from my classroom, requires an ID swipe, and possibly the use of a paper cutter (in the case of multiple students being assessed).
  • We are a 1:1 laptop school. Students also tend to have mobile devices on them most of the time.
  • I want to deliver reassessments quickly so I can grade them and get them back to students immediately. Minutes later is good, same day is not great, and next day is pointless.
  • The time required to generate a reassessment is non-zero, so there needs to be a way to scale for times when many students want to reassess at the same time. The end of the semester is quickly approaching, and I want things to run much more smoothly this semester in comparison to last.

I experimented last fall with having students run problem generators on their computers for this purpose, but there was still too much friction in the system. Students forgot how to run a Python script, got errors when they entered their answers incorrectly, and were working from scripts (and problem sets) with varying levels of errors depending on when they had downloaded their file. I've since moved to a web form (thanks Kelly!) for requesting reassessments the day before, which helps me plan ahead a bit, but I still find it takes more time than it should to put these together.
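To give a sense of what those generator scripts looked like, here's a minimal sketch of the pattern - randomize values, prompt for an answer, check it - with the input checking that many of the early versions lacked. The specific problem is a hypothetical example, not one of the actual scripts:

    # Hypothetical example of a student-run problem generator with
    # input checking so a mistyped answer doesn't crash the script.
    import random

    def slope_problem():
        x1, y1 = random.randint(-5, 5), random.randint(-5, 5)
        x2, y2 = random.randint(-5, 5), random.randint(-5, 5)
        while x2 == x1:  # avoid an undefined slope
            x2 = random.randint(-5, 5)
        answer = (y2 - y1) / (x2 - x1)
        print(f"Find the slope of the line through ({x1}, {y1}) and ({x2}, {y2}).")
        while True:
            try:
                guess = float(input("Your answer (as a decimal): "))
                break
            except ValueError:
                print("Please enter a number, e.g. -0.5")
        if abs(guess - answer) < 0.01:
            print("Correct!")
        else:
            print(f"Not quite - the slope is {answer:.2f}.")

    if __name__ == "__main__":
        slope_problem()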

With my recent foray into web applications through the Bottle Python framework, I've finally been able to piece together a way to make this happen. Here's the basic outline for how I see this coming together - I'm putting it in writing to help make it happen.

  • Phase 1 - Looking Good: Generate cleanly formatted web pages using a single page template for each quiz. Each page should be printable (if needed) and should allow for questions that are either pure text or include images. A function should connect a list of questions, standards, and answers to a dynamic URL. To ease grading, there should be a teacher mode that prints the answers on the page. (A minimal sketch of this phase follows the list.)
  • Phase 2 - Database-Mania: Create databases for both users and questions. This will let each course have its own bank of questions, sorted by standard or tag. A user can log in, and the quiz page for a particular day will appear automatically - no emailing links or PDFs, and no picking up prints from the copier. Instead of connecting to a hard-coded list of questions (as in Phase 1), the program will request the list of question numbers from a database and then generate the pages for students to use.
  • Phase 3 - Randomization: This is the piece I figured out last fall, and it has a couple of components. The first is being able to pick the standard a student will be quizzed on, and then have the program choose a question (or questions) from a pool related to that standard. This makes reassessments look different for different students. On top of this, I want some questions to have randomized values so students can't say 'Oh, I know this one - the answer's 3/5'. They won't all be this way, and my experience doing this last fall helped me figure out which problems work best for it. With this, I would also have instant access to the answers through my special teacher mode. (A second sketch after the list combines this with the Phase 2 database idea.)
  • Phase 4 - Sharing: Not sure when/if this will happen, but I want a student to be able to take a screenshot of their work for a particular problem, upload it, and start a conversation about it with me or other students through a URL. This will also require a new database that links users, questions, and their work to each other. Capturing the conversation around the content is the key here - not a computerized checker that assigns a numerical score to the student by measuring % wrong, numbers of standards completed, etc.
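To make Phase 1 concrete, here's a bare-bones sketch of what I mean in Bottle. The question bank, the URL scheme, and the ?teacher=1 flag are placeholder choices for illustration, not a finished design:

    # Bare-bones Phase 1 sketch: one template renders any quiz, and a
    # query flag turns on answer display for grading.
    from bottle import route, request, run, template

    QUIZ_TEMPLATE = """
    <h2>Quiz: {{standard}}</h2>
    % for i, q in enumerate(questions):
      <p>{{i + 1}}. {{q['text']}}
      % if teacher_mode:
        <em>[Answer: {{q['answer']}}]</em>
      % end
      </p>
    % end
    """

    QUESTIONS = {  # hypothetical question bank, keyed by standard
        "G1": [{"text": "Find the midpoint of (2, 4) and (6, 8).",
                "answer": "(4, 6)"}],
    }

    @route("/quiz/<standard>")
    def quiz(standard):
        teacher_mode = request.query.teacher == "1"  # ?teacher=1 shows answers
        questions = QUESTIONS.get(standard, [])
        return template(QUIZ_TEMPLATE, standard=standard,
                        questions=questions, teacher_mode=teacher_mode)

    run(host="localhost", port=8080)

A student would visit /quiz/G1 to see the questions; I would visit /quiz/G1?teacher=1 to grade from the same page.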
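And here's a rough sketch of where Phases 2 and 3 point: question templates stored in SQLite by standard, values randomized per request, and the answer computed alongside for teacher mode. The schema, the sample question, and the eval-based answer expression are all stand-ins to show the idea:

    # Sketch of Phases 2 and 3: pull a random question template for a
    # standard from a database and fill in randomized values.
    import random
    import sqlite3

    conn = sqlite3.connect("questions.db")  # placeholder database
    conn.execute("""CREATE TABLE IF NOT EXISTS questions
                    (standard TEXT, template TEXT, answer_expr TEXT)""")
    conn.execute("INSERT INTO questions VALUES (?, ?, ?)",
                 ("G1", "Find the distance between ({a}, {b}) and ({c}, {d}).",
                  "((a - c)**2 + (b - d)**2) ** 0.5"))
    conn.commit()

    def pick_question(standard):
        # Choose one template at random from the pool for this standard
        rows = conn.execute("SELECT template, answer_expr FROM questions "
                            "WHERE standard = ?", (standard,)).fetchall()
        tpl, expr = random.choice(rows)
        values = {k: random.randint(-9, 9) for k in "abcd"}
        question = tpl.format(**values)
        answer = eval(expr, {}, values)  # displayed only in teacher mode
        return question, answer

    question, answer = pick_question("G1")
    print(question)
    print("Teacher mode answer:", round(answer, 2))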

The bottom line is that I want to get to the conversation part of reassessment more quickly. I preach to my students time and time again that making mistakes and getting effective feedback is the most efficient way to learn almost anything. I can have a computer grade student work, but as others have repeatedly pointed out, work that can be graded by a computer sits at the lower end of the continuum of understanding. I want to get past the right/wrong response (which is often all students care about) and on to the conversation that can happen along the way toward learning something new.

Today I tried my prototype of Phase 1 with students in my Geometry class. The pages all looked like this:

[Image: a sample quiz page generated by the Phase 1 prototype]

I had a number of students out for the AP Mandarin exam, so I had plenty of time to talk with the students who were there about their answers. It wasn't the standard process of taking quiz papers from students, grading them on the spot, and then scrambling to get around to discussing the paper they had just written on. Instead I sat with each student and had them show me how they got their answers. If they were correct, I sometimes chose to talk to them about it anyway, because I wanted to see how they did it. If they had a question wrong, it was easy to immediately talk to them about what they didn't understand.

Though this wasn't my goal at the beginning of the year, I've found that my technological and programming obsessions this year have focused on minimizing the paperwork side of this job and maximizing opportunities for students to get feedback on their work. I used to have students go up to the board and write out their work. Now I snap pictures on my phone and beam them to the projector through an Apple TV. I used to pose questions to the entire class on paper as an exit ticket, collect them, grade them, and give them back the next class. I'm now finding ways to do all of this electronically, almost instantly, and without requiring students to log in to a third-party website or use an arbitrary piece of hardware.

The central philosophy of computational thinking is the effort to utilize the strengths of computers to organize, iterate, and use patterns to solve problems. The more I push myself to identify my own weaknesses and inefficiencies, the more I am seeing how technology can make up for those negatives and help me focus on what I do best.