Rethinking the headache of reassessments with Python
One of the challenges I’ve faced in doing reassessments since starting Standards Based Grading (SBG) is dealing with the mechanics of delivering those reassessments. Though others have come up with brilliant ways of making these happen, the design problem I see is this:
- The printer is a walk down the hall from my classroom, requires an ID swipe, and possibly the use of a paper cutter (in the case of multiple students being assessed).
- We are a 1:1 laptop school. Students also tend to have mobile devices on them most of the time.
- I want to deliver reassessments quickly so I can grade them and get them back to students immediately. Minutes later is good, same day is not great, and next day is pointless.
- The time required to generate a reassessment is non-zero, so there needs to be a way to scale for times when many students want to reassess at the same time. The end of the semester is quickly approaching, and I want things to run much more smoothly this semester in comparison to last.
I experimented last fall with having students run problem generators on their computers for this purpose, but there was still too much friction in the system. Students forgot how to run a Python script, got errors when they entered their answers incorrectly, and were working from copies of the scripts (and their problems) with varying levels of errors in them, depending on when they had downloaded the file. I’ve moved to a web form (thanks Kelly!) for requesting reassessments the day before, which helps me plan ahead a bit, but I still find that putting these together takes more time than it should.
With my recent foray into web applications through the Bottle Python framework, I’ve finally been able to piece together a way to make this happen. Here’s the basic outline of how I see this coming together – I’m putting it in writing to help make it happen.
- Phase 1 – Looking Good: Generate cleanly formatted web pages using a single page template for each quiz. Each page should be printable (if needed) and should allow for questions that are either pure text or include images. A function should connect a list of questions, standards, and answers to a dynamic URL. To ease grading, there should be a teacher mode that prints the answers on the page. (A rough sketch of this piece appears just after this list.)
- Phase 2 – Database-Mania: Create separate databases for users and for questions. This will let each course have its own database of questions, sorted by standard or tag. A user can log in, and the quiz page for a particular day will automatically appear – no emailing links or PDFs, and no picking up printouts from the copier. Rather than connecting to a hard-coded list of questions (as in Phase 1), the program will request that list of question numbers from a database and then generate the pages for students to use.
- Phase 3 – Randomization: This is the piece I figured out last fall, and it has a couple of components. First, I want to pick the standard a student will be quizzed on and then have the program choose a question (or questions) from a pool related to that standard, so reassessments look different for different students. On top of this, I want some questions themselves to have randomized values so students can’t say ‘Oh, I know this one – the answer’s 3/5’. They won’t all be this way, and my experience doing this last fall helped me figure out which problems work best for it. (The second sketch after this list shows the flavor of both ideas.) With this, I would also have instant access to the answers through my special teacher mode.
- Phase 4 – Sharing: Not sure when/if this will happen, but I want a student to be able to take a screenshot of their work for a particular problem, upload it, and start a conversation about it with me or other students through a URL. This will also require a new database that links users, questions, and their work to each other. Capturing the conversation around the content is the key here – not a computerized checker that assigns a numerical score to the student by measuring % wrong, numbers of standards completed, etc.
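To make Phase 1 concrete, here is a minimal sketch of the kind of Bottle app I have in mind, using Bottle’s built-in SimpleTemplate engine. The quiz data, route, and `?teacher=1` query parameter are placeholders to show the idea, not the actual prototype:

```python
# quiz_app.py -- a minimal sketch of Phase 1 (placeholder data and names, not
# the actual prototype). Requires the Bottle framework: pip install bottle
from bottle import route, run, template, request

# A hard-coded quiz: each entry ties a question and its answer to a standard.
QUIZZES = {
    'geometry-day1': [
        {'standard': 'GEO-1', 'question': 'Define a line segment.',
         'answer': 'Two endpoints and all of the points between them.'},
        {'standard': 'GEO-4',
         'question': 'Two angles of a triangle measure 40 and 60 degrees. Find the third angle.',
         'answer': '80 degrees'},
    ],
}

# One page template for every quiz; answers only render in teacher mode.
PAGE = """
<h1>Quiz: {{quiz_id}}</h1>
% for i, q in enumerate(questions, start=1):
    <p><b>{{i}}. [{{q['standard']}}]</b> {{q['question']}}</p>
    % if teacher_mode:
        <p style="color:red;">Answer: {{q['answer']}}</p>
    % end
% end
"""

@route('/quiz/<quiz_id>')
def show_quiz(quiz_id):
    questions = QUIZZES.get(quiz_id, [])
    # Teacher mode: append ?teacher=1 to the URL to print answers on the page.
    teacher_mode = request.query.get('teacher') == '1'
    return template(PAGE, quiz_id=quiz_id, questions=questions,
                    teacher_mode=teacher_mode)

run(host='localhost', port=8080)
```

Running the script and visiting http://localhost:8080/quiz/geometry-day1 would produce the student page; adding ?teacher=1 would print the answers for grading.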
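And here is the flavor of Phases 2 and 3 together: a pool of question generators keyed by standard, some of which randomize their numbers. A plain dictionary stands in for the per-course database, and every name here is a placeholder:

```python
# question_pool.py -- a rough sketch of Phases 2 and 3 (hypothetical names; a
# dict stands in for the real question database).
import random

def segment_addition():
    """Randomized segment-addition problem: different numbers for each student."""
    a, b = random.randint(2, 9), random.randint(2, 9)
    return {'question': f'B lies on segment AC with AB = {a} and BC = {b}. Find AC.',
            'answer': a + b}

def bisector_definition():
    """A static question -- not every problem needs randomized values."""
    return {'question': 'Define a perpendicular bisector.',
            'answer': 'A line through the midpoint of a segment, perpendicular to it.'}

# Stand-in for a per-course database of questions, keyed by standard.
POOL = {
    'GEO-1': [bisector_definition, segment_addition],
}

def pick_question(standard):
    """I pick the standard; the program picks (and builds) the question."""
    generator = random.choice(POOL[standard])
    return generator()

if __name__ == '__main__':
    print(pick_question('GEO-1'))
```

In the full version, the pool would live in a real database, and the quiz route above would call something like pick_question() while building a page, so two students reassessing the same standard see different problems while teacher mode still shows me the matching answers instantly.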
The bottom line is that I want to get to the conversation part of reassessment more quickly. I preach to my students time and time again that making mistakes and getting effective feedback is how you learn almost anything most efficiently. I can have a computer grade student work, but as others have repeatedly pointed out, work that can be graded by a computer sits at the lower end of the continuum of understanding. I want to get past the right/wrong response (which is often all students care about) and get to the conversation that can happen along the way toward learning something new.
Today I tried my prototype of Phase 1 with students in my Geometry class. The pages all looked like this:
I had a number of students out for the AP Mandarin exam, so I had plenty of time to talk with the students who were there about their answers. It wasn’t the standard process of taking quiz papers from students, grading them on the spot, and then scrambling to get back around to discuss the paper they had just written on. Instead, I sat with each student and had them show me what they did to get their answers. If they were correct, I sometimes chose to talk through the problem with them anyway, because I wanted to see how they did it. If they had a question wrong, it was easy to talk with them immediately about what they didn’t understand.
Though this wasn’t my goal at the beginning of the year, I’ve found that my technological and programming obsessions this year have focused on minimizing the paperwork side of this job and maximizing opportunities for students to get feedback on their work. I used to have students go up to the board and write out their work. Now I snap pictures on my phone and beam them to the projector through an Apple TV. I used to ask questions of the entire class on paper as an exit ticket, collect the papers, grade them, and give them back the next class. I’m now finding ways to do all of this electronically, almost instantly, and without requiring students to log in to a third-party website or use an arbitrary piece of hardware.
The central philosophy of computational thinking is the effort to utilize the strengths of computers to organize, iterate, and use patterns to solve problems. The more I push myself to identify my own weaknesses and inefficiencies, the more I am seeing how technology can make up for those negatives and help me focus on what I do best.