I wrapped up grading final exams today. Feels great, but it was also difficult seeing students coming in to get their final checklists signed before they either graduate or move on to a different school. Lots of bittersweet moments in the classroom today.
After trying my standards-based grading (SBG) experiment, I decided I wanted to compare students' overall performance in each grading category to their quiz percentages. In a previous post, I wrote about my experiment with assessing quizzes against the very specific skill standards the students were given. Since I plan to make my grading more SBG-based in the fall, I figured it would be good to have some comparison data to support the many reasons why this is a good idea.
My geometry and algebra two classes have 28 students in total. I removed from the data set two students who joined in the last month of school, as well as one outlier in terms of overall performance.
The table below shows the name of each grading category along with the weight it carries in calculating the overall grade. Each number in the table is the correlation coefficient between the variable in its row and the variable in its column. For example, 0.47 is the correlation between the HW data and the Quizzes/Standards data across students.
*Table: correlation matrix between Homework (8%), Quizzes/Standards (12%), Test Average (48%), Semester Exam (20%), and Final Grade (100%); only the upper half of the matrix is filled in.*
Some notes before we dive in:
- The percentages do not add up to 100% because I am leaving out the portfolio grade (8%) and classwork grades (4%) which are not performance based.
- Homework is graded on turning it in and showing work for answers. I collect and look at homework as a regular way to assess how well they are learning the material.
- The empty cells are to save ink; they are just the mirror image of the results in the upper half of the matrix since correlation is not order dependent.
- I know I only have 25 samples - certainly not ready for a research publication.
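For readers curious how the numbers in the matrix are produced: each cell is a plain Pearson correlation coefficient computed over the per-student scores in two categories. Here is a minimal sketch using made-up scores for five hypothetical students, not my actual 25-student data set:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores (percentages) for five students; the real data
# behind the table above is not reproduced here.
categories = {
    "HW": [95, 80, 100, 70, 90],
    "Quizzes": [88, 72, 95, 60, 85],
    "Tests": [84, 70, 92, 58, 80],
}

# Print only the upper half of the matrix; the lower half is the mirror
# image, since pearson_r(x, y) == pearson_r(y, x).
names = list(categories)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: r = {pearson_r(categories[a], categories[b]):.2f}")
```

Values of r near 1 mean the two categories rank students almost identically, values near 0 mean they tell you independent things about a student.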
So what does this mean? I don't know that this is very surprising.
- Students doing HW is not what makes learning happen. I've always said that, and this data continues to support the hypothesis. Homework can help, and it is evidence that students are doing something with the material between classes, but simply completing it is not enough. I'm fine with this. I get enough information from looking at the homework to create activities that address misunderstandings the next time we meet. The unique thing about homework is that it is often the first time students work with the material on their own rather than with their peers in class.
- My tests include some direct assessments of skills, but also include a lot of new applications of concepts and questions requiring students to explain or show things to be true. It's very gratifying to see such a strong connection between the quiz scores and the test scores.
- I always wonder about the students who say "I get it in class, but then on the tests I freeze up." If there's any major lesson SBG has confirmed for me, it's that students' self-awareness of their own proficiency is generally not great without some form of external feedback. If freezing up were really the issue, there would be more data points with high quiz scores and low exam scores, and that isn't the case here. My students need real and accurate feedback on how they are doing, and the skills quizzes are a formalized way to provide it.
- I find it really interesting how close the quiz average and the semester exam percentages are. The semester exam was cumulative and covered a lot of ground, but it didn't necessarily hit every single skill that was tested on quizzes. There were also not quizzes for every single skill, though I tried to hit a number of key ones.
This leads me to believe that it is possible to have several key standards to focus on for SBG purposes, and also to dedicate class time to other concepts through project-based learning, explorations, or independent work. It's feasible to assess those other concepts as mathematical process standards evaluated throughout the semester. This strikes a good balance: developing skills according to the curriculum without making class a repetitive process of students absorbing procedures for different types of problems. I want to have both. My flipping experiments have gone a long way toward that ideal, but I'm not quite there yet.
I'll have more to say about the details of what I will change as I think about it during the summer. I think a combination of using BlueHarvest for feedback, extending SBG to my Calculus and Physics classes, and less emphasis on grading and collecting homework will be part of it. Stay tuned.