Reaction Time & Web Data Collection
If you put out an open call by email asking people to complete a task for nothing in return, it makes sense not to expect much. Still, I tried to make it as simple as possible to gather some reaction time data for my IB Mathematics SL class to analyze. My goal has been to give each class an interesting data set and see what students can make of it. After the task had been open for several hours, I had a really nice set of data to hand to the class.
I know my social networks connect me to some phenomenal people. Even so, I didn't expect the interest in trying this out to be so substantial, or that it would get people making multiple attempts to beat their own best time. In less than a week, I've collected more than 1,000 responses to my request to click a button:
I coded this pretty quickly and left out the error checking I would have included had I known how many people would take part. I've been told it was used on phones, tablets, desktops, laptops, and even SmartBoards, and the recorded times ranged from hundredths of a second to more than five minutes, a clear sign that the code badly needs to be tweaked and fixed. That said, I am eager to share the results with the community that helped me out, along with the rest of the world. A histogram:
There's nothing surprising to report on a first look. It's clear that my lazy use of jQuery to handle the click event produced the prominent second peak at around 0.75 seconds for those tapping on a screen rather than clicking with a mouse. Anecdotal reports from Facebook suggested this was likely the explanation. The rest of the stray data outside the reasonable range is nothing more than the result of my poorly coded user experience. Sorry, folks.
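For what it's worth, both problems have straightforward fixes. If the second peak really comes from the synthesized click event, which some mobile browsers hold back by roughly 300 ms while checking for a double-tap, then listening for pointerdown instead makes the handler fire the moment a finger lands, and a couple of range checks would have caught most of the junk times. Here is a rough sketch in plain JavaScript of what I mean, not the code I actually used; the #go button and the recordTime() function that sends the result onward are both made up for illustration:

```javascript
// Rough sketch (not the original code): time a single stimulus-to-tap interval
// and discard values that no human reaction could plausibly produce.
const MIN_MS = 100;   // faster than ~100 ms is almost certainly a false start
const MAX_MS = 2000;  // slower than ~2 s probably means the tab was abandoned

let stimulusShownAt = null;

function showStimulus() {
  const button = document.getElementById('go');    // hypothetical button id
  button.style.visibility = 'visible';
  stimulusShownAt = performance.now();
}

function handleResponse() {
  if (stimulusShownAt === null) return;             // pressed before the stimulus appeared
  const elapsed = performance.now() - stimulusShownAt;
  stimulusShownAt = null;

  if (elapsed < MIN_MS || elapsed > MAX_MS) {
    console.warn(`Discarding implausible time: ${elapsed.toFixed(0)} ms`);
    return;
  }
  recordTime(elapsed);                              // hypothetical: posts the time to the server
}

// pointerdown fires as soon as a finger or mouse button goes down, so touch and
// mouse users are timed the same way; fall back to click where it isn't supported.
const button = document.getElementById('go');
if (window.PointerEvent) {
  button.addEventListener('pointerdown', handleResponse);
} else {
  button.addEventListener('click', handleResponse);
}
```

The filtering could just as well happen on the server or during analysis, but rejecting obvious junk at the source would have saved the data set from needing as much cleanup later.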
This isn't the first time I've done a data collection task involving clicking a button – far from it. It's amazing what can be collected from a simple task with a low cost of entry, even a mathematical one. One of the things I wonder about these days is what tools are needed to make it easy for anyone (including students) to build a collection system like this and investigate something of personal importance. This has become much easier with tools such as Google Docs, but it still isn't easy to get a clean interface that strips away the surrounding material and makes the content the focus. For all I know, there may already be a solution out there. I'd love to hear about it if you know of one.
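One possible shape for such a tool, and this is only a sketch of an approach rather than anything I've built: a short Google Apps Script bound to a spreadsheet can be deployed as a web app, so a bare-bones page just POSTs each measurement to the script's URL and the script appends a row. The tab name responses and the time parameter below are assumptions for the example:

```javascript
// Google Apps Script (container-bound to a Google Sheet, deployed as a web app).
// Each POST with a "time" parameter becomes one row: timestamp, reaction time in ms.
function doPost(e) {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('responses');
  const time = parseFloat(e.parameter.time);

  // Apply the same plausibility filter as the page, in case of stray submissions.
  if (!isNaN(time) && time >= 100 && time <= 2000) {
    sheet.appendRow([new Date(), time]);
  }
  return ContentService.createTextOutput('ok');
}
```

The spreadsheet then plays the role of the Google Docs option above: students can open the data directly, while the page itself stays a single button with nothing else competing for attention.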
Neat. What did your students do with this data?
It ended up being a perfect motivation for standard deviation, which we hadn't yet discussed. The means were right on top of each other, but the variability of the Facebook-only data was higher, and students could see that just by looking at the values. We needed a better way to describe that variability, and standard deviation was there to save us.
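For the record, the measure we introduced is the usual one: with $\bar{x}$ the mean of the $n$ recorded times $x_1, \dots, x_n$,

$$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}$$

(the sample version divides by $n-1$ instead). It puts a single number on how far the times typically sit from the mean, so two data sets with nearly identical means but visibly different spreads get clearly different values.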
I'd love to say that we did a lot more with it, but we're getting squeezed against the end of the year, and I still have a bit further to go before reviewing for finals, so I'm setting it aside. It would be great if it inspired some students to design their own survey for their exploration/internal assessment.