My spell-checker is stupid. It recently wondered whether the URL for my website, http://candicemorey.org, shouldn’t be “scaremongering”. It thinks proper names are misspelled words. This is one reason why we don’t let computers write our papers. It would also be pretty stupid of me not to let software help me catch the spelling mistakes that I’m unlikely to notice. In the same way, the statcheck package is a great tool. Unless there is something horribly awry, I’m unlikely to notice if a p-value doesn’t match an F-value for its degrees of freedom, even though this is the hook on which the interpretation of data regularly hangs, the fact that justifies discussion of some phenomenon in the first place. If we messed that up, we’d really like to know, whether we are author, reader, meta-analyzer, or peer-reviewer.
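The consistency check that statcheck automates is simple in principle: recompute the p-value from the reported test statistic and its degrees of freedom, and flag a mismatch. Here is a minimal sketch of that idea in Python using scipy (statcheck itself is an R package, and the function names and tolerance below are illustrative assumptions, not statcheck’s actual interface):

```python
# Illustrative sketch of the kind of check statcheck performs -- NOT the
# statcheck package itself. We recompute the p-value implied by a reported
# F-statistic and its degrees of freedom, then compare it to the reported p.
from scipy.stats import f

def recomputed_p(f_value: float, df1: int, df2: int) -> float:
    """Upper-tail p-value for an F-statistic with df1 and df2 degrees of freedom."""
    return f.sf(f_value, df1, df2)

def consistent(f_value: float, df1: int, df2: int,
               reported_p: float, tol: float = 0.005) -> bool:
    """True if the reported p is within a rounding tolerance of the recomputed p.

    The tolerance is a hypothetical choice to allow for reporting to two
    decimal places; it is not statcheck's actual decision rule.
    """
    return abs(recomputed_p(f_value, df1, df2) - reported_p) <= tol

# A result reported as "F(1, 38) = 4.11, p = .03" would be flagged:
# the p-value implied by F(1, 38) = 4.11 is just under .05, not .03.
```

The point of automating this is exactly the one made above: no human reader routinely recomputes these values, so mismatches survive peer review unless software looks for them.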
Now that statcheck is being pre-emptively deployed – Psychological Science has announced that they’re running it on submitted manuscripts, and the group behind statcheck has deployed it on thousands of papers – there’s a hum of panic. What if we’re wrong? Won’t this stigmatize great scientists for nothing worse than typos? It’s as though there are scientists out there who think they have never committed a mistake in print. I bet we all have. I know for a fact I have. Acknowledging that everyone makes errors should take the stigma out of this criticism and let us let statcheck do what it is good for: helping us be better.
Anxious reactions to the mass deployment of statcheck have all supposed that exposing mistakes in our work will make us look like bad researchers. But if we believe we all mess up, then researchers’ reactions to criticism are what tell us whether they are careless or conscientious. If you think that your work is important at all, if you think that your data and analyses are leading up to some truth, then surely it matters whether the details you report are correct or not. Surely you would want to make it as easy as possible for readers to assess your work. The conscientious scientist who enables and invites criticism will eventually be caught in error, but then that error will be corrected, and the work will be better than it was. If no one has ever detected an error in your work, is it because there aren’t any, or because you’ve made it impossible for anyone to find them?
Researchers who take care over quality control deserve our respect. Mere days after we announced the winners of the Journal of Cognitive Psychology’s 2015 Best Paper Award, the corresponding author Vanessa Loaiza emailed me to say that she had recently been re-analyzing the data, and had uncovered a smattering of coding errors that occurred as they transferred scores from pencil-and-paper forms into digital data frames. The authors wanted to issue an erratum correcting the record, even though these mistakes did not change the inferential outcomes. Abashed, the team of authors also felt they may no longer be worthy of the award because they had acknowledged that they made a mistake. It would be ridiculous to think more of a hypothetical research team who had never re-examined their data and so never discovered their mistakes, or of a team who discovered mistakes but decided not to rectify them, than of one who acted and responded as Loaiza and colleagues did. I’m proud that the Journal of Cognitive Psychology honored a group that is so honestly committed to high quality research.
I was troubled that these authors felt like acknowledging a mistake should somehow lead us to discount the quality of their work, even though I know the feeling personally. A few years ago, I issued an erratum to correct errors that occurred when I carelessly copy-pasted code from one analysis to another. These errors were discovered by a colleague after I shared data and analysis scripts with him. I can feel my face flush just recalling how embarrassing this was. If I were unwilling to make my code available, no one, maybe not even me, would ever have known about this blunder, and I would never have felt this acute embarrassment. But though the errors didn’t affect the inferences we reported, they affected the values in a way that puzzled my colleague. Would it really have been better for my reputation to have a colleague quietly wondering what could be wrong with my analysis rather than knowing for certain that I made a dumb but benign editing mistake, and then corrected it? I think the long-term consequences for my reputation would have been worse had I not made the data and code available for inspection. I just would not have been aware that respected colleagues were suspicious of the quality of my work.
Nuijten and colleagues’ work suggests that reporting errors are more prevalent than we imagined. Statcheck only helps us find reporting errors occurring at the final step of the data processing and analysis workflow; undoubtedly we are making even more errors than it has revealed. We should be worried when we encounter a colleague who isn’t worried about quality control, not when we hear that a colleague is correcting mistakes. We should embrace tools like statcheck, but go even further to ensure quality by also welcoming criticism of the interim steps leading to our reported analyses.