The science world is struggling with a ‘replication crisis’: time and again, the results of psychology studies have failed to materialise when the experiments were run a second time.
In 2010, psychologists at the University of Michigan published a paper asserting a link between washing one’s hands and how people judged their most recent decision. The experiment involved 40 undergraduates choosing between CD covers and jam pots, with half of the subjects subsequently ‘testing’ a soap by washing their hands with it. They were then asked again to choose between the CDs or jam pots.
Results banked, the paper reported: ‘After choosing between two alternatives, people perceive the chosen alternative as more attractive and the rejected alternative as less attractive. This post-decisional dissonance effect was eliminated by cleaning one’s hands.’ The study, albeit a small one, investigated the possibility that hand-washing produced a mental ‘clean slate’ effect. ‘These findings indicate that the psychological impact of physical cleansing extends beyond the moral domain,’ the paper’s authors wrote.
An inventive and potentially enlightening study, perhaps, but one that failed to replicate. A 2015 effort to replicate 100 psychological studies, all published in high-ranking journals, found that only one-third to half of the original findings held up in a subsequent study.
Should we lose faith in brainiac studies completely? Is it time to declare ‘all science is bunk’, pick up our clubs and head for the nearest cave? Not quite. The good news is that if you think a finding will not hold up to a second round of testing, you may well be right, and you don’t need specialist qualifications to lean back in your chair and say ‘I told you so’.
Suzanne Hoogeveen and Alexandra Sarafoglou at the University of Amsterdam presented 27 high-profile social science studies to 233 people. Half were psychology students, but none held a doctorate in the field. When asked whether they thought the results would replicate, their predictions were accurate 58% of the time. When also told of the strength of evidence attached to each study, their accuracy increased to 67%.
‘For those studies for which laypeople were nearly unanimous, the predictions were highly accurate,’ they report – proof, perhaps, that being one of the crowd is not always wrong-headed.
They assure us they ‘do not advocate to replace replication studies with judgments of the general public – nor with those of experts. Rather, people’s predictions may be used to provide a quick snapshot of expected replicability.’
Sarafoglou told New Scientist (Oct 12): ‘There is a strong incentive in science in general to publish sexy findings. So implicitly, people get pushed towards finding effects that are counter-intuitive.’
So next time you are ready to dismiss a study claiming that wearing orange/hoarding VHS tapes/eating scrambled eggs makes you smarter, spare a thought for the guys and gals holding the whisks.