Joseph Brean Dec 30, 2011
When news broke this year that Diederik Stapel, a prominent Dutch social psychologist, was faking his results on dozens of experiments, the fallout was swift, brutal and global.
Science and Nature, the world’s top chroniclers of science, were forced to retract papers that had received wide popular attention, including one that seemed to link messiness with racism, because “disordered contexts [such as litter or a broken-up sidewalk and an abandoned bicycle] indeed promote stereotyping and discrimination.”
As a result, some of Prof. Stapel’s junior colleagues lost their entire publication output; Tilburg University launched a criminal case; Prof. Stapel himself returned his PhD and sought mental health care; and the entire field of social psychology — in which human behaviour is statistically analyzed — fell under a pall of suspicion.
One of the great unanswered questions about the Stapel affair, however, is how he got away with such blatant number-fudging, especially in a discipline that claims to be chock-full of intellectual safeguards, from peer review to replication by competitive colleagues. How can proper science go so wrong?
The answer, according to a growing number of statistical skeptics, is that without release of raw data and methodology, this kind of research amounts to little more than “‘trust me’ science,” in which intentional fraud and unintentional bias remain hidden behind the numbers. Only the illusion of significance remains.