Link: Replication crisis and status competition in psychology

The ongoing “replication crisis” in psychology has become an interesting study in the sociology of science. I don’t have anything especially deep to say about it, but I found this long post by the statistician Andrew Gelman very interesting: “What has happened down here is the winds have changed”. He focuses on a recent op-ed by Susan Fiske, who decries what she terms “mob rule” by other scientists questioning the statistical basis of some dearly-held theories in social psychology.

Gelman gives a history of the current “crisis”, pointing out that the statistical problems were apparent to some scientists as early as the 1960s. Recognition of those problems has been slow to take hold, and has yet to really change the practice of science. But it is raising uncomfortable questions for many established scientists who have built reputations on what many critics now describe as p-hacking. Fiske describes this criticism as “methodological terrorism”. The drama is ratcheting up.

I bring this up not in the spirit of gotcha, but rather to emphasize what a difficult position Fiske is in. She’s seeing her professional world collapsing—not at a personal level, I assume she’ll keep her title as the Eugene Higgins Professor of Psychology and Professor of Public Affairs at Princeton University for as long as she wants—but her work and the work of her friends and colleagues are being questioned in a way that no one could’ve imagined ten years ago. It’s scary, and it’s gotta be a lot easier for her to blame some unnamed “terrorists” than to confront the gaps in her own understanding of research methods.
To put it another way, Fiske and her friends and students followed a certain path which has given them fame, fortune, and acclaim. Question the path, and you question the legitimacy of all that came from it. And that can’t be pleasant.

Today’s established leaders attained their status under old rules that no longer work for early- and mid-career researchers. Yet those established players control tenure, grants, and recognition, and they have mostly relied on the old rules to allocate them. As a result, early- and mid-career scientists are justifiably frightened: they have to play by obsolete rules to get ahead, even though it is increasingly clear that this game cannot produce accurate, replicable science in the long term.

The best and most obvious way to make scientific progress is to tear down the nonreplicable edifice, but this inevitably requires attacking the cherished ideas of long-established players, many of whom have lucrative book and speaking careers built on their social psychology “discoveries”.

Gelman also discusses a paper co-authored by Fiske in which outside critics found statistical errors. The authors’ claim that fixing the errors “does not change the conclusion of the paper” is both ridiculous and all too true. It’s ridiculous because one of the paper’s key claims rests entirely on a statistically significant p-value that is no longer there. But the claim is true because the real “conclusion of the paper” doesn’t depend on any of its details—all that matters is that there’s something, somewhere, with p < .05, because that’s enough to make publishable, promotable claims about “the pervasiveness and persistence of the elderly stereotype” or whatever else they want to publish that day.
When the authors protest that none of the errors really matter, it makes you realize that, in these projects, the data hardly matter at all.
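To see why “something, somewhere, with p < .05” is such a low bar, here is a minimal simulation of my own (not from Gelman’s post; the group sizes and number of hypotheses are arbitrary assumptions) of a pure-noise study that tests twenty hypotheses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10_000   # simulated studies
n_tests = 20         # hypotheses examined per study
n_subjects = 30      # subjects per group

studies_with_a_hit = 0
for _ in range(n_studies):
    # Pure noise: both groups are drawn from the same distribution,
    # so every null hypothesis is true by construction.
    group_a = rng.normal(size=(n_tests, n_subjects))
    group_b = rng.normal(size=(n_tests, n_subjects))
    p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue
    if (p_values < 0.05).any():
        studies_with_a_hit += 1

print(f"Studies with at least one p < .05: {studies_with_a_hit / n_studies:.0%}")
# Prints roughly 64%, matching 1 - 0.95**20.
```

With twenty independent chances, a dataset containing no real effects at all still yields at least one “significant” result in roughly 1 − 0.95²⁰ ≈ 64% of studies. That arithmetic is the engine behind the publishable-claims machine Gelman describes.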

What a mess.