Experts are usually wrong
21 Aug 2010
Do high rejection-rates perversely make some journals more likely to be wrong?
That’s the question that occurred to me while reading a column by David Freedman (“Why experts are usually wrong”). Freedman, whose book is Wrong: Why experts keep failing us–and how to know when not to trust them, makes a big point of the high rate of medical studies that are later shown to be incorrect. Put together the desire for easy answers, the pressure for positive results in grants and publications, and a strong tendency toward groupthink, and you end up with a club of experts who propagate wrong information.
It was the passage about journals that made me think:
These journals want the same sorts of exciting, useful findings that we all appreciate. And what do you know? Scientists manage to get these exciting findings, even when they’re wrong or exaggerated. It’s not as hard as you might think to get a desired but wrong result in a scientific study, thanks to how tricky it is to gather good data and properly analyze it, leaving plenty of room for ambiguity and error, honest or otherwise. If you badly want to prove an experimental drug works, you can choose your patients very carefully, and find excuses for tossing out the data that looks bad. If you want to prove that dietary fat is good for you, or that fat is bad for you, you can just keep poring over different patient data until you find a connection that by luck seems to support your theory, which is why studies constantly seem to come to different findings on the same questions.
Take a journal that rejects 19 papers for every one it publishes. A paper will be much more newsworthy, and therefore more likely to get through the publication filter, if it has some unexpected result. Or, in some fields, if it provides a key confirmation of some bigwigs’ pet theories. Even marginal statistics may be enough to get these kinds of papers published, because they’ll attract a lot of attention and citations. A negative result in most fields, even with very strong statistics, doesn’t drive that kind of interest.
It seems to me that these conditions should make a journal more likely to contain erroneous results. Journals would like us to think that more rigorous peer review makes up for these biases, but it clearly can’t, unless reviewers demand systematically lower p-values for surprising claims.
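The argument can be made concrete with a back-of-the-envelope calculation (my own illustration, in the spirit of Ioannidis’s “Why Most Published Research Findings Are False”). A journal hunting for unexpected results is, in effect, sampling from hypotheses with a low prior probability of being true, and at a fixed p-value threshold that prior drives the error rate among published positives. The specific priors and power below are assumed purely for illustration:

```python
# Sketch of the selection effect: the share of statistically
# significant ("publishable") results that are actually false,
# as a function of how plausible the tested hypotheses were
# to begin with. Numbers are illustrative assumptions.

def false_positive_share(prior, power, alpha):
    """Fraction of significant results that are false positives.

    prior -- probability that a tested hypothesis is true
    power -- probability a true effect reaches significance
    alpha -- significance threshold (false-positive rate per test)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return false_positives / (true_positives + false_positives)

# Routine confirmations (high prior) vs. surprising claims (low prior),
# all tested at the conventional p < 0.05 with decent power.
for prior in (0.5, 0.1, 0.02):
    share = false_positive_share(prior, power=0.8, alpha=0.05)
    print(f"prior={prior:.2f}: {share:.0%} of significant results are false")
```

With these assumed numbers, a field testing mostly plausible hypotheses publishes wrong positives only a few percent of the time, while one selecting for surprising claims can have most of its significant findings be wrong, even though every study cleared the same p < 0.05 bar. That is the sense in which a high-rejection, novelty-seeking filter can raise a journal’s error rate rather than lower it.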