When results don't turn out

One of my readers pointed me to this letter in the current Nature (2/16/06) by Thomas DeCoursey:

Your recent Editorial (Nature 439, 117–118; 2006) bemoans the recurring subject of ethics and fraud in scientific research. I contend that many journals contribute to the prevalence of bad science, because, when the fundamental observation that led to the original publication cannot be reproduced, it is nearly impossible to publish a paper documenting this. Hence, controversies persist in the literature over many years, simply because the corrected story either is never published, or is not published as prominently as the initial paper.
...
Reviewers of contradictory results often ask that the authors explain how the original authors could have obtained their results. To quote a recent rejection letter, "an adequate explanation for the apparent contradictory findings is not provided". Certainly, speculative explanations can be offered for some kinds of experimental differences. But it is never possible to prove how another lab obtained data that cannot be reproduced. One can only be certain of one's own data. This demand for explanation creates serious problems in the case of scientific fraud. In a minor case, the original authors may have fudged one small set of data to 'prove' their theory. In a more serious case, fundamental observations cannot be reproduced. Whether this irreproducibility is due to outright fraud, scientific incompetence or some combination cannot be determined by the authors who try to reproduce the result and fail.

I think this complaint carries a lot of truth, except maybe at two boundary conditions. At one extreme, some medical studies get reproduced (intentionally or unintentionally) many times without exceptional comment, especially where the results are borderline.

At the other extreme, most analyses done with fossils are statistically borderline just because there aren't very many fossils. Single-gene studies of population history fall into this category, too. So we tend to rehash the same paper again and again every time a new bone comes out of the ground (OK, pretty long after, if nobody gets to see the bone...). And since most of us know most of the data pretty well, the questions tend to be about whether the method is appropriate, rather than whether the data are honest.

We generally don't need to publish papers that fail to reproduce results, because, assuming the method is the same and the data are the same, the results are going to be the same.

But notice that clause, "since most of us know most of the data"! It points to the obvious exception -- if the data are new and obfuscated, there is a lot of potential for fudging results.

More than any other journal I can think of, Nature seems to make it easy to obfuscate data. There is seemingly no policy about publishing pictures in standard orientations, no means of evaluating reconstructions, and I discovered this week that the "Supplementary Information" published online for older papers can simply vanish.