Ed Yong has a long article in Nature, "Replication studies: Bad copy", about the recurrent problem of non-replication. The piece begins with the flap over Daryl Bem's work on ESP, in which journals refused to publish non-replications by other researchers. The sad part is that many other areas of psychology follow the same protocol as work on paranormal psychology: publish highly massaged positive results, and don't encourage anyone to replicate.
One reason for the excess in positive results for psychology is an emphasis on "slightly freak-show-ish" results, says Chris Chambers, an experimental psychologist at Cardiff University, UK. "High-impact journals often regard psychology as a sort of parlour-trick area," he says. Results need to be exciting, eye-catching, even implausible. Simmons says that the blame lies partly in the review process. "When we review papers, we're often making authors prove that their findings are novel or interesting," he says. "We're not often making them prove that their findings are true."
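A back-of-the-envelope calculation (my own sketch, not from the article) shows why a literature that publishes only "positive" results fills up with findings that won't replicate. If only significant results see print, the fraction of published findings that are real depends on the prior odds of the hypotheses tested, statistical power, and the significance threshold. The numbers below are illustrative assumptions, not measured values:

```python
def published_true_fraction(prior, power, alpha):
    """Fraction of *published* (significant) findings that reflect real
    effects, assuming only significant results get published."""
    true_hits = prior * power          # real effects correctly detected
    false_hits = (1 - prior) * alpha   # null effects that fluke past the threshold
    return true_hits / (true_hits + false_hits)

# Illustrative assumptions: 10% of tested hypotheses are true,
# studies have 35% power, and the significance threshold is p < .05.
ppv = published_true_fraction(prior=0.10, power=0.35, alpha=0.05)
print(f"{ppv:.0%} of published positives are real")  # prints "44% of published positives are real"
```

Under these (hypothetical) numbers, more than half the published positives are flukes, and without direct replication nothing ever weeds them out.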
Instead of actual replication, researchers sometimes pursue "conceptual replication": showing that similar experimental designs also yield positive results:
But to other psychologists, reliance on conceptual replication is problematic. "You can't replicate a concept," says Chambers. "It's so subjective. It's anybody's guess as to how similar something needs to be to count as a conceptual replication." The practice also produces a logical double-standard, he says. For example, if a heavy clipboard unconsciously influences people's judgements, that could be taken to conceptually replicate the slow-walking effect. But if the weight of the clipboard had no influence, no one would argue that priming had been conceptually falsified. With its ability to verify but not falsify, conceptual replication allows weak results to support one another. "It is the scientific embodiment of confirmation bias," says Brian Nosek, a social psychologist from the University of Virginia in Charlottesville. "Psychology would suffer if it wasn't practised but it doesn't replace direct replication. To show that 'A' is true, you don't do 'B'. You do 'A' again."
Someone quoted in the article compares this situation to a house of cards. I agree. You are building one assumption upon another. The disturbing part is that the discipline accepts that some researchers just have a "knack" for making a particular experimental design work, while other researchers may have trouble recreating the exact conditions. That very attitude enables fraud, as we have seen repeatedly in the last few years. In science, if no one else can make the experiment work, it didn't happen.
The entire article is worth reading and deserves wide discussion.