I happened to be lecturing about cause-effect relationships in my Biology of Mind course today, and will continue the subject next time. So I was interested when I saw this story from New Scientist writer Ewen Callaway:
The tendency to falsely link cause to effect – a superstition – is occasionally beneficial, says Kevin Foster, an evolutionary biologist at Harvard University.
For instance, a prehistoric human might associate rustling grass with the approach of a predator and hide. Most of the time, the wind will have caused the sound, but "if a group of lions is coming there's a huge benefit to not being around," Foster says.
OK, so this seems fairly obvious on the surface. Sure, if you could avoid the lions by jumping at every sound in the grass, then this kind of “superstition” might pay off, and would be unlikely to hurt. But then, being afraid of wind in the grass is really not a superstition. There is reliable mutual information connecting grass sounds and lions. Jumping at every grass movement might lead to lots of false alarms, but will really help avoid the lions, too.
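A quick back-of-the-envelope comparison makes the point. The numbers below are invented for illustration (they are not from Foster's work): even when only one rustle in a hundred is a lion, the always-jump strategy can have the lower expected cost.

```python
# Back-of-the-envelope cost comparison for the rustling-grass example.
# All numbers are made up for illustration; none come from the paper.

P_LION = 0.01        # fraction of rustles that are actually a lion
COST_JUMP = 1.0      # small cost of a false alarm (lost foraging time)
COST_EATEN = 1000.0  # catastrophic cost of ignoring a real lion

def expected_cost(always_jump: bool) -> float:
    """Expected cost per rustle for each of two fixed strategies."""
    if always_jump:
        return COST_JUMP              # pay the alarm cost every time
    return P_LION * COST_EATEN        # rarely pay the catastrophic cost

print(expected_cost(True))   # 1.0  (jump at everything)
print(expected_cost(False))  # 10.0 (ignore everything)
```

Under these made-up numbers the jumpy strategy is ten times cheaper on average, which is the sense in which all those false alarms "pay off."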
That’s a different kind of scenario than what we usually mean by “superstition”. Like never washing your game socks so that your team won’t lose on Saturday. Or paying the witch doctor so that the demons will spare your little girl.
We can still say that these false beliefs have little fitness cost. After all, the difference between a witch doctor and a real doctor may be important in today’s cosmopolitan society, but throughout most of human existence, faith healers, witch doctors, shamans and thaumaturges really didn’t face any competition.
But still, the simple idea of false association doesn’t seem to cover these kinds of superstitions. I mean, many of these beliefs are so complicated, with involved series of rules and taboos that must be observed. Human superstitions may involve false hypotheses of causation, but they are not only that. So it seems like there must be some active principle at work, prompting people to learn arbitrary sequences of behaviors on the basis of fictions.
Moreover, it is not easy to list cases where animals make false inferences of cause and effect in this way. To be sure, we can manipulate animals experimentally, getting them to learn arbitrary correlations (like Pavlov’s dogs) and then switch them so that they’re false. But that’s not the same thing at all! That’s like the case where other teams all collude with each other to make sure that your team always wins except when you wash your socks.
The research study is available in Proceedings of the Royal Society B, by Kevin Foster and Hanna Kokko. The paper defines “superstitious behavior” as behaviors that result from “the incorrect establishment of cause and effect”. That definition doesn’t include most of what people mean when they refer to “superstitions” – it’s really quite strictly limited to formally correct inferences based on spurious associations. The authors then derive conditions under which such errors may actually increase fitness.
Foster and Kokko assume a model in which the fitness of an individual is determined by some events in a series that includes many other events that do not affect fitness. If some of those benign events are correlated with fitness-determining events (that is, if there is mutual information linking them), then an individual might increase its fitness by exploiting this mutual information.
Of course, there is a problem of detecting the correlations. Suppose we observe for three consecutive months that it rains after the full moon. We might be tempted to think there is some causal connection here—that the full moon causes rain in some way. Maybe the moon goddess has an affinity for water. In any event, our observation may induce us to believe that other full moons will also be followed by rain. That’s inductive reasoning.
Seen this way, Foster and Kokko’s model might appear to provide an evolutionary account of the problem of induction. Their model entails a trade-off: greater precision in making correct inferences has benefits, but the effort it requires has costs. “Superstition” reigns below a certain threshold – the point at which the costs of more accurate inference are balanced by the benefits of rapid inference-making.
That’s a simple model, and the trade-off is a mathematical necessity, given the assumptions.
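As a rough illustration of that kind of trade-off (not the paper’s actual equations – the functional forms and constants below are invented), imagine an agent that can spend effort discriminating real causes from spurious ones, where the benefit of accuracy is linear but the cost of vigilance grows faster than linearly. When discrimination is cheap, the optimum is full accuracy; when it is expensive, the optimum leaves some false associations in place – the “superstitious” regime.

```python
# A minimal sketch of a precision/cost trade-off of the sort Foster and
# Kokko describe. Functional forms and numbers are invented for
# illustration; this is not the paper's actual model.

def best_effort(cost_per_effort: float) -> float:
    """Effort level in [0, 1] that maximizes net fitness, on a coarse grid."""
    def net(e: float) -> float:
        benefit = 5.0 * e                        # payoff of correct inferences
        false_cost = 1.0 * (1 - e)               # cost of acting on spurious links
        effort_cost = cost_per_effort * e ** 2   # convex cost of vigilance
        return benefit - false_cost - effort_cost
    return max((i / 100 for i in range(101)), key=net)

print(best_effort(2.0))  # 1.0 -> vigilance is cheap: be fully accurate
print(best_effort(6.0))  # 0.5 -> vigilance is costly: tolerate some superstition
```

The threshold behavior falls straight out of the assumptions: once discrimination costs rise past a certain point, an individual that stops short of full accuracy does better than one that pays for it.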
Is that all “superstition” is?
It seems to me that “false inference” is not a sufficient definition for “superstition.” For one thing, superstitions persist even in the face of considerable evidence against them. The player who loses a game may still keep wearing those dirty socks. The family whose first child dies may still employ the witch doctor when the second child falls ill.
Another thing is that superstitions do not really consist of false inferences. They consist of highly distinctive signs. Those dirty socks, or a rabbit’s foot, or oracle bones. Walking under a ladder, broken mirrors, which direction the horseshoe is hanging. “Gesundheit” after a sneeze. You don’t learn about them by inference from natural observations; you learn about them from other people who explain the “boundaries” of the signs – what counts and doesn’t count as part of the belief.
We might compose an argument analogous to the one presented by Foster and Kokko. Maybe learning about superstitious beliefs from other people is adaptive because we also learn about valid, true inferences from them, and it’s too costly for us to try to tell the difference. There may be some truth to that idea. We accept superstitious beliefs because everyone around us accepts them, and how could so many people be wrong?
But there are other aspects we should consider. We can’t freely take or leave the beliefs that are common in the society around us. Some of them are enforced at the point of a gun. Others are instilled by ritual repetition from early childhood onward. We can call it “social pressure” or “tradition,” or simply “culture,” but whatever we call it, we have to recognize that we are compelled to accept some beliefs based on what other people around us believe.
If we call a belief a “superstition,” chances are it’s already out of style. Belief in ghosts may have been the norm a hundred years ago; today it’s a smaller niche. Or a “superstition” may be simply idiosyncratic in some way – one person’s eccentric belief. There are many, many equally silly beliefs that we call much more respectable things, like “laws” or “traditions” – the difference being that large communities of people share those rituals.
Any account of superstition has to take into consideration those elements of human learning and sociality. It is far from obvious that any non-human animals have “superstition,” even in the sense of false inferences. If we look at a truly human use of the term – arbitrary sign sequences believed to alter natural phenomena and maintained by social learning – then it is doubtful that any non-humans have such beliefs. Still, phenomena like the chimpanzee “rain dance” might fit the bill, if we could suitably define the concept of “belief” in a non-linguistic context.
At any rate, this kind of logic about the costs and benefits of inference-making doesn’t seem to describe the phenomenon we really mean when we talk about “superstition.” And that’s disappointing. If we define the term in an overly simplistic way, ignoring its evident social and semiotic components, then we rob ourselves of an appreciation of the depth of human cognitive creativity.
Foster KR, Kokko H. 2008. The evolution of superstitious and superstition-like behaviour. Proc Roy Soc Lond B (online) doi:10.1098/rspb.2008.0981