
Did human language evolve as a spandrel?

A critical look at the point of view that human language did not originate from its adaptive role in communication, but from other cognitive functions.

5 min read
Early twentieth-century ceramic head with brain and labels.
Photo by David Matos on Unsplash

I’m preparing a lecture on the evolution of language. This post started out as a short note about the quote below, referring to a new book by V. S. Ramachandran. But I realized it takes some setting up to explain why I’m interested in it. One thing has always bugged me: Noam Chomsky once claimed that some of the cognitive adaptations that support language, human syntactic abilities in particular, did not evolve for their use in communication.

Human languages are diverse in their use of syntactic rules, but not endlessly so. Chomsky’s Universal Grammar model attempts to explain why the diversity is limited in the particular ways found in spoken languages. The model entails a complex flowchart of rules in the brains of babies. Exposure to spoken language selects among these rules, winnowing out the ones that are not instantiated in the grammar of the language in the child’s experience. The unused rules are, in this sense, never part of the phenotype. So how could they be targets of selection? Chomsky realized that they couldn’t be, and argued accordingly. The rules of grammar must have evolved for some other purpose, as a spandrel. Or they might emerge naturally as a physical principle from the complex brain. Whatever they are, that flowchart didn’t evolve to make babies talk.

An alternative hypothesis is that the cognitive abilities that underlie syntax, so essential to human language today, evolved under selection for their utility in communication. Steven Pinker notably promoted this view during the 1990s, presenting a basically Chomskian view of innate grammatical rules (the “language instinct”) but arguing that selection on their linguistic function could support their evolution despite the conditional nature of the rules.

The two perspectives – spandrel versus adaptation – form a classic argument in the study of the evolution of language. It makes a nice way of illustrating the importance of spandrels as potential explanations for the evolution of human characteristics. But I don’t believe Chomsky’s idea in this case, and for the purposes of my lecture I’ve been worried that it’s a bit of a straw man argument. You see, it’s an extraordinary claim that syntax emerged from some other specialized cognitive function, because we really have no reason to think that any more basic cognitive function is very much like syntax. At the least, we deserve some explanation of exactly which cognitive function this might be, and why a process seemingly ideally fitted to organize hierarchical concepts into a serial channel would not have applied to communication from the start.

You can see why I don’t like Chomsky’s idea. It’s like the aquatic ape theory of language, that’s what it is!

It’s hard for me to promote it seriously as an alternative, and so it’s hard to compile the lecture. So I was greatly heartened to discover that a new book by V. S. Ramachandran apparently presents a very similar account of syntactic abilities as a spandrel. The book is The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human. Colin McGinn reviewed the book in the New York Review of Books last week:

As to syntax, Ramachandran proposes that the use of tools afforded its initial foundation, particularly the use of the subassembly technique in tool manufacture, for example, affixing an ax head to a wooden handle. This composite physical structure is compared to the syntactic composition of a sentence. Thus tool use, bouba-kiki, synkinesia, and thinking all combine to make language possible – along with those ubiquitous mirror neurons. Just as fine-tuned hearing evolved from chewing in the reptilian jawbone structure (an exaptation, in the jargon of evolutionists) – as bones selected for biting became co-opted in the small bones of the ear – so human language grew from prelinguistic structures and capacities, building upon traits selected for other reasons. The jump to speech was therefore mediated, not abrupt.

Well, there’s a proposal for the kind of anything-but-communication cognition that could plausibly allow the evolution of syntax as a spandrel. It was tools that done it, along with a ragtag band of current neuroscience clichés.

I still don’t believe it. Some archaeologists fetishize stone tools in this way, making them the end-all of human cognitive evolution. But let’s face it: chimpanzees and even capuchin monkeys perform multistep tool operations using the brains they have. Hafting a point on a stick seems like the pinnacle of progress only when points are all the ground yields up.

Consider how many times a child will witness tools being crafted. Now consider how many times the same child hears spoken communication. The second is at least two or three orders of magnitude greater than the first. It’s not statistically credible for toolmaking to provide a cognitive basis for language. The opposite is vastly more likely.

Ironically, my current view is that much of language cognition really may be a spandrel – at least, in the broad sense promoted by Gould. In “The pleasures of pluralism”, Gould argues that most universal cognitive functions are probably spandrels:

The human brain is the most complicated device for reasoning and calculating, and for expressing emotion, ever evolved on earth. Natural selection made the human brain big, but most of our mental properties and potentials may be spandrels – that is, nonadaptive side consequences of building a device with such structural complexity. If I put a small computer (no match for a brain) in my factory, my adaptive reasons for so doing (to keep accounts and issue paychecks) represent a tiny subset of what the computer, by virtue of inherent structure, can do (factor-analyze my data on land snails, beat or tie anyone perpetually in tic-tac-toe). In pure numbers, the spandrels overwhelm the adaptations.

Gould’s computer analogy is flawed – he may have selected his computer for certain things, but somebody designed the computer to do all those other things. But our brains are not like computers, because many of our cognitive functions are learned, not designed. The appearance of design comes through bootstrapping on regularities in the environment. Nature didn’t select for language by selecting for a flowchart; it selected brains that could learn without rules specified in advance.

UPDATE (2011-03-11): The new issue of Science includes a very useful review article by Joshua Tenenbaum and colleagues that pertains to my final suggestion. The review is about Bayesian approaches to learning and how they can yield a more flexible ability to incorporate structured information.

In traditional associative or connectionist approaches, statistical models of learning were defined over large numerical vectors. Learning was seen as estimating strengths in an associative memory, weights in a neural network, or parameters of a high-dimensional nonlinear function (12, 14). Bayesian cognitive models, in contrast, have had most success defining probabilities over more structured symbolic forms of knowledge representations used in computer science and artificial intelligence, such as graphs, grammars, predicate logic, relational schemas, and functional programs. Different forms of representation are used to capture people’s knowledge in different domains and tasks and at different levels of abstraction.

The review deserves a fuller treatment, but I wanted to add it here to give a hint to the answer of what had to evolve in human minds to make us capable of learning language. We don’t need a set of rules capable of acquiring any and all human grammars, of the sort posited by Chomsky. A more flexible, hierarchical learning strategy can generalize its own rules – in basically the same way an e-mail spam filter generalizes rules. Without going extensively into Bayesian logic – which I find a distasteful ordeal – we can conceptualize a very large hypothesis space reduced by considering it structured into dimensions. Instead of needing a very long 1:1 vector of associative induction of rules from data, we need only a relatively short tree capable of spawning exceptions at each node.
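To make the spam-filter analogy concrete, here is a minimal naive Bayes text classifier in Python – a toy sketch of my own, not anything from the Tenenbaum review, with invented example data – showing how classification “rules” are induced from labeled examples rather than specified in advance:

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (label, text) pairs. Returns per-label word counts."""
    counts = {}
    for label, text in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text, alpha=1.0):
    """Return the label with the highest log-likelihood for the text,
    using add-alpha smoothing so unseen words don't zero out a class."""
    words = text.lower().split()
    vocab = set(w for c in counts.values() for w in c)
    best_label, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        score = sum(math.log((c[w] + alpha) / (total + alpha * len(vocab)))
                    for w in words)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data, invented for illustration.
examples = [
    ("spam", "win money now free prize"),
    ("spam", "free money claim prize now"),
    ("ham", "lecture notes on language evolution"),
    ("ham", "meeting about the evolution lecture"),
]
model = train(examples)
print(classify(model, "claim your free prize"))   # prints "spam"
print(classify(model, "notes from the lecture"))  # prints "ham"
```

Nothing in the classifier encodes what “spam” means; the distinctions emerge from statistical regularities in the examples, which is the sense in which a learner can generalize rules it was never given.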

Note that in such a system, the learning algorithms capable of sorting junk e-mail are very similar to those capable of returning useful search results, or for that matter, of finding best-fit phylogeographic hypotheses. The data and inferences are different in these cases, but the methods are quite similar. If neural systems evolved to develop these kinds of networks, it should be no surprise that they might be able to tackle the syntactic rules of a natural language with relatively little specialized adaptation. The system would indeed be a spandrel, co-opting neural adaptations for other kinds of cognition.

I still don’t think toolmaking had anything to do with it.

evolution of language, Noam Chomsky, adaptation
John Hawks


I'm a paleoanthropologist exploring the world of ancient humans and our fossil relatives.

