Evan MacLean and colleagues write this week in PNAS about the evolution of self-control.
As a blogger, I have little interest in the subject.
But I read the paper closely, because it presents an important analysis of something well beyond self-control: the relationship of brain size and cognition across species. The list of authors includes a huge array of experimental psychologists and animal behaviorists, representing the thirty-six species considered in the analysis -- everything from scrub jays and pigeons to elephants and aye-ayes. In all these species, the experimenters assessed performance on tasks related to self-control. They found that the species' performance was predicted by brain size. Bigger-brained species tended to exhibit greater ability to control their immediate responses to stimuli in favor of a previously learned behavioral routine.
Self-control and cognition
It may not be intuitive that self-control is such an important aspect of cognition. Self-control includes the ability to inhibit responses to immediate stimuli, in favor of more useful or adaptive learned behaviors. Being able to pause and consider the best response to a situation is essential to higher cognitive abilities.
This study involved two kinds of experiments, in both of which animals learned to obtain a food reward in a certain way, but then were presented with an alternative scene in which a food reward was prominently visible to them but not immediately attainable. Animals who lack self-control immediately go for the new food stimulus, even though they can't get it. By contrast, an animal who is not overly distracted by the new stimulus, and goes back to the learned pattern to obtain the reward, is said to exhibit self-control.
As the authors describe, self-control defined in this way is relevant to fitness and an important component of being able to rely on learning to modulate behavior:
We chose to measure self-control—the ability to inhibit a prepotent but ultimately counter-productive behavior—because it is a crucial and well-studied component of executive function and is involved in diverse decision-making processes (167–169). For example, animals require self-control when avoiding feeding or mating in view of a higher-ranking individual, sharing food with kin, or searching for food in a new area rather than a previously rewarding foraging site. In humans, self-control has been linked to health, economic, social, and academic achievement, and is known to be heritable (170–172). In song sparrows, a study using one of the tasks reported here found a correlation between self-control and song repertoire size, a predictor of fitness in this species (173). In primates, performance on a series of nonsocial self-control tasks was related to variability in social systems (174), illustrating the potential link between these skills and socioecology. Thus, tasks that quantify self-control are ideal for comparison across taxa given its robust behavioral correlates, heritable basis, and potential impact on reproductive success.
So what did they find? Bigger brains correlate with greater self-control:
Our phylogenetic comparison of three dozen species supports the hypothesis that the major proximate mechanism underlying the evolution of self-control is increases in absolute brain volume. Our findings also implicate dietary breadth as an important ecological correlate, and potential selective pressure for the evolution of these skills. In contrast, residual brain volume was only weakly related, and social group size was unrelated, to variance in self-control. The weaker relationship with residual brain volume and lack of relationship with social group size is particularly surprising given the common use of relative brain volume as a proxy for cognition and historical emphasis on increases in social group size as a likely driver of primate cognitive evolution (85).
Absolute, not relative brain size
Biologists have been interested in the comparative study of brain size and cognition for more than a hundred years. Eugène Dubois, famous for finding the "Pithecanthropus erectus" skullcap and femur at Trinil, Indonesia, was the first to consider explicitly the relation of relative brain size and intelligence. From his point of view, the problem was the inverse of the one we generally face. Dubois did not know the relationship between body size and brain size across mammals. He knew that larger mammals had larger brains, and that the relationship was not linear. From that starting point, he wanted to work out how brain size should scale with body size among animals of equal intelligence.
If you blew up a Hyracotherium to the size of a horse, its brain would be much larger than a horse's brain. The evolutionary history leading from small bodies to larger bodies in the horse lineage was accompanied by a relative reduction in the size of the brain -- and that is true across every group of mammals. Blow up a capuchin monkey to human size, and its brain would be bigger than ours. Larger animals have absolutely bigger brains but relatively smaller brains. At least, if we expect that the brain should increase in a linear fashion.
One way of looking at this is that larger animals wring greater efficiency out of brain tissue than smaller animals. Another way of looking at it is that the brain consists of two parts: a part with functions that depend on body mass, and a part with functions that are independent of body mass. Growing the body requires the first to increase, but not the second.
Dubois wanted to determine just how much the brain ought to expand as body size increases. He had a very practical example -- Pithecanthropus. Its brain was considerably larger than the brains of living apes, but was it unusually so for its body size? Consider cats and lions as examples -- closely related within the same family, but extremely different in body size. In Dubois' way of thinking, cats and lions have similar cognitive abilities, similar intelligence. The differences in brain size between the two species should therefore be a simple reflection of body size. Absolute brain size would not be an indicator of differences in cognitive adaptations. Instead, we should pay attention to the brain size relative to what is expected for an animal of the same body size.
During the twentieth century, the most prominent name in the systematic study of brain size in different kinds of organisms was Harry Jerison. His groundbreaking studies of allometry culminated in the 1973 book, Evolution of the Brain and Intelligence.
Jerison proposed a model in which the cognitive specializations of some species could be explained by "extra neurons" -- the quantity of brain tissue above that predicted for the body size of those species. To express the relative brain size of a species, Jerison used the term "EQ," or "encephalization quotient": the brain size of a species relative to the size expected for species of the same body mass, derived by comparing across many related species.
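For concreteness: the version of Jerison's equation most often cited for mammals puts expected brain mass at 0.12 times the two-thirds power of body mass, with both in grams. Here is a minimal sketch of the calculation, using rough illustrative masses rather than data from any particular study:

```python
# A minimal sketch of Jerison's encephalization quotient, using the
# oft-cited mammalian scaling equation: expected brain mass (grams)
# is about 0.12 * body_mass**(2/3), with body mass also in grams.
# The species masses below are rough illustrative values only.

def encephalization_quotient(brain_g, body_g):
    """EQ = observed brain mass / brain mass expected for that body size."""
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

for species, brain_g, body_g in [
    ("human", 1350, 65_000),
    ("chimpanzee", 400, 45_000),
    ("elephant", 4700, 5_000_000),
]:
    print(f"{species}: EQ ~ {encephalization_quotient(brain_g, body_g):.1f}")
# human ~7, chimpanzee ~2.6, elephant ~1.3 with these rough numbers
```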
The extra-neuron hypothesis centered on the problem of absolute versus relative brain size. If more brain matter were associated with more intelligence, we would have a problem explaining why elephants and whales are not smarter than people, with our absolutely puny brains. A good hypothesis is that relative brain size, not absolute brain size, is important to cognitive adaptations. Humans have brains that are absolutely smaller than those of whales, but much bigger than those of other mammals with our medium body mass. Humans have a very high EQ.
Here's a hitch: Whales and elephants are very smart animals. Sure, people are smarter, but comparing every species to the single exceptional example doesn't tell us about the general trend.
MacLean and colleagues looked at one particular aspect of cognition, self-control, in a way that controls for phylogenetic relationships. Animals like elephants and scrub jays, which are smarter than their immediate relatives, tend to have bigger brains. And their results show the relative-brain-size hypothesis to be false, at least for self-control. The absolute brain size predicts the performance of different species on the tasks examined here. Relative brain size doesn't.
Given a bunch of species with lion-sized brains, and a bunch with cat-sized brains, this study should lead us to expect that the species with lion-sized brains will show greater self-control. Lions and cats aren't cognitively equivalent; lions are smarter than cats.
Group size isn't important
Considering the history of the study of brain size and cognition, I find one aspect of the study more surprising than the rest: these tasks related to self-control do not correlate with group size, at least not in the primate species studied here.
A long literature has developed on the idea that group size has driven the evolution of primate cognition. This literature mostly depends on the observation that primates with larger group sizes have relatively larger neocortex sizes. That "social brain hypothesis" intuitively makes sense: many of the determinants of fitness in social mammals involve predicting or adjusting to the social behaviors of other individuals. That is why it has earned the nickname "Machiavellian intelligence" -- the idea is that natural selection has crafted our brains to manipulate other individuals.
Self-control seems like an important skill for social life, providing a way for learned social behaviors to emerge in the buzzing, blooming world of social stimuli. Yet across primates, social group size had no predictive value for self-control. The authors suggest that social group size may still be a factor selecting for cognitive abilities in primates, just not the ones they tested for:
With the exception of dietary breadth we found no significant relationships between several socioecological variables and measures of self-control. These findings are especially surprising given that both the percentage of fruit in the diet and social group size correlate positively with neocortex ratio in anthropoid primates (86, 142). Our findings suggest that the effect of social and ecological complexity may be limited to influencing more specialized, and potentially domain-specific forms of cognition (188–196). For example, among lemurs, sensitivity to cues of visual attention used to outcompete others for food covaries positively with social group size, whereas a nonsocial measure of self-control does not (146).
This is a claim that disparate cognitive abilities have been selected for their effects in contextually narrow environments: self-control in foraging tasks is narrowly targeted toward foraging; visual attention to other individuals is narrowly targeted toward social competition. The implication is that selection can fine-tune different cognitive functions independently of each other -- otherwise known as the "modularity" hypothesis for cognition.
Instead of group size, MacLean and colleagues found that primates with broader diets have more self-control:
Within primates we also discovered that dietary breadth is strongly related to levels of self-control. One plausible ultimate explanation is that individuals with the most cognitive flexibility may be most likely to explore and exploit new dietary resources or methods of food acquisition, which would be especially important in times of scarcity. If these behaviors conferred fitness benefits, selection for these traits in particular lineages may have been an important factor in the evolution of species differences in self-control.
Are primates smart because they learned to forage for specialized foods? That would be consistent with what we know about tool use. It's also consonant with the experimental evidence on other aspects of cognition. It's a provocative idea. Although group size is related to neocortex size across primates, there are many notable exceptions that have small groups and large neocortices, or vice versa.
For example, the apes are very smart, have high diet breadth, and a huge diversity of group sizes in their natural habitats -- chimpanzees have large groups, gorillas small groups, and orangutans are often solitary. It may not be surprising to see that diet breadth has a much more stable association with cognitive adaptations, with group size being a flexible response to different habitat types, levels of predation, and other factors.
Can we draw any conclusions about human evolution from this study? It's hard to generalize -- just as we cannot conclude much about cognitive evolution from comparing whales and elephants only to humans, we can't predict much about humans by looking only at the broadest phylogenetic pattern. Yet, the importance of foraging and diet breadth to the evolution of primate cognition is provocative. The study seems to weigh in favor of tool use and foraging as primary drivers of human brain evolution, instead of social dynamics.
But that contrast is surely misleading. Human tool use and foraging depend on social cooperation and social learning. Our foraging strategy is principally a social strategy. Broad comparisons across primates are unlikely to tell us much about the uniquely human aspects of our evolutionary history. Instead, we'll have to depend on archaeology.
This kind of study is the future of the study of animal intelligence and cognitive evolution. MacLean and colleagues carried out similar experiments across a broad group of species, allowing them to compare those species in a phylogenetic framework. And they showed very clear differences between the predictive power of relative and absolute brain size. This wasn't a mishmash of results; the results were clear because of the power of the analysis.
MacLean EL, Hare B, Nunn CL and many others. 2014. The evolution of self-control. Proceedings of the National Academy of Sciences, USA (in press) doi:10.1073/pnas.1323533111
Micaela Jemison of Smithsonian Science describes a recent study by Robert O'Malley investigating the use of termites by wild chimpanzees at Gombe. The piece is a nice one for students, including video and some background on how primatologists have studied insect consumption by chimpanzees in the past.
O'Malley's work emphasized the nutritional content of the termites, including the surprising fact that some termites are up to 25% fat.
Though most of their diet is ripe fruit, chimpanzees are omnivores like humans, not only eating insects but also meat, hunting animals such as monkeys and piglets. So why would chimpanzees spend so much time, more than four hours on some days, collecting and eating such a tiny food when they could be hunting?
“Going after insects is much safer, especially if mothers have their young with them,” Power explains. “The females are a lot more patient and often more skilled at termite fishing than the males. Males often don’t stay around if they don’t quickly get a good return. The females, however, stick it out, maybe because they realize they have a guaranteed source of food.”
Of course, termite consumption may have been an important aspect of human origins, as some of the earliest bone tools seem to have been used for breaking into termite mounds. These insects have intersected with our evolutionary history in many ways, as a prominent intersection between above-ground and subterranean resources.
British Pathé, the famous newsreel producer, has released its catalog of film content to YouTube. There's not a rich backlog of film footage pertinent to paleoanthropology there, but I did find this 3-minute-long clip, "Evolution - The Other Side Of It".
The idea is that Darwinism has two sides -- a creationist may dislike the idea that we came from apes, but the apes may feel the same way about us!
It's a silent short with various captive primates dressed in human clothing -- an early example of the kind of exploitative cinema treatment of primates that has become less and less acceptable over the years. And yet, from the standpoint of its time in 1930, it may have been one of the only ways older cinemagoers would ever have encountered the idea of evolution.
I was at the meetings of the American Association of Physical Anthropologists last week in Calgary. Great to see so many friends, and to meet many new people -- I especially loved meeting some students who had recently been following the blog!
One day I was talking with a group of people, realizing just how much the subject matter of paleoanthropology has transformed during the last five years. For one thing, it seemed like there were a lot of papers about Australopithecus sediba, a species that didn't exist five years ago! We also noticed an increase in the number of papers that seemed to engage with Australopithecus africanus. By contrast, it seemed to us that A. afarensis had faded in importance compared to previous years. Maybe it was just the way it seemed -- especially with only a single podium session devoted to early hominins.
Anyway, after I got home, I decided to do a little counting through the abstracts of previous years. I looked at the abstract volume from 2008, and also from 2002 -- six years and twelve years ago, compared to this year. I did a count of the number of separate abstracts that mention each species name in human evolution -- and then also for "Neandertals" as a group. The abstracts include both podium talks and posters, and across the last twelve years the total number of presentations has increased, largely by adding more posters. I didn't do any kind of content analysis -- some abstracts mention a lot of species names because of the nature of the comparisons they have carried out, while some are especially devoted to the study of one of these species.
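For anyone who wants to repeat this kind of tally, the counting itself is simple once the abstracts are in machine-readable form. Here is a sketch of the procedure, with placeholder name variants -- the real abstract volumes are PDFs that would first need to be split into individual abstracts:

```python
# A sketch of the tally described above: count how many abstracts
# mention each taxon at least once. The name variants mirror the
# groupings discussed below; the abstract texts themselves are
# placeholders for whatever you extract from the PDF volumes.

NAME_VARIANTS = {
    "Neandertals": ["Neanderthal", "Neandertal", "Homo neanderthalensis"],
    "A. sediba": ["sediba"],
    "A. afarensis": ["afarensis"],
    "A. africanus": ["africanus"],
    "H. habilis": ["habilis"],
}

def count_mentions(abstracts, variants=NAME_VARIANTS):
    """Return {taxon: number of abstracts mentioning any variant}."""
    counts = {taxon: 0 for taxon in variants}
    for text in abstracts:
        for taxon, names in variants.items():
            # an abstract counts once, however often it repeats a name
            if any(name in text for name in names):
                counts[taxon] += 1
    return counts

# usage (hypothetical): count_mentions(abstracts_2014)
```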
Well, as you might expect, there are a lot more talks and posters about Neandertals than about the other groups. There are just many more areas of biology in which people can pursue interesting questions about this group, including genetics, which has fueled a continued increase in numbers. "Neandertals" here includes abstracts that mention the word "Neanderthal", "Neandertal", and "Homo neanderthalensis". What may be interesting is that the "Neandertal" spelling had actually declined substantially at the 2008 meetings, but has more than rebounded this year. "Homo neanderthalensis" as a variant has blipped upward from zero in 2002, although it still accounted for only 5 abstracts this year.
You can see that my initial impression about A. afarensis was wrong: There are just as many abstracts this year about A. afarensis as in past years. Likewise A. africanus has basically held steady across this time interval. I did not include it in the chart, but A. robustus (including mentions of Paranthropus robustus) also held strong with 8 abstracts this year. Homo erectus has fluctuated but remains consistently popular.
Immediately stunning is the incredible presence of A. sediba so soon after its discovery. It didn't exist in 2008. This year it accounted for as many abstracts as A. afarensis. The open-access policy for the Malapa hominins has contributed to this incredibly rapid growth in research. I'll further point out that the numbers here do not count the Paleoanthropology Society meetings, where a number of additional papers on A. sediba were presented -- including by some notable critics of the original research. People can see the material, and they are doing productive work on the fossils as a result.
Note that the timeline of A. sediba from its 2010 description to the 2014 meetings is roughly the same as the time from the 2004 description of Homo floresiensis to the 2008 meetings. Yet the scientific research output on A. sediba this year was double that of the hobbits in 2008.
By contrast, the research on Homo habilis seems to have greatly declined. Only a single abstract mentioned the species this year -- and that abstract was a phylogenetic study that mentioned a long list of other species as well. This trend seems a bit puzzling in comparison to the great interest in A. sediba, considering that the two are thematically linked as possible ancestors of later hominins. I wonder whether the difficulty of recent travel to Kenya may have contributed to researchers' decisions to avoid this topic. The other species to show a strong decline in 2014 was another East African endemic, A. boisei, which went from 7 abstracts in 2002 and 2008 down to 2 abstracts this year.
The chart isn't exhaustive -- I omitted many names, like Ardipithecus, Sahelanthropus and Kenyanthropus, that have been represented in only one or two papers a year. Even this year, five years after the publication of the monograph-length treatment of Ardipithecus ramidus in Science, only three abstracts mention the genus.
Is there a message in these comparisons?
Naturally the value of research topics will shift from year to year. I haven't looked at last year, and it would probably be even more illuminating to look at a more detailed trend. You can see that the hobbits continue to attract a good degree of interest, even without anyone being able to include new observations on them!
Carl Zimmer reviews Svante Paabo's new book, Neanderthal Man: In Search of Lost Genomes, in the New York Times: "Missing Links". Zimmer gives a balanced review; in his account the book fails as a memoir but succeeds in describing the background of a significant new area of science:
When the Neanderthal genome is finally published, Paabo is justifiably proud. We can’t begrudge him the opportunity to regale us about the news conferences and honors. But readers may start to wonder what exactly the payoff was for those many years of struggle. Reconstructing a Neanderthal genome was a tour de force, we can all agree, but why does it matter?
My perception: Many of the skilled geneticists who have made breakthroughs in ancient DNA see the Neandertals as objects, not subjects.
I score a mention late in the book, synched to the 2010 unveiling of the draft Neandertal genome. "Somebody get John Hawks some oxygen!" was the reaction to my 2010 post, "NEANDERTALS LIVE!".
I'm tremendously excited by this work. The mere fact that we now have Neandertal ancestors is transformative, extending our very concept of humanity. By doing so, ancient DNA has given us new tools to understand our own nature as cultural beings. At the same time, ancient genomics has illuminated the dynamism of ancient populations. We now know of ancient populations that archaeologists had never previously suspected, and genomics has shown the large-scale movements and mixtures among them.
So why hasn't the premier scientist in this field done better conveying the real excitement ahead of us?
Weirdly, the most provocative implications of Neandertal genomes are precisely those that many geneticists have fiercely resisted. The field is slow to relinquish its strange fascination with the simplistic idea that modern humans arose in a singular -- and simple -- event. No matter how much genetics shows that the process was complex, many still search for silver-bullet explanations: megadroughts, volcanic winters, projectile weapons, "fixed" modern human genes carrying the essence of humanity.
The Neandertal Genome Project was begun with the ostensible goal of discovering what makes us human, by contrast with the Neandertal genome. Despite the extensive sharing of DNA uncovered by the project, many scientists are still consumed with understanding the small fraction of human genomes that are not shared with the Neandertals or Denisovans. Even now, the highest intensity of investigation into Neandertal and Denisovan genetics remains devoted to finding the function of those rare human alleles that these ancient humans lack. The course of research on Neandertal genetics seems hardly to have changed after the discovery that Neandertal genes lie within us.
As an anthropologist, I see the Neandertals as subjects. Geneticists seem to regard them as objects. From a storytelling point of view, that difference makes all the difference.
A new paper last week by David Gokhman and colleagues described the pattern of methylation in the high-coverage ancient genomes from Denisova Cave.
Here is the abstract of the study:
Ancient DNA sequencing has recently provided high-coverage archaic human genomes. However, the evolution of epigenetic regulation along the human lineage remains largely unexplored. We reconstructed the full DNA methylation maps of the Neandertal and the Denisovan by harnessing the natural degradation processes of methylated and unmethylated cytosines. Comparing these ancient methylation maps to those of present-day humans, we identified ~2000 differentially methylated regions (DMRs). Particularly, we found substantial methylation changes in the HOXD cluster that may explain anatomical differences between archaic and present-day humans. Additionally, we found that DMRs are significantly more likely to be associated with diseases. This study provides insight into the epigenetic landscape of our closest evolutionary relatives and opens a window to explore the epigenomes of extinct species.
Methylation is a chemical modification to DNA, mostly involving cytosine residues that occur immediately next to guanine (so-called CpG sites). The level of methylation of CpG sites in the DNA of normal cells is very high, upward of 80%, but promoter regions of genes tend to be less methylated, and methylation of these DNA regions is inversely related to gene expression in cells.
Gokhman and colleagues were able to infer the degree of methylation in these ancient genomes by examining the distinctive pattern of damage to the DNA. Cytosine residues in ancient sequences are often deaminated, which changes them to uracil. Until this pattern was recognized, it was a major source of errors in the interpretation of ancient sequences; now the pattern of deamination has become useful as a way of recognizing genuine ancient sequences as opposed to contaminating modern sequence. The probability of deamination varies with methylation of the cytosine, and this gives a way of interpreting methylation of the original DNA.
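The logic of that inference is worth spelling out. In UDG-treated libraries like those behind the high-coverage genomes, deaminated unmethylated cytosines (uracils) are excised during library preparation, while deaminated methylated cytosines become thymine and survive to be sequenced. The fraction of reads showing T at a CpG position therefore tracks the original methylation level, scaled by the deamination rate. Here is a toy sketch of the idea -- emphatically not the authors' actual pipeline, which models this much more carefully:

```python
# Toy version of methylation inference from ancient DNA damage. In
# UDG-treated libraries, deaminated unmethylated C (uracil) is excised,
# while deaminated methylated C (thymine) survives and reads as T.
# The T fraction at a CpG site therefore tracks methylation, scaled
# by the deamination rate. This is a sketch, not the real pipeline.

def estimate_methylation(c_reads, t_reads, deamination_rate):
    """Estimate methylation at one CpG site from read counts.

    c_reads: reads showing C (undamaged cytosines)
    t_reads: reads showing T (deaminated, formerly methylated Cs)
    deamination_rate: fraction of methylated Cs expected to have
        deaminated, estimated separately (e.g., from fragment ends).
    """
    total = c_reads + t_reads
    if total == 0:
        return None                      # no coverage at this site
    t_fraction = t_reads / total
    # Only a fraction of methylated Cs have deaminated, so rescale
    # the observed T fraction by that rate (capped at 1.0).
    return min(t_fraction / deamination_rate, 1.0)

# Example: 45 C reads and 5 T reads with ~10% deamination implies
# the site was essentially fully methylated.
print(estimate_methylation(45, 5, 0.10))   # -> 1.0
```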
Methylation patterns are potentially interesting as indicators of the activity of genes in ancient cells. These patterns vary among tissues in the human body, as methylation is one way that pluripotent stem cells become differentiated into functional types. Hence, bone cells (including ancient bone cells) are different from other tissues in the body in their patterns of methylation. Moreover, because methylation of these somatic cells is not inherited in the germline, there is a lot of "noise" present in the signal of methylation in cells. Some methylation may be purely idiosyncratic, while some is strongly functional and directly determined by gene sequences.
Overall, the study finds that the methylation of ancient DNA in these bones is the same as that present in living people. But there are a good number of locations where the methylation in the ancient bones is significantly higher or lower than found in some living people, and these are potentially interesting areas of further investigation. The most newsworthy is an increase in the methylation of the HOXD9 promoter and HOXD10 gene in the ancient genomes, relative to a lower level of methylation in bone of living people. The study suggests that this methylation change may be a correlate of the limb morphology of Neandertals, in particular with their broader and relatively larger joint surfaces, curved bone shafts and shorter limbs.
I'm skeptical. As Gokhman and colleagues recognize, we have a long way before we will understand the variation of methylation patterns within human populations. They make a start with a very limited number of samples of bones of contemporary people:
A difference in methylation between an archaic and a present-day human does not necessarily imply a fixed difference between the human groups. This difference can stem from variability within a population; or from the comparison of close, yet not identical, cell types (osteoblasts versus whole bones). Hence, we compared the archaic methylation to 37 bones samples, taken from osteoblasts and whole bones (12–14). We sought reliable DMRs, in which the archaic methylation significantly differs from that in modern bones. These samples were measured with 27K arrays and provided information for ~5% of DMRs. In most DMRs, archaic methylation was significantly different from the 37 bones, therefore classified as reliable (FDR < 0.01, z-test, Tables S2, S3).
It's a start, but far from sufficient to show that these ancient DNA methylation patterns lie outside the range of living people. "Differentially methylated" is a very interesting concept, because methylation is at the borderline of continuous versus discrete variation. We will likely be able to find living people who share a similar pattern of methylation in these gene regions, and thereby investigate whether their phenotypes approach those of Neandertals. They may not be identical to the ancient genomes, but there is no question that the biology of living people overlaps with Neandertals in many of these phenotypic measures. We do have people with broad joint surfaces, relatively short limbs and robust hands and feet today. The exact morphology is not the same, but the ranges of variation overlap -- enough so to make a similar causal pathway a good hypothesis.
We should be able to investigate the variation within ancient populations in a similar way. The variation among Neandertals has been obscured by bad biological stereotyping. Sure, some of the European Neandertals have these aspects of limb morphology, but the West Asian Neandertals extend the overall variation substantially toward longer limbs with less curvature. The pattern of variation in Neandertals is complex, in other words.
It doesn't help that the high-coverage Neandertal genome comes from Denisova Cave, geographically far from any Neandertal skeletal remains that actually have limbs. And the Denisovans have no skeletal remains with limbs at all. I'll be more comfortable with this kind of work when more actual phenotypic evidence is at hand!
Gokhman D, Lavi E, Prüfer K, Fraga MF, Riancho JA, Kelso J, Pääbo S, Meshorer E, Carmel L. 2014. Reconstructing the DNA Methylation Maps of the Neandertal and the Denisovan. Science (in press) doi:10.1126/science.1250368
Today this story in ScienceNOW tumbled across my feed:
What a terrible headline!
I mean, really, what were they thinking? Of all the mistakes of science writing, this is the worst -- sensationalizing a story with a pseudoshocking "question" to which the answer is obviously "no". It misleads readers about the process of natural selection.
Science, have you lost your mind and become Buzzfeed?
Likewise, BBCNews gets the story wrong, illustrating it with a photo of a bearded George Clooney and Ben Affleck:
The ebb and flow of men's beard fashions may be guided by Darwinian selection, according to a new study.
The more beards there are, the less attractive they become - giving clean-shaven men a competitive advantage, say scientists in Sydney, Australia.
Hello? Trends in beard lengths today are not caused by Darwinian selection; they are caused by culture. This isn't novel science; it is Alfred Kroeber circa 1918. These trends occur on a much faster timescale than a single generation. Of course, our attentiveness to trends might be a product of natural selection in ancient humans.
But that isn't the subject of the current beard study.
Let me step off the fainting couch about the headlines and look a bit deeper into the articles. The study described in the ScienceNOW story by John Bohannon should make a large segment of evolutionary psychologists very nervous. Research subjects scoring "attractiveness" in many past studies may have been responding to cues that were not controlled by past experimenters.
The setup: they had a bunch of guys grow beards, taking photographs along the way. That resulted in a large array of photos of the same men, bearded, stubbly and clean-shaven. They showed the photos to a large sample of women and men (presumably undergraduates in psychology classes, although the study itself is not yet online). The experimental condition:
But there was a catch: The frequency of beardiness varied in each set of photographs, ranging from rare to common. Some subjects viewed sets of photos in which most of the men were clean-shaven, while others saw mostly the heavily stubbled or bearded versions, and others saw intermediate ranges of stubble and beardiness. If frequency-dependent selection plays no role in facial hair trends, the context shouldn’t matter.
But the context did matter. When facial hair was rare among faces, beards and heavy stubble were rated about 20% more attractive. And when beards were common, clean-shaven faces enjoyed a similar bump, the team reports online today in Biology Letters. The effect on judgment was the same for men and women.
OK, so people score relative novelty higher in attractiveness. That means whenever they are judging photos on "attractiveness" of a trait, they are likely to be filtering their assessments through the frequency of the trait. Our views on attractiveness are subjective, and depend on what else we're looking at.
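Negative frequency dependence of this kind has a well-known mathematical signature: it pushes toward a stable mix of types rather than fixation of either one. A toy replicator-style iteration shows the logic, assuming purely for illustration that the ~20% rarity bonus acts as a direct adoption advantage -- which, as I discuss below, is exactly the leap such studies rarely justify:

```python
# Toy negative frequency-dependence: each style gets a boost in
# proportion to its rarity, echoing the ~20% bonus reported for
# rare styles in the study. Treating the attractiveness score as a
# direct advantage is the illustrative assumption questioned below.

def step(p_beard, s=0.2):
    """One round of frequency-dependent updating of beard frequency."""
    w_beard = 1 + s * (1 - 2 * p_beard)   # advantage when rare
    w_shave = 1 + s * (2 * p_beard - 1)   # mirror image for clean-shaven
    mean_w = p_beard * w_beard + (1 - p_beard) * w_shave
    return p_beard * w_beard / mean_w

p = 0.05                       # start with beards rare
for _ in range(30):
    p = step(p)
print(round(p, 3))             # -> ~0.5: a stable mix, not fixation
```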
So what's the problem? For years, psychologists have been examining "attractiveness" by asking undergraduates to look at photos. Sometimes the photos have been manipulated by averaging together the pixel values of a large number of portraits -- resulting in an "average" face. In other experiments, photos have been manipulated to represent more "masculine" or "feminine" forms, or to have slight asymmetries. The assumption underlying this research is that mate choice was very important in human evolution -- so important, that very slight psychological preferences toward a trait might be strongly selected.
The beard study gives a clear reason why this assumption is flawed. The effect of the environment on "preference" for bearded or unbearded men is everything. In the study, the environment is manipulated by the experimenter. In human societies, the relevant environment is manipulated by culture. If the environmental variance is so high, the evolvability of such preferences will be very low.
Others have pointed out that there is very little evidence that the preferences reported in such experiments actually correlate with mating behavior by the same population of undergraduates, let alone ancient humans.
The beard study points to an unrecognized frequency effect in this kind of research. Why do research subjects show a slight bias toward one kind of photo as opposed to others? We should now suspect that a slight variability in the frequencies of distractor variables might explain the significant but weak preferences exhibited in many such studies.
In any event, culture creates a powerful background to a person's responses to photographs in an experimental context. That background fluctuates rapidly on the timescale of individual lifetimes. The environmental variance that results from such shifting preferences makes it very difficult to imagine a dedicated adaptation to preferences about beards operating subconsciously in people today.
Popular Mechanics asks, "How Many People Does It Take to Colonize Another Star System?". The basic problem is that a multigenerational star voyage requires the trekkers to mate and reproduce many times while maintaining a limited population size. Too few people, and the colonists will rapidly lose genetic diversity by genetic drift.
The article starts by noting the work of anthropologist John Moore on the question. Moore concluded that the social structure necessary to prevent inbreeding was essentially that of clans or extended tribes of hunter-gatherers -- strong kin avoidance rules to prevent inbreeding and a population size of 150-300 people.
A new paper by Cameron Smith focuses instead on the worst case scenarios, concluding that the "safe" population size would be much higher:
Entire generations of people would be born, live, and die before the ship reached its destination. This brings up the question of how many people you need to send on a hypothetical interstellar mission to sustain sufficient genetic diversity. And a new study sets the bar much higher than Moore's 150 people.
According to Portland State University anthropologist Cameron Smith, any such starship would have to carry a minimum of 10,000 people to secure the success of the endeavor. And a starting population of 40,000 would be even better, in case a large percentage of the population died during the journey.
A number as large as 40,000 people would enable the mission to approximate the effective population size of the entire human population of Earth before 100,000 years ago or so. For reasons I've discussed many times (for example, "Cultural impedance, demographic growth, effective population size"), the effective population size of humans does not mean that the actual number of people in the ancestral human population was very small. With Pleistocene people, many processes reduced genetic diversity (and hence our estimates of effective population size) even though the actual population size was relatively large -- on the order of a few hundred thousand people.
Forty thousand is pretty small, but on a random-mating voyage of a hundred generations it should basically approximate the Wright-Fisher population model. Smith further examines scenarios in which "catastrophic" events may affect the mission, greatly reducing genetic variation (or eliminating it). In these scenarios, a population dispersed across multiple "ships" would create a buffer, but each of those units would have its own small-population-size issues, arguing for a bigger mission.
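The arithmetic behind that intuition is simple. In an idealized Wright-Fisher population, expected heterozygosity decays by a factor of (1 - 1/(2N)) each generation. A back-of-the-envelope comparison of Moore's and Smith's numbers over a hundred generations -- treating the crew as an ideal, constant-sized, random-mating population, which is of course the idealization at issue:

```python
# Back-of-the-envelope drift arithmetic, using the standard
# Wright-Fisher expectation for heterozygosity decay:
#     H_t = H_0 * (1 - 1/(2N))**t
# The population sizes are those discussed above.

def remaining_heterozygosity(n_effective, generations):
    """Fraction of initial heterozygosity remaining after drift."""
    return (1 - 1 / (2 * n_effective)) ** generations

for n in (150, 10_000, 40_000):
    frac = remaining_heterozygosity(n, generations=100)
    print(f"N = {n:>6}: {frac:.1%} of initial heterozygosity retained")
# N = 150 keeps ~72%; N = 10,000 keeps ~99.5%; N = 40,000 keeps ~99.9%
```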
I'll take a deeper look at Smith's upcoming paper after the AAPA meetings. These future scenarios really help us think about the limits that existed in past human populations, which were less constrained in some ways but more so in others. Moore's approximations for a future "generation ship" mission incorporated social dynamics in ways that have clear parallels in the past (his ethnographic work focused on small village societies of Southeast U.S. native peoples). Smith's simulations refer to a larger-scale aspect of genetic drift.
The interaction of these two factors does not easily reduce to equations, but creates the most interesting anthropological questions. How much social control is necessary to maintain the viability of a colonizing population, not only genetic viability but also cultural viability? What is the balance between shared goals and practical needs?
I question the general assumption that such a mission would "need" to maximize the genetic diversity of the colonists. In fact, many potential groups of interstellar colonists might prefer to reduce their genetic diversity.
Imagine a small group of people with sufficient motivation to divorce themselves from humans on Earth, launch across interstellar space for thousands of years, and force their descendants to live within a tiny habitat, all with the expectation that their common offspring will colonize a new planet a hundred generations hence. The kind of internal discipline necessary to motivate such a scheme is more like a cult than an open society.
Cults enforce cooperation by means of social isolation.
By increasing the relatedness of the population, they could enhance the incentives for cooperative behavior. In effect, people boarding the voyage on Earth would be assured that their descendants would not merely be notional descendants but in fact strongly genetically similar to them. Such groups don't want to board this ship with a random selection of humanity; they want to board with their cousins. That reduced level of genetic variation would generate a larger genetic payoff for each individual launching from Earth.
They are not going to create a microcosm of Earth's genetic variation. They're going to create a colony of clones.
UPDATE (2014-04-05): A number of Twitter commentators have suggested that you don't need to have so many people if you have a store of frozen sperm and egg cells. In essence, you could create a human version of the Long Term Evolution Experiment run by Richard Lenski. By unfreezing the eggs and sperm of previous inhabitants -- or unrelated eggs and sperm brought from Earth -- the colonists could add whatever genetic variation is required, or "rewind" the colony to a previous gene pool.
That can be done with today's technology. We may question whether freezing is really a viable strategy across 1000 years. As yet we only know that freezing works over 20 years or so, and we don't have good statistics yet about whether germ cells or embryos frozen over longer time periods have any increased chance of mutations or other long-term effects. Still, the level of risk already faced by interstellar voyagers is likely much larger than a slight increase in risk from long-term freezing of germline tissue.
It would make perfect sense to have a large store of adaptive variants available to deal with whatever challenges the colonists face on their new world. Imagine that they settle a world with only two-thirds of Earth's sea-level atmospheric pressure. An influx of frozen Tibetan sperm would bring in genes to adapt the colonists to their hypoxic world.
Of course, we may also consider that a starship 50 or 100 years from now will be leaving an Earth with vastly greater genetic engineering potential than we currently possess. Colonists after the ninety-eighth generation might well prefer a bit of genetic tinkering with their own gene pool to unfreezing vastly different DNA from an Earthling stranger. In that sense, the colonists will need neither a large population nor a giant frozen sperm vat. They can build as they go.
This brings us back to social dynamics. The colonists must maintain their motivation and ability to put the colonization plan into motion as they arrive at their destination. Death of the colony is not the only risk; their culture may slowly devolve until they are nothing but interstellar lotus-eaters. We don't know how large a cultural group is necessary to maintain the necessary traditions over a thousand-year voyage.
That seems like an interesting problem.
Zach Zorich has written an interesting article for Nautilus, about the optical illusions caused by firelight flickering across parietal art: "Early Humans Made Animated Art".
From the article:
When Lascaux cave was discovered in 1940, more than 100 small stone lamps that once burned grease from rendered animal fat were found throughout its chambers. Unfortunately, no one recorded where the lamps had been placed in the cave. At the time, archeologists did not consider how the brightness and the location of lights altered how the paintings would have been viewed. In general, archeologists have paid considerably less attention to how the use of fire for light affected the development of our species, compared to the use of fire for warmth and cooking. But now in Lascaux and other caves across the region, that’s changing.
We can take the idea of "getting into the mind of the artist" too far. One of the main techniques to depict motion in artwork is "superposition" -- drawing a body part of an animal in two or more different positions, to imply the motion from one position to the other. That's a technique commonly used by cartoonists today. But it's also done by children doodling.
Much of the impression of power and mystery comes from place. Ancient humans added to this substantially with their artistic sense.
In February, I revisited the 1964 definition of Homo habilis by Louis Leakey, Philip Tobias and John Napier: "Leakey, Tobias and Napier on the definition of our genus". The paper is remarkable because it represents the first major attempt to enlarge the anatomical definition of our genus. In that review, I referred to Bernard Wood's work on the Koobi Fora fossils. I also pointed to Wood's later essay, published with Mark Collard, that attempted to shrink the definition of Homo, expelling Homo habilis from our genus entirely.
I still have a post in the queue reviewing that paper closely. But in the meantime, Bernard Wood has published a new essay in Nature commemorating the fifty-year anniversary of Leakey, Tobias, and Napier's paper: "Human evolution: Fifty years after Homo habilis".
Wood's essay also touches on the issue of enlarging our genus. He adds a perspective on the role of OH 5 ("Zinjanthropus") on this problem of the scope of Homo:
Because Nutcracker Man was found in the same layers as the stone tools, the Leakeys assumed that it was the toolmaker, despite its odd appearance. But when Louis announced the discovery, he was not tempted to expand the definition of Homo. That would have eliminated any meaningful distinction between humans and australopiths. Instead he erected a new genus and species, Zinjanthropus boisei (now called Paranthropus boisei), to accommodate it.
That is an important idea. I should note that the reduction of Australopithecus into Homo was not an insuperable barrier. By this time, the surfeit of genus names had begun to embarrass those anthropologists who were trying to adopt the ideas of the modern synthesis. Increasingly, Paranthropus, Telanthropus, Atlanthropus, Cyphanthropus, and the like were rejected by the field's young vanguard. At the time that Mary Leakey found Zinjanthropus, only a minority had begun to use Homo erectus in the place of Pithecanthropus. Only eight years earlier, Ernst Mayr had staked out the position that all hominins should be lumped into Homo. That was the range of views, with a larger, more expansive Homo slowly gaining ground among theorists. But that argument would be much easier with the extremely humanlike reconstruction of OH 7 in 1964, than with OH 5 in 1959.
Wood recounts his own experiences analyzing the Koobi Fora fossil collection, during which he became convinced that the Homo habilis sample actually includes two species. He was not the scientist who named Homo rudolfensis, but his analysis was the first to give it teeth (literally), as he staked out a collection of additional mandibular and cranial specimens in addition to KNM-ER 1470.
Today, Wood believes that the Homo habilis and Homo rudolfensis samples should be placed into yet a third genus.
Although H. habilis is generally larger than A. africanus, its teeth and jaws have the same proportions. What little evidence there is about its body shape, hands and feet suggest that H. habilis would be a much better climber than undisputed human ancestors. So, if H. habilis is added to Homo, the genus has an incoherent mishmash of features. Others disagree, but I think you have to cherry-pick the data to come to any other conclusion. My sense is that handy man should belong to its own genus, neither australopith nor human.
I don't necessarily disagree about the mishmash of features. But I don't know which specimens Wood has in mind when he says that H. habilis is larger than A. africanus, unless he is referring to brain size and not body size.
At any rate, proposing a name for such a third genus is probably fruitless under the rules of taxonomic nomenclature. The species name habilis was erected as a species within Homo, while rudolfensis was initially suggested as belonging to Pithecanthropus (which almost everybody considers to be synonymous with Homo, and whose type specimen, from Trinil, is now part of Homo erectus for most scientists). Homo habilis and Homo rudolfensis may not even be sister species, so it would be nonsensical to name a new genus based only on the assumption that they are monophyletic. Wood and Collard did not provide a new genus name for them; they preferred to put H. habilis and H. rudolfensis into the existing genus, Australopithecus. And Wood acknowledges that many anthropologists think these two species should be collapsed into one -- and some think they both should be collapsed into Homo erectus.
Wood does not mention in his essay the one significant species with a mosaic of australopithecine-like and Homo-like anatomy: Australopithecus sediba. To me, this is the most interesting comparison with the classic Homo habilis definition. Whatever we include in Homo habilis (or break off into Homo rudolfensis), we have no comprehensive anatomical sample across the skeleton. We don't have any idea what a Homo habilis pelvis would be like, nor can we exclude that known pelvic remains like KNM-ER 3228 may be Homo habilis (or Homo rudolfensis). The pelvic anatomy of A. sediba is more Homo-like than the pelvis of A. africanus or A. afarensis, and the hand has features that are more Homo-like than those of the type specimen of Homo habilis. We don't know what a Homo habilis femur looks like. Many have argued that the OH 62 skeleton is Homo habilis -- with its fragmentary proximal femur -- but that depends on an argument about its similarity to STW 53, a specimen that is only Homo habilis by the most generous stretch of taxonomic liberalism.
Consider the problems posed by the Malapa sample in 2010, compared to the Olduvai Homo habilis sample in 1964. The Homo habilis sample presented a brain size larger than that of known australopithecines but nonetheless smaller than in any known sample of Homo; a human-like hand and foot with some primitive features; and teeth with a handful of features that distinguished them from the known australopithecine (A. africanus, A. robustus and A. boisei) samples. The next thirty years showed that the hand and foot evidence were at best uncertain, leaving the teeth and brain. Malapa initially produced two skeletons, with australopithecine-like brain size, body size, limb proportions and foot, but a series of Homo-like details in the pelvis, teeth, and skull. We now know that both samples represent anatomical mosaics of primitive and derived characteristics. Leakey, Tobias and Napier used their small set of evidence of shared features to enlarge the definition of Homo. Berger and coworkers declined to further enlarge the definition of Homo on the basis of new shared features, instead emphasizing the overall australopithecine-like adaptive pattern represented by the brain and body size.
What a mess early Homo is! Wood draws from this mess the conclusion of repeated divergence and speciation with widespread parallelism.
The ongoing debate about the origins of our genus is part of H. habilis's legacy. In my view, the species is too unlike H. erectus to be its immediate ancestor, so a simple, linear model explaining this stage of human evolution is looking less and less likely. Our ancestors probably evolved in Africa, but the birthplace of our genus could be far from the Great Rift Valley, where most of the fossil evidence has been found.
Far from the Great Rift Valley. Hmmm...
Wood B. 2014. Human evolution: fifty years after Homo habilis. Nature 508:31-33. doi:10.1038/508031a
Becca Peixotto has two updates on the Rising Star Expedition blog today, describing some of the excavation activities this week. In "What's new at this week's excavation", she highlights the smaller scale of the dig and gives some insight about how the cave has changed at the end of this rainy South African summer.
The late summer in the Cradle of Humankind this year was remarkably rainy. In karst regions like this, rainwater does not stay long on the surface in creeks or rivers. Instead it quickly seeps through cracks in the dolomite.
The water sometimes pools in the caves, like the chilly puddle at the narrowest squeeze in the Postbox belly-crawl. No staying dry in that one! The water may hang in the air like it does in the final chamber where there is no standing water but where the humidity registers on our air monitors at 99.9%, or the rainwater may become part of an existing drip, dissolving the dolomite and slowly depositing calcium carbonate as a stalactite or other speleothem.
Becca's second post, "Young Visitor Helps Recover First Top Jaw From the Site", describes the visit of a young "Reach for a Dream" participant to the site, where he helped direct the excavation through the comms.
She then turns to this week's progress:
We accomplished our initial goal of recovering the maxilla (the part of a skull containing the upper teeth) and long bone that have been calling out to us since their initial uncovering four months ago.
This long bone was one of the earliest pieces to be uncovered, but its size and orientation prevented easy removal throughout the November dig because with each bit revealed, other bones were found on top of or adjacent to it.
That accomplishes the first of the week's excavation goals, and further new fossils have come out.
Ten years ago I published a paper on the failure of cladistics to resolve questions of early hominin relationships. My study used computer simulation to produce a very large number of small "fossil" samples drawn from populations that evolved entirely under random genetic drift, with every anatomical character accurately measured and independent of every other character. This scenario was unrealistically good in many respects compared to the real fossil record, where characters are not independent, can be distorted by postdepositional processes, and often evolve in parallel under natural selection. What I found is that many of the small samples in the hominin fossil record are not good enough to test hypotheses about their phylogeny.
The tests in this paper show that parsimony recovers a correct phylogeny in nearly 100% of cases where either sample sizes or the number of independent characters are large. But for the foreseeable future, most hominid taxa will be known from only very small samples, and there can only be a very limited number of independent characters observable on fossil skeletal remains. This paper shows that simple parsimony in such cases will often fail to obtain correct results, and the lack of statistical tests for sample adequacy in phylogenetics has meant that until now, paleoanthropologists have not commonly known to what extent their phylogenetic models are falsely influenced by the factors examined here. Many paleoanthropologists may understand that small sample size, correlations among characters, heterogeneity of samples, and other issues pose barriers to phylogenetic research, but nevertheless may feel that cladistics analyses of fossil hominids provide successively better approximations of the truth. However, the results of this paper show that the output of parsimony analyses does not follow the innate statistical instincts that most researchers may have developed in other analytical contexts; indeed, they can be paradoxical, as discussed below.
Large samples work. Small samples mislead. Worse, including small samples in a study with large samples can lead to incorrect arrangements of the large samples.
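The flavor of those simulations is easy to reproduce in miniature. Below is a toy version, far simpler than the 2004 study: binary characters evolve by random state flips along a known four-taxon tree, and we ask how often parsimony uniquely recovers the true unrooted topology as the number of characters grows. The flip probabilities here are arbitrary choices for illustration:

```python
import random

# Toy version of the sampling problem: characters evolve by random
# flips along a known tree ((A,B),(C,D)); parsimony then has to
# recover the true unrooted topology AB|CD against two alternatives.

SPLITS = {
    "AB|CD": (frozenset([0, 1]), frozenset([2, 3])),
    "AC|BD": (frozenset([0, 2]), frozenset([1, 3])),
    "AD|BC": (frozenset([0, 3]), frozenset([1, 2])),
}

def simulate_character(p_internal=0.2, p_tip=0.3):
    """Evolve one binary character down ((A,B),(C,D)) from state 0."""
    flip = lambda s, p: s ^ (random.random() < p)
    left = flip(0, p_internal)     # ancestor of A and B
    right = flip(0, p_internal)    # ancestor of C and D
    return [flip(left, p_tip), flip(left, p_tip),
            flip(right, p_tip), flip(right, p_tip)]

def parsimony_score(states, topology):
    """Minimum state changes for one binary character on a 4-taxon tree."""
    ones = frozenset(i for i, s in enumerate(states) if s)
    if len(ones) in (0, 4):
        return 0            # constant character: no changes needed
    if len(ones) != 2:
        return 1            # singleton: one change on any topology
    return 1 if ones in SPLITS[topology] else 2   # 2-2 split

def recovery_rate(n_characters, trials=2000):
    """How often parsimony uniquely favors the true tree AB|CD."""
    wins = 0
    for _ in range(trials):
        chars = [simulate_character() for _ in range(n_characters)]
        scores = {t: sum(parsimony_score(c, t) for c in chars)
                  for t in SPLITS}
        best = min(scores.values())
        if scores["AB|CD"] == best and \
                sum(v == best for v in scores.values()) == 1:
            wins += 1
    return wins / trials

for n in (5, 20, 100):
    print(n, recovery_rate(n))   # recovery climbs toward 1.0 with characters
```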
This positivist outlook is reflected by a common, but fallacious, perception: that phylogenetic research has been converging on the “correct” answers, with the “problem” preventing stable evolutionary trees to be drawn being the continual appearance of new specimens and species. But if the samples available to test hominid phylogenetic hypotheses were statistically sufficient, then analyses would be very unlikely to change when new specimens or species were added. Recent discoveries of early hominids confirm the substantial possibility of change in the current most parsimonious phylogenetic hypotheses. For example, the possible addition of Kenyanthropus (Leakey et al., 2001) as a sister taxon to H. rudolfensis would either remove H. rudolfensis from the Homo clade or it would remove the Homo clade as a sister to Australopithecus. In any event, the topology of basal nodes in the phylogeny (including relationships that are in complete consensus among pre-1999 cladistics studies) could be completely rearranged. That this might occur on the basis of the few apparently derived similarities between two specimens, KNM-WT 40000 and KNM-ER 1470, is strong proof of the statistical weakness of the data. It also implies that even the interrelationships of relatively large samples such as those assigned to A. afarensis and H. habilis may be contingent on the most parsimonious arrangement of other quite small samples. We can expect that other new hominid taxa, including Orrorin, Ardipithecus, Australopithecus garhi, and possibly Sahelanthropus, will therefore further disrupt our previous understanding. With the addition of each new taxon, the number of possible hominid phylogenies grows exponentially greater, and with this number grows the number of ways that phylogenies may be in error.
Since 2004, many paleoanthropologists have done better at acknowledging the weaknesses of parsimony analysis. Most substantial discoveries (for example, Ardipithecus ramidus in 2009 and Australopithecus sediba in 2010) have been published with cladograms placing them among known hominin samples. But in these cases the results have been discussed very cautiously, with emphasis on the drawbacks of the other, less complete specimens used in earlier studies of hominin phylogeny. Specimens that preserve both cranial and postcranial remains have shown how biased the study of purely cranial characteristics can be.
What these examples do not present -- at least not yet -- is more than one or two specimens for most characters. So they underrepresent the variability within species, preventing us from telling which characters are fixed and which vary. Small sample size remains a severe constraint on our ability to test hypotheses of relationships. What we know about early hominins depends disproportionately on the Hadar, Sterkfontein and Swartkrans samples -- and on the attendant assumption that each of these samples mostly represents a single-species assemblage. In each case, the variability represented is very extensive, which shows how limited our ability is to sort out genuinely mixed-species assemblages like the one represented in the Turkana basin between 2 million and 1.5 million years ago.
Our knowledge of large hominin samples is very good, and we can be fairly confident about their relationships. But even in those cases there is ambiguity. For example, is A. africanus closer to Homo than A. afarensis is? That depends on how we constitute the samples and which characters we include. The question of which characters to include seems deceptively simple -- include everything! But the more we include, the more we must rely on singular specimens.
At an extreme, we turn to features like the upper-to-lower limb proportion. This would seem to have strong adaptive relevance, and the lower limb is clearly relatively longer in humans and Homo erectus than in earlier hominins. Many scholars have argued that the upper-to-lower limb length ratio in AL 288-1 (Lucy) is more humanlike than in several later fossil skeletons, including OH 62, a specimen often attributed to Homo habilis. But until recently Lucy's was the only skeleton with both upper and lower limb elements sufficiently preserved to estimate length. To compare other "species", researchers were forced to compare the dimensions of joint surfaces, or to estimate bone length based on regressions from joint dimensions or small portions of bone shafts. The discussion of OH 62 has been particularly protracted, with some scholars arguing for humanlike proportions and others for more apelike proportions, based on the same bone fragments. In other words, the question comes down to "character analysis" -- the detailed consideration of how a character develops, how it varies within samples, and how it should be scored on fossil specimens. As long as we count characters independently in a cladistic study, without considering the sample sizes behind those characters or our confidence in their character analysis, our comparisons will be limited by the accuracy of the smallest samples.
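As an aside on the regression approach mentioned above: it amounts to ordinary least squares on a reference sample, and the point estimate it returns hides exactly the small-sample uncertainty at issue. A minimal sketch with entirely hypothetical numbers (real studies use large comparative datasets and report prediction intervals):

```python
import numpy as np

# Hypothetical reference sample: femoral head diameter (mm) vs. femur length (mm).
# Every value here is invented purely for illustration.
head_mm = np.array([38.0, 40.5, 42.0, 44.5, 46.0, 48.5])
length_mm = np.array([390.0, 408.0, 426.0, 449.0, 463.0, 485.0])

slope, intercept = np.polyfit(head_mm, length_mm, 1)  # ordinary least squares

fossil_head_mm = 36.0  # hypothetical fossil preserving only the joint surface
estimate = slope * fossil_head_mm + intercept
print(f"estimated femur length: {estimate:.0f} mm")
```

The number printed looks precise, but it extrapolates below the reference range from six hypothetical individuals; nothing in a cladistic character matrix records that fragility.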
People have often argued that previously unknown results are credible if a study also replicates results on which prior work largely agrees. That is, credibility can be judged as a function of consistency with earlier work. I considered this issue in my 2004 paper:
It was no surprise, for example, that A. robustus and A. boisei were grouped as sisters in most cladistic analyses, or that A. afarensis was an outgroup to later hominids, or that H. habilis and H. rudolfensis were often grouped with later Homo. The original descriptions of the fossils pointed out the derived resemblances in each of these cases, and there has been relatively little disagreement on any of these points since the fossils were unearthed. Although the inclusion of these well-documented sister-group statements may be a minimum standard of credibility for a cladogram, they convey no necessary confidence in the results of the method for new, unknown, or disputed relationships. Different cladistic analyses of fossils do not sample different possible worlds in the same way as the simulations presented in this paper; they apportion a single set of observations in different ways. Because the observations are the same, the results must agree—absent differences in character analysis or parsimony assumptions—and we should expect unanimity of analyses even if they are statistically inadequate.
Small samples may lead to wrong results, and they are likely to lead to the same wrong results no matter how many times we look at them. The only way to do better is to increase the sizes of samples.
None of this means we shouldn't use parsimony approaches. But we should pay much more attention to the results from analysis of larger samples. And we should be very critical of the composition of those samples. Particularly bad are the surface lag deposits representing landscapes that may have had multiple species on them. In these cases, each specimen may have hundreds of thousands of years of uncertainty in its provenience, and may be attributed to a "species" based on nothing more than the local abundance of dental remains across a half-million year span.
Hawks J. 2004. How much can cladistics tell us about early hominid relationships? American Journal of Physical Anthropology, 125(3), 207-219. doi:10.1002/ajpa.10280
Ann Gibbons reports from a recent conference in Spain about new work that has sequenced a whole genome from a 45,000-year-old femur from Siberia: "Oldest Homo sapiens Genome Pinpoints Neandertal Input". The femur is, so far, a context-free find from a riverbank, so it isn't correct to call it an "Upper Paleolithic" specimen, though its radiocarbon date puts it into that time frame for this region of the world. The overall genome of the specimen is similar to living people rather than to Neandertals, and the investigators (led by Svante Pääbo) are calling it the earliest modern human specimen yet to yield a whole genome.
Because it is a report on a conference presentation, there are very few useful details. This is the most interesting of the results reported:
Because all living people in Europe and Asia carry roughly the same amount of Neandertal DNA, Pääbo's team thought that the interbreeding probably took place in the Middle East, as moderns first made their way out of Africa. Middle Eastern Neandertal sites are close to Skhul and Qafzeh, so some researchers suspected that those populations were the ones that mingled. But the team's analysis favors a more recent rendezvous. The femur belonged to an H. sapiens man who had slightly more Neandertal DNA, distributed in different parts of his genome, than do living Europeans and Asians. His Neandertal DNA is also concentrated into longer chunks than in living people, Pääbo reported. That indicates that the sequences were recently introduced: With each passing generation, any new segment of DNA gets broken up into shorter chunks as chromosomes from each parent cross over and exchange DNA. Both features of the Neandertal DNA in the femur suggest that the Ust-Ishim man lived soon after the interbreeding, which Pääbo estimated at 50,000 to 60,000 years ago.
Without details, there's not much I can say. There are a lot of controls I'd like to see, but as described here it is not an unexpected result: a slight increase in the representation of Neandertal DNA, coupled with more resolution on the timeline of introgression.
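The logic in the quoted passage is easy to put in back-of-envelope terms. A minimal sketch of the standard approximation (my own illustration, not the team's method): recombination breaks an introgressed segment at a rate of about one crossover per Morgan per generation, so the expected length of a surviving Neandertal tract after t generations is roughly 1/t Morgans, or 100/t centimorgans. The 29-year generation time below is an assumption.

```python
def expected_tract_cm(years_elapsed, generation_years=29.0):
    """Rough expected admixture tract length (cM) after a given elapsed time.

    Uses the standard approximation that recombination whittles an
    introgressed segment down to ~1/t Morgans after t generations.
    """
    generations = years_elapsed / generation_years
    return 100.0 / generations

# If Ust-Ishim lived ~45,000 years ago and admixture happened 50,000-60,000
# years ago, only ~5,000-15,000 years separate him from the event, versus
# ~50,000 years for living people:
for label, elapsed in [("Ust-Ishim, recent mixture", 5_000),
                       ("Ust-Ishim, older mixture", 15_000),
                       ("living people", 50_000)]:
    print(f"{label:>25}: ~{expected_tract_cm(elapsed):.2f} cM expected tract")
```

On those assumptions, the Ust-Ishim tracts should run several times longer than those in living people, which is exactly the kind of signal the quoted passage describes.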
I will point out that the methods used to detect chunks of Neandertal DNA work better with longer chunks, so the result described here is a rather tricky one. Much seems to depend on the assumption that there was only a single episode in which Neandertals contributed DNA to later populations.
That's not the assumption I would start with.
I'm not in South Africa this week, but I am following closely as a small team of excavators works underground at the Rising Star site. I've posted the agenda for the week's work at the Rising Star Expedition blog: "A critical piece of the hominin puzzle".
The initial goal of this work is to recover a hominin maxilla that is exposed in the original “puzzle box” excavation area. As the November expedition was drawing to a close, the excavation team uncovered this maxilla. When they were working that area, the excavators carefully cleared around each fragment of bone before removing it. That’s how they first uncovered the maxilla. Although they could carefully work around it, they couldn’t bring it out of the sediment because of the overlying bones. It broke everyone’s hearts to leave that piece in situ, but at the time, we estimated it would be at least two additional full days of careful excavation work to bring it out.
The team will soon see whether that was right, or whether it was an underestimate!
The short excavation this week is helping to lay the groundwork for the upcoming May workshop, which will produce the initial descriptions of the fossil sample. More than 25 early career scientists have accepted positions in the workshop, from at least 11 countries (and I have a feeling I am missing one or two countries in there). It is an accomplished group and I am looking forward to seeing them working with the fossils!
The rapidly changing field of ancient DNA has settled into a kind of normal science, as several teams of researchers have coalesced around a set of approaches to discover the genetic relationships among ancient peoples. Ewen Callaway this week in Nature profiles some of the key investigators and their recent work: "Human evolution: The Neanderthal in the family".
The headline is driven by Neandertals and the successful sequencing of even more ancient DNA from Sima de los Huesos. But the quest for the most ancient DNA is maybe the less interesting of the two developments discussed in the article. The other is the theoretical paradigm that attempts to break down the genomes of living and ancient people into parts that come from different original populations:
A few years ago, David Reich discovered a ghost. Reich, a population geneticist at Harvard Medical School in Boston, Massachusetts, and his team were reconstructing the history of Europe using genomes from modern people, when they found a connection between northern Europeans and Native Americans. They proposed that a now-extinct population in northern Eurasia had interbred with both the ancestors of Europeans and a Siberian group that later migrated to the Americas. Reich calls such groups ghost populations, because they are identified by the echoes that they leave in genomes — not by bones or ancient DNA.
Ghost populations are the product of statistical models, and as such should be handled with care when genetic data from fossils are lacking, says Carlos Bustamante, a population geneticist at Stanford University in California. “When are we reifying something that's a statistical artefact, versus when are we understanding something that's a true biological event?”
In the case of the putative ancestral connection between European and Native American groups, the ancient Mal'ta specimen from near Lake Baikal appears to confirm that such an ancient group really did exist and contributed to both present-day groups. The advantage of the "ghost population" approach is that it makes clear predictions that can be tested with ancient DNA. Probably the most famous at this moment is the hypothesis that a very ancient "ghost population" must account for some fraction of the ancestry of the Denisova genome.
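To make "clear predictions" concrete: the workhorse tools here are allele-frequency statistics such as Patterson's admixture f3, which averages (t - a)(t - b) across SNPs and goes significantly negative when the target population is a mixture of populations related to A and B. A minimal toy sketch of the idea (my own illustration with simulated frequencies, not any team's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

def f3(target, a, b):
    """Admixture f3(Target; A, B): mean over SNPs of (t - a)(t - b).

    Significantly negative values signal that Target is admixed between
    populations related to A and B.
    """
    return np.mean((target - a) * (target - b))

# Toy data: two diverged source populations and a 50/50 mixed target,
# plus an unmixed control. All drift magnitudes are illustrative.
n_snps = 100_000
ancestral = rng.uniform(0.05, 0.95, n_snps)
pop_a = np.clip(ancestral + rng.normal(0, 0.08, n_snps), 0, 1)
pop_b = np.clip(ancestral + rng.normal(0, 0.08, n_snps), 0, 1)
mixed = np.clip(0.5 * pop_a + 0.5 * pop_b + rng.normal(0, 0.02, n_snps), 0, 1)
unmixed = np.clip(pop_a + rng.normal(0, 0.02, n_snps), 0, 1)

print(f"f3(mixed; A, B)   = {f3(mixed, pop_a, pop_b):+.5f}")   # negative
print(f"f3(unmixed; A, B) = {f3(unmixed, pop_a, pop_b):+.5f}") # positive
```

A real analysis would add a block jackknife for standard errors and worry about SNP ascertainment; the point is only that the "ghost" is whatever population structure makes statistics like this fit.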
I think in many cases that "ghost population" approach is too simplistic. It is always possible to split a population into some number of dissimilar parts, but it's not obvious what the most parsimonious scenario should be. One possibility is two "pure" ancestral populations that mixed together, but there are many other possibilities -- including a single geographically dispersed population that coalesced across its range. Mathematically, the two "pure" population scenario is simpler, and it does capture some parts of the evolutionary divergence of populations. But simpler math doesn't necessarily make a more parsimonious hypothesis. When we ignore the archaeological and skeletal records in favor of math, we miss lots of information that might help shape these hypotheses.
But then, that's where anthropologists are important to understanding the past. It is interesting to see the jockeying among geneticists for new results, but you start to notice how often they describe models that don't correspond to any archaeological reality!