Nicole Herzog and colleagues spent half a year following a troop of vervet monkeys during the controlled burn season at Loskop Dam Nature Reserve in South Africa. They found that the vervets took readily to newly-burned landscapes, substantially increasing their home range to make use of burned habitat.
The study did not include any kind of observation or analysis of the foods the vervets were eating in the burned landscape, so it is not clear whether they were eating new green shoots or insects and other invertebrates that were exposed in the burn zone. It's also unclear to what extent the monkeys became more terrestrial to use the burned zone. But their use of space greatly increased, and they biased their behavior within their older pre-burn range toward the edges of the burned area.
The paper includes a nice synopsis of behaviors of other primates in relation to fire, including some discussion of the Fongoli chimpanzees (Pruetz and LaDuke 2010). Like those chimpanzees, these vervet monkeys approached the fire to investigate as it was happening, showed no alarm at the approaching flames, and made extensive use of the burned area after the fire. The paper cites studies of macaques and baboons also using burned areas.
They conclude with some implications for human evolution, proposing that our greater reliance on terrestrial resources and controlled use of fire may be part of a longer heritage of "pyrophilia":
While the importance of fire in human evolutionary history has long been acknowledged (Clark and Harris, 1985; Goudsblom, 1986; James et al., 1989; Pyne, 1995; Wrangham et al., 1999; Burton, 2009; Wrangham, 2009), how and why early hominins came to use this force is largely unknown. Descriptions of primates' exploitation of burned landscapes provide strong evidence that they understand fire and attendant changes to travel and foraging opportunities. That even the most terrestrially constrained of savanna-dwelling primates expand into burned territory suggests a deep phylogenetic history of fire tolerance and pyrophilic tendencies. Pruetz and LaDuke (2010) argue that savanna chimpanzees' ability to “conceptualize” fire is a synapomorphic trait within the human–chimpanzee clade. We agree, but given that our vervet subjects exhibited a similarly “conceptualized” response to fire—they did not flee but instead calmly monitored the approaching blaze as the flames, noise, and smoke drew near—we argue for an even deeper history of the trait within the primate clade. The fire-positive adaptations detailed in this and other savanna-dwelling primate populations provide clues to understanding the foundation for complex pyrotechnological innovations in our own lineage. If burned landscapes represent novel foraging opportunities, consistent and controlled use of these patches would have contributed to the selective pressures that shaped the unique morphology, mobility, and behavior of our genus.
Primates are curious and learn readily about natural phenomena that surround them. It may not be a stretch to imagine that the human lineage became fire-users by developing a knowledge and consciousness about the occurrence of fire upon the landscape. Maybe even controlled burning has a longer history than we typically think.
Herzog, N. M., Parker, C. H., Keefe, E. R., Coxworth, J., Barrett, A. and Hawkes, K. (2014), Fire and home range expansion: A behavioral response to burning among savanna dwelling vervet monkeys (Chlorocebus aethiops). Am. J. Phys. Anthropol., 154: 554–560. doi: 10.1002/ajpa.22550
Pruetz JD, LaDuke TC. 2010. Brief communication: reaction to fire by savanna chimpanzees (Pan troglodytes verus) at Fongoli, Senegal: conceptualization of “fire behavior” and the case for a chimpanzee model. Am J Phys Anthropol 141:646–650.
I've been out of the country for three weeks! What have I been up to?
Riding a giant brain!
David Roy Smith in the current Frontiers in Genetics has an opinion article that reflects on the way that next-generation sequencing technologies have changed biology: "Last-gen nostalgia: a lighthearted rant and reflection on genome sequencing culture".
Sequencing nuclear DNAs has been a different story. Even with huge datasets, state-of-the-art assembly programs, and intricate annotation pipelines, I'm incapable of producing decent nuclear genome assemblies. It doesn't help that the species I choose to investigate are poorly studied and poorly sequenced. For researchers investigating organisms for which high-quality nuclear genome assemblies already exist (i.e., assemblies based on Sanger sequencing), the payoffs of NGS have been great (Koboldt et al., 2013). Perhaps as sequencing technologies improve, personal computing power increases, and bioinformatics software become more user friendly, it will soon be easier for small labs to assemble publication-quality nuclear genomes of non-model taxa. For now, however, the promises of NGS have, at least for me, not lived up to their hype and often resulted in disappointment, frustration, and a loss of perspective.
When technology leads the science, scientists run into practical problems. The problems Smith describes here are problems that consortia solve. Armed with a large catalog of high-quality assemblies and postdocs who can reprogram bioinformatics tools if necessary, a consortium can straighten out data quality problems that would bedevil a small, isolated lab.
But it is simply not practical for most biologists to work in large consortia. Smith works on the genetics of algae. If there were to be an algal genome consortium, it would have to include most of the people working on the genetics of algae.
Maybe that's the best way to move forward. Certainly it makes little sense to have fifty small labs beating their heads against the same wall when they could be collaborating. But many scientists find a strong appeal in independence, in formulating their own research questions that they can tackle on the scale that makes sense for their labs.
Which makes this passage sadly ironic:
I was taught to approach research with specific hypotheses and questions in mind. In the good ol' Sanger days it was questions that drove me toward the sequencing data. But now it's the NGS data that drive my questions. I recently sequenced the transcriptome of a saltwater Chlamydomonas alga and have been knocking my head against the laboratory door asking, “What is the best way to market, package, and publish these data?” I'm trapped in a cycle where hypothesis testing is a postscript to senseless sequencing (Smith, 2013).
The technology promises to enable a smaller lab to take on more interesting projects. But the technology is limited in a way that requires the lab to shoehorn its work into a very limited set of empirical investigations. That transforms the lab from a hypothesis-testing lab to a technology-justifying lab.
This is where science goes to die.
Smith DR. 2014. Last-gen nostalgia: a lighthearted rant and reflection on genome sequencing culture. Frontiers in Genetics 5:146. doi:10.3389/fgene.2014.00146
Did Homo erectus get herpes from chimps?
Herpes simplex viruses infect epithelial tissues of the oral and genital tracts. Humans today are infected by two different strains of herpes simplex virus, named HSV-1 and HSV-2. A new paper by Joel Wertheim and colleagues investigates the evolutionary relationship of these two strains by putting them into the phylogenetic context of herpesviruses that infect other primates. They find that the HSV-2 virus likely came from chimpanzees to infect ancient humans sometime after 1.6 million years ago.
HSV-1 is the major cause of cold sores, which are clusters of small blisters on the lips and mouth. HSV-1 can sometimes infect the genital tract, but genital herpes is more commonly caused by HSV-2. These viruses are often transmitted by people who exhibit no obvious symptoms. They can be passed from mother to child, horizontally to sexual partners, or, in the case of HSV-1, simply to other people who share a drinking cup. Like the chickenpox virus, to which both herpesviruses are related, both persist in nerve cells throughout an individual's life. From there, they can cause occasional flare-ups but mostly remain hidden, and some individuals will be asymptomatic for their entire lives. Together these viruses infect between 50 and 100 percent of people in most nations of the world.
Other kinds of primates have their own herpesviruses. For many primates these are not yet known, including key species like gorillas, bonobos, and orangutans. Several different macaque herpesviruses have been sequenced, as have baboon and chimpanzee viruses along with a handful of other primates. The differences among the viruses from different primates occur in approximate proportion to the genetic differences between the primates themselves. This relation suggests that the common ancestors of these primates were also infected by herpesviruses, and that each species has inherited its virus from its ancestors.
But there are some notable exceptions. One has to do with the macaque, baboon and vervet monkey viruses. Wertheim and colleagues confirm that the herpesvirus carried by vervet monkeys is closely related to the baboon virus, even though baboons are much more closely related to macaques than to vervets.
The other exception is HSV-1 and HSV-2. Humans are the only primate known to have two different herpesviruses, and their evolution was not simple. HSV-2 is more similar to the chimpanzee herpesvirus, ChHV, than either of these is to HSV-1. Prior to this study, it had not been possible to resolve whether HSV-1 came into humans from a more distantly related primate, such as the orangutan, whether HSV-2 came into humans later from chimpanzees, or whether both viruses diverged in the very distant common ancestors of humans, chimpanzees, and gorillas. It even seemed possible that all ancient hominoids had two herpesviruses, which could have been retained in other apes but not yet discovered in them.
Wertheim and colleagues made a series of assumptions about the role of selection in pruning out ancient HSV mutations in order to determine the timeline of divergence of the HSV-1 and HSV-2 lineages from ChHV. The resulting model suggests that the HSV-1 and ChHV viruses diverged from each other in the human-chimpanzee common ancestor around 6 to 8 million years ago. That timeline is consistent with the hypothesis that the hominin lineage inherited HSV-1 from our common ancestors with chimpanzees -- and it is further supported by the fact that the resulting rate of sequence divergence is about right to derive today's macaque and baboon herpesviruses from the common ancestors of macaque and baboon species.
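The basic clock logic behind such dates is simple, even though the actual models are far more elaborate. Here is a minimal sketch; the distance and rate values are invented for illustration, not parameters from the paper:

```python
# Minimal molecular-clock sketch. After two lineages split, each
# accumulates substitutions independently, so their pairwise distance
# grows as roughly 2 * rate * time.

def divergence_time(pairwise_distance, subs_per_site_per_year):
    """Years since two lineages split, assuming a strict clock."""
    return pairwise_distance / (2.0 * subs_per_site_per_year)

# Illustrative values: 10% sequence divergence at a rate of
# 3e-8 substitutions per site per year.
t = divergence_time(0.10, 3e-8)
print(f"Estimated divergence: {t / 1e6:.2f} million years ago")  # ~1.67
```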
But HSV-2 is much more similar to ChHV, with an estimated divergence only 1.6 million years ago. With this kind of date, HSV-2 did not come from the human-chimpanzee common ancestor into ancient hominins. It instead must have come from chimpanzees or bonobos more recently.
Wertheim and colleagues suggest that the virus came into a hominin species that existed 1.6 million years ago, such as Homo erectus or Homo habilis. This is a possibility, but not the only one.
If, for example, the HSV-2 lineage came into humans from bonobos, its divergence from ChHV would still have to predate the chimpanzee-bonobo divergence, which occurred sometime before 800,000 years ago or so. So a 1.6-million-year-old divergence from ChHV would be consistent with a bonobo-human transmission.
Or perhaps chimpanzees have a not-yet-recognized strain of ChHV that is very divergent relative to the ChHV sequence used in this study. As in the bonobo transmission scenario, the HSV-2 strain may have come into humans very recently, even within the past 100,000 years, while still having a 1.6-million-year divergence from the known ChHV sequence.
As it stands, the ancient transmission of these viruses is an interesting problem. How did a chimpanzee virus become one of the most common sexually-transmitted diseases in humans today?
It's not an obviously prurient question. Vervet monkeys had to get the baboon herpesvirus somehow, too. Maybe the explanation for HSV-2 is similar to that suggested for HIV from endemic SIV in West African chimpanzees -- a transmission associated with human hunting and consumption of apes. If the transmission of HSV-2 really was coincident with the divergence from the chimpanzee virus 1.6 million years ago, human herpesviruses may provide some of the earliest circumstantial evidence of ancient humans hunting wild primates.
Wertheim JO, Smith MD, Smith DM, Scheffler K, and Kosakovsky Pond SL. 2014. Evolutionary Origins of Human Herpes Simplex Viruses 1 and 2. Molecular Biology and Evolution (in press). doi:10.1093/molbev/msu185
An editorial by Stanley Fields in this month's issue of Genetics asks, "Would Fred Sanger Get Funded Today?". Sanger died last year at the age of 95.
Fred Sanger won two Nobel Prizes for his work, first for determining the amino acid sequence of insulin, and later for developing technology that remains essential for sequencing RNA and DNA. The fact that Genetics found it sufficiently newsworthy to ask whether NIH would fund his work today seems like a vote of "no confidence" in the current funding system.
The paradox posed by Fields is that although Sanger produced highly innovative research, he did not produce a large number of papers. Over his long career, Sanger published only around 70 peer-reviewed research papers. By contrast, today it is not unusual to see laboratory heads who have 70 publications before age 50. To many critics, the funding climate today values a certain kind of clock-punching in which the volume of publications is valued over any independent assessment of the quality of work. So the question is whether a clearly innovative scientist like Sanger could succeed today.
Peter Higgs posed a similar question from the perspective of physicists in a Guardian essay, "Peter Higgs: I wouldn't be productive enough for today's academic system". He lamented that today's academic system does not provide the kind of time and independence necessary to make fundamental breakthroughs.
Fields decides (justifiably) that Sanger would have no worries today, assuming the NIH study section was rational:
Let's look at just the 5 years preceding the dideoxy paper as if this were the track record upon which Sanger’s next grant application would hinge.... In the 5 years before the dideoxy paper, Sanger published nine papers of original research, encompassing a couple of methods and several reports of sequences. Five of these papers appeared in the Journal of Molecular Biology, two in Nature, and one each in the Proceedings of the National Academy of Sciences U.S.A. and Biochemistry Journal.
That's a good track record for impact factor, although today's publishing landscape, with many more outlets for genetics research, makes the comparison less straightforward. Fields tips the scale with Sanger's prior achievement.
I contend that a modern-day NIH study section would give Sanger a highly fundable score for three reasons. First, Sanger had a track record. He had, after all, won a Nobel Prize in Chemistry in 1958 at the age of 40 for determining the amino acid sequence of insulin. Especially when you consider that the average age of principal investigators obtaining their first NIH grant is now ∼43, Sanger nearing the age of 55 or 60 at the time of our panel would be viewed as a long-time scientific luminary, if no longer a boy genius. While it may be disingenuous to tell young scientists that their best hope of getting an NIH grant before they're into middle age is first to win a Nobel Prize, we can at least recognize that important prior success paves the way for later favorable evaluations. Sanger would seem a good candidate for a Pioneer or similar award from the NIH that relies primarily on the qualifications of the investigator.
So there it is, win a Nobel Prize and you have a good chance of NIH funding!
The really interesting part of the editorial is that Fields himself admits that the present funding system is opaque:
Today I am sometimes hard-pressed to recognize more than the occasional name on a study section roster. The lesson here is that more of the senior members of our community need to serve on review panels. Many have suggested that receipt of an NIH grant should constitute an obligation to serve when asked, a suggestion that should be formalized by the funding agency.
The average age at which new investigators first attain NIH funding is now 43. Fields suggests that innovative work can be promoted most effectively by "funding people, not projects" to reward past innovation. That seems very unlikely to reduce the average age of first-time grantees.
MIT Technology Review has an article this week about Razib Khan's efforts to sequence his baby son in utero: "For One Baby, Life Begins with Genome Revealed".
From the article:
An infant delivered last week in California appears to be the first healthy person ever born in the U.S. with his entire genetic makeup deciphered in advance.
His father, Razib Khan, is a graduate student and professional blogger on genetics who says he worked out a rough draft of his son’s genome early this year in a do-it-yourself fashion after managing to obtain a tissue sample from the placenta of the unborn baby during the second trimester.
“We did a work-around,” says Khan, 37, who is now finishing a PhD in feline population genetics at the University of California, Davis. “There is no map for doing this, and there’s no checklist.”
The article has an excellent description of what it took to get the original placental tissue sample back from a genetic testing company, and how the sequencing was ultimately done. It's not difficult nowadays to carry out this kind of sequencing; the hardest part of this process was getting the tissue.
This article would be excellent reading for a course in anthropological genetics. There is a short sidebar note that captures the article's tone:
Why it matters
Medical ethics is colliding with parents’ desire for DNA data during pregnancy.
What a comment! In my opinion, "ethics" can't be very ethical if it conflicts with parents' desire for information about their children.
A prenatal genome can be a valuable piece of information for a very limited range of possible outcomes. Chromosomal abnormalities and certain inherited Mendelian genetic disorders can be accurately diagnosed, and some traits can be predicted. Widespread screening for these conditions would be a valuable outcome for many parents.
So why should anyone be against providing this information to parents? In reality "medical ethics" isn't an issue here. It is the widespread belief among certain hand-wringing professionals that parents cannot make responsible choices based on limited information. "Do no harm" gets transmuted into "Do nothing, lest there be some remotely imaginable harm".
Ray Troll is one of my favorite artists. His woodcut-inspired illustrations of the creatures of deep time, especially focused on sea creatures, combine science with a sheer sense of fun. His book, Cruisin' the Fossil Freeway: An Epoch Tale of a Scientist and an Artist on the Ultimate 5,000-Mile Paleo Road Trip, is a classic.
Amy Atwater at the "Mary Anning's Revenge" blog has done a great interview with Troll: "Trolling the Fossil Freeway with Ray". In it, he reflects on his work and the role of art and rock and roll in the science communication world. After talking about some of the large events that reach thousands of people, he turns to consider the effects of more personal contacts:
But its also really cool to go to classrooms and do it on a smaller scale or take a whole school. It's worth the time to go do that because in an hour-long presentation you can't reach them all, but there really is this genuine thing where you can actually transform a life. I also like doing drawing workshops where’s maybe 20-30 people in the room and there's a couple hours of that. Not to get too hippie dippy about it, but I am an old hippie after all, and you connect with the audience and the intellectual pursuit is there in every single level because we are very curious animals and we are very curious about our lives, about every level of our lives. And we should always be!
Art can reach people in ways that writing cannot. When it comes to science art, the role of humor is under-appreciated. That's ironic considering just how many scientists were inspired by Gary Larson's "Far Side" comics, but only a tiny number of artists engage with science in that way. Troll is a real master, combining his characteristically light-hearted take on science with a visually rich style.
The new online publication Vox is running an explainer about the species of face mites that live in your skin: "These mites live on your face and come out to have sex at night".
OK, let's face it. There is no way to come up with a less creepy headline than that.
Your body harbors at least two closely-related species of mites: Demodex folliculorum and Demodex brevis. Both live in your hair follicles, but folliculorum live in the follicles' main cavity, whereas the smaller brevis live in something called the sebaceous gland, which secretes a waxy oil called sebum — likely the mites' main food source.
Both types of Demodex are densest on the face — especially near the nose, eyebrows, eyelashes, and hairline — but they live anywhere on your body where hair follicles are. Scientists, however, have never fully studied the total abundance of mites on the human body. Dan Fergus, a researcher that works with [Holly] Menninger, estimates that the average person has between 1.5 and 2.5 million mites, but no one really knows.
Face mites are mostly benign. You don't notice them and they just live off of sebum emitted by your glands. Their life cycle is only around 24 days, so they are constantly growing, mating and laying new eggs. They don't even poop; they just keep their feces all bottled inside until they die.
Admittedly that doesn't sound entirely benign.
The entire article is pretty interesting, and points to the "Meet Your Mites" project from North Carolina State University, one of the projects covered in the "Your Wildlife" site managed by Holly Menninger and Rob Dunn. The project has the goal of sampling a wide diversity of different populations to uncover relationships among the face mites.
There is every chance that they will find unexpected diversity. Really we know very little about the diversity of mites in other species of primates. There have been reports of follicular mites in macaques and a few other species of primates but apparently no systematic sampling of wild primates. And of course it's possible that mites from other species of animals have colonized people since we domesticated them -- or that human mites have colonized our domesticates.
Ed Yong has a great review of face mite biology from a few years ago that is worth reading if you want to find out more.
A number of readers have written to ask about my two-week-long blogging hiatus. I am in Johannesburg working in the new fossil vault with the Rising Star Workshop. I have been going from early morning to evening every day and blogging has taken a back seat to the extraordinary work being done here with the new materials.
I know that most readers here have been following closely the progress of the Rising Star Expedition and its discoveries. The blogging and tweeting from the field has definitely been a big part of how people have been following the story.
The workshop is a bit different from the field excavation. We have assembled more than thirty scientists from around the world to describe and explain the evolutionary significance of the remains. Most of the team are early career scientists who are applying their datasets and expertise to new fossils for the first time. The workshop is an intensive month-long marathon session giving them an opportunity to interact with each other and explore hypotheses. We are testing ideas, doing the process of science on the collection, and until the process has run its course we won't really have any news to report. Unlike the field, where new fossils coming up from the cave are an occasion to share, our attempts to understand the fossils in the lab will take more deliberation.
So nothing to report yet on the science front. But I have been just wonderfully pleased at the collaborations that these emerging scientists have been building over the past two weeks.
Last week Science printed an exchange of technical comments on the topic of Dmanisi skull 5. The skull was described in a paper last fall (Lordkipanidze et al. 2013), which I blogged about here: "The new skull from Dmanisi". The skull is beautiful and provides an almost unprecedented look at undistorted and unreconstructed cranial form.
Now, Jeffrey Schwartz, Ian Tattersall and Zhang Chi (2014) have challenged the interpretation of the Dmanisi sample. The most provocative aspect of Lordkipanidze and colleagues' 2013 paper was its argument that the variation within Early Pleistocene Homo should all be collapsed into a single species, Homo erectus. Schwartz and colleagues think this is wrong. Instead, they think that Skull 5 represents a different species within the Dmanisi sample. Zollikofer and colleagues (2014) provide a response, pointing out that Schwartz and coworkers here and elsewhere have argued that the five Dmanisi skulls represent as many as four different species.
Morphometric shape and species
Lordkipanidze and colleagues (2013) rested their argument on a multivariate comparison of cranial shape. Their paper included the following figure, which presents a simplified view of the shape differences separating chimpanzees, modern humans, and some fossil hominins:
I discussed this figure in my previous post. The paper expresses two related arguments. First, the authors argued that the variation encompassed by the five Dmanisi crania is no greater than that encompassed by a large sample of modern humans, or of modern chimpanzees. The figure shows that the scatter of chimpanzee crania and the scatter of human crania are both fairly extensive, and the four plotted Dmanisi crania do not exceed that scatter.
Second, the authors made a more controversial claim. They observed that the variation of the Dmanisi crania in their shape comparisons actually encompasses the variation within all the samples of Homo erectus, Homo habilis and Homo rudolfensis. That led them to accept the null hypothesis that all these Early Pleistocene crania represent a single species. Homo habilis and Homo rudolfensis simply do not exist under this scenario. As I related last fall, Adam Van Arsdale and Milford Wolpoff (2013) presented a similar argument, testing it on the basis of change over time within the Early Pleistocene Homo sample.
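To make this style of comparison concrete, here is a toy sketch of the approach -- projecting specimens into a low-dimensional shape space and comparing group scatters. The coordinates are random stand-ins, since the real landmark data were not published, and the actual analysis used Procrustes-aligned landmarks rather than raw values:

```python
# Toy version of a shape-space scatter comparison. Random stand-in
# "landmark" data, not real cranial measurements.
import numpy as np

rng = np.random.default_rng(1)
humans = rng.normal(0.0, 1.0, (25, 20))    # 25 specimens x 20 variables
dmanisi = rng.normal(0.2, 0.8, (5, 20))    # 5 specimens

all_specimens = np.vstack([humans, dmanisi])
centered = all_specimens - all_specimens.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
scores = centered @ vt[:2].T               # first two PCs define the plot

def scatter(points):
    """Mean squared distance from the group centroid."""
    return ((points - points.mean(axis=0)) ** 2).sum(axis=1).mean()

print("human scatter:  ", round(scatter(scores[:25]), 2))
print("Dmanisi scatter:", round(scatter(scores[25:]), 2))
```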
There's no denying that skull 5, which includes the D4500 cranium and D2600 mandible, reflects a combination of features that anthropologists did not expect to find. Its endocranial volume, at only 546 ml, is smaller than that of any other known Homo erectus adult. It is relatively small even compared to the sample of crania currently attributed to Homo habilis, and a full 200 ml smaller than KNM-ER 1470, the only complete cranial vault that anyone attributes to Homo rudolfensis. Yet despite its small size, it shows marked cranial and mandibular robusticity and has very large teeth, with especially enlarged second and third molars. If this skull had been found first, maybe we would be debating whether it would be a better fit in Homo habilis than Homo erectus. And yet it shares many morphological features with Homo erectus skulls that have never been found in Homo habilis.
Facing the challenge
I'm a lumper by nature, as long-time readers know. Yet it is easy to see one problem with the analysis presented by Lordkipanidze and colleagues last fall. What is to stop us from lumping A. africanus in with early Homo as well?
As you can see from the figure above, if we define the range of shape variation in either chimpanzees or modern humans, Sts 5 and Sts 71 fall well within the range of variation that includes the early Homo specimens.
A group of the researchers on the Malapa project and I have gotten together to look at where the MH 1 skull would fit on this graph. It should be no surprise that it can be placed within the early Homo sample as well. Apparently it's not only Homo habilis that doesn't exist -- Australopithecus sediba should be Homo erectus as well.
We prepared a short paper to examine this curious result, focusing on how the gross cranial shape really does not recreate the taxonomic groups usually accepted by anthropologists. Let me point out that replicating this kind of analysis is very difficult, considering that the underlying CT data are not available for study, and the coordinate data from the morphometric analysis are not provided as supplements to the paper. In particular, when the dataset includes CT-based reconstructions, it is impossible to verify the measurements against the original specimens unless the CT reconstructions are available as datasets.
I have complained before about supplementary information packets from papers not providing measurements. A great example is table S6 in Lordkipanidze et al. 2013, which has a list of landmarks along with a "1" or a "0" depending on whether the landmark was included in the analysis.
This is reminiscent of the famous "data table without data" from Suwa and colleagues' 2009 study of Ardipithecus teeth (my post, "Whoa, who stole the data?"). I asked at the time, "What kind of rinky-dink journal is this?" Science obviously has not changed its review practices in the succeeding five years.
Schwartz and colleagues' argument
Fundamentally, Schwartz and coworkers present a very different argument than we have followed in our work. They present two pictures, and write beneath both, "Note the obvious morphological differences (not to scale)." Here's one of them:
I mean, how does this figure get through peer review? The caption simply says, "note the obvious morphological differences". It presents a series of ten crania "not to scale", and does not denote any differences for us to observe. The other figure, figure 1, shows the three Dmanisi mandibles with teeth, and bears the caption, "Note the obvious morphological differences in bone and tooth morphology (not to scale)."
We could easily do the same with a series of ten human crania, or a series of ten chimpanzee crania. "Look for yourself how different they are!" is fundamentally unconvincing. Especially when the different specimens are shown at different scales.
Aside from the two figures, the comment focuses on two issues: the invalidity of using overall shape comparisons, and the specific morphological differences among the Dmanisi specimens. Schwartz and colleagues begin by criticizing Lordkipanidze and coworkers for their "assumption" that the Dmanisi sample represents a single species.
The Dmanisi fossils are assumed to sample a single population, primarily because they come from the same site and a relatively short time period. By a priori defining “difference” as intraspecies “variation,” this permits focusing uniquely on general shape and gross morphology.
They imply that if we instead assumed a priori that Dmanisi includes multiple species, there would be no basis to use the variation in shape at the site for any further comparisons. They go on to recite a list of differences among the mandibular and cranial specimens at the site, in each case claiming that the differences are "potentially species-distinguishing features".
Christoph Zollikofer and colleagues (including most of the authors from the paper last fall) present a reply to Schwartz and coworkers' comment. They begin with some sarcasm:
According to these authors, Dmanisi would now comprise at least four different hominid taxa and thus hold the world record in hominid paleospecies diversity documented at a single site that extends over a mere 40 m2, and probably over a mere couple of centuries.
They then point out that the small area and apparently short time represented by the Dmanisi fossil sample are not, by themselves, sufficient to consider the entire sample as representing a single species. To the time and space element, they add that the sample does not exceed the variation within demes of living humans or other primates.
Through several examples, Zollikofer and colleagues devastate the anatomical case pitched by Schwartz et al. For example:
- Schwartz et al.’s assertion that premolar root number has “taxonomic valence” ignores basic comparative evidence from extant populations. Modern human sub-Saharan populations exhibit the full range of root variation, from single and Tomes’ roots to mesiobuccal + distal and buccal + two lingual roots (9). Variation in root morphology is also observed in the third mandibular premolars [P3s in (2)] of Pan troglodytes verus (10).
A trait that is variable in humans, and variable within other species of primates, is probably a bad trait for distinguishing hominin species. That's not a sophisticated argument, it's just basic taxonomic practice. And consider:
- Quantifying buccolingual compression of the mandibular canines (11) shows that D2735 has buccolingual (BL)/mesiodistal (MD) ratios of 0.88 and 0.90 (left and right canines, respectively), and D211 has ratios of 0.96 for both canines. D2600 has ratios of 0.86 (left) and 1.12 (right). Evidently, the two sides of the latter mandible do not represent two different taxa.
There seems to have been a lot of this lately, taking two parts of a single specimen and saying that they represent two different species.
Obviously, Zollikofer and colleagues get the better end of this exchange. Schwartz, Tattersall and Zhang seem to have been highly motivated to write their comment because of previous assertions about the great taxonomic diversity of Dmanisi in particular and early Homo in general. But they are the most extreme splitters in the game. They ignore the variation within samples of living primates, and explicitly minimize the importance of within-species comparisons. Their position may be consistent, but in the face of the comparative data it is incoherent.
That doesn't mean I think that Zollikofer and colleagues are entirely correct. Dmanisi is an instructive example about the variation within a sample of fossil hominins, but that doesn't necessarily mean we should discard the distinctions between East African fossil specimens.
As Van Arsdale and Wolpoff (2013) have shown, those distinctions have their own problems. I pointed out last fall ("The new skull from Dmanisi") that what we now know about the widespread intermixture of Middle and Late Pleistocene lineages should begin to change the way we think about the Early Pleistocene. That doesn't mean we should erase all distinctions, but the evolutionary pattern does not have to be a simple branching tree.
The evidence from Dmanisi will be stronger when the cranial and postcranial evidence can be integrated into a single picture. The Malapa hominins have shown how evidence from across the skeleton can make the comparisons of different samples much more complicated.
That means it's time to re-evaluate the evidence with a combination of approaches and the full set of data.
Lordkipanidze D, Ponce de León MS, Margvelashvili A, Rak Y, Rightmire GP, Vekua A, Zollikofer CPE. 2013. A Complete Skull from Dmanisi, Georgia, and the Evolutionary Biology of Early Homo. Science 18 October 2013: 342 (6156), 326-331. doi:10.1126/science.1238484
Schwartz JH, Tattersall I, Zhang C. 2014. Comment on "A Complete Skull from Dmanisi, Georgia, and the Evolutionary Biology of Early Homo". Science 344:360. doi:10.1126/science.1250056
Van Arsdale AP, Wolpoff MH. 2013. A single lineage in Early Pleistocene Homo: Size variation continuity in Early Pleistocene Homo crania from East Africa and Georgia. Evolution 67 (3), 841-850. doi:10.1111/j.1558-5646.2012.01824.x
Zollikofer CPE, Ponce de León MS, Margvelashvili A, Rightmire GP, Lordkipanidze D. 2014. Response to Comment on "A Complete Skull from Dmanisi, Georgia, and the Evolutionary Biology of Early Homo". Science 344:360. doi:10.1126/science.1250081
Evan MacLean and colleagues write this week in PNAS about the evolution of self-control.
As a blogger, I have little interest in the subject.
But I read the paper closely, because it presents an important analysis of something well beyond self-control: the relationship of brain size and cognition across species. The list of authors includes a huge array of experimental psychologists and animal behaviorists, representing the thirty-six species considered in the analysis -- everything from scrub jays and pigeons to elephants and aye-ayes. In all these species, the experimenters assessed performance on tasks related to self-control. They found that the species' performance was predicted by brain size. Bigger-brained species tended to exhibit greater ability to control their immediate responses to stimuli in favor of a previously learned behavioral routine.
Self-control and cognition
It may not be intuitive that self-control is such an important aspect of cognition. Self-control includes the ability to inhibit responses to immediate stimuli, in favor of more useful or adaptive learned behaviors. Being able to pause and consider the best response to a situation is essential to higher cognitive abilities.
This study involved two kinds of experiments, in both of which animals learned to obtain a food reward in a certain way, but then were presented with an alternative scene in which a food reward was prominently visible to them but not immediately attainable. Animals who lack self-control immediately go for the new food stimulus, even though they can't get it. By contrast, an animal who is not overly distracted by the new stimulus, and goes back to the learned pattern to obtain the reward, is said to exhibit self-control.
As the authors describe, self-control defined in this way is relevant to fitness and an important component of being able to rely on learning to modulate behavior:
We chose to measure self-control—the ability to inhibit a prepotent but ultimately counter-productive behavior—because it is a crucial and well-studied component of executive function and is involved in diverse decision-making processes (167–169). For example, animals require self-control when avoiding feeding or mating in view of a higher-ranking individual, sharing food with kin, or searching for food in a new area rather than a previously rewarding foraging site. In humans, self-control has been linked to health, economic, social, and academic achievement, and is known to be heritable (170–172). In song sparrows, a study using one of the tasks reported here found a correlation between self-control and song repertoire size, a predictor of fitness in this species (173). In primates, performance on a series of nonsocial self-control control tasks was related to variability in social systems (174), illustrating the potential link between these skills and socioecology. Thus, tasks that quantify self-control are ideal for comparison across taxa given its robust behavioral correlates, heritable basis, and potential impact on reproductive success.
So what did they find? Bigger brains correlate with greater self-control:
Our phylogenetic comparison of three dozen species supports the hypothesis that the major proximate mechanism underlying the evolution of self-control is increases in absolute brain volume. Our findings also implicate dietary breadth as an important ecological correlate, and potential selective pressure for the evolution of these skills. In contrast, residual brain volume was only weakly related, and social group size was unrelated, to variance in self- control. The weaker relationship with residual brain volume and lack of relationship with social group size is particularly surprising given the common use of relative brain volume as a proxy for cognition and historical emphasis on increases in social group size as a likely driver of primate cognitive evolution (85).
Absolute, not relative brain size
Biologists have been interested in the comparative study of brain size and cognition for more than a hundred years. Eugene Dubois, famous for finding the "Pithecanthropus erectus" skullcap and femur at Trinil, Indonesia, was the first to consider explicitly the relation of relative brain size and intelligence. From his point of view, the problem was the inverse of the one we generally face. Dubois did not know the relationship between body size and brain size across mammals. He knew that larger mammals had larger brains, and that the relationship is not linear. From that, he was interested in working out the relationship of brain size and body size between animals with equal intelligence.
If you blew up a Hyracotherium to the size of a horse, its brain would be much larger than a horse's brain. The evolutionary history leading from small bodies to larger bodies in the horse lineage was accompanied by a relative reduction in the size of the brain -- and that is true across every group of mammals. Blow up a capuchin monkey to human size, and its brain would be bigger than ours. Larger animals have absolutely bigger brains but relatively smaller brains. At least, if we expect that the brain should increase in a linear fashion.
One way of looking at this is that larger animals wring greater efficiency out of brain tissue than smaller animals do. Another way of looking at it is that the brain consists of two parts: a part with functions that depend on body mass, and a part with functions that are independent of body mass. Growing the body requires the first to increase, but not the second.
Dubois wanted to determine just how much the brain ought to expand as body size increases. He had a very practical example -- Pithecanthropus. Its brain was considerably larger than the brains of living apes, but was it unusually so for its body size? Consider cats and lions as examples -- closely related within the same family, but extremely different in body size. In Dubois' way of thinking, cats and lions have similar cognitive abilities, similar intelligence. The differences in brain size between the two species should therefore be a simple reflection of body size. Absolute brain size would not be an indicator of differences in cognitive adaptations. Instead, we should pay attention to the brain size relative to what is expected for an animal of the same body size.
During the twentieth century, the most prominent name in the systematic study of brain size in different kinds of organisms was Harry Jerison. His groundbreaking studies of allometry culminated in the 1973 book, Evolution of the Brain and Intelligence.
Jerison proposed a model in which the cognitive specializations of some species could be explained by "extra neurons" -- the quantity of brain tissue above that predicted for the body size of those species. To express the relative brain size of a species, Jerison used the term "EQ," or "encephalization quotient": a measure of a species' brain size relative to the size expected for species of the same body mass, based on comparisons across many related species.
The extra neuron hypothesis was centered on the problem of absolute versus relative brain size. If more brain matter were associated with more intelligence, we would have a problem explaining why elephants and whales are not smarter than people, with our absolutely puny brains. A good hypothesis is that relative brain size, not absolute brain size, is what matters to cognitive adaptations. Humans have brains that are absolutely smaller than those of whales, but much bigger than those of other mammals with our medium-sized body mass. Humans have a very high EQ.
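For concreteness, here is a minimal sketch of an EQ calculation under Jerison's mammalian scaling. The constants are his; the round-number masses are my illustrative guesses, not precise species data:

```python
# Jerison's encephalization quotient (EQ): observed brain mass divided
# by the mass expected for body size, where expected brain mass scales
# as E = C * P**k (roughly C = 0.12, k = 2/3 for mammals, in grams).

def eq(brain_g, body_g, C=0.12, k=2/3):
    return brain_g / (C * body_g ** k)

# Illustrative round-number masses, not precise species data:
print(f"human EQ ~ {eq(1350, 65_000):.1f}")    # well above 1
print(f"chimp EQ ~ {eq(400, 45_000):.1f}")
print(f"horse EQ ~ {eq(600, 400_000):.2f}")    # close to 1
```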
Here's a hitch: Whales and elephants are very smart animals. Sure, people are smarter, but comparing every species to the single exceptional example doesn't tell us about the general trend.
MacLean and colleagues looked at one particular aspect of cognition, self-control, in a way that controls for phylogenetic relationships. Animals like elephants and scrub jays, which are smarter than their immediate relatives, tend to have bigger brains. And across species, their results show the relative brain size hypothesis to be false, at least for self-control. The absolute brain size predicts the performance of different species on the tasks examined here. Relative brain size doesn't.
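A toy simulation shows why the two predictors can disagree. The data below are invented, and the real analysis also controlled for phylogeny, which this sketch omits:

```python
# Made-up comparative data: regress log brain mass on log body mass,
# take residuals as "relative brain size," and ask which predicts a
# task score better when the score is driven by absolute brain size.
import numpy as np

rng = np.random.default_rng(0)
log_body = rng.uniform(2, 6, 36)                   # 36 "species"
log_brain = 0.7 * log_body + rng.normal(0, 0.2, 36)
score = 0.9 * log_brain + rng.normal(0, 0.3, 36)   # absolute size rules

slope, intercept = np.polyfit(log_body, log_brain, 1)
residual = log_brain - (slope * log_body + intercept)

print("r(score, absolute brain size):", np.corrcoef(score, log_brain)[0, 1].round(2))
print("r(score, relative brain size):", np.corrcoef(score, residual)[0, 1].round(2))
```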
Given a bunch of species with lion-sized brains, and a bunch with cat-sized brains, this study should lead us to expect that the lion-sized brains have greater self-control. Lions and cats aren't cognitively equivalent; lions are smarter than cats.
Group size isn't important
Considering the history of study of brain size and cognition, I find one aspect of the study more surprising than the rest. These tasks related to self-control do not correlate with group size, at least not in the primate species studied here.
A long literature has developed on the idea that group size has driven the evolution of primate cognition. This literature mostly depends on the observation that primates with larger group sizes have relatively larger neocortex sizes. That "social brain hypothesis" intuitively makes sense: many of the determinants of fitness in social mammals involve predicting or adjusting to the social behaviors of other individuals. That is why it has gotten the nickname "Machiavellian intelligence" -- the idea is that natural selection has crafted our brains to manipulate other individuals.
Self-control seems like an important skill for social life, providing a way for learned social behaviors to emerge in the buzzing, blooming world of social stimuli. Yet across primates, social group size had no predictive value for self-control. The authors suggest that social group size may still be a factor selecting for cognitive abilities in primates, just not the ones they tested for:
With the exception of dietary breadth we found no significant relationships between several socioecological variables and measures of self-control. These findings are especially surprising given that both the percentage of fruit in the diet and social group size correlate positively with neocortex ratio in anthropoid primates (86, 142). Our findings suggest that the effect of social and ecological complexity may be limited to influencing more specialized, and potentially domain-specific forms of cognition (188–196). For example, among lemurs, sensitivity to cues of visual attention used to outcompete others for food covaries positively with social group size, whereas a nonsocial measure of self-control does not (146).
This is a claim that disparate cognitive abilities have been selected for their effects in very narrow contexts: self-control in foraging tasks being narrowly targeted toward foraging; visual attention to other individuals being narrowly targeted toward social competition. The implication is that selection can fine-tune different cognitive functions independently of each other, a view otherwise known as the "modularity" hypothesis for cognition.
Instead of group size, MacLean and colleagues found that primates with broader diets have more self-control:
Within primates we also discovered that dietary breadth is strongly related to levels of self-control. One plausible ultimate explanation is that individuals with the most cognitive flexibility may be most likely to explore and exploit new dietary resources or methods of food acquisition, which would be especially important in times of scarcity. If these behaviors conferred fitness benefits, selection for these traits in particular lineages may have been an important factor in the evolution of species differences in self-control.
Are primates smart because they learned to forage for specialized foods? That would be consistent with what we know about tool use. It's also consonant with the experimental evidence on other aspects of cognition. It's a provocative idea. Although group size is related to neocortex size across primates, there are many notable exceptions who have small groups and large neocortices, or vice-versa.
For example, the apes are very smart, have high diet breadth, and a huge diversity of group sizes in their natural habitats -- chimpanzees have large groups, gorillas small groups, and orangutans are often solitary. It may not be surprising to see that diet breadth has a much more stable association with cognitive adaptations, with group size being a flexible response to different habitat types, levels of predation, and other factors.
Can we draw any conclusions about human evolution from this study? It's hard to generalize -- just as we cannot conclude much about cognitive evolution from comparing whales and elephants only to humans, we can't predict much about humans by looking only at the broadest phylogenetic pattern. Yet, the importance of foraging and diet breadth to the evolution of primate cognition is provocative. The study seems to weigh in favor of tool use and foraging as primary drivers of human brain evolution, instead of social dynamics.
But that contrast is surely misleading. Human tool use and foraging depend on social cooperation and social learning. Our foraging strategy is principally a social strategy. Broad comparisons across primates are unlikely to tell us much about the uniquely human aspects of our evolutionary history. Instead, we'll have to depend on archaeology.
This kind of study is the future of the study of animal intelligence and cognitive evolution. MacLean and colleagues carried out similar experiments across a broad group of species, allowing them to compare those species in a phylogenetic framework. And they showed very clear differences between the predictive power of relative and absolute brain size. This wasn't a mishmash of results; the findings were very clear because of the power of the analysis.
MacLean EL, Hare B, Nunn CL and many others. 2014. The evolution of self-control. Proceedings of the National Academy of Sciences, USA (in press) doi:10.1073/pnas.1323533111
Micaela Jemison of Smithsonian Science describes a recent study by Robert O'Malley investigating the use of termites by wild chimpanzees at Gombe. The piece is a nice one for students, including video and some background on how primatologists have studied insect consumption by chimpanzees in the past.
O'Malley's work emphasized the nutritional content of the termites, including the surprising fact that some termites are up to 25% fat.
Though most of their diet is ripe fruit, chimpanzees are omnivores like humans, not only eating insects but also meat, hunting animals such as monkeys and piglets. So why would chimpanzees spend so much time, more than four hours on some days, collecting and eating such a tiny food when they could be hunting?
“Going after insects is much safer, especially if mothers have their young with them,” Power explains. “The females are a lot more patient and often more skilled at termite fishing than the males. Males often don’t stay around if they don’t quickly get a good return. The females, however, stick it out, maybe because they realize they have a guaranteed source of food.”
Of course, termite consumption may have been an important aspect of human origins, as some of the earliest bone tools seem to have been used for breaking into termite mounds. These insects have intersected with our evolutionary history in many ways, as a prominent intersection between above-ground and subterranean resources.
British Pathé, the famous newsreel producer, has released its catalog of film content to YouTube. There's not a rich backlog of film footage pertinent to paleoanthropology there, but I did find this 3-minute-long clip, "Evolution - The Other Side Of It".
The idea is that Darwinism has two sides -- a creationist may dislike the idea that we came from apes, but the apes may feel the same way about us!
It's a silent short with various captive primates dressed in human clothing. An early stage of the kind of exploitative cinema treatments of primates that have become less and less acceptable over the years. And yet, from the standpoint of its time in 1930, it may have been one of the only ways older cinemagoers would ever have encountered the idea of evolution.
I was at the meetings of the American Association of Physical Anthropologists last week in Calgary. Great to see so many friends, and to meet many new people -- I especially loved meeting some students who had recently been following the blog!
One day I was talking with a group of people, realizing just how much the subject matter of paleoanthropology has transformed during the last five years. For one thing, it seemed like there were a lot of papers about Australopithecus sediba, a species that didn't exist five years ago! But we also noticed an increase in the number of papers that seemed to engage with Australopithecus africanus over previous years. By contrast, it seemed to us that A. afarensis faded in importance compared to previous years. Maybe it was just the way it seemed -- especially with only a single podium session devoted to early hominins.
Anyway, after I got home, I decided to do a little counting through the abstracts of previous years. I looked at the abstract volume from 2008, and also from 2002 -- six years and twelve years ago, compared to this year. I did a count of the number of separate abstracts that mention each species name in human evolution -- and then also for "Neandertals" as a group. The abstracts include both podium talks and posters, and across the last twelve years the total number of presentations has increased, largely by adding more posters. I didn't do any kind of content analysis -- some abstracts mention a lot of species names because of the nature of the comparisons they have carried out, while some are especially devoted to the study of one of these species.
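The tallying itself is simple enough to sketch. Something like the following would do it, assuming the abstracts were saved as plain-text files; the folder name and patterns here are hypothetical stand-ins, not a record of my actual procedure:

```python
# Count how many abstracts mention each species name at least once.
# The folder and the name list are hypothetical examples.
import re
from pathlib import Path

NAMES = {
    "A. sediba": r"(Australopithecus|A\.)\s+sediba",
    "A. afarensis": r"(Australopithecus|A\.)\s+afarensis",
    "Homo erectus": r"(Homo|H\.)\s+erectus",
    "Neandertals": r"Neanderth?al|Homo neanderthalensis",
}

counts = {label: 0 for label in NAMES}
for abstract_file in Path("abstracts_2014").glob("*.txt"):
    text = abstract_file.read_text(encoding="utf-8")
    for label, pattern in NAMES.items():
        if re.search(pattern, text):
            counts[label] += 1   # each abstract counted once per name

for label, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {n}")
```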
Well, as you might expect, there are a lot more talks and posters about Neandertals than about the other groups. There are just many more areas of biology in which people can examine interesting questions about this group, including genetics, which has fueled a continued increase in numbers. "Neandertals" here includes abstracts that mention the word "Neanderthal", "Neandertal", or "Homo neanderthalensis". What may be interesting is that the "Neandertal" spelling had actually declined substantially in the 2008 meetings, but has more than rebounded this year. "Homo neanderthalensis" as a variant has blipped upward from zero in 2002, although it still accounted for only 5 abstracts this year.
You can see that my initial impression about A. afarensis was wrong: There are just as many abstracts this year about A. afarensis as in past years. Likewise A. africanus has basically held steady across this time interval. I did not include it in the chart, but A. robustus (including mentions of Paranthropus robustus) also held strong with 8 abstracts this year. Homo erectus has fluctuated but remains consistently popular.
Immediately stunning is the incredible presence of A. sediba so soon after its discovery. It didn't exist in 2008. This year it accounted for as many abstracts as A. afarensis. The open access policy for the Malapa hominins has contributed to this incredibly rapid growth in research. I'll further point out that the numbers here do not count the Paleoanthropology Society meetings, where a number of additional papers on A. sediba were presented -- including by some notable critics of the original research. People can see the material, and they are doing productive work on the fossils as a result.
Note that the timeline of A. sediba from its 2010 description to the 2014 meetings is roughly the same as the time from the 2004 description of Homo floresiensis to the 2008 meetings. Yet the scientific research output on A. sediba this year was double that of the hobbits in 2008.
By contrast, the research on Homo habilis seems to have greatly declined. Only a single abstract mentioned the species this year -- and that abstract was a phylogenetic study that mentioned a long list of other species as well. This trend seems a bit puzzling in comparison to the great interest in A. sediba, considering that the two are thematically linked as possible ancestors of later hominins. I wonder whether the difficulty of recent travel to Kenya may have contributed to researchers' decisions to avoid this topic. The other species to show a strong decline in 2014 was another East African endemic, A. boisei, which went from 7 abstracts in 2002 and 2008 down to 2 abstracts this year.
The chart isn't exhaustive -- I omitted many names, like Ardipithecus, Sahelanthropus, and Kenyanthropus, that have been represented in only one or two papers a year. Even this year, five years after the publication of the monograph-length treatment of Ardipithecus ramidus in Science, only three abstracts mention the genus.
Is there a message in these comparisons?
Naturally the value of research topics will shift from year to year. I haven't looked at last year, and it would probably be even more illuminating to look at a more detailed trend. You can see that the hobbits continue to attract a good degree of interest, even without anyone being able to include new observations on them!
Carl Zimmer reviews Svante Paabo's new book, Neanderthal Man: In Search of Lost Genomes, in the New York Times: "Missing Links". Zimmer gives a balanced review; in his account the book fails as a memoir but succeeds in describing the background of a significant new area of science:
When the Neanderthal genome is finally published, Paabo is justifiably proud. We can’t begrudge him the opportunity to regale us about the news conferences and honors. But readers may start to wonder what exactly the payoff was for those many years of struggle. Reconstructing a Neanderthal genome was a tour de force, we can all agree, but why does it matter?
My perception: Many of the skilled geneticists who have made breakthroughs in ancient DNA see the Neandertals as objects, not subjects.
I score a mention late in the book, synched to the 2010 unveiling of the draft Neandertal genome. "Somebody get John Hawks some oxygen!" was the reaction to my 2010 post, "NEANDERTALS LIVE!".
I'm tremendously excited by this work. The mere fact that we now know we have Neandertal ancestors is transformative, extending our very concept of humanity. By doing so, ancient DNA has given us new tools to understand our own nature as cultural beings. At the same time, ancient genomics has illuminated the dynamism of ancient populations: it has revealed populations that archaeologists had never previously suspected, and has shown the large-scale movements and mixtures among them.
So why hasn't the premier scientist in this field done better conveying the real excitement ahead of us?
Weirdly, the most provocative implications of Neandertal genomes are precisely those that many geneticists have fiercely resisted. The field is slow to relinquish its strange fascination with the simplistic idea that modern humans arose in a singular -- and simple -- event. No matter how much genetics shows that the process was complex, many still search for silver bullet explanations: megadroughts, volcanic winters, projectile weapons, "fixed" modern human genes carrying the essence of humanity.
The Neandertal Genome Project was begun with the ostensible goal of discovering what makes us human, by contrast with the Neandertal genome. Despite the extensive sharing of DNA uncovered by the project, many scientists are still consumed with understanding the small fraction of human genomes that are not shared with the Neandertals or Denisovans. Even now, the highest intensity of investigation into Neandertal and Denisovan genetics remains devoted to finding the function of those rare human alleles that these ancient humans lack. The course of research on Neandertal genetics seems hardly to have changed after the discovery that Neandertal genes lie within us.
As an anthropologist, I see the Neandertals as subjects. Geneticists seem to regard them as objects. From a storytelling point of view, that difference makes all the difference.