
john hawks weblog

paleoanthropology, genetics and evolution


Biological Anthropology association speaks out on data access

For many years, biological anthropologists have been talking about data access.

This month the American Journal of Physical Anthropology is running a commentary: “Data sharing in biological anthropology: Guiding principles and best practices”.

An ad hoc committee on data access and data sharing produced the commentary, which they describe as a consensus of forty participants across the field.

I think that it is very positive that biological anthropologists are having these conversations. There is broad agreement that the data that underlie published studies should be available for replication and meta-analyses.

On the other hand, I’ve noticed over the years that many scientists who agree in principle that data should be available nevertheless find many ways to obfuscate or prevent access. I see some language in the published statement that makes me nervous. For example:

Project design should include a clear data management and sharing plan that is in place prior to the start of the project. Data sharing should be viewed over a time horizon related to the length of the research project, such that different parts of a data set may be shared at different times. For example, timelines in a grant proposal might include specific target dates for making particular data available (e.g., metadata, raw data, etc.).

I get very worried when I see this. In my experience, timelines and target dates in grant proposals do not translate into data access upon publication. In some areas of biological anthropology, projects that have been funded by our major grant agencies are less likely to archive data in ways that other researchers can access, even though they have filled in the mandatory “data access plans”.

It’s also curious that the NSF-funded data repositories for biological anthropology data, such as MorphoSource and PaleoCore, are not included on the list of recommended data repositories. I know that many projects have satisfied NSF data access plan requirements by referring to these repositories. Yet some people have worried that such data repositories are not sustainable in the long term because they rely upon continued funding.

Anyway, I recommend reading the statement and thinking about how the best practices can be improved.

The complexity of paleomagnetic pole flipping

Scott Johnson writes at Ars Technica about the Brunhes-Matuyama boundary: “The last magnetic pole flip saw 22,000 years of weirdness”.

The researchers interpret this additional data as showing a major weakening of the magnetic field starting 795,000 years ago before the pole flipped and strengthened slightly. But around 784,000 years ago, it became unstable again—a weak field with a variable pole favoring the southern end of the planet. That phase lasted until about 773,000 years ago, when it regained strength fairly quickly and moved to the northern geographic pole for good.

The Brunhes-Matuyama paleomagnetic reversal is conventionally recognized as the boundary between the Early and Middle Pleistocene. When we talk about recognizing geological time periods, it is important to realize that our understanding of the boundaries is limited both by the precision of our geochronological methods and by the physical processes that give rise to geological changes, which are themselves extended in time.

This is an example where a boundary has 22,000 years of wiggle room that we might not have expected. Against a span of 780,000 years, that’s not much, but if we want to test whether two events were simultaneous, or whether one caused the other, it’s a long time.

Profile: David Reich on ancient DNA

Harvard geneticist David Reich was recently awarded a prize in Molecular Biology from the National Academy of Sciences. On the occasion, PNAS has published an interview with him, conducted by journalist Beth Azar: “Q and As with David Reich”.

The interview may not offer much new for people following human evolution closely, but I thought it was worth sharing Reich’s comments on how the field of ancient DNA might move forward:

PNAS: What are you most excited about moving forward?
Reich: I’d like to help midwife this explosive new field into something that is mature and fully integrated into archeology. One goal is to help generate a lot more data from understudied places in the world, especially outside of Europe, and to build an ancient DNA-based atlas of human migrations all around the world. I would also like to help realize the potential of ancient DNA to provide insights into biology. To understand biological change over time, it is critical to understand how the frequencies of genetic variations change. To do that, large sample sizes of ancient people are needed. In the last two years, due to efforts by our lab and others to scale-up data production, the needed sample sizes are finally becoming available.

Dinosaur property war

Phillip Pantuso of the Guardian reports on the legal battle over the ownership of significant dinosaur fossils: “Perhaps the best dinosaur fossil ever discovered. So why has hardly anyone seen it?”

In a test case last November, the court ruled that fossils found on Montana state and private land could be considered minerals. “Once upon a time, in a place now known as Montana, dinosaurs roamed the land,” begins Judge Eduardo Robreno’s opinion. “On a fateful day, some 66m years ago, two such creatures, a 22ft-long theropod and a 28ft-long ceratopsian, engaged in mortal combat. While history has not recorded the circumstances surrounding this encounter, the remnants of these Cretaceous species, interlocked in combat, became entombed under a pile of sandstone. That was then … this is now.”

This case has been widely reported recently, and the Guardian account provides more detail and background than other stories I’ve seen. The world of commercial fossil hunters is very different from the one I know as a paleoanthropologist. I can’t disagree with the opinion Horner expresses in the article:

They contacted natural history museums around the world, including the Smithsonian – where the bones were offered for a reported $15m – and the Museum of the Rockies, in Bozeman, Montana, whose then head paleontologist, Jack Horner (the inspiration for the character played by Sam Neill in Jurassic Park) told them they were scientifically useless.
“In order for a specimen to be of scientific use and publishable, we have to know its exact geographic position, its exact stratigraphic position, and the specimen must also be in the public trust, accessible for study, which this specimen is not,” Horner says.

Fossils are often beautiful objects, and museums are often great showcases for these objects for public engagement and understanding. But the science today requires much more detailed examination of the sedimentary context of fossils than it did in the nineteenth century. Not every fossil is of great interest to scientists in the present. For research today, separating a fossil from its context should be a scientific judgement, in which we weigh the destruction of context against the possibility of collecting and analyzing information.

For the interests of science, the best place for many fossils is to keep them in the ground. When we excavate anything, there is a loss of information and context, a destruction. As technology has developed, it has given us ways to study fossils and their context with less destruction, and to collect information that was once invisible or simply discarded. The future will bring better methods. In every case, we must consider whether today is the right historic time to separate a fossil from its context, balancing the gain to science against the loss of future opportunities—and any risks to the fossil in its present location.

For hominin fossils, the decisions are just as complex. I’m very glad that private ownership and market value of the fossils is not an issue for our work in South Africa.

Are parsimony analyses better than Bayesian methods for phylogenetics?

During the past few years, Bayesian approaches to phylogeny reconstruction have become more and more widespread, including analyses of fossil hominins. Among hominins, Bayesian approaches sometimes lead to very different results from parsimony, even when applied to exactly the same datasets.

That’s a problem. As a field, we could put much more effort into building morphological datasets of fossils. The best existing datasets still have enormous holes and gaps—they are highly biased toward cranial and dental traits, and even these traits are underrepresented in published datasets relative to the fossils that preserve them. Researchers who have tried to include some specimens in their analyses have actually been denied permission to study them, meaning that they must rely on published studies only, which exclude many traits. So there’s much room to better document the fossil record that already exists.

Last year Robert Sansom and coworkers carried out a study to examine what difference it makes to use Bayesian methods versus parsimony upon the same datasets. Their conclusion is stated in their title: “Parsimony, not Bayesian analysis, recovers more stratigraphically congruent phylogenetic trees”.

Other scientists have looked at how these methods perform in generating phylogenetic trees for simulated data. In that artificial context, Bayesian methods do better than parsimony. But what about real data? Sansom and colleagues considered it possible that some aspects of real datasets make them different from the usual simulated datasets.

That’s tough to test, because we don’t know the real phylogeny behind real datasets. But Sansom and coworkers looked at how the different methods perform in relation to the stratigraphic positions of fossils — testing for stratigraphic consistency. They found that Bayesian algorithms don’t do so well:

Bayesian analyses yielded trees that were significantly less congruent with stratigraphic data. Given that the 167 empirical datasets were from a wide range of authors, clades, time periods and taxonomic levels, we can place confidence in the small but significant differences observed. Taking stratigraphic range data as a benchmark independent of morphology, therefore, indicates that parsimony should be preferred over Bayesian analyses, but these empirical results differ from simulation studies. We explore a few possible explanations for this discrepancy.

To be honest, the difference in performance between the two methods in this study is pretty slight. Both methods do badly in generating trees that are stratigraphically consistent. Parsimony was better in a statistical sense but the difference was not large. To me, the bottom line is that some real datasets have features that make Bayesian methods work badly, and others probably have features that make parsimony work badly (and many probably are unsuitable for both).
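
For readers who haven’t run into these metrics, here is a toy version of the idea in Python. This is only my own illustration of what a stratigraphic congruence score can look like, not one of the indices Sansom and coworkers actually computed, and the tree and first-appearance dates are invented:

```python
# Toy stratigraphic congruence score. My illustration only, with an invented
# tree and invented first-appearance dates (in millions of years); it is not
# one of the indices reported by Sansom and coworkers.

tree = ((("A", "B"), "C"), ("D", "E"))   # a five-taxon tree as nested tuples
first_appearance = {"A": 1.5, "B": 2.0, "C": 3.0, "D": 4.0, "E": 3.5}

def oldest(clade):
    """Oldest (largest) first-appearance date within a clade."""
    if isinstance(clade, str):
        return first_appearance[clade]
    return max(oldest(child) for child in clade)

def congruence(node):
    """Count non-root internal nodes whose clade is no older than its sister."""
    if isinstance(node, str):
        return 0, 0
    left, right = node
    consistent = scored = 0
    for child, sister in ((left, right), (right, left)):
        if not isinstance(child, str):           # score internal nodes only
            scored += 1
            if oldest(child) <= oldest(sister):  # clade appears no earlier than its sister
                consistent += 1
        c, s = congruence(child)
        consistent += c
        scored += s
    return consistent, scored

good, total = congruence(tree)
print(f"{good} of {total} internal nodes are stratigraphically consistent")
# -> 2 of 3 internal nodes are stratigraphically consistent
```

The published indices are more sophisticated than this, but the intuition is the same: clades nested deeper in the tree should not, on the whole, appear in the rock record before their sister groups do.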

One thing that the study discusses is whether published trees and methods have already been biased by researchers who were aiming for a particular solution:

Cycles of revision and re-analysis of morphological data matrices during construction could lead practitioners to prioritize phylogenetic solutions that fit some preconceived ideas for final publication (either consciously or subconsciously), including stratigraphic fit. Under such circumstances, parsimony trees might exhibit artificially elevated stratigraphic congruence because parsimony is the historic default method used to evaluate morphological data.

From my experience with hominins, this kind of bias is a real possibility. Until the last few years, paleoanthropologists seemed to aim at a particular kind of phylogenetic hypothesis: one in which successive species in stratigraphic order are progressively more closely related to living humans. That is an intrinsically unlikely pattern of relationships. Even though scientists profess to accept that human evolution was a tree, they still tend to arrange the species as if they formed a straight line.

In conclusion, our analyses demonstrate a clear result: Bayesian searches yield trees that have significantly lower stratigraphic congruence compared with trees from parsimony searches. We find little difference between parsimony using equal and implied character weighting—they are roughly comparable with respect to stratigraphic congruence. If stratigraphic congruence is taken as a benchmark for phylogenetic accuracy, then, maximum parsimony is the preferred method of choice for analysis of morphological data.

I’m not sure that stratigraphic congruence is anything that we should be aiming at. With hominins, it has become clear that relationships between lineages may have little to do with the age of the fossils. I’m also dubious that any change in algorithm is going to bring us closer to the “real” phylogeny. As long as we have substantial missing data from specimens that have already been found, the algorithms are garbage-in-garbage-out.

McKenna and Bell on ranked categories

I learned mammalian systematics and cladistics around the same time that Malcolm McKenna and Susan Bell published their 1997 book, Classification of Mammals: Above the Species Level. The movie-title placement of the colon in their book title suggests the epic nature of the task they took on.

McKenna began during the 1960s to undertake the task of updating Simpson’s 1945 mammal classification to accord with the rules of cladistics, and published an interim part of the work in 1975. When I learned mammal paleontology, it was with class notes drawn from mimeograph copies of old notes. McKenna’s 1975 classification had a prominent place in this—sometimes as the only available classification for some groups, sometimes as one among other conflicting alternatives. From it I learned the placement of many extinct branches of early mammals, and saw systematics as an important part of understanding the fossil record.

How can systematists generate a classification that makes sense given the phylogenetic arrangement of mammals? Much of the evolutionary diversity of mammals has historically been recognized at the level of Linnaean orders – Primates, Carnivora, Perissodactyla, and so on. It happens that many of these orders have a very similar time depth, because they originated at or shortly after the Cretaceous-Paleogene impact event 66 million years ago. But relationships below the level of these orders are diverse – some lineages diversified enormously with sudden adaptive radiations at different times, others were more conservative. And when extinct mammals come into the picture, the diversity and time depth expected of “orders” and other higher level groups become less clear.

Today, genomic evidence indicates that the order Primates is the sister group of the order Dermoptera (colugos). The group including both is known as Primatomorpha. Rodentia and Lagomorpha (rabbits and their relatives) are likewise sisters, grouped as Glires, and this group appears to be the sister of Scandentia (tree shrews), although possibly Scandentia is closer to Primatomorpha. All these together form a group with the name Euarchontoglires. There remain several levels above Euarchontoglires but below the class Mammalia. Primates themselves have an extinct group of relatives known as Plesiadapiformes—sometimes included as a stem group within Primates, but sometimes included within Primatomorpha as the sister of Primates. Each of these higher-level branches of the mammal tree belongs to a distinct level of the hierarchy.
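
To see how quickly the levels pile up, it can help to write this arrangement out as a nested structure. The sketch below is in Python and simply follows the description above; the placements of Scandentia and Plesiadapiformes are each only one of the two possibilities just mentioned:

```python
# The nesting described above, written as nested tuples. Scandentia is shown
# as the sister of Glires, and Plesiadapiformes as the sister of Primates
# within Primatomorpha; both placements are only one of the options mentioned.
euarchontoglires = (
    ("Scandentia", ("Rodentia", "Lagomorpha")),        # tree shrews + Glires
    (("Plesiadapiformes", "Primates"), "Dermoptera"),  # Primatomorpha
)

def levels(clade):
    """Number of nested splits between a clade and its deepest member."""
    if isinstance(clade, str):
        return 0
    return 1 + max(levels(child) for child in clade)

print(levels(euarchontoglires))  # -> 3 nested splits inside Euarchontoglires alone
```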

To deal with this complexity, systematists must multiply levels. But how many levels? Linnaeus could stack as many orders into a class as he liked, because he was not working with a bifurcating tree. A modern cladistic classification involves many, many bifurcations, each successive bifurcation in the tree representing a distinct level in the hierarchy. To get from one class to 40 orders requires at least six successive rounds of bifurcation, and thus five hierarchical ranks between the class and the orders. Five is not enough for mammals, because of all the extinct stem branches represented by the known fossils. Each new fossil discovery of early mammals potentially introduces another level.
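
The arithmetic behind that claim is just the depth of a balanced bifurcating tree. Here is a back-of-the-envelope version in Python; it is my own calculation, not McKenna and Bell’s published formula, though it lands on the same “theoretical minimum” of 13 that they cite in the passage quoted further below:

```python
# Minimum number of successive bifurcations needed to split one group into
# n terminal taxa, assuming a perfectly balanced tree. My own arithmetic,
# not McKenna and Bell's formula.
import math

def min_levels(n_taxa):
    return math.ceil(math.log2(n_taxa))

print(min_levels(40))    # -> 6 rounds of bifurcation from one class to 40 orders
print(min_levels(5000))  # -> 13, the 'theoretical minimum' cited for 5000+ taxa
```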

Simpson (1945) had included fifteen levels from class to species; McKenna and Bell (1997) increased this to 25 levels—recognizing categories such as “magnorder” and “supercohort” above the order, and “parvorder” and “subtribe” for lower levels.

They addressed the interesting difference between a classification and a tree by discussing how prefixes relate to the hierarchy. As I re-read this passage from page 18, I thought it worth sharing:

Certain taxonomic categories came to bear prefixes suggestive of special hierarchical linkage (e.g., in the family-group, subfamilies are always subsumed in families). Others did not (e.g., tribes might alternatively have been dubbed "microfamilies" or some such term connoting subordination). Use of a prefixed category implies that the category to which the prefix applies is also used. In the Linnaean system we do not construct superfamilies directly from subfamilies without also employing families. Logically, however, above the species level there is nothing special about sub- or supercategories. They all could have received unprefixed cardinal names or simply be referred to as taxa. Such names are, after all, just labels (recognition symbols) (Mayr 1953:391). That prefixed categories did not receive unprefixed cardinal names, free of reference to another rank, seems to us to be partly a matter of practicality and memorability, and partly a function of their authors' essentialistic belief in the objective reality (beyond a construct of human language) and commensurability of various examples of such taxonomic levels as classes, orders, families, and genera (see Slaughter 1982). We employ prefixed names for the sake of stability, because they have been long in use, but we do not hesitate to allocate to an incertae sedis position some taxa whose names happen to be prefixed. For reasons of stability we might not wish to change their rank or to list the lower-ranked contents but not the valid but prefixed monophyletic taxon containing them.

It is a thoughtful observation. A tree is a logical structure that does not care whether humans can recognize and remember its parts. One advantage of a system of classification is that it is built with human memory in mind. The use of categories that bear a hierarchical relationship not only in definition but also in the form of the category names themselves has utility. Once a student learns that a parvorder is below the level of an infraorder, and a mirorder is above the order but below the grandorder, they’re not likely to confuse them.

Yet.

Any set of taxonomic levels faces a problem as soon as any new stem branch emerges between two adjacent levels of the hierarchy. Taxonomists who recognize lots and lots of levels have a buffer against taxonomic changes, because there will be empty levels. But with new discoveries of stem groups, the empty levels may eventually be filled.

We are in that situation with hominins. Hominini is a “tribe”. The group used to be called “Hominidae”, at the family level, but the discovery of the branching order of the apes argued for recognizing the family at a higher level of the tree, so that Hominidae includes great apes and humans, the subfamily Homininae includes African apes and humans, and the tribe Hominini includes only humans and fossil species closer to humans than to chimpanzees and bonobos. But clearly that still does not leave enough levels. The McKenna and Bell classification only provides family, subfamily, and tribe. The branch including chimpanzees, bonobos, and humans lacks a level in this hierarchy. Some scientists advocate recognizing this branch as the tribe Hominini, which would make humans and their fossil relatives a subtribe, Hominina. A different approach would be to introduce more levels: historically, below the family level, taxonomists have used categories like “infrafamily”, “hypersubfamily”, and “supersubfamily”.

None of this would matter very much to the everyday use of these groups if their names were not tied to their level. McKenna and Bell discuss this as well. What I didn’t realize is that the use of level-specific suffixes was itself a post-Linnaean innovation, with the laudable aim of making levels more consistent:

With the proliferation of ranked categories that had increased steadily from Linnaeus's original six, came also a perceived need to encode the names of taxa themselves as signifying that the taxa for which they stood were members of some particular rank. In each particular discipline, the names of family-group taxa came to have various standardized inflected suffixes linked to the perceived rank. Thus, in zoology, a name ending in "-idae", signifies a taxon at family rank. Latreille (1796), who introduced the family category to zoology, did not use the suffix "-idae". That modification was provided later by Kirby (1815), and has not only stuck but is now legislated by the ICZN. We think of these inflective conventions as part of the "Linnaean System" but, in fact, they are arbitrary post-Linnaean additions to it, originally added for the mnemonic usefulness but now the occasion of much pedantic drudgery whenever taxonomic rank is changed or organisms are transferred from one kingdom to another.

There has been plenty of pedantic drudgery associated with changing hominin taxonomy, and that’s not counting the many holdouts.

Anyway, what if our taxonomies routinely used more and more levels? Wouldn’t that get hard to keep track of? It’s fascinating to me that McKenna and Bell defend their 1997 classification of 25 levels as "no more difficult (for humans) to learn than the alphabet". But this passage really made my jaw drop:

In the present classification of more than 5000 mammalian taxa that are assigned generic or subgeneric rank, additional categories have proven useful in depicting in words a somewhat richer hierarchical arrangement of mammals than that found in Simpson's (1945) classification. There are now many more mammalian taxa to classify than was the case in 1945, both in real terms and because of the efforts of "splitters" and paleontological "apparent lineage choppers". Increasingly, most of these named organisms are made known from fossil materials only, sometimes very poorly represented. Moreover, the cladistic revolution in systematics has resulted in far more attention to phylogeny than was the case in the 1940s. The 25 taxonomic levels used in our classification actually fall closer to the theoretical minimum, 13 (see below for formula), than to the thousands that would be required if the classification reflected a completely pectinate (and very unlikely) sequence of taxa. The hierarchical level sequence is no more difficult (for humans) to learn than the alphabet, or probably less so in that some of the levels are very easy to remember because of meaningful prefixes and suffixes. We see no particular reason why, if useful, additional categories (or simply unranked taxa) should not be proposed (or revived). Computers can remember them for us. Indeed, in the program Unitaxon (TM) used to process the data resulting from this classification, facilities exist to expand and keep track of the names, number and sequence of taxonomic levels indefinitely, if deemed appropriate.

Ha! We don’t need to remember taxonomic categories because the computers can remember them for us!

If you’re interested in outsourcing your taxonomic knowledge to a computer, you can still see the Web 1.0 page for Unitaxon, listed as a “software product from yesteryear”. Here’s an excerpt:

Unitaxon Browser 2.0 is available directly from its developer, Mathemaesthetics, Inc. The application is distributed on CD for both Macintosh (System 8 and 9) and Windows (95 or later) operating systems. The Browser will work in Classic compatibility mode under Mac OS X. For maximum performance reasons, the Browser reads the entire classification into memory when you open the file. Depending on the level of taxon commenting in the database, the overhead is currently about 1MB RAM per 1200 taxa on average.
For instance, the most recent classification of the Mammals has been placed on the net in Unitaxon Browser format. It is our expectation and hope that other large taxonomic databases will follow suit.
The price per copy for the Browser is US $128, plus shipping/handling.

Well, that’s one solution.

The changes in 20 years have been enormous. Even the link on the Unitaxon website to “vertebrate paleontologists” at the AMNH no longer connects to vertebrate paleontologists — the AMNH site now redirects the link to its “Center for Biodiversity and Conservation”. Malcolm McKenna passed away in 2008.

Anyway, the McKenna and Bell introduction has a lot of really interesting and useful thoughts about taxonomy and classification. The volume was published at the height of cladistic morphological classification, just as DNA evidence was starting to become a potent source of information about the deep relationships of mammal groups. As such, the McKenna-Bell classification has become outmoded in many details, even if some of the guiding concepts behind their taxonomy remain valuable.

Quote: Blumenbach looking for the horned rabbit

I have open Johann Blumenbach’s A Short System of Comparative Anatomy, in the 1807 English translation by William Lawrence. The full text is on Google Books.

In a footnote to page 24, where he describes the various horns and antlers of the group known as the Pecora, Blumenbach describes the jackalope!

I have collected about twenty instances, from the middle of the 16th century downwards, in which horned hares are said to have been found, with small branches like those of the roebuck, both in different parts of Europe, and in the East Indies. Were this fact ascertained, it would furnish another striking point in which these animals resemble the pecora. The fact is suspicious, because I have not yet been sufficiently satisfied of a single instance in which the horns were on the hare's head, although every trouble has been taken to procure information; and they appear in the drawings, which I posses [sic], by far too large for a hare.

It seems likely that the source of this idea was the muntjac, or other small cervids. Still, it’s not hard to imagine Americans heading west, thinking that some of the large jackrabbits might turn out to be antelope-like in more ways than one.

Quote: Weidenreich on the resistance to Neandertals as human ancestors

Franz Weidenreich, in his 1943 article, “The ‘Neanderthal Man’ and the ancestors of ‘Homo sapiens’” (p. 44):

At the time when Darwin and Huxley first claimed that Man evolved from a primate similar to the anthropoids of today, little evidence substantiated by palaeontological facts was available. In the meantime, however, quite a number of fossil forms have been recovered all of which may justifiably be claimed as “missing links.” Yet, strangely enough, the more such intermediate types came to light, the less was the readiness of acknowledging them as ancestors of Homo sapiens. In many cases the scepticism apparently was the last bastion from which the final acceptance of Darwin’s theory could be warded off with a certain air of scientism. In other cases, it was the pure respect for traditional axioms when advanced by authorities.

Weidenreich is honorary patron of the Neandertal anti-defamation league.

How will ancient proteins change paleoanthropology?

Nature has a news feature by Matthew Warren that provides a nice background to recent work on proteomics of fossil hominins: “Move over, DNA: ancient proteins are starting to reveal humanity’s history”.

As with most areas of new technology, news stories are training the public to expect that lab folks will wave a magic wand and soon answer all questions. The reality is more complicated.

Protein sequences from teeth and bones hold tremendous promise for understanding the evolution and relationships of ancient hominins. But it’s important to separate the science from the hype. Protein sequences provide information that is orders of magnitude more limited than genome sequences. A single low-coverage genome sequence built from three Neanderthal specimens led to the observation that humans today have Neanderthal ancestry, in many cases as much as 2 or 3 percent. That observation—arguably the single most important ancient DNA result—is outside the power of protein work. Another area where ancient DNA has transformed our knowledge is the divergence times of ancient hominin groups. Protein data will provide some information about such divergences in cases where DNA is unobtainable. But the precision of such dates will be very low, because the number of changes in amino acid sequences among hominoids is small.

That being said, those of us who work with fossil hominin material are excited about the potential of proteins. They are already transforming some areas, and they have much to offer in others. For example:

  1. Proteins are great for identifying hominin fragments too small for reliable anatomical identification. In this area, proteins are already prime-time science. This has been one important application of the ZooMS approach, with tremendous success under the leadership of Katerina Douka on the Denisova Cave bone collection.

  2. Along similar lines, ZooMS has started to make important contributions to understanding the species composition of fragmented faunal collections. We could use additional investigation of how comparable such results are to traditional, less-destructive quantification of faunal collections based on identifiable fragments.

  3. As related in Warren’s story, protein sequences have brought new information to phylogenetic questions of extinct groups, such as sloths, South American ungulates, and rhinoceroses. Hominin systematics is a mess. Few studies agree on the shape of the phylogenetic tree of human relatives. This confusion comes from a combination of too little morphological data about some fossil fragments, and too many instances of morphological convergence or parallelism in various hominin lineages. Protein sequences don’t necessarily solve these problems—there will be many lineages with no data, few changes among most species, and convergent amino acid changes are by no means impossible in hominins. But everyone who works on hominins is interested in any approach that can squeeze more information out of the record.

The article spends much time on this third area of future promise, which is the most susceptible to hype. This is partly driven by the recent recognition of the Denisovan affinity of the mandibular fragment from Xiahe, China, based upon protein sequence. That similarity is compelling, but it is based upon very little information—one single change in the collagen sequence that has been observed in a Denisova Cave genome and not in living humans or Neandertals.

The enamel proteome has more information than collagen, and many ancient hominins are more different from us than Neandertals and Denisovans are. That means when we look at ancient hominins like Homo erectus, we can expect a bit more information about their evolutionary divergence from us. That’s a time frame during which we know much less about relationships:

Go back one million years or more, and things get even less clear. H. erectus, for example, first emerged in Africa around 1.9 million years ago, but without DNA evidence, it remains uncertain exactly how it is related to later hominins, including H. sapiens.

I think the most likely result of this research is that “it” is going to be “they”. Protein analysis is not the only new approach shedding new light on the relationships of Early and Middle Pleistocene hominins, and all of the new information is raising new questions about how we recognize and understand species and populations.

Work on the internal structure of the teeth, involving Maria Martinón-Torres and others, is starting to provide some fascinating evidence that “Homo erectus” in Asia is a complicated story, involving branches that haven’t previously been recognized. Gross morphological evidence has long suggested that African “Homo erectus” is likewise complicated, and in Africa work on the internal anatomy of teeth is only just getting started.

The big challenge is this: We live today in a world where “lumping versus splitting” is no longer an interesting question about extinct hominins. We know that populations of hominins existed for hundreds of thousands of years in relative isolation, with very little or no gene flow, and yet still interbred with each other. That interbreeding shaped the evolution of later populations, including people living today, even when it makes up a fairly small proportion of their ancestry.

So, how do we examine the evolution of earlier populations? Even with DNA evidence, there is a limit to our ability to test for layers of hybridization and introgression, because of the small and geographically limited samples we have. With morphology and low-information biomolecules like proteins, we are going to need a new synthesis to understand the connections between biomolecules, morphology, and development.

This is nothing to be afraid of. New sources of evidence are going to make it possible to fill in some broad strokes that are currently monochrome. We’re going to see multiple populations of H. erectus—whether we will call those species, or paleodemes, or populations, is not yet clear, and will depend on their temporal and spatial patterning.

I’m less sure that we will resolve much about hominin phylogeny. It’s very bad now. Three different methods of looking at the phylogenetic placement of Homo naledi have led to three very different results, and similar problems have emerged with Australopithecus sediba, Homo floresiensis, and other species. These are some of the most complete skeletal samples of any hominin species, and our field cannot reliably place them on a tree. Proteomics will provide some new evidence to add to the tree, but it may only deepen some of the problems.

Meanwhile, as the science of proteomics develops, we need to be vigilant to avoid some of the mistakes that have been made by ancient DNA researchers. The Nature article includes a link to last year’s story by Ewen Callaway: “Divided by DNA: The uneasy relationship between archaeology and ancient genomics”. That story went into some details about the conflicts that have arisen between ancient DNA specialists and archaeologists. These groups have different histories, and few ancient DNA specialists have made the effort to understand some of the deeper, darker history of archaeology, with troubling results for their work. Meanwhile, both groups have a history of failure to work effectively with descendant communities and local researchers in many countries.

Although Warren’s new story about proteomics does not focus on this element, ancient proteins will pose many of the same problems as ancient DNA sampling. The sampling is destructive, and proteins possibly have greater application to cultural material than to the human remains themselves. Meanwhile, integrating protein results, which are limited, into a more complex picture involving many kinds of information will be a challenge.

I look at those challenges as opportunities. What a single laboratory might perceive as a “problem” of interpretation is a place where leadership from other researchers is most valuable. I know many of the people who are making progress in proteomics, and I think they’re on a good track at the moment to make better science with these kinds of collaborations.

Quote: Keith's awkward analogy for Neanderthal dental anatomy

Here’s a painful analogy deployed by Arthur Keith (1924:253) for Neandertal dental anatomy:

The nature of the taurodontal change in tooth formation may be explained by the use of a homely illustration. It is the fashion in Europe to separate the legs of trousers---which correspond to the roots of the teeth---up to the fork of the thighs. But there have been fashions where the seat of trousers, corresponding to the floor of the pulp cavity, has been carried down to the level of the knees, or even to the ankles. In teeth of the taurodont form the seat is carried to correspondingly low levels, or, as in this example from Ghar Dalam (Fig. 1, E), carried to the level of the ground and thus turned into a skirt.

Quote: Darwin on the line of progenitors leading to humans

In the Descent of Man, Charles Darwin ends his discussion of the relationship of other animals to humans with this evocative paragraph:

Thus we have given to man a pedigree of prodigious length, but not, it may be said, of noble quality. The world, it has often been remarked, appears as if it had long been preparing for the advent of man; and this, in one sense is strictly true, for he owes his birth to a long line of progenitors. If any single link in this chain had never existed, man would not have been exactly what he now is. Unless we wilfully close our eyes, we may, with our present knowledge, approximately recognise our parentage; nor need we feel ashamed of it. The most humble organism is something much higher than the inorganic dust under our feet; and no one with an unbiassed mind can study any living creature, however humble, without being struck with enthusiasm at its marvellous structure and properties.

Why won't Science publish replication studies?

An article in Slate by Kevin Arceneaux and coworkers recounts their experiences trying to publish a replication of a high-profile psychology study in Science: “We Tried to Publish a Replication of a Science Paper in Science. The Journal Refused.”

The story concerns a 2008 study in Science that claimed that people react differently to scary pictures depending on whether they are political liberals or conservatives. The study was widely publicized at the time of publication and has become a mainstay of the subsequent research literature.

There’s one problem: It didn’t replicate. Arceneaux and coworkers explain how they got grants to set up expensive equipment in their laboratories and tried to extend the work with hundreds of subjects. And failed. And then they tried to replicate the exact circumstances of the original study, with the input of the original authors, with a larger sample of subjects. And failed.

They wrote it up and submitted it to Science. Desk reject. The story is well worth reading; this is the authors’ bottom line:

We believe that it is bad policy for journals like Science to publish big, bold ideas and then leave it to subfield journals to publish replications showing that those ideas aren’t so accurate after all. Subfield journals are less visible, meaning the message often fails to reach the broader public. They are also less authoritative, meaning the failed replication will have less of an impact on the field if it is not published by Science.

Science is published by the American Association for the Advancement of Science. The cause of science is not advanced by publishing studies that attract huge public attention, but then failing to publish the results when those studies fail to replicate.

I am surprised that the editors of the journal do not see the opportunity here to establish a responsible precedent. Well-powered, pre-registered studies that revisit splashy research findings are the way that future science is going to happen. As it is, Science is appealing to researchers who design underpowered studies that produce counterintuitive results. As we’ve seen in the last few years from the “replication crisis”, those studies are very likely to turn out to be bunk.

I would add one thing. To me, here’s an irritating part of the story that is not getting the attention it deserves:

We had raised funds to create labs with expensive equipment for measuring physiological reactions, because we were excited by the possibilities that the 2008 research opened for us.

That’s the power of a research study published in Science: it changes the funding environment for all scientists in a field. Such studies establish for referees and grant agencies what is worth investing time and resources in.

That’s bad. No single study should have that kind of influence. But the reality is that new research directions often come from just such single cases, and a study like this can start a rush to be in the first wave of researchers investigating a new phenomenon. When those results are bunk, all that time and money—that could have been spent in more promising directions—is wasted.

Quote: Martin on the difficulties of reconstructing human migrations

For a writing project, I’ve been looking at some pre-Darwinian accounts of human origins and relationships. One of the most detailed was published in 1841 by William Martin, “A General Introduction to the Natural History of Mammiferous Animals”. This is a haphazard book, something like a rambling Dickensian biology textbook, but I thought it was worth sharing this paragraph on the difficulty of examining the relationships of humans:

Let it also be remembered, that the migrations of Man are, for the most part, not single acts, performed by one tribe, and, so to speak, finished at once; but they have generally been like the waves of the advancing tide---the way once open, swarm has followed swarm, the movement has been general, and years have passed, till, at length, the flood has either ceased to roll on, or has taken some new direction. Meanwhile the invaders have become amalgamated with the more ancient possessors of the soil, and their commingled descendants again with other invaders, in their turn. Most nations, besides, if even relics of their early history be by chance preserved, have fondly claimed for themselves a romantic or heroic origin---a descent from gods, or god-like men---have blended facts with fables, between which it is not a little difficult to separate, and have assigned the most extravagant antiquity to their commencement. Hence, then, the difficulty of forming a clear digest of the subject, and of tracing the branches and offsets to their primitive stocks; hence the uncertainty which attends the most plausible hypotheses.

Scaling data, some hints in dimension reduction methods

PLoS Computational Biology has a very helpful article by Lan Huong Nguyen and Susan Holmes meant to help people with statistical visualizations: “Ten quick tips for effective dimensionality reduction”.

Commonly, people examining large datasets with many dimensions will present their results with figures that show only two dimensions. In genetics, most of them will use principal components analysis (PCA) to reduce thousands of dimensions into two. In morphology, PCA is also very common, although some specialists may use Procrustes fitting or other methods. This paper by Nguyen and Holmes runs through several common misconceptions and errors in choosing methods to reduce dimensions and displaying the results of such procedures.

One of the biggest: A PCA plot should be scaled according to the variances of the dimensions, not an arbitrary scale. Otherwise, data that are really normally distributed may look anything but.
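
To make the point concrete, here is a minimal sketch in Python, assuming scikit-learn and matplotlib are available. The data are just two simulated Gaussian clusters, loosely echoing the paper’s Figure 2; the fix is simply to keep one unit on PC1 equal to one unit on PC2 instead of letting the plotting library stretch the axes independently:

```python
# Minimal sketch of the aspect-ratio point, with simulated data only.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Two Gaussian clusters in 10 dimensions, loosely echoing the paper's Figure 2.
X = np.vstack([rng.normal(0, 1, (100, 10)),
               rng.normal(2, 1, (100, 10))])

scores = PCA(n_components=2).fit_transform(X)

fig, (ax_bad, ax_good) = plt.subplots(1, 2, figsize=(8, 4))
ax_bad.scatter(scores[:, 0], scores[:, 1], s=8)
ax_bad.set_title("auto aspect (distorted)")
ax_good.scatter(scores[:, 0], scores[:, 1], s=8)
ax_good.set_aspect("equal")        # one unit on PC1 == one unit on PC2
ax_good.set_title("equal aspect (variance preserved)")
plt.show()
```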

Figure 2 from Nguyen and Holmes (2019), showing the effects of different aspect ratios upon visualizations of PCA results. These charts all show the same data, which were generated by selecting two sets of normally distributed (Gaussian) random variables with two centers. The two clusters are red and blue in the final frame, which has an aspect ratio based on the variance in the data. The others show incorrectly scaled data, which are easily misinterpreted. I would add that morphological datasets are based on much smaller samples, and more easily give rise to false interpretations.

It's a frequent irritation to me that for data visualizations we are so often at the mercy of people who write up papers but do not share original data. So in presentations or for secondary work you're left relying upon someone else's PCA plot. These are almost always composed with bad choices of colors, unreadable fonts, and weird scales that make no sense. Don't get me wrong, there are some beautiful data visualizations out there. But the average paper in morphology or genetics is full of stinkers. And it would be so easy to just provide the original data so that those of us who re-use data in other contexts can make your results look better. Share!

More on posters

Nell Greenfieldboyce of NPR covers the trend toward making posters at academic conferences more like billboards on the highway: “To Save The Science Poster, Researchers Want To Kill It And Start Over”.

"Imagine you're driving down the highway, and you see billboards, but instead of an image and a catchy phrase, there's paragraphs of text all over the billboards," says Morrison. "That's what we're seeing, we're walking through a room full of billboards with paragraphs of text all over them."
It's impossible to take in unless you stop in front of a poster to read it. But there are so many posters that we just keep moving.
"It's mostly noise. You're just skimming desperately," says Morrison, "and you're going to miss a lot as you walk by." Maybe people stop and engage with one or two posters, Morrison says, but it generally takes time to even figure out what the poster is about. That means researchers often spend time with a poster that turns out to be not all that significant for them.

Anything to make posters more engaging and useful is worth doing. I wouldn’t go for this style myself, because in paleoanthropology and genetics we can rely instead upon compelling graphics, which are not part of the “billboard” style. But I completely agree with the basic idea: presenters try to cram too much detail into their posters, and usually fail to make the editing decisions that would reinforce their takeaway points.

The important thing with any presentation is to build with your audience in mind. Posters are a style of presentation. They are more personal than a podium presentation, and that means that the poster should serve the purpose of introducing the presenter to the audience.

For a scientific meeting with 30,000 attendees, and many non-presenters who are attending for professional enrichment, the billboard poster may be the best way of focusing the audience on a single takeaway. But for a smaller conference where building relationships with other professionals is the main goal, a more nuanced approach is probably the way to go.