Anthropology graduate program rankings

The National Research Council (NRC) released its “rankings” of graduate programs in U.S. universities this week. I say “rankings” because they didn’t actually present a single ranking, or anything that could be interpreted as a single ranking.

Science (paywall) and Nature (free) both have news stories about the faux-rankings this week. Science calls them “Mr. Potato Head rankings” because you can switch in different criteria:

Given all those caveats, some university administrators are taking the rankings with more than a grain of salt. "We're pleased with how well our own programs ranked," says Patricia Gumport, dean of the graduate school at Stanford University. "But we have concerns about the methodology. So we're not planning to use the range of rankings."
It's easy to see the source of Gumport's concern by looking at what the assessment says about Stanford's anthropology department, to pick just one example. The department is ranked between 13th and 47th on the R scale and between 3rd and 9th on the S scale. In addition, it falls between 3rd and 14th on research activity, between 1st and 43rd on student support and outcomes, and between 12th and 33rd on diversity.

Totally meaningless. What does it mean for an anthropology department to be between 1st and 43rd? It’s also worth pointing out that (at the time of the survey) Stanford had two “anthropology” departments – one mostly biological-archaeological and one mostly cultural – and they have very different ranges of rankings. Duke is in the same situation.

That makes a lot of the “rankings” apples-to-oranges affairs. At Wisconsin, we do really well on the “faculty research output” ranking – between 3rd and 10th, in an integrated department with three fields. The schools that rank plainly ahead of us are either biology-only or heavily skewed toward biology. So we seem pretty good on that scale.

But there’s a lot of ambiguity. How are different kinds of publications weighted, for example? According to the published statistics, the highest-ranking departments averaged one publication per faculty member in 2006, which seems impossibly low. And how much does a book count toward the statistics compared to a journal article?

As someone who participated by providing information to the ranking committee, I can say flatly that this was a total waste of everyone’s time. What does anyone get from this to compensate for the enormous time spent compiling statistics and filling out paperwork and surveys? We waited five years and don’t even get a ranking!

The most obvious effect of the obfuscatory scheme is to prevent people from printing a top-10 or top-20 list. How could anyone? Fifty schools might plausibly fall in the top 20 under each ranking criterion!

Is there any newsworthy aspect of the results, such as they are?

Here’s one: the “reputation” ranks and “objective ranks” differ hugely for the top few anthropology programs. For example, the University of Chicago has a reputation rank between 1st and 5th, but the statistical rank is between 25th and 46th. That’s a huge difference! The University of Michigan (my alma mater) also has a reputation rank between 1st and 5th, but its statistical rank is between 5th and 15th. Harvard has a high rank in both reputation (1st to 4th) and statistics (2nd to 5th), but its department split after the statistics were compiled.

By contrast, the top-statistically-ranked programs have much lower reputation scores. Duke’s biology-dominated department has a statistical rank of 1st or 2nd, but a reputation rank of 16th to 49th. Penn State likewise has a statistical rank of 1st or 2nd but a reputation rank of 8th to 19th.

Clearly, reputation is not an accurate guide to faculty research activity, student support, or student outcomes. Some departments are coasting on the reputations of long-dead scholars, while others haven’t yet built the tradition that comes from placing students widely around the country.

Some of these up-and-comers are doing really well placing students in industry and across other biological fields – which may not build their reputation in the academic world, but is clearly providing value for their students.

How should students interested in anthropology use the data? A student should be looking first and foremost at which scholars are doing excellent work in the field that interests them. For anthropology students, this consideration is far more specific than the department level; it goes to the individual faculty member. The job market has been such that the real stars in our field are spread among many different institutions, whose ranks are based on other factors.

Find the people whose work you would be proud to have been involved with. Engage with their work. Try to understand the process that they used to create it, whether that was based in laboratory, field, observational or theoretical approaches.

Then, once you have a list of people, consider their institutions. Write to the department’s graduate administrator to find out about student life issues. See if you can get in touch with or meet graduate students currently in the program. Get a feel for the place – including other departments, whose courses and faculty members you will also be relying on. Find out what kind of support they offer across the years you will be pursuing your degree. If they have a very high time to degree, find out why: are students returning after leaves of absence? What proportion require training in indigenous languages? What proportion are involved in field projects?

I don’t think that the data from the NRC rankings are very helpful for these kinds of inquiries. It is important to realize that reputation is overrated, but also important to understand that in the academic world reputation still matters. There are some programs choked with students where individuals can’t get any attention. And there are some great opportunities out there waiting for the right student to come along.

You want to avoid the clogged arteries and find the opportunities to shine.