Scandal: grant reviewers favor work that is "original," "feasible"

OK, so that doesn’t sound like a scandal. Yet, that’s one of the themes of this Inside Higher Ed article:

Michèle Lamont decided to explore excellence by studying one of the primary mechanisms used by higher education to -- in theory -- reward excellence: scholarly peer review. Applying sociological and other disciplinary approaches to her study, Lamont won the right to observe peer review panels that are normally closed to all outsiders. And she was able to interview peer review panelists before and after their meetings, examine notes of reviewers before and after decision-making meetings, and gain access to information on the outcomes of these decisions.

Well, that sounds like it could be interesting. And maybe it was – she wrote a book describing her work. But get a load of some of the “problems” she found:

On diversity, Lamont's research finds that peer reviewers do factor it in (although the extent to which they do so varies by discipline). But peer reviewers are much more likely to care about diversity of research topic or institution than gender or race, she finds.

"I think excellence means nothing, she said, suggesting that panels be honest about the criteria they use. I think you have to give the criteria. Typically it's originality, feasibility, and also the social and intellectual significance.

In other words, the quality of the proposed research takes a front seat. Good.

Lamont did observe some insidious practices. Panelists often were less critical of applicants from more prestigious institutions. She confusingly calls this "institutional affirmative action"; I would call it Ivy League bigotry. And here’s what the article says about humanities reviewers:

Many humanities professors, she writes, rank what promises to be fascinating above what may turn out to be true. She quotes an English professor she observed explaining the value of a particular project: "My thing is, even if it doesn't work, I think it will provoke really fascinating conversations. So I was really not interested in whether it's true or not."

Yes…ah…well…OK, then. Seems to me they might get better results with a random number generator.

On the whole, she stresses that reviewers are biased toward their own interests and backgrounds. I think that’s part of the system we have to accept: otherwise, we may as well have robots evaluate grant applications. People would be a lot less likely to serve on panels if they couldn’t put in a voice favoring their own perspective on the field – not direct conflicts of interest, but intellectual philosophy and grounding.

I wonder if Lamont was able to see the influence of those who select the panelists. Most people know that the composition of the panel determines funding successes and failures, and the people who invite or choose panelists ultimately choose the direction of the funding. That’s a more secretive process than the panel deliberations themselves.