Few topics excite more controversy among scientists. When I spoke about the h-index to the German Physical Society a few years ago, the huge auditorium was packed. Some deplore it; some find it useful. Some welcome it as a defence against the subjective capriciousness of review and tenure boards.
No one officially endorses the h-index for evaluation, but scientists confess that they use it all the time as an informal way of, say, assessing applicants for a job. The trouble is that it's precisely for average scientists that the index works rather poorly: small differences in small h-indices don't tell you very much.
In anthropology, the h-index has almost no utility at the times it matters most – hiring and tenure. Citations follow a long-tailed distribution: a few papers usually capture the majority of the citations to a scholar's work, while most papers go relatively uncited. The h-index discounts the citations from one or two super-highly-cited papers, in an attempt to quantify more of the shape of the distribution of citations across an individual's works. But publication and citation counts for early-career scholars are simply too low for that shape to differ much among scholars who have published the same number of papers. Just as the distribution of citations across an individual's papers has a long tail, so does the distribution of citations among scholars. Publication count gives a proxy for effort, but whether that effort has translated into important effects is generally not well indicated by citations until later in the career.
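To make the discounting concrete, here is a minimal sketch of the standard h-index definition – the largest h such that a scholar has h papers with at least h citations each – applied to two hypothetical (invented for illustration) citation records:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# A long-tailed record (one blockbuster paper, mostly uncited work)
# and a more even record both yield the same h-index:
print(h_index([250, 4, 3, 1, 0, 0, 0]))   # -> 3
print(h_index([40, 35, 30, 3, 2, 1, 0]))  # -> 3
```

Note how the 250-citation paper contributes no more to the index than a 30-citation paper would: with few publications, very different records collapse onto the same small h.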
Metrics are a way for promotion committees to deflect accountability. Stag antlers work, in principle, because they are honest signals: only a stag fit enough to survive and thrive despite carrying such a handicap can grow them. The analogous claim would be that only genuinely strong scholars can accumulate high citation counts under the handicaps of academic life. If that's true of later-career scholars, it's probably a sign that the handicaps should be removed for younger academics!