The Unfairness of Citations
Over the past 10-15 years, citation indices have overtaken simple counts of papers as the means of evaluating the research success of academic faculty. And, by and large, this is a step forward, though not nearly as big a step as you might think. Some recent adventures in deciding which papers to cite when writing for a letter journal help illustrate the point.
First, a letter journal only gives you so much space; how you use it is up to you. So there are all kinds of tradeoffs: figures sometimes get reduced to the point of absurdity despite journal guidelines against such things, and text gets so terse you seek relief in reading the legalese of a EULA from Microsoft. But the big tradeoff is in what previous work you cite. Back when earth science letter journals were starting up, the literature was fairly small (lots of pre-plate tectonics material had little relevance, and the root observational work was still a relatively compact literature). Today, though, you can fairly cite 10 or 20 papers on even some of the most obscure topics. There simply isn't room in a letter journal for all that. What should you cite, and what should you leave out?
What you should cite is the most primary set of papers: the ones that made the observation others have since heaped interpretation upon. But here's the rub: you might want to cite one of those later papers for some other aspect of the topic. If you just cite that paper for both uses, you save several lines of text, but you've now given the impression that the derived paper was the original source. Even worse, some background elements might require 10 or 15 references to cover fully; the solution is often simply not to cite anything, hoping the reader will recall that this is rather well-known information. Of course this risks a neophyte hitting this paper and thinking they have reached the basement on this topic.
This pressure can result in minor absurdities. GG once reviewed a letter journal paper that cited one of his papers; in review, he suggested that the citation be replaced by a citation to another, more primary paper [update note: this was a paper not written by GG], but when the article appeared, the citation to the GG paper was still there. Why? The author later confided that she saved a citation by doing that (later in the paper she needed to cite GG's paper anyway). So a more deserving paper was denied its citation.
Although this is most acute for letter journal papers, it can extend into lengthier publications through a combination of force of habit and copying the citations those letter journals have provided. [A rather more bizarre problem is the mis-citation that propagates through the literature: transposed page numbers, a wrong volume number, etc. Most committees counting citations just use the numbers that Web of Science or Google Scholar or some similar tool turns up for the proper citation. Such mis-citations tell you who is actually looking at the original paper and who is just copying the reference from an earlier paper.]
So in counting citations there is the risk that papers that conveniently contain lots of background are cited in place of more primary material, which is then citation-starved. So while citations are a somewhat better metric than simple counts of papers, you do have to exercise caution. A review paper, for instance, might get a lot of citations not because of any special insight but because it is a convenient means of citing a lot of other literature.