Tag Archive | scientific communication

Prestige

The Nature Index stuff made GG wonder: just how highly cited are the best geoscience papers in those prestigious journals, and how do they stack up against papers in other journals? So here are some results from Web of Science.

For ease of calculation, we’ll look only at journals that are entirely earth science, and limit the search to papers published since 1960. So, for the prestigious journals on the Nature list, here are the citation counts of the top three papers in each (a sketch of how to make this kind of tally follows the lists below):

  • Earth and Planetary Science Letters (1966 start): 6600, 5835, 2482
  • Geochimica et Cosmochimica Acta: 8167, 3035, 2348
  • Geology (1972 start): 1562, 1158, 1113
  • Geophysical Research Letters (1974 start): 2342, 2241, 1239
  • Journal of Geophysical Research: Solid Earth (1991 start; 1985-1991 as Solid Earth and Planets, before that just JGR B): 2583, 2529, 2516
  • Nature Geoscience (2008 start): 1374, 1017, 931

Four of those citation counts (two each in EPSL and GCA) exceed 3000. Now here are some other reasonably prominent geoscience journals with their top three citation counts:

  • Applied Geochemistry (since 1987): 3593, 776, 692
  • Bulletin of the Seismological Society of America: 3237, 2388, 1898
  • Chemical Geology (since 1968 in WoS): 5846, 2639, 2087
  • Contributions to Mineralogy and Petrology (since 1969 in WoS): 2210, 2090, 2042
  • GSA Bulletin: 2143, 1560, 1246
  • Geophysical Journal RAS (to 1987)/GJI (1989-): 3454, 2763, 1910
  • Journal of Geology: 2105, 1905, 1850
  • Lithos (1975 start in WoS): 1226, 1147, 976
  • Tectonics (1981 start): 1263, 1021, 785
  • Pure and Applied Geophysics (1964 start in WoS): 2342, 610, 599

Several of the most highly cited papers are in review journals not listed here (these are quite prestigious, and the good review papers often carry more than just a review). But looking at these numbers, it is hard to argue that the second list is much different from the first in producing extremely highly cited papers, and you could argue that it might be just as important a set of journals as the one used in the Nature Index. Even a journal as uneven as Tectonophysics occasionally has a gem in it (1756 citations), and specialty journals like Quaternary Research (2253) and Precambrian Research (1267) often produce influential papers. Even some of the newer electronic journals (G^3, Geosphere) have well-cited papers despite starting this century.
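(For anyone who wants to repeat this exercise, here is a minimal sketch of the tallying step, assuming the Web of Science results have been exported to a CSV; the file name and column names below are hypothetical.)

```python
# Minimal sketch: top-N most-cited papers per journal from a hypothetical
# Web of Science export (CSV columns assumed: journal, title, year, citations).
import csv
from collections import defaultdict

def top_cited(csv_path, n=3, since=1960):
    papers = defaultdict(list)  # journal -> list of (citations, title)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if int(row["year"]) >= since:
                papers[row["journal"]].append((int(row["citations"]), row["title"]))
    # Sort each journal's papers by citation count and keep the top n
    return {j: sorted(rows, reverse=True)[:n] for j, rows in papers.items()}

if __name__ == "__main__":
    for journal, top in sorted(top_cited("wos_export.csv").items()):
        print(journal + ": " + ", ".join(str(c) for c, _ in top))
```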

The message? Prestige is earned by what you say, not where you say it.

Metrical Madness

And now, for those who enjoy placing their institution above others, we introduce yet another metric! (Applause, hosannas, people falling to the floor in ecstasy.) And, as is usually the case, the promotion people at the relevant universities (like GG’s) push any favorable numbers out the door, as they did here.

The new metric? Well, OK, technically it is now four years old, but it seems to have gained some prominence with a recent modification: the Nature Index. And just what does it measure? It simply counts the number of articles published over the past year in a subset of “prestigious” journals and credits them to the authors’ institutions. Which journals are prestigious? You wouldn’t be wrong to say Nature journals, many of which make the cut. In earth and environmental science (where CU ranked in the top 10, much to the pleasure of the university’s promoters) the list is:

Read More…
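Since the metric really is just a tally, here is a minimal sketch of the counting exercise described above; the journal whitelist and article records are purely illustrative, and any fractional-credit refinement is ignored.

```python
# Minimal sketch of a Nature Index-style tally: count articles appearing in a
# whitelist of journals, credited to each affiliated institution.
# Journal names and records here are purely illustrative.
from collections import Counter

PRESTIGE_JOURNALS = {"Nature", "Nature Geoscience", "Science"}  # illustrative subset

def tally(articles):
    """articles: iterable of dicts with 'journal' and 'institutions' keys."""
    counts = Counter()
    for art in articles:
        if art["journal"] in PRESTIGE_JOURNALS:
            for inst in set(art["institutions"]):  # one credit per institution per article
                counts[inst] += 1
    return counts

articles = [
    {"journal": "Nature Geoscience", "institutions": ["CU Boulder", "MIT"]},
    {"journal": "Tectonics", "institutions": ["CU Boulder"]},  # not on the list, not counted
]
print(tally(articles).most_common())  # [('CU Boulder', 1), ('MIT', 1)]
```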

Count….down, please

Sneha Kulkarni asks at the Scholarly Kitchen:

Is the deluge of scientific publications taking us closer to unraveling unanswered questions? Or is it adding to the noise that makes identifying the really significant publications difficult?

One guess as to the answer.

We’ve been in this neighborhood a few times before, but it bears repeating. Simply making the reward structure in science revolve around numbers of papers and their derivatives (like h-indices) is just plain bad. As the post reminds us, it burdens reviewers, it tempts shingling (slicing one study into overlapping slivers of papers), it encourages sloppiness if not outright dishonesty, it clutters the literature, maybe even deletes all your email. Maybe we should rename the process “publish and perish.”
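(For reference, the h-index mentioned above is simple arithmetic: the largest h such that h of an author’s papers each have at least h citations. A minimal sketch, with made-up citation counts:)

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```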

Unfit for Print Update

A while back GG groused about why journals continue to make electronic versions of their publications look exactly like the paper copies.  Tiny strides are made from time to time (Geosphere, the journal GG has worked with the most, changed its layout a while ago from portrait to landscape and got rid of the awful, awful, AWFUL practice of splitting a single page between material in landscape and portrait orientations), but by and large materials remain static images.  While GG has been focused on trying to get things to work within the current structure of pdf files (which do allow some interactivity), others over the years have advocated totally different means of distributing science.  For instance, Jon Claerbout years ago advocated for the “reproducible paper” (which we previously discussed when considering issues with the “geophysics paper of the future”).

This brings us to a new Atlantic article titled The Scientific Paper Is Obsolete, which outlines two major efforts to get around the limitations of paper by creating a totally new format. As spelled out in the article, it is like the battle between Apple and Google over phone operating systems, or probably more accurately like Windows and Linux battling over how to make a desktop operating system. The article’s author, James Somers, adopts Eric Raymond’s earlier characterization of desktop OS strategies as a battle of cathedral builders versus the bazaar.  Heading Team Cathedral is Stephen Wolfram and Mathematica, where the replacement for the scientific paper would be a nicely prepared Mathematica notebook. Team Bazaar is all open-source, having built Python up into a system called Jupyter, which also makes notebooks. [Oddly missing from the article is any mention of Matlab, which shares many traits with Mathematica and is far more popular in engineering and much of earth science.]
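To make the notebook idea concrete, here is the kind of minimal, self-contained cell a Jupyter-style “paper” might embed alongside its prose, so a reader could rerun the figure rather than trust a static image; the data and fit below are synthetic and purely illustrative.

```python
# A notebook-style cell: the data, the fit, and the figure live together,
# so a reader can rerun or tweak it. Values are synthetic, for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
depth = np.linspace(0, 40, 50)                  # km, synthetic sampling depths
temp = 10 + 25 * depth + rng.normal(0, 30, 50)  # degrees C, noisy linear "geotherm"

slope, intercept = np.polyfit(depth, temp, 1)   # simple least-squares fit

plt.scatter(depth, temp, s=12, label="synthetic data")
plt.plot(depth, intercept + slope * depth, label=f"fit: {slope:.1f} C/km")
plt.xlabel("Depth (km)")
plt.ylabel("Temperature (C)")
plt.legend()
plt.show()
```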

Why hasn’t one or both of these taken over the world of science? In the article, Bret Victor is quoted as saying it’s because this is like the development of the printing press, which merely reproduced the way books had always looked for quite a while, until newer formats were recognized as better and adopted.  Sorry, but he is wrong.  This is like the invention of paper. And this is why there is so much uncertainty about adopting these technologies.

Read More…

Academic Faceplant

GG has pretty adamantly argued that you shouldn’t measure a scientist’s worth by where she publishes, and thus that rewards based on journal metrics are misguided and ethically wrong. So it was gratifying in one sense to see Sylvia McLain’s post on publishing in lower-impact journals. But then GG read the comments, which included these:

I value what you say here and I thank you for expressing this important perspective. I, too, made similar choices early in my faculty career. I had several very high impact publications from my PhD and postdoc years, but during my time as an assistant professor I focused on just putting out good solid science…The tenure committee called this a “downward career trajectory” and sent me packing.

Sure, people in your specific subfield can appreciate high quality j Chem Phys, but everyone else only has brand name to go on. If you are expecting to look [for?] a job some day, high profile journal articles are pretty much the only ones that count.

Low IF journals are not going to get you tenure, so I sure hope assistant professors don’t take this advice to heart.

Indeed there might be gems in low impact journals and those profs might be excellent scientists, but why not settle for excellent work published at high impact journals?

Is it true there are places where “everyone else only has brand name to go on,” and so tenure is decided on the basis of the journal’s impact factor? Really? People are sent packing solely because of where they publish? GG is beyond appalled and only hopes this misrepresents what some schools are doing.

Look, ideally tenure should reflect the impact a faculty member has had on their community. That should be measured by what the leaders in the field say, along with an evaluation by the faculty of what the candidate’s main contributions are and what influence they have had. This requires departments to read the candidate’s work and to solicit useful reference letters (i.e., ignoring ones that simply count publications or citation indices). Citation numbers and publication history can provide some information, but their role should be small, used mainly to better understand where the candidate’s work resides. GG feels that this is what we look at in our department at CU Boulder, so this isn’t pie-in-the-sky.

That said, it appears that some places are blinded by the editorial whims of the tabloid journals (the greatest barrier to publication in Science or Nature is not peer review or the quality of the work but having enough of a hook that the discipline editor can sell the paper to the main editorial group, which might depend on what the journal has recently published or just how many news stories some other work got).

Read More…

Death By Insta-science

Well, it had to happen.  The “pre-print” archive movement is finally creeping into earth science, and with it, undoubtedly, will come all the poisons that it represents.

The noble case being made for such services is that they allow for broader peer review, they avoid the embargoing of good science by evil journal gatekeepers, and they accelerate the pace of science.  All three arguments are misguided, at least in earth science. Many if not most earth science articles are unlikely to attract much attention (most academia.edu earth science papers are unread, let alone “reviewed”). Delays in publication stem as much from the difficulty of finding reviewers as from anything else; outright obstruction can be avoided by going to another journal (there are quite a few) or complaining to a society president. And arguably the pace of science is a bit too fast, judging by the sloppy citation of the literature and the piecemeal publications from some corners of the field.

If not for noble reasons, what is pushing this? It appears part of this is a desire by some to get something “citable” as soon as possible. For instance, Nature Geoscience editorialized: “In an academic world structured around short grant cycles and temporary research positions whose procurement depends on a scholarly track record, there is room for a parallel route for disseminating the latest science findings that is more agile, but in turn less rigorously quality controlled.” [This is hilarious coming from a publisher whose lead journal actively quashes public or even professional discussion of papers prior to publication.]

Let’s be clear: this is a crappy excuse that opens the door to “fake science.”

Read More…

Masquerade Funhouse?

A thread on an AGU mail group has lately gone back and forth on whether peer review of proposals by U.S. federal agencies is fair. Some have asserted that retribution exists in the system, but many of those who have participated have argued it is about as fair as any other activity involving humans, downplaying the possibility of massive collusion to punish an individual. It would not surprise GG if on a few occasions some kind of retribution tipped the scales against a proposal, but it is far more likely that in most cases a combination of other factors doomed a proposal. What emerged in this thread was an interesting thought, namely the reemergence of the idea of a double-blind (or at least single-blind) review system.

One fundamental premise, as noted by one writer, is that “past performance is no indication of future success.” Basically, somebody who has generated something good might well lay an egg, while somebody whose last project failed could be on to something good.  There are two issues here GG would like to contemplate: what it means to “succeed” and “fail,” and what components of an individual’s scientific reputation might be relevant.

First, failure is always possible.  In trying to gain knowledge previously inaccessible to humanity, a scientist is venturing into the unknown. Things not going as planned is not particularly unusual. But what does it mean to fail?

Read More…