Archive | communication

Changing Subscriptions

Once upon a time, having a “subscription” meant that things would come to you until either the term of the subscription ran out or you cancelled it. The stuff that had already come, whether issues of Teen Vogue, the record of the month or volumes of an encyclopedia, was yours to keep. But in the world of the academic library, that model is vanishing, and with it, potentially, large parts of the academic literature.

In the paper past, an academic library’s subscription to a professional journal meant that the library got paper copies of the journal that they could then place on shelves and allow people to read. As budgets might tighten or interests wane, libraries would cancel subscriptions–but those journals they had purchased remained on the shelves unless purged to make room for other material. This model is essentially dead.

Instead publishers have shifted to the software definition of “subscription”–which isn’t really a subscription at all. Just as using Adobe’s Cloud suite of software requires an active subscription, so does getting access to all the issues of Science you had subscribed to over the years. And if the journal decides to go to predatory pricing? Your options are nil. All that money you poured into the journal over the years means nothing. In general, libraries are not allowed to make local copies of all the content they subscribe to.

In some ways there is an even worse side to all this. Publishing once meant that there would be lots of copies of your work out there. One library burns down, another is vandalized–it doesn’t really matter, as plenty of other copies survive elsewhere. Now, there is essentially one. Admittedly there is more than one copy of all the work–most publishers subscribe to the LOCKSS or CLOCKSS model for preserving their materials–but the terms of use are still restrictive. An academic publisher might go belly up and have its archive bought by a private firm that then charges a fortune for access. Or maybe a vandal degrades all the copies of the works in the online archive. It takes little imagination to envision a wealth of knowledge effectively evaporating.

Arguably this is one of the best facets of a true open access policy: the freedom to copy materials means that there can be multiple archives. University archives can legally maintain and share copies of work produced at their institutions. Research groups can maintain thematic collections of articles relevant to their focus. (Note that current open access policies do not necessarily allow this: much as you can view some movies online so long as you watch the ads, some open access materials could require you to access the original portal and, perhaps, see advertisements there.) In a sense, this can return libraries to their original function: instead of being mere portals for providers, they return to being actual repositories of knowledge. So while we may have permanently lost the meaning of “subscription,” we can recover the true meaning of “library.”

The Inadvertent Replication Crisis

Many of you no doubt have heard of the lack of reproducibility of studies in some scientific fields. This has led to condemnation of publications that have rejected or discouraged papers attempting to reproduce some observation or effect.

Now this is not such a big deal in solid earth science (and probably not even climate science, where things are so contentious politically that redoing things is viewed in a positive way). Basically, for most geological observations we have the Earth, which remains pretty accessible to nearly all of us. Raw observations are increasingly stored in open databases (seismology has been at this for decades, for instance). Cultural biases that color some psychological or anthropological works don’t apply much in solid earth, and the tweaky issues of precise use of reagents and detailed but inaccessible lab procedures that have caused heartburn in the biological sciences are less prominent in earth science (but not absent! See discussions of how fission-track ages are affected by etching procedures, or look at the failure of the USGS lab to use standards properly). We kind of have one experiment–Earth–and we aren’t capable of reproducing it (the Hitchhiker’s Guide to the Galaxy notwithstanding, there is no Earth 2.0).

No, the problem isn’t failing to publish reproductions.  It is failing to recognize when we are reproducing older work.  And it is going to get worse.

As GG has noted before, citations to primary literature are becoming more and more scarce despite tools that make access to the primary literature easier and easier. This indicates that less and less background work is being done before studies move forward: in essence, it is easier to do a study than to prepare for it. The end result is pretty apparent: new studies will fail to uncover the old studies that essentially did the same thing.

Reexamining an area or data point is fine so long as you recognize that is what you are doing, but inadvertently conducting a replication experiment is not so great. Combine this with the already sloppier-than-desired citation habits we are forming and we risk running in circles, rediscovering what was already discovered without gaining any insight.

IT Merry-Go-Round

Long, long ago, computers were big expensive machines lodged in climate-controlled rooms behind lock and key, with access controlled by the masters of campus IT. Users paid by the kilobyte, by the second of connect time, by the millisecond of compute time. The gods of IT raked in money like casinos.

Then came the PC.  Within a few years, the IT department at MIT, for example, had collapsed from its previous lofty heights, discontinuing mainframes and reducing support staff to posting flyers around campus, offering services users were delighted to ignore. The totalitarian system was dead! Long live democracy!

Well, slowly but surely we’ve encouraged a new generation to take up the crown and beat us with the scepter of access until we bow down in homage to our noble masters. “The Cloud” is, in fact, on most campuses just the same mainframe. Better OS, much better iron, but as campus IT has decided that mere users must be protected from the world beyond, they have leveraged the need for security from the broader internet into security for the denizens of the IT department. This despite the fact that, all too frequently, their own staff are the source of the serious break-ins (in GG’s building, the two serious security lapses were both caused by mistakes made by IT professionals).

And yet it is even more insidious. Instructors are increasingly told to place their courses within course management systems, web-based monstrosities like Blackboard, Canvas, and Desire2Learn. These three (GG has had experience with all of them) are essentially interchangeable even as each is painful in its own way; their main advantage over regular web pages is that intraclass materials are private, so protected information like grades and uses of copyrighted materials can be freely placed online. Yet, practically like clockwork, campus IT decides it is time to shift from one to the next. Why? Usually some relatively trivial capability is trundled out to justify the move (Now on smartphones! Now with free-form answer quizzes! Now looks snazzier!)–despite the likelihood that the previous provider will match that new wrinkle within a year or two. So faculty and teaching staff and students are forced to learn yet another way of doing the same damn thing, which means…time for our boys (and a few girls) in IT to collect paychecks running workshops on how to do things, building web pages on how things are different, and, of course, spending months if not years first installing and then troubleshooting the new software, then migrating content over, all while supporting the old system for a year or two longer than originally planned, until it is time to begin investigating the latest iterations of such software, which inevitably leads to…moving to a new system!

Something similar goes on with email support, internet video conferencing, personnel management software and other computer-related interfaces. Non-IT administrators who in theory are riding herd on this are so divorced from both users and the technology that they lack the backbone to say “no, what we have will suffice.” It remains unclear if the disruption to instructors and students plays any role in the calculations made to justify these changes (it seems certain to be underestimated).

Of course campus IT is at increasing risk of being outsourced to companies like Microsoft and Google (indeed many functions already have been). It isn’t hard to predict that there will be a major scandal when a university’s “private” information somehow wanders off campus. Watching all this can make a grumpy geophysicist who remembers the early days of the internet and the last gasps of the old IT mainframes dwell fondly on the memories of hope…

The New Not-So-Normal

Gov. Jerry Brown surveyed the devastation Saturday in Ventura — the area hardest hit by firestorms that have displaced nearly 90,000 people in Southern California — calling it “the new normal.” –Los Angeles Times, Dec. 10, 2017

OK, so GG is late to the parade of folks deriding the term “the new normal.” But it is a source of some grumpiness, and so while struggling to catch up to the existing bandwagon (and being pursued by the revisionist anti-bandwagon), here’s the gripe: when put in sentences like the one above, “the new normal” sounds like we are there. Climate change has happened, this is what it looks like, get used to it.

Now the defense of the term is that the new normal is change, and not for the better. If this is how people are reacting to the term, then fine. But that isn’t the way it sounds. Articles on heavy rainfall, rapidly intensifying hurricanes, “bomb” lows, and flash droughts often put it as “this is that future you’ve heard about. It is here. Too bad.” The problem is that that future isn’t here yet–there is more to come, from the spread of tropical diseases to water shortages so intense that depopulating some areas will be the only response, creating a refugee crisis that makes the one from Syria look like tourist travel. So any terminology that seems to imply we are over the hump makes it seem like that awful future we heard about is not so terrible after all. Annoying, maybe, and deadly for a few, sure, but then when haven’t there been weather-related deaths?

Basically, these are now the good old days. It isn’t hard to imagine folks 50 or 100 years from now saying “I remember when there were still forests left to burn–now it is just all the brush that burns.”

The Best Citation Index

Although GG has repeatedly flogged various meta-indices as lousy ways of measuring science, let’s take a moment to celebrate what they do accomplish. In places where administrations were lazy, they used to value faculty on the number of papers in prestige journals and then the number in quality journals (um, well, guess this is still a thing in some quarters). Whether these papers had any impact was unclear. Citation indices at least tell us that somebody was paying attention (or not). So while the use of citations as a means of evaluating scientists is a poor substitute for really knowing what the scientist did, it is an improvement.

But there are all kinds of issues with the citation indices in use.  So what would be the best citation index?

The first issue is pretty obvious and one that is kind of addressed in some existing measures: self-citation. If you churn out enough papers citing your own work, you can look pretty good. This works really well if you have a circle of collaborators playing the same game (some journals got kicked out of the Science Citation Index for this sort of thing). So dropping self-citations is a start (though how you define a self-citation needs work–to GG it should be any common author on both the original and citing papers).

Second is the common complaint that you can get cited for the wrong reason–namely as an example of what not to do. This is a bit of a red herring–scientists are loath to directly criticize others in print (even as they might badmouth them in private). A quick search for “wrong” in the papers GG has on hand turns up few cases of a specific paper being cited for being incorrect (and most of those cases are comments on the original paper). No, the actual snubbing is usually in not citing a paper that arguably is relevant because you think its authors made a mistake in some way. So what you really want to count are the non-citations in relevant papers. These snubs carry plausible deniability (“I didn’t cite you? Shoot, I meant to, but I forgot / the journal limits citations / it got taken out by an editor / I hadn’t seen the paper”) but realistically they are telling you that the uncited paper is not so well received after all. So a better citation index would subtract the appropriate non-citations.

Third is the reverse problem: the over-citation of papers that serve as community totems for certain ideas. These can be review papers or textbooks, and those kinds of publications are pretty recognizable as such, but it will often happen that a paper is the one everybody thinks of as the origin of some class of ideas, or one that just happened to review enough material that it acts as a sort of meta-citation. Citations to such a paper are frequently gratuitous and sometimes reveal that the authors have not read the classic paper in question; they are often simple placeholders inserted in response to an editor or reviewer asking for a citation. So an even better citation index would downweight or remove placeholder citations.

But at the very bottom of the problems with citation metrics is that not all citations are equal. A citation that provides the motivation or techniques central to the citing paper is far more valuable than, say, a citation for some peripheral background material. Perhaps the number of times a citation appears in a paper would give you a clue, but just a clue, as to the importance of the earlier work to the current paper. Realistically you’d have to understand both the paper and the material being cited to gauge how central that citation was. The best citation index would somehow weight each citation by that measure.
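
To make the bookkeeping concrete, here is a minimal sketch in Python of how such a weighted index might be computed. Everything in it is hypothetical–the data structures, the weights, the notion that anyone has tallied “snubs”–nothing like this exists in any real citation database; it simply strings together the adjustments described above (drop self-citations, subtract snubs, downweight placeholders, weight by how often the citing paper actually invokes the cited work).

```python
# Hypothetical "better" citation score; all structures and weights are
# invented for illustration, not drawn from any real citation database.
from dataclasses import dataclass, field

@dataclass
class Citation:
    citing_authors: frozenset      # authors of the citing paper
    mentions_in_text: int = 1      # times the reference is invoked in the body
    is_placeholder: bool = False   # gratuitous "totem" citation (a judgment call)

@dataclass
class PaperRecord:
    authors: frozenset                         # authors of the cited paper
    citations: list = field(default_factory=list)
    snubs: int = 0                             # relevant papers that pointedly did not cite it

def better_citation_score(paper: PaperRecord,
                          placeholder_weight: float = 0.1,
                          snub_penalty: float = 1.0) -> float:
    """Drop self-citations, downweight placeholders, weight by in-text
    mentions, and subtract non-citations in relevant papers."""
    score = 0.0
    for c in paper.citations:
        # Self-citation: any author in common between cited and citing paper.
        if paper.authors & c.citing_authors:
            continue
        weight = placeholder_weight if c.is_placeholder else 1.0
        # Crude proxy for how central the cited work is: in-text mentions.
        score += weight * c.mentions_in_text
    return score - snub_penalty * paper.snubs

# Example: cited three times (one self-citation, one placeholder), snubbed once.
paper = PaperRecord(
    authors=frozenset({"Jones"}),
    citations=[
        Citation(frozenset({"Jones", "Smith"})),            # self-citation, ignored
        Citation(frozenset({"Lee"}), mentions_in_text=4),   # central to the citing paper
        Citation(frozenset({"Kim"}), is_placeholder=True),  # reviewer-requested placeholder
    ],
    snubs=1,
)
print(better_citation_score(paper))  # 4*1.0 + 1*0.1 - 1.0 = 3.1
```

Of course, every input to that little function–what counts as a snub, what counts as a placeholder, how central a citation really is–requires exactly the expert judgment the metric was supposed to replace.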

Of course, that is really what you are asking those experts providing external letters of evaluation for. So maybe we already have the best citation index–we just didn’t recognize it for what it is….

Super-honey?

“You attract more flies with honey than vinegar,” the old saying goes. It would seem a pretty marginal publisher got the word and is trying it out:

Dear Dr. C. H Jones,

Greetings from Nessa Journal of Geology & Earth Sciences (NJGES)

Recently we have come across your presentation at the “Seismology of the Americas Meeting Latin American and Caribbean Seismological Commission Seismological Society of America May 2018 Miami, Florida” with the title “Exploring the Extent of Wave Propagative Effects on Teleseismic Attenuation Measurements within the Sierra Nevada”. I presume that it will outstandingly attract the readers and will receive applause from the people of all walks of life. I believe it will enrich the knowledge and experiment of people who are involved in all these researches and experiments.

Man, what temptation! “receive applause from the people of all walks of life”! GG cannot wait to walk out to the mailbox to thunderous applause from the neighborhood, or be mobbed in the grocery store for having published in the legendary NJGES!

Though to “outstandingly attract the readers” might mean standing out on a street corner with a sign “please read and applaud.”

Geologic Illustrations, Part 3: Directions

It’s been a while since we discussed ways to make publication figures both accurate and fair: Part 1 dealt with the problem of mapping variables that vary across the map, and Part 2 was mainly an illustration of just how horrible Excel is for earth science work. Here we’ll consider some issues with directional data such as paleomagnetic directions and paleocurrents.

Let’s start with the classic rose diagram:

[Figure: rose diagrams of the same teleseismic back-azimuth data, plotted by area (left) and by length (right)]

Pretty different looking, no? On the right is the classic rose diagram where the length (radius) of each pie wedge is scaled by the value in that azimuth range. In this case, these are back azimuths of teleseismic arrivals measured for a tomography study. You can easily see that things are dominated by events to the northwest and to a lesser degree to the southeast and southwest.

To the left is the exact same data plotted by area instead of length. Which is better? As a test, what fraction of the data lies in the wedges from 120-140° and 300-320°?
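
The difference between the two comes down to how the wedge radius is computed: when the radius is proportional to the count, the wedge’s area grows as the square of the count, which is why the two versions look so different; an area-true plot instead uses a radius proportional to the square root of the count. Below is a minimal matplotlib sketch of the two styles; the bin counts are made up for illustration and are not the data in the figure above.

```python
# Sketch: the same directional data plotted as an area-scaled and a
# length-scaled (classic) rose diagram. Counts are synthetic.
import numpy as np
import matplotlib.pyplot as plt

bin_width = np.deg2rad(20)
azimuths = np.deg2rad(np.arange(10, 360, 20))              # bin centers
counts = np.random.default_rng(0).integers(1, 50, azimuths.size)

fig, axes = plt.subplots(1, 2, subplot_kw={"projection": "polar"})
for ax in axes:
    ax.set_theta_zero_location("N")   # azimuths measured from north...
    ax.set_theta_direction(-1)        # ...clockwise, as on a compass

# Left: wedge *area* proportional to count, so radius = sqrt(count).
axes[0].bar(azimuths, np.sqrt(counts), width=bin_width)
axes[0].set_title("area scaled (r = sqrt(count))")

# Right: classic rose diagram, radius proportional to count.
axes[1].bar(azimuths, counts, width=bin_width)
axes[1].set_title("length scaled (r = count)")

plt.show()
```

The single square root is the only difference between the two panels; which one better conveys the distribution of the data is the question posed above.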
