The latest white paper on the future of tectonics is out. The product of a workshop and months of work, this is a document meant to help NSF figure out what to fund. A lot of proposals in the next few years will cite one or more of the “Grand Challenges” put forward in this document.
Will this lead to more impactful science?
Frankly, GG isn’t sure one way or another. Presumably all the folks who participated tried to get their research interests represented in this document. So the challenges advertised represent some umbrella of ongoing research programs. This sounds more like a current summary than a truly forward-looking document.
However, what the document might do is point out to researchers places that are stumping current research efforts, perhaps encouraging those not yet participating in those efforts to develop a different angle on these problems.
Presumably NSF likes these documents to help them winnow out proposals that aren’t addressing major problems. But that risks choking off more iconoclastic work that might truly open up new avenues of research or solve issues not currently under study but important.
You’d really want to see if such visioning documents from, say, 20 years ago captured what we now see as the big advances. Did EarthScope really envision ambient noise tomography? No, though it did enable its widespread application, and ambient noise tomography in turn rescued EarthScope from a promise it would have failed to deliver.
Did visioning in 1960 emphasize marine magnetics or testing mobilist concepts? You have to worry that groupthink might discourage innovation.
Quick pointer to a web posting about an article that gained a lot of attention (and so really good metrics) by being really bad. A good reminder that numbers of citations need not reflect any intrinsic quality.
A couple of recent pieces, one an editorial in the New York Times and another at Vox, argue there is a “war on science” (to use the Times’s hackneyed phrase). First, let’s drop the “war” stuff. Ever since Fox News went on the “War on Christmas” path, that terminology is meaningless short of armed soldiers killing scientists.
But what we are seeing is incredible. From the disbanding of scientific review panels to the placement of political appointees in the grants cycle to the gagging of scientists employed by the government to the cessation of collections of scientific data to the elevation of a contrarian rear-guard to a level of influence equal to or greater than that of the overwhelming scientific consensus in making regulatory decisions, it is abundantly clear that the Trump administration, rather than simply ignoring science, intends to silence science. This is not bulldozing partisan opposition; this is overlooking reality. Given their outlook, we might expect DDT to return to store shelves next to leaded paint.
This is ignorant bullshit. But before all the conservatives get hot under the collar and the liberals give each other high-fives, keep in mind that this game is not being played solely by the right.
Here in Boulder there is a vast expanse of cropland under the control of the county. The purpose was to retain open space and maintain a rural barrier between Boulder and neighboring towns. Because the land is owned by the county, it can make rules about what happens when it leases the land to farmers. And one of those rules they’ve decided to implement–over the objections of the farmers working those lands–is to remove GMOs from county farmlands. As a five-part series of op-eds in the Daily Camera points out, this decision flies in the face of established science. One can spend hours reading the various letters to the editor, the position papers submitted to the county commissioners, the various blog posts, etc. And it is almost as enlightening as the corners of the internet dedicated to showing that climate change isn’t real; GG earlier termed many of these kinds of arguments “policy proxies”: you use them as cudgels against actions you dislike (for instance, some GMO opponents seem to hate Monsanto as a corporate monstrosity; some GMO supporters point out that Whole Foods is a far bigger concern directly invested in the “organic = good” mindset; neither argument bears on the safety or efficacy of GMOs in agriculture). About the closest non-crop scientists can get to the science without going nuts might be the National Academy’s report from 2016. Which really doesn’t support wholesale dismissal of GMOs.
Now the county can do whatever it wants in this regard; there is no law saying that land management has to be scientifically defensible. It is less clear that such an argument can defend the EPA’s removal of scientific review panels, but the mindset that science is a tool to be employed as a partisan weapon seems to be growing more common. Instead of using scientific inquiry to resolve disputes that are grounded in reality, science is being selectively harvested to support one’s preexisting views.
Science is ideally a tool we use to avoid fooling ourselves. We have to be open to discovering we are wrong, which is one of the hardest things for many of us to admit. But those who would overturn scientific consensus have to recognize that you don’t overturn such consensus on the basis of a small amount of information. For instance, evolution is observationally confirmed by thousands upon thousands of studies of faunal successions in rock strata. Finding a T. rex tooth in a 10,000-year-old human campfire isn’t going to overturn evolution. Anthropogenic climate change at this point is supported by so many observations in so many ways that the possibility that it is an artifact of some other misunderstanding is vanishingly small. GMO safety is well supported (though not to the degree climate change is; note this is not considering the economics of GMOs). There are many things we can act on now with a pretty solid assurance we won’t be mistaken; on other aspects, we should fund the science.
When making policy these days, it is incumbent on government to at least hear the scientific consensus and know where the edge of that consensus lies. For instance, global warming is caused by burning fossil fuels. That ice sheets will retreat, that oceans are much warmer and more acidic, that storms can be far wetter, droughts much drier, and heat waves hotter: all of these are so directly supported by simple physics, observation, and numerical simulation that they can be acted upon without further inquiry. More difficult and unclear are things like the net precipitation budget over years-long time frames in regions of the U.S., the intensity of winter storms, or changes in the frequency of tornadoes; many such topics deserve continued inquiry.
But what we cannot do is simply pooh-pooh the science we don’t like. Or pretend it doesn’t exist.
There is a sort of odd, melancholy corner of the scientific literature: papers that are almost the valedictories of scientists as they step back from conducting science. A lot of them really don’t belong in the literature per se, as some are a jumble of thoughts, hunches, recriminations and other odds and ends. Let’s call the whole collection codgertations. Frankly, they need their own home in the literature; to see why, let’s consider what some have looked like (no, GG is not going to name names; he is grumpy, not cruel).
At the more useful end of the spectrum are thoughts on seminal problems in the field that reflect experience of years but an inability to push through to completion, perhaps because of conflicts, physical disability, time, funding, etc. These are the seeds of proposals future, proposals these authors will never write, and so these can be gifts of insight to younger researchers who may have overlooked these problems.
Somewhere in the middle are kind of incomplete papers, stuff that’s been hanging out in a drawer (or a floppy) for many years that finally is being shoved out into the world to justify the effort made in keeping it all this time. Some of these are papers that were superseded years ago by other work; some are on things fairly tangential to ongoing research. Others just don’t quite get anywhere. None are really damaging anything; they maybe are just taking up space.
The worst flavor of codgertation is the self-celebrating review of one’s greatest hits, the very worst being stewed in a vat of recrimination for past injustices, allowing for debasing the contributions of others. These tend to assert rather than derive or infer and come across as lectures from angry old people who can’t be bothered to properly cite the relevant work or logically support an argument. “I’ve been in this field forever,” they seem to say, “and this is how things really are!” Right Grandpa, can you go back to watching Wheel of Fortune, please? Or yelling at those kids on the lawn?
What cements all of these is that they aren’t really typical scientific papers–and it is worth noting that only a fraction of practicing scientists ever write anything like any of these. But those that do are often counting on the deference of junior colleagues to allow them their say, and truth be told, there is indeed value in some of these papers. And we might actually be losing insights from those less egotistical senior scientists who choose not to write such unusual documents because they perceive that they don’t really belong. But if you review one of these papers, you can go nuts in trying to come to grips with egregious self-citation and a faint grip on the current literature, loosely connected topics, poorly supported logic, and other flaws that would sink a typical paper. Really reaming a paper written by such a senior scientist can seem disrespectful, yet letting it go as “science” feels dirty. So GG is suggesting that perhaps some journals should allow a new form of communication which (you presumably have guessed) would be termed “codgertations.” [OK, as the comment below notes, that is rude and self-defeating; the commenter’s suggestion of Reflections or GG’s Valedictories would be more appropriate.] The beauty of this is that we could capture the good without having to hammer the bad. We’d encourage those on their way out the door to share some wisdom even as we know we’ll have to accept some scolding. And we wouldn’t be caught between honoring our elders and defending our literature.
When representatives of scientific organizations and funding bodies go before Congress, they will often remind representatives and senators that basic science is a crucial underpinning of practical progress. Those of us who pursue such basic science often feel warm and fuzzy inside at such defenses, but how delusional is this?
In Science, Ahmadpoor and Jones attack this question by following citation trees, both within the patent world and the scientific world, to see what fraction of the literature is connected to patents. And, kind of amazingly, the connections are stronger than many of us might have guessed: they find that 80% of cited papers are eventually connected to patents, and that 61% of patents draw in part on the scientific literature.
Not surprisingly, this varies a lot by discipline. Virtually every nano-technology paper has spawned a patent, while only 38% of mathematics papers are linked to a patent. So GG dug into the supplementary material to see where geoscience came out in all this.
Geochemistry and geophysics had 66% of papers being connected to a patent with an average chain of about 3.4 citations to the patent (a value of 3 meaning the average paper was cited by a paper that was cited by a paper that was cited by a patent). Interdisciplinary geoscience was about 63% and an average chain of 3.6 papers, geology 61% and 3.9. Oddly, mining and mineral processing papers (about as applied as earth science categories get) only figured in patents 61% of the time and still needed about 3.4 citations to get to a patent (evidently the more useful papers got classified as metallurgy and mining, a category with 77% of the papers tying in to patents). Mineralogy did surprisingly well: 67% of papers figured in eventual patents after an average of 3.5 citations.
Oddly, petroleum engineering was rather poor with only 52% of papers being cited in patents (one wonders if some papers had to come out after patenting had started).
Given that a lot of the geoscience literature is probably more closely tied to understanding individual resources than new techniques that might be applicable to eventual patents, these numbers are pretty reassuring to those of us who don’t follow how our work might go into practical application. But a caveat: by using full citation indices, the authors of this study make no attempt to determine which of the cited papers really were critical and which were window dressing. So, for instance, a paper that developed a new geophysical tool that was tested in the western U.S. might cite a review paper on the geology of the region in the introduction; that paper gets credit for contributing to the new tool even though any insights in that paper might be utterly incidental. Still, these numbers do make us on the basic research side of things feel like we aren’t selling snake oil in suggesting that eventually our research will prove of practical use….
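The chain-length bookkeeping above is just shortest-path distance on a citation graph: from a paper, follow “cited by” links until you hit a patent. A minimal sketch of that idea (the data structures and names here are illustrative, not the study’s actual pipeline):

```python
from collections import deque


def citation_distance(cited_by, patents, start):
    """Minimum number of citation steps from a paper to any patent.

    cited_by: dict mapping each work to the list of works that cite it.
    patents:  set of works that are patents.
    start:    the paper whose patent distance we want.
    Returns the chain length (3 means paper -> paper -> paper -> patent),
    or None if no patent ever cites the paper, however indirectly.
    """
    queue = deque([(start, 0)])  # breadth-first search guarantees the
    seen = {start}               # first patent found is the nearest one
    while queue:
        node, depth = queue.popleft()
        for citer in cited_by.get(node, []):
            if citer in seen:
                continue
            if citer in patents:
                return depth + 1
            seen.add(citer)
            queue.append((citer, depth + 1))
    return None
```

A paper cited directly by a patent scores 1; averaging this distance over all patent-connected papers in a field gives numbers like the 3.4 quoted above.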
A rather interesting comment chain on the website of a social scientist got GG thinking about corrections. (The blog post and comments deal with how to address published mistakes, with comments ranging from “never contact the authors” to “of course you contact the authors”). In fact, GG has gotten into lukewarm water with a couple of folks for pointing out things in their published papers in this blog. Anyways, what merits a correction? And what merits making a stink when there is no correction?
Take, for instance, a map. GG can identify several instances where maps in papers were simply in error. In one case, the author had misaligned a published map he was copying from and put a bunch of isopachs far away from where they belonged. In another, an author was rather cavalier about his location map, which placed a sampling locality far away from its actual location. Now in each case the problem could be recognized (in the first, by looking at the original source that was cited in the caption, and in the second from data tables with coordinates). Neither of these errors has been corrected (and in one case, I know the author is aware of the problem). Since in neither case does the error influence the interpretation in the paper, is this worthy of a correction? Of a comment?
Let’s rise up a level.
Hot on the heels of the Nature paper complaining about reliance on bibliometric measures of success, we have an Inside Higher Ed piece similarly bemoaning how simple metrics corrupt scientific endeavor.
And so what else showed up recently? Why, two new bibliometric measures! One, the Impact Quotient, frankly does nothing but replace one useless measure (the Impact Factor) with a new, highly correlated one. The other is the s-index, a measure of how often a worker cites his or her own work.
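At bottom, any such self-citation measure is just bookkeeping over an author’s reference lists. Here is one crude way it might be computed — a sketch for illustration only, not the published s-index definition, with hypothetical data structures:

```python
def self_citation_rate(own_papers, reference_lists):
    """Fraction of an author's outgoing references that cite the author.

    own_papers:      set of paper ids written by the author.
    reference_lists: dict mapping each of the author's paper ids to the
                     list of paper ids it cites.
    Returns a fraction in [0, 1], or 0.0 if there are no references at all.
    """
    total = 0
    self_cites = 0
    for refs in reference_lists.values():
        for ref in refs:
            total += 1
            if ref in own_papers:  # a reference back to the author's own work
                self_cites += 1
    return self_cites / total if total else 0.0
```

Even this toy version shows why the metric is slippery: it cannot distinguish gratuitous self-promotion from a researcher legitimately building on their own earlier results.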
We are going from trying to figure out something new about how the world works to making sure that everybody knows that we found out something new about how the world works, with the potential that the “something” has become increasingly trivial….