When representatives of scientific organizations and funding bodies go before Congress, they will often remind representatives and senators that basic science is a crucial underpinning of practical progress. Those of us who pursue such basic science often feel warm and fuzzy inside at such defenses, but how delusional is this?
In Science, Ahmadpoor and Jones attack this question by following citation trees, both within the patent world and within the scientific world, to see what fraction of the literature is connected to patents. And, kind of amazingly, the connections are stronger than many of us might have guessed: they find that 80% of cited papers are connected down the line with patents, and that 61% of patents draw in part on the scientific literature.
Not surprisingly, this varies a lot by discipline. Virtually every nanotechnology paper has spawned a patent, while only 38% of mathematics papers are linked to one. So GG dug into the supplementary material to see where geoscience came out in all this.
Geochemistry and geophysics had 66% of papers connected to a patent, with an average chain of about 3.4 citations to the patent (a value of 3 meaning the average paper was cited by a paper that was cited by a paper that was cited by a patent). Interdisciplinary geoscience was about 63% with an average chain of 3.6 papers, geology 61% and 3.9. Oddly, mining and mineral processing papers (about as applied as earth science categories get) only figured in patents 61% of the time and still needed about 3.4 citations to get to a patent (evidently the more useful papers got classified as metallurgy and mining, a category with 77% of papers tying in to patents). Mineralogy did surprisingly well, with 67% of papers figuring in eventual patents after an average of 3.5 citations.
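Those chain lengths are, at heart, shortest paths through a citation graph. A minimal sketch of the idea (not the authors' actual method; the graph, paper names, and function are entirely made up for illustration) is a breadth-first search that finds the minimum number of citation hops from a paper to any patent:

```python
from collections import deque

def citation_distance(cited_by, start, is_patent):
    """Minimum number of citation hops from `start` to any patent.

    `cited_by` maps each work to the set of works that cite it; a
    distance of 1 means the paper is cited directly by a patent, 2
    means it is cited by a paper that is cited by a patent, and so on.
    Returns None if no patent ever cites the paper, however indirectly.
    """
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        work, dist = queue.popleft()
        if is_patent(work):
            return dist
        for citer in cited_by.get(work, ()):
            if citer not in seen:
                seen.add(citer)
                queue.append((citer, dist + 1))
    return None

# Hypothetical toy graph: paper A is cited by paper B,
# which is cited by patent P — a chain of length 2.
cited_by = {"A": {"B"}, "B": {"P"}}
print(citation_distance(cited_by, "A", lambda w: w == "P"))  # 2
```

Averaging this distance over all papers that reach a patent gives numbers like the 3.4 quoted above.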
Oddly, petroleum engineering was rather poor, with only 52% of papers being cited in patents (one wonders if some papers had to wait until after patenting had started before they could come out).
Given that a lot of the geoscience literature is probably more closely tied to understanding individual resources than new techniques that might be applicable to eventual patents, these numbers are pretty reassuring to those of us who don’t follow how our work might go into practical application. But a caveat: by using full citation indices, the authors of this study make no attempt to determine which of the cited papers really were critical and which were window dressing. So, for instance, a paper that developed a new geophysical tool that was tested in the western U.S. might cite a review paper on the geology of the region in the introduction; that paper gets credit for contributing to the new tool even though any insights in that paper might be utterly incidental. Still, these numbers do make us on the basic research side of things feel like we aren’t selling snake oil in suggesting that eventually our research will prove of practical use….
GG has already been amused by the rank listed at Amazon for his new book, but now that the book is listed as shipping, there are several companies offering the book as used! Wow. Where might these used books come from? Do they buy them wholesale somewhere and then toss them around a room or something? (The publisher is only just now sending copies to reviewers.) Such a strange business….
A rather interesting comment chain on the website of a social scientist got GG thinking about corrections. (The blog post and comments deal with how to address published mistakes, with comments ranging from “never contact the authors” to “of course you contact the authors”). In fact, GG has gotten into lukewarm water with a couple of folks for pointing out things in their published papers in this blog. Anyways, what merits a correction? And what merits making a stink when there is no correction?
Take, for instance, a map. GG can identify several instances where maps in papers were simply in error. In one case, the author had misaligned a published map he was copying from and put a bunch of isopachs far away from where they belonged. In another, an author was rather cavalier about his location map, which placed a sampling locality far away from its actual location. Now in each case the problem could be recognized (in the first, by looking at the original source that was cited in the caption, and in the second from data tables with coordinates). Neither of these errors has been corrected (and in one case, GG knows the author is aware of the problem). As in neither case does the error influence the interpretation in the paper, is this worthy of correction? Of a comment?
Let’s rise up a level.
Time again for “Not Quite In Time–the NY Times again steps on a bar of soap when looking westward”. Today’s installment concerns an article on fire in the forests of the west and in particular California. This would be a fine article…were it written about 1975 or so. Reading it today feels like, well, hearing from a cousin that there is now this great amusement park in Orlando called Disneyworld….
OK, so what is the beef? First, this is a retread of an argument that has gone on for at least 40 years over fire suppression in western forests. Prior to the great Yellowstone fires of 1988, the Park Service in particular had decided that fire suppression was bad, and the Forest Service was leaning in that direction. But when blazes on the margin of Yellowstone blew up and some blamed the Park’s “let it burn” policy, that policy was quickly dumped. Nothing had changed on the science side; this was entirely a change driven by public perception. Heaven only knows how many stories in High Country News have covered the various efforts to balance the twin goals of forest health and protection of communities, discussing this issue with more depth and insight.
But here’s the thing. In the 40 or 50 years since the science made pretty clear that fire suppression was a problem, a second recognition has emerged, one this Times article utterly missed:
Scientists are still trying to figure out how regularly forests burned in what is now the United States in the centuries before European settlement, but reams of evidence suggest the acreage that burned was more than is allowed to burn today — possibly 20 million or 30 million acres in a typical year. Today, closer to four million or five million acres burn every year.
Scientists say that returning forests to a more natural condition would require allowing 10 million or 15 million acres to burn every year, at least.
“More natural condition”? The thing we know really well at this point is that fire before European settlement was in fact frequently managed by Native Americans, who used it as a tool to control their landscape. That the reporter goes to the Sierra Nevada, where this practice is very well documented, and utterly overlooks this aspect of the problem is troubling. Because the fires natives set were not the massive conflagrations that we are seeing now; they were more like the management fires set within, say, Sequoia or Yosemite national parks to try to reduce the fuel load without a catastrophic fire. So when the reporter in essence is claiming that big, huge fires were both natural and the pre-Columbian norm, he is creating a fantasy.
This makes it seem as though the biologists arguing for these big fires are themselves ignorant of this past practice (which seems unlikely to be a fair evaluation of their knowledge, though you do wonder a bit). Hopefully this is an incorrect impression; if it is not, then the biological community needs some education on this point.
Here’s the deal: “Pre-Columbian” or “before European settlement” is NOT the same as “natural”. Arguably we know little or nothing about fire in a human-free landscape as no ecosystem in the U.S. has been free of humans since the end of the last Ice Age. There was some work in the lodgepole forest of Yellowstone suggesting that big fires have been the norm there for many centuries, and there is a decent argument to be made that this does not reflect human activity. But in the Sierra, there is evidence of Indian-created fires from the foothills to treeline.
You’d like to think we could at least advance arguments about land management to somewhere near the current science. This article makes it seem that widespread recognition of a problem, recognition that actually came roughly 40 years ago, has only just occurred. Can we please move the setting on the time machine to 2017?
Hot on the heels of the Nature paper complaining about reliance on bibliometric measures of success, we have an Inside Higher Ed piece similarly bemoaning how simple metrics corrupt scientific endeavor.
And so what else showed up recently? Why, two new bibliometric measures! One, the Impact Quotient, frankly does nothing but replace one useless measure (the Impact Factor) with a highly correlated one. The other is the s-index, which is a measure of how often a worker cites his or her own work.
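The s-index’s actual formula isn’t spelled out here, but the underlying quantity is easy to picture. A minimal sketch (the function name, the toy data, and the normalization are all assumptions for illustration, not the published definition) counting what fraction of an author’s outgoing citations point back at their own papers:

```python
def self_citation_fraction(papers):
    """Fraction of outgoing citations that target the author's own work.

    `papers` maps each of the author's paper IDs to the list of paper
    IDs it cites. This is a crude stand-in for the s-index idea; the
    real metric's definition may well differ.
    """
    own = set(papers)
    total = sum(len(refs) for refs in papers.values())
    if total == 0:
        return 0.0
    self_cites = sum(1 for refs in papers.values()
                     for ref in refs if ref in own)
    return self_cites / total

# Hypothetical author with two papers and one self-citation
# out of four total references.
papers = {"p1": ["x", "y"], "p2": ["p1", "z"]}
print(self_citation_fraction(papers))  # 0.25
```

Which only underscores the point: the number is trivial to compute and says nothing about whether the work itself mattered.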
We are going from trying to figure out something new about how the world works to making sure that everybody knows that we found out something new about how the world works, with the potential that the “something” has become increasingly trivial….
To nobody’s great shock, Adobe recently announced the end of the Flash plug-in for web browsers in 2020. Given the number of iDevices that don’t support Flash and the growth of tools that keep Flash from running, the writing has been on the wall for some time.
Now supposedly this does not mean the end of ActionScript and .swf files and the like, but it feels like an issue is being overlooked. Interactive pdfs would seem to be potential victims of the death of Flash: at present, you have to use .swf materials within pdfs (that is, there is no way to include HTML5 in a pdf), and there are indications that displaying these within Acrobat and its kin might require the Flash plugin to be present. Are the .swf format and its capabilities likely to be maintained if Adobe’s Flash-creation tool Animate CC is more widely used to generate HTML5?
Why bring this up?
GG has defended peer review a few times as a means of limiting the damage from flawed papers. It is a positive good for science despite its limitations. But we are quite possibly in the waning days of peer review.
What has inspired this dystopian view? GG is an associate editor and has been for many years, and it has gotten ridiculously hard to get reviews. The growth of multi-author papers means that the number of conflicted potential reviewers has grown, limiting the reviewer pool. More and more, potential reviewers are choosing to not even respond to a request for a review–which eats up more time than a simple “no”. Others are giving the quick “no,” which is better, but still extends the reviewing process. Others are agreeing, but then decide the task is more onerous than cleaning up after a sick dog; months can go by with no response or just a hurried “getting to it now”. Sometimes there is never a review. Meanwhile, authors justifiably fume at the long times their papers are in review. At some point, the system will simply break down: authors will opt for venues not requiring review or using some form of post-publication review.
There are two culprits: the tremendous volume of papers, and the increasing demands on the time of reviewers. The first is driven by a mindset that every grant must yield papers–and in some circles, that is, every grant must yield at least a paper a year. Incrementalism drives identification of the least publishable unit. Toss in a growing trail of reviews as papers pinball down from the most prestigious journals to less desirable destinations and you seem to have an unending stream of requests for review. The final straw is the decreasing availability of funds, which paradoxically drives an increasing number of proposals which, once again, demand review.
On the flip side are the demands on reviewers’ time. First, these are the same people writing all those proposals and papers, and that takes a lot of time. But as universities have tightened their belts, more menial tasks are foisted on the faculty, from dealing with routine administrative paperwork to emptying their trash cans. Also, there is more pressure for public outreach, which takes time. Not to mention the allure of social media like Twitter and Facebook and (um) blogs. (GG views blogging as recreational, FWIW).
The obvious solutions are unlikely. Odds are low that more money will soon fill the coffers at NSF, or that outreach components for grants will be reduced. Once a university operates without trash collection in offices, it is unlikely to restore it when it can instead invest in amenities to attract the shrinking number of undergraduates.
There is only one knob really in our control, and it is our expectations of what our colleagues should be doing. Ask “what impact has your science had?” rather than “how many papers did you publish last year?” and maybe we could stem the tide.
Of course, that might require reading your colleagues’ papers. Which (sigh) takes time….