One thing that has largely flown under the radar in earth science has been the general retreat from instrument making. Basically, devices are good enough for the work we want to do, and so the changes being made, such as they are, are not game changers. With the consolidation of portable equipment into centralized instrument facilities, there has been a standardization on specific types of equipment. While this has overall been a plus for the community, as investigators can check out equipment familiar to them, it has discouraged the kind of tinkering that individual research groups used to conduct when they all had to make their own tools. Although there have been moments when some dramatically different kind of sensor seemed possible, pretty much what we have today differs only in small respects from what we had 30 years ago.
So the Nature paper describing a tiny gravimeter that could possibly be manufactured cheaply is kind of exciting and unexpected (see also the BBC News story). This builds off of the kind of accelerometer found in many smartphones and has, in the lab at least, demonstrated the ability to measure tides at the microgal level (about one billionth of the pull of gravity). If the device can be made portable, it would replace instruments costing upwards of $30,000 that require substantial power and need to be treated nicely. Right now gravity surveys are rather tedious affairs; if this gravimeter shows real stability over long time periods, it could allow for widespread measurements of changes in gravity due to changes in water storage, snowpack, and even tectonic deformation.
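For those wondering about the unit, the “one billionth” figure is just a microgal expressed as a fraction of standard gravity; here is a quick back-of-the-envelope check (the numbers are standard unit definitions, nothing from the paper itself):

```python
# 1 Gal = 1 cm/s^2, so 1 microGal = 1e-8 m/s^2.
g = 9.80665          # standard gravity, m/s^2
microgal = 1e-8      # 1 microGal in m/s^2

ratio = microgal / g
print(f"1 microGal is about {ratio:.1e} of g")  # ~1e-9, i.e. one billionth
```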
Or, perhaps, is it dead?
After World War II, America embarked on an experiment: put taxpayer dollars out there for scientists to explore, well, whatever seemed interesting. The recipients of these grants had to convince their colleagues that what they wanted to do was worthwhile, but otherwise there wasn’t an agenda. The idea was that such basic research was unpredictable but that some of it would produce fertile ground for more applied work.
Over time this blanket approval for curiosity-driven science has been nibbled away. On the one hand are large projects. In earth science we can point to EarthScope and SCEC as large projects that absorb money for particular purposes. Without commenting one way or the other on the wisdom of either, it is clear that only science relevant to those programs need apply. SCEC, for instance, gets about $15M over 5 years for its activities from NSF alone. These programs necessarily direct money toward their own topics that might otherwise have funded research elsewhere.
On the other hand are community-driven “frontier” type documents specifying where “grand challenges” are. These documents are often used to decide which proposals will get funded. This tends to work against research into less heavily promoted corners of earth science. Do such agendas help science that will have the most impact? (The old saw about a camel being a horse created by a committee comes to mind here…).
Arguably the inclusion of engineering in NSF has challenged the whole notion of curiosity-driven science; engineering is applying science to specific problems. Or is there funded research in, for instance, how to build a Dyson sphere? [Maybe. GG is kind of ignorant here but suspects that by including a field known for specific applied outcomes that it changes how you might view NSF as a whole].
Researchers wanting to pursue their own ideas will frequently find that they need to justify their research in terms of these large programs and research agendas. Some will follow the lead and do the things these programs say we want done. Is that curiosity-driven research if you, the applicant, are not the one who is curious?
Yet another piece of the pie is now allocated to education and outreach. Arguably this has gotten out of hand in some ways (see Zimmer’s article in Cell for a surprising take on this; maybe this includes this blog, too, though GG has never claimed activities here as being part of any funded research). Again, it isn’t the merits of the E&O but the fact that time and money that previously went to research are now spent elsewhere.
And finally we have the specter of politicians overseeing the kinds of research that are to be funded, both by asking for internal review documents and by determining which directorates at NSF should be getting money (though there was some good news today on that front). Once funding for specific directorates becomes a political football, scientists could get whipsawed between priorities as different political parties take control of the process. And your curiosity might be damped a bit if you thought it would lead you to defending yourself in Congress. Add in the likelihood that this would produce decreases in budgets for NSF over the long term if this became the norm and this seems like bad news for curiosity-driven research.
The current model where all research has to produce peer-reviewed papers (and in some circles, at a specified rate of publication even) demands that risks be avoided. The current model of university research demands that money be brought in continuously to support the institution. The more papers and more press releases the better say universities and funding agencies (who can easily muster pages of statistics to show such things). Say you ignore the subtle messages being sent and buy into “high risk, high reward science” that NSF says it wants. And say your high risk project fails. What exactly do you write in that big space where you need to report on “Results of Prior Support” in your next grant proposal? Nobody told you that the high risk also included a high risk to your odds of ever getting funded again…which of course could carry other repercussions.
Well, maybe this is science nostalgia with no real basis (right up there with imagining the 50s were great during the Cold War, or the 70s with stagflation and an oil crisis). But it feels like the kinds of science Vannevar Bush argued should be supported back when NSF was formed are not the kinds of science we are looking to fund now. And that is too bad, because the biggest long-term bang for the buck will probably come from that kind of science.
No, this is not about being careful in what you say, or how quickly you jump if tapped on the shoulder. This is testing for how well an inversion can convince you of the presence of an anomaly.
Seismic tomography is one of those windows into the earth that is either a huge advance or a hall of mirrors. The single greatest challenge is to show that some high- or low-velocity blob is real. Sometimes you can do this by looking at raw travel time residuals, but most of the stuff we are looking at these days is lost in the noise in the raw data–it takes the blending of tons of data to get at the anomalies in question. (Seismologists have been wading in big data for a while now.)
Probably the most convincing test is some kind of sensitivity test (or, if you do the full matrix inversion properly, an a posteriori covariance or resolution matrix–but with the numbers of degrees of freedom in most tomographic studies, these are few and far between). A simple form is a checkerboard, but let’s consider a better one, a hypothesis test. As we’ll see, there are unexpected pitfalls.
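To make the checkerboard idea concrete, here is a toy sketch in Python. The grid size, ray count, noise level, and damping are made-up illustrative numbers, and the random sparse matrix stands in for a real ray-traced kernel; this is only the mechanics of the test, not any particular study’s setup:

```python
# Sketch of a checkerboard resolution test for a toy 2-D travel-time
# tomography problem (all sizes and the damping value are illustrative).
import numpy as np

rng = np.random.default_rng(0)
nx = ny = 8                      # 8x8 grid of slowness cells
n_cells = nx * ny
n_rays = 300                     # synthetic ray paths

# Ray-path "length" matrix G (n_rays x n_cells); a real G comes from
# ray tracing, but a sparse random matrix shows the mechanics.
G = rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.2)

# Known input model: a +/-1 checkerboard of slowness perturbations
ix, iy = np.meshgrid(np.arange(nx), np.arange(ny))
m_true = np.where((ix // 2 + iy // 2) % 2 == 0, 1.0, -1.0).ravel()

# Synthetic travel-time residuals, with a little noise
d = G @ m_true + 0.05 * rng.standard_normal(n_rays)

# Damped least-squares recovery: m = (G^T G + eps^2 I)^-1 G^T d
eps = 0.5
m_rec = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_cells), G.T @ d)

# How well does the inversion reproduce the pattern it was fed?
corr = np.corrcoef(m_true, m_rec)[0, 1]
print(f"pattern recovery correlation: {corr:.2f}")
```

Where the recovered pattern smears out or loses amplitude, anomalies imaged in that part of the model should be viewed with suspicion–which is exactly the point of the test.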
A couple of essays today bring the Grumpy Geophysicist back again to the topic of Wilderness and wilderness (you can enter that as a search in the upper right to see some of those posts). In one in the New York Times, the author seems torn over the implications of the use of tracking devices to preserve endangered species. On the one hand, this helps us keep species around that might simply vanish if neglected. On the other, how wild is a place if we can find and kill the predators eating what we don’t want them to eat? (You wonder how far we are from Predator drones striking actual predators). The other essay, in Boulder’s Daily Camera, suggests that we have lost sight of the proper goals of preserving natural landscapes. This author argues that “Nature is [now] viewed as highly resilient rather than fragile, not to be valued for its magnificent abundance and diversity, but for the “ecosystem services” it provides humans.” [There is some naiveté in this, as National Forests were created largely to preserve watersheds so that clean water could flow to lowlands below, surely an ‘ecosystem service’].
To recap where GG comes at this, first, there has not been a pristine ecosystem unaffected by humans in probably close to 10,000 years in North America. Second, this means that many if not most ecosystems are in some ways unbalanced. Third, the setting aside of lands as Wilderness resulted from a coalition of interests. What these and many other similar pieces point to is the ongoing fragmentation of that coalition.
A story in the Denver Post the other day addressed the anger some landowners have over regulations designed to limit the ability of oil and gas companies to drill. The reason is that these landowners, unlike many in the area, retained ownership of the minerals beneath their land. The irony is that there was no original intent to let people acquiring farmland own precious minerals. The American precedent most relevant for mineral rights owners in Colorado probably originated in the foothills of the Sierra Nevada more than 150 years ago.
At issue back then was the gold under a Mexican land grant owned by John C. Fremont, a man famed at the time for having documented the routes west into California and Oregon and for having played a role in conquering California from Mexico. He had arranged for some property to be bought in the state and ended up with a property called Las Mariposas in the Sierran foothills. At the time, the property seemed nearly worthless, as the previous owner had not in fact “proved up” on the land in accordance with Mexican law. Whether Fremont really intended to buy or keep the land is disputed, but the fact was, he held the Mexican title. And a year later, when gold was found a little to the north at Sutter’s Mill, Fremont realized that he had, literally, bought a gold mine.
Well, the chickens have come home to roost in Oklahoma. After spending years obfuscating and denying any role in creating earthquakes in the Sooner State, the oil and gas industry has burned through their political cover and now are being told to cut way back on injecting waste from oil and gas wells into deeper strata.
Had industry taken a more proactive stance some years ago and invested some time and money in determining which wells, under which circumstances, were causing earthquakes, they might not today be facing a downturn in production. It is quite likely that many of the 411 wells told to cut back were not part of the problem, but figuring that out now is probably very difficult. Meantime, the fluid already injected over the past several years is likely going to continue to produce earthquakes for some time to come.
. . . and reply later? Or comment and reply now?
What is this about? In an era of blog posts and open-access review, the classic means of having a scientific discussion is the comment and reply. Comments are generally short pieces that are intended to point out factual or logical errors in a published paper; the replies are for the original authors to refute (if they can) what the comment says. Oftentimes readers discover that one side or the other missed the point, which can be helpful.
Different journals have had different policies on comments and replies (or discussions or whatever other label might exist). In earth science, journals like Geology and GSA Bulletin have usually published the comment and reply together. Other journals, like the Journal of Volcanology and Geothermal Research, have chosen to publish comments once they have been reviewed; only then are the original authors invited to prepare a reply, one that will show up months later.
The argument for comments and replies appearing together is that this is a conversation; missing half of it can lead to misunderstandings (such as the impression that the original authors do not disagree with the comment). The argument against is that putting the original authors under time pressure can compromise the quality of the reply, which may end up ill thought through; alternatively, if the original authors are not pressured, the authors of the comment are treated poorly as the comment takes forever to come out.
FYI, this was all prompted by the decision of the editors of Geosphere (a sister publication to Geology and GSA Bulletin) to use the “publish the comment now and the reply later” strategy, which felt really odd to many of us familiar with these other GSA journals (if you want to weigh in on this strategy, pro or con, contact the Geosphere editors–I am not providing a direct link as really it should only be publishing geoscientists bothering these guys. If you publish, you know how to find them).
So we’ll try a first-ever poll of those of you brave enough to wander into this blog. What do you think should be the standard? And of course feel free to comment on this whole comment and reply thing…