Although in some ways overexploitation of water resources has faded out a bit as climate change has caught the focus of those worried about the sustainability of civilization, it hasn’t gone away. Water managers in the western U.S. have probably paid closer attention to the possible changes in climate than…well, nearly everybody. And unlike many others, they are looking to act. But how?
The California drought that ended in early 2017 was a preview of all the problems climate change is apt to generate. While the snowpack and rainfall amounts were not as low as in the late-1970s drought, the heat was noticeably greater, and research found that the drought's impact on vegetation exceeded that of the older, drier drought because of higher evaporation and transpiration. And then the drought ended in dramatic fashion, with big snowpacks and heavy rains leading to the erosion of the Oroville Dam's spillway in late winter of 2017. Basically feast or famine.
So what is the rational response to this? Frankly, it is to have more water storage, ideally with less evaporation. And, somewhat oddly, increasingly variable precipitation puts pressure on the usual alternative of improved water conservation.
A number of the posts the Grumpy Geophysicist has written have hidden in their depths a fundamental tension between science as an ideal goal and science as a profession. Consider part of Hubbert’s GSA Presidential Address screed from 1963:
Instead of remaining primarily educational institutions and centers of fundamental inquiry and scholarship, the universities have become large centers of applied research. In fact, it is the boast of many that their highest-paid professors have no teaching duties at all! Instead of providing an atmosphere of quiet, with a modicum of economic security afforded by the system of academic tenure, where competent scholars may have time to think, the universities have become overstaffed with both first- and second-class employees. Those of the first class, who bear the title of “professor” and enjoy academic tenure, have largely become Directors of Research; those of the second class, whose competence often equals or exceeds that of the first class, are the research-project employees whose tenures extend from one contract to the next.
Complementing activities of this sort [of large research lab] is the prevailing academic system of preferment based upon the fetish of “research.” Faculty members are promoted or discharged on the basis of the quantity of their supposed research, rarely on the basis of their competence as teachers. And the criterion of research is publication. The output per man expected in some institutions, I am informed, is three or four published papers per year. In almost any university one hears the cynical unwritten motto: “Publish or perish.” In addition, there is the almost universal practice of paying the traveling expenses to attend scientific meetings of those faculty members who are presenting papers at the meeting; the “nonproductive” members can pay their own way or stay home. The effect of this on the number and quality of papers with which the program of every scientific meeting is burdened requires no elaboration.
Although Hubbert spent most of his career outside of universities, he clearly deplored what he viewed as the corruption of the universities' intellectual pursuits by the growth of the post-WWII government-funded research establishment, a development most modern scientists view with great regard. Hubbert also overlooked that this development in fact increased the universities' capacity to train graduate students, so the negatives he described were overstated. Even so, it is a question worth contemplating: is a successful scientist a successful professor, and vice versa?
Reblog: Bt GMOs reduce pesticides, increase yields, and benefit farmers (including organic farmers) — The Logic of Science
The Logic of Science does a nice job of explaining in detail why and how one particular flavor of GMO crop is almost certainly a good thing–which underscores that the willingness to overlook science can come from both the left and the right.
Few technologies have been demonized to the same extent as genetic engineering. According to countless websites, GMOs are an evil scourge on the earth that destroy biodiversity, use exorbitant levels of pesticides, and hybridize rampantly with wild crops, and all of that is before we even get to the (largely false) claims about Monsanto. Reality, […]
The noble case being made for such services is that they can allow for broader peer review, they can avoid the embargoing of good science by evil journal gatekeepers, and they can accelerate the pace of science. All three are misguided, at least in earth science. Many if not most earth science articles are unlikely to attract much attention (most academia.edu earth science papers are unread, let alone “reviewed”). Delays in publication stem as much from the difficulty of finding reviewers as from anything else; outright obstruction can be avoided by going to another journal (there are quite a few) or complaining to a society president. And arguably the pace of science is a bit too fast, judging by the sloppy citations of the literature and piecemeal publications from some corners of the field.
If not for noble reasons, what is pushing this? It appears part of this is a desire by some to get something “citable” as soon as possible–for instance, Nature Geoscience editorialized “In an academic world structured around short grant cycles and temporary research positions whose procurement depends on a scholarly track record, there is room for a parallel route for disseminating the latest science findings that is more agile, but in turn less rigorously quality controlled.” [This is hilarious coming from a publisher whose lead journal actively quashes public or even professional discussion of papers prior to publication].
Let’s be clear: this is a crappy excuse that opens the door to “fake science.”
Humans are really bad at understanding risk. This is hardly a new or major insight. We know that driving drunk can lead to a host of bad outcomes, yet this happens on a nightly basis. We know that airplanes are safer than cars, yet many will eschew the fast airplane ride for a slower car trip on a safety basis. We know cigarettes shorten lives, and yet people start smoking every day. Probably each and every one of us engages in some mental sleight-of-hand to do something we rationally know we shouldn’t. [And we are equally bad on the other end of the spectrum: hang out in Vegas if you want to see that kind of bad understanding of expectations].
*Yawn*. Old news.
Related to this is the way that national news media trumpets the latest dramatic event. A school shooting. A plane crash. A train derailment. A terrorist attack. People feel threatened even though all of these events are profoundly unlikely to occur in their lives. They get stressed (like the kid writing his will anticipating a school shooting). They change plans. Even though their lives might be more threatened by an unfilled pothole than these other events.
The bizarre part, though, is that these same misunderstandings of risk and amplification of the unusual find their way into places where we could hope for a more rational evaluation. And yet bad decisions can be found in legislatures across the country. As some folks have pointed out many times, nearly all gun deaths are not mass shootings but more personal events–domestic disputes and suicides. Spending a lot of time focused on hardening schools or arming teachers diverts a lot of energy and resources from more productive places (like improving education or immunization). This isn’t to say that there should never be consideration of efforts to avoid school shootings, but that the amount of effort is out of proportion to the impact.
And yet, we do have mechanisms to address some of these kinds of issues. We buy life and fire insurance even if we don’t expect to die during the term of insurance or have our house burn down.
Then there is the other side of the equation: stuff that matters a lot but gets no attention until it is a disaster. There are lots of varieties of this, from decrepit water systems that poison a population to failing pipelines that pollute water and ground to atmospheric pollution that shortens lives. Mine dumps that pollute streams. Usually these are local disasters, and so there isn’t quite the spotlight that might make clear how large a problem these might be nationally.
At the high end of the risk spectrum is global warming. Here our native issues with future risk vs. present reward and evaluating the combination of magnitude of impact with the probability of outcome get super amplified. For instance, at this point we know that sea level will rise and that the seas will become more acidic, and both of these events are producing observable impacts from king tide flooding in Florida to changes in aquaculture in Washington. We know that we will see more record high temperatures. We are awfully certain that there will be more intense heat waves and more droughts because of the direct relation of these events to mean atmospheric temperature. These events, by themselves, are capable of producing thousands of deaths and billions of dollars in damage or costs of mitigation. And we know that failing to limit greenhouse gas emissions will make all this even worse.
And frankly, there are a lot of less certain but still quite probable events even more worrying, ranging from crop failures to ecosystem collapses to more frequent high-intensity storms of all stripes (including, ironically, snowstorms). Civilization was built on a pretty stable and forgiving climate.
Now, there are a number of rational responses to this, ranging from planning for migrations and mitigations, to imposing cap-and-trade or carbon tax policies, to even banning new oil and gas development; yet instead we have seen the scrubbing of climate change from government websites and documents. Nothing is more irrational than denial without evidence.
One of the rational responses might be to encourage nuclear power, an argument put forward forcefully (probably a bit too forcefully) in an Analog science-fact article by C. Stuart Hardwick [not online]. Here without question our fear of the unusual comes into play, and the number of warnings in popular media makes this seem undesirable. And yet it might not be, as things do get safer as engineers recognize the limitations of earlier designs. Germany has been keeping high-carbon soft coal power plants in the mix longer than necessary because it has instead chosen to shut down nuclear plants as solar and wind plants come online. It is possible that the health effects of long-term exposure to coal plant emissions make nuclear even more attractive. Yet people are familiar with finding coal dust on window sills or having a nagging cough, but worry about radiation exposure at any level, despite the fact we all are exposed all the time at some level.
Consider this: back in the early 1990s, there were still underground nuclear tests going on at the Nevada Test Site as characterization work was underway for a nuclear waste repository at Yucca Mountain (on the edge of the test site). The Nevada congressional delegation was unified: testing must continue, but waste storage had to be stopped. This was stupendously irrational (even if it was politically acceptable): a main argument against waste being stored was that it would have to pass through the greater Las Vegas area on its way to Yucca Mountain and there might be a spill. But to conduct tests, actual nuclear warheads were being moved through Vegas. Arguments that the waste repository might leak seemed to overlook the fact that underground explosions were forcibly injecting radioactive material into the rock and groundwater system.
We will never have rational personal behavior; there are just too many things about people that are harder to correct than is worth the effort. But as our power over the globe tightens, we need to put that personal irrationality behind us and find ways to govern from a more rational risk/reward understanding, one that necessarily will require scientific study of many topics. Or, as a recent New York Times op-ed put it, we can ignore science at our peril.
Sorry, had to point this out. Anderson Cooper was here at CU and said that…
…he doesn’t know of another profession that tries as hard to “get it right,” and correct itself when it makes mistakes.
(This according to a Daily Camera story, which we know must be right because they are journalists and try to “get it right”….)
Oooh, ooh, let GG try to see if there might be other such professions!
Um, science, maybe? ‘Cause that is kind of the definition in many ways.
Probably a lot of engineering (hey, let’s rebuild the Tacoma Narrows bridge the same way!).
Bet airline pilots try hard to get it right.
How about rocket science? When was the last time NASA blew up a rocket because of frozen O-rings? Just once, right?
Sure, there are lots of fields where the same mistake happens over and over (can we say “food poisoning” and “Chipotle” and find them together in news stories from multiple years?). But you do sometimes wonder how much journalists actually notice about occupations other than politician and commentator….
Why make a model? For engineers, models are ways to try things out: you know all the physics, you know the properties of the materials, but the thing you are making, maybe not so much. A successful engineering model is one that behaves in desirable ways and, of course, accurately reproduces how a final structure works. In a sense, you play with a model to get an acceptable answer.
How about in science? GG sometimes wonders, because the literature sometimes seems confused. From his perspective, a model offers two possible utilities: it can show that something you didn’t think could happen, actually could happen, and it shows you situations where what you think you know isn’t adequate to explain what you observe. Or, more bluntly, models are useful when they give what seem to be unacceptable answers.
The strange thing is that some scientists seem to want to patch the model rather than celebrate the failure and explore what the failure means. As often as not, this is because the authors were heading somewhere else and the model failure was an annoyance that got in the way, but GG thinks that the failures are more often the interesting thing. To really show this, GG needs to show a couple actual models, which means risking annoying the authors. Again. Guys, please don’t be offended. After all, you got published (and for one of these, are extremely highly cited, so an obscure blog post isn’t going to threaten your reputation).
First, let’s take a recent Sierran paper by Cao and Paterson. They made a fairly simple model of how a volcanic arc’s elevation should change as melt is added to the crust and erosion acts on the edifice. They then plugged in their estimates of magma inputs. Now GG has serious concerns with the model and a few of the data points in the figure below, but that is beside the present point. Here they plot their model’s output (the solid colored line) against some observational points [a couple of which are, um, misplotted, but again, let’s just go with the flow here]:
The time scale is from today on the left edge to 260 million years ago on the right. The dashed line is apparently their intuitive curve to connect the points (it was never mentioned in the caption). What is exciting about this? Well, the paper essentially says “hey we predicted most of what happened!” (well, what they wrote was “The simulations capture the first-order Mesozoic-Cenozoic histories of crustal thickness, elevation and erosion…”)–but that is not the story. The really cool thing is that vertically hatched area labeled “mismatch”. Basically their model demands that things got quite high about 180 Ma but the observations say that isn’t the case.
What the authors said is this: “Although we could tweak the model to make the simulation results more close to observations (e.g., set Jurassic extension event temporally slightly earlier and add more extensional strain in Early-Middle Jurassic), we don’t want to tune the model to observations since our model is simplified and one-dimensional and thus exact matches to observations are not expected.” Actually there are a lot more knobs to play with than extensional strain: there might have been better production of a high-density root than their model allowed, there might have been a strong signal from dynamic topography, there might be some bias in Jurassic pluton estimates…in essence, there is something we didn’t expect to be true. This failure is far more interesting than the success.
A second example is from the highly cited paper by Lijun Liu and colleagues in 2008. Here they took seismic tomography and converted it to density contrasts (again, a place fraught with potential problems) and then they ran a series of reverse convection runs, largely to see where a high wavespeed anomaly under the easternmost U.S. would rise to the surface when run back in time. The result? The anomaly thought to be the Farallon plate rises up to appear…under the western Atlantic Ocean. “Essentially, the present Farallon seismic anomaly is too far to the east to be simply connected to the Farallon-North American boundary in the Mesozoic, a result implicit in forward models.”
This is, again, a really spectacular result, especially as “this cannot be overcome either by varying the radial viscosity structure or by performing additional forward-adjoint iterations...” It means that the model, as envisioned by these authors, is missing something important. That, to GG, is the big news here, but it isn’t what the authors wanted to explore: they wanted to look at the evolution of dynamic topography and its role in the Western Interior Seaway–so they patched the model, introducing what they called a stress guide, but which really looks like a sheet of teflon on the bottom of North America so that the anomaly would rise up in the right place, namely the west side of North America. While that evidently is a solution that can work (and makes a sort of testable hypothesis), it might not be the only one. For instance, the slab might have been delayed in reaching the lower mantle as it passed through the transition zone near 660 km depth, meaning that the model either neglected those forces or underestimated them. Exploring all the possible solutions to this rather profound misfit of the model would have seemed the really cool thing to do.
Finally, a brief mention of probably the biggest model failure and its amazingly continued controversial life. One of the most famous derivations is the calculation of the elevation of the sea floor based on the age of the oceanic crust; the simplest model is that of a cooling half-space, and it does a pretty good job of fitting ocean-floor depths out to about 70 million years in age. Beyond that, most workers find that the seafloor is too shallow.
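The half-space prediction is simple enough to sketch in a few lines. Here is a minimal Python illustration using the classic Parsons and Sclater (1977) fit, where depth deepens as the square root of age (a ridge depth of roughly 2500 m and a coefficient of roughly 350 m per √Myr); treat these constants as illustrative, since published fits vary:

```python
import math

def halfspace_depth(age_myr, ridge_depth_m=2500.0, coeff=350.0):
    """Half-space cooling prediction: seafloor depth grows as sqrt(age).

    Constants follow the commonly quoted Parsons & Sclater (1977) fit;
    they are illustrative, not definitive.
    """
    return ridge_depth_m + coeff * math.sqrt(age_myr)

# The sqrt(t) curve tracks observed depths well out to ~70 Myr;
# beyond that, observed seafloor is shallower than this prediction.
for t in (0, 10, 40, 70, 120):
    print(f"{t:4d} Myr -> {halfspace_depth(t):6.0f} m predicted")
```

For 120-Myr-old crust this predicts a depth near 6300 m, whereas old seafloor tends to flatten out shallower than that, which is precisely the discrepancy the plate-model and dynamic-topography papers try to explain.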
This has spawned a fairly long list of papers seeking to explain the discrepancy (some by resampling the data to find that the original curve can fit, others by using a cooling plate instead of a half-space, others by invoking the development of convective instabilities that cause the bottom of the plate to fall off, still others by invoking some flavor of dynamic topography, and more). In this case, the failure of the model was the focus of the community–that this remains controversial is a bit of a surprise, but it goes to show how interesting a model’s failure can be.