Back when I was looking into graduate schools in early 1981, I was told rather firmly by several eminent seismologists that there was no future in field seismology. The future of the field was in the lab, using recordings from permanent stations around the world. Now I wasn’t convinced of that, and I ended up at MIT, where part of my PhD thesis was helping to run a field deployment of seismometers. Since then I’ve helped with or run many other experiments, and it has been interesting to see just how the equipment and the beliefs of the community have evolved over time. [This does get rather long–just so you are warned.]
It’s been a while since we visited fracking. As a reminder, fracking is injecting high pressure fluids into rock to break open fractures for oil and/or gas to migrate into the well; however, the term has come in popular culture to be more or less synonymous with oil and gas development in general. This misappropriation has largely been because of renewed oil and gas development fueled by the application of horizontal drilling into “tight” source rocks that have to be fracked to produce petroleum. (“Tight” formations are usually the source rock of petroleum but lack the porosity for the fluids or gas to migrate into a traditional reservoir; these are usually shales.) You could just as well call it “horizontal drilling” as “fracking” and remain exactly as accurate.
GG has argued that fracking (sensu stricto) itself isn’t nearly the problem; most of the complaints really come from having industrial activities (with associated noise and air pollution) near residential areas, along with occasional failures of the packing of the well that can release drilling fluids or produced water into shallow aquifers. The other hazard that has developed is the huge increase in produced fluids (the foul waters that accompany oil and gas development) that are usually disposed of in injection wells, which has led in some places to pretty considerable increases in earthquakes, most notably in Oklahoma.
But as more and more operations are active, we’re seeing more incidents of the rarer side effects of fracking itself, in this case earthquakes generated by fracking activities. The most recent case is in England, where the sole fracking operation in the country is now at a standstill after producing a M2.9 earthquake. Although it isn’t entirely clear whether there is a separate injection well in this system, it does seem from the reports online that this was caused directly by fracking. Following fracking-induced events in Canada, Ohio, and Oklahoma, it seems that this hazard can’t be entirely anticipated but can be managed by being ready to stop. In England, though, this might mean the end of attempts to use fracking to develop tight oil and gas in that country.
In the U.S., several Democratic candidates have called for an end to fracking, generally with little clarity about what exactly that might mean. For instance, Elizabeth Warren tweeted out that she would “ban fracking- everywhere” on her first day in office (um, no, not a power the president has). However, fracking is not mentioned at all on her rather meaty website, suggesting that her program might be a bit more nuanced than her tweet. The rationale for at least some of the candidates, though, is to end oil and gas development as a means of addressing climate change, which is a more scientifically literate reason for opposing all new development.
But one should be careful in these matters. While we certainly need to leave a lot of carbon in the ground (barring a major success in CO2 scrubbing from the atmosphere), one would want to make sure that, say, banning fracking in areas where the technique is well developed does not lead to new conventional development in fields presently untouched.
The British experience, though, is suggesting that exporting the U.S.’s success in tight oil and gas development might not go as smoothly as many in industry had hoped. Whether that is a good or bad thing depends on your perspective.
So Howard Lee over at Ars Technica took a swing at how our understanding of global tectonics has been changing over the past 40 or 50 years and wrote a lengthy article on it. It is full of quotes and assertions that really don’t hang together very well, making a certain geophysicist kind of grumpy. It doesn’t seem that any of the scientists quoted were really saying anything wrong, but the assembly in the article, which doesn’t seem to recognize the discrepancies or fully grasp the techniques being used, can lead to a sense of “WTF?”
Geophysical inverse problems have an interesting set of difficulties. First, they routinely suffer from being of mixed determination–some parameters are overdetermined by the available data, others underdetermined. Second, they are usually highly non-linear, meaning that it is easy to get trapped somewhere in model space you don’t want to be. The combination presents problems that aren’t as easily solved as is often assumed, in particular the difficulties posed by the use of damping and smoothing constraints in inversions.
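To make the damping issue concrete, here is a minimal sketch of Tikhonov (damped) least squares on a made-up two-parameter, mixed-determined problem; the matrix, data, and damping value are all invented for illustration, not taken from any real inversion.

```python
import numpy as np

# Hypothetical mixed-determined problem: both observations constrain m1,
# but nothing constrains m2 (its column of G is all zeros).
G = np.array([[1.0, 0.0],
              [2.0, 0.0]])
d = np.array([1.0, 2.0])  # exactly consistent with m1 = 1

def damped_lsq(G, d, eps):
    """Minimize ||Gm - d||^2 + eps^2 ||m||^2 (Tikhonov regularization)."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + eps**2 * np.eye(n), G.T @ d)

m = damped_lsq(G, d, eps=0.1)
# m[1] (underdetermined) is quietly set to zero rather than flagged as
# unconstrained, and m[0] comes out slightly below its true value of 1:
# the damping has leaked bias into a well-resolved parameter.
print(m)
```

The damped answer looks perfectly reasonable while hiding both the unconstrained parameter and the bias, which is why resolution and covariance analysis–not the model values alone–matter.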
One of the bread-and-butter things seismologists do is locate earthquakes. There are kind of two main flavors of this: one is global and the other is local/regional. GG doesn’t do global relocations (but will point out that depth of such locations often relies on the presence of depth phases like pP reflections from the surface or the relative strength of surface waves) but has done local event locations. And there are gotchas out there that often aren’t as appreciated as they should be because all too often data is pitched into an inversion code and the resulting output is accepted as correct.
We’ll start with some simple things and move into somewhat complex stuff a bit but will stop short of the really ugly problems of locating events in a 3-D structure that is in part determined by those same travel times.
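As a baseline for what follows, a local location is conceptually just a search over space and origin time for the point that best predicts the arrival times. Below is a toy epicenter-only grid search in a uniform half-space; the station geometry, velocity, and picks are all invented, and it deliberately ignores depth and 3-D velocity structure–which is exactly where the gotchas live.

```python
import numpy as np

# Invented network geometry (km) and an assumed uniform P velocity.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vp = 6.0  # km/s

# Synthesize "observed" picks from a known epicenter and origin time.
true_xy, true_t0 = np.array([4.0, 6.0]), 1.0
t_obs = true_t0 + np.linalg.norm(stations - true_xy, axis=1) / vp

def locate(stations, t_obs, vp, grid=np.linspace(0.0, 10.0, 101)):
    """Grid search for the epicenter minimizing the RMS travel-time residual."""
    best_rms, best = np.inf, None
    for x in grid:
        for y in grid:
            t_pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / vp
            t0 = np.mean(t_obs - t_pred)  # origin time trades off linearly
            rms = np.sqrt(np.mean((t_obs - t_pred - t0) ** 2))
            if rms < best_rms:
                best_rms, best = rms, (x, y, t0)
    return best

x, y, t0 = locate(stations, t_obs, vp)
print(x, y, t0)  # recovers the true epicenter and origin time (to grid spacing)
```

With perfect synthetic data this works trivially; the trouble in real life comes from noisy picks, unknown depth, and a wrong velocity model, all of which trade off against the location in ways a happily converging code won’t volunteer.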
“We’d like to think we know about all of the faults of that size and their prehistory, but here we missed it,” Dr. [Ross] Stein said.
“The geologists in this area are the very best — people aren’t asleep at the wheel,” he said. “But there are real opportunities for young scientists to come in and learn how to do this better.”–New York Times story on Ridgecrest earthquake
We missed it? As one who has worked in this area, GG didn’t feel that way, though he was never asked beforehand if a M7 was possible there. There were mapped scarps in very young alluvium along a pretty well established seismic lineament. That this could be one connected fault seemed pretty darn obvious, but close study was always a challenge due to the presence of the China Lake Naval Weapons Center. It even had a name–the Airport Lake fault zone. And frankly, there are many others like this kicking around in the west.
There is in point of fact a very long list of geoscientists “missing it” out there, including most prominently these:
- When GG was an undergraduate he was taught that all earthquakes in California with a magnitude above about 6 would produce ground rupture. This was then followed in short order by the Coalinga earthquake (1983, M6.7), the Whittier Narrows earthquake (1987, M5.9), the Loma Prieta earthquake (1989, M6.9), and the Northridge earthquake (1994, M6.7), none of which produced the kind of dramatic surface rupture expected. (While there was some surface deformation in Loma Prieta, it isn’t clear that any of it was from the main fault.) Frankly, the peculiar relation between the surface rupture and fault rupture of the 1952 Kern County (Arvin-Tehachapi) earthquake should have been a hint that surface rupture wasn’t a given.
- Seismic hazard assessments assumed that the biggest earthquake you could get from slip on a fault was related to the length of that fault. Then we got the Landers earthquake (1992, M7.3), which ruptured several unconnected but similar faults. This should have been seen coming, though, as the Dixie Valley/Fairview Peak earthquakes in 1954 demonstrated much the same kind of behavior. A related misjudgment was that big faults were segmented and thus a maximum earthquake could be inferred from past ruptures. Tohoku (2011, M9.1) underscored what a bad interpretation that was.
- Seismologists often would say that earthquakes don’t trigger distant earthquakes because the finite stress changes don’t go out that far. The Landers event triggered seismicity as much as 1250 km away, mainly (it seems) from the dynamic stresses associated with the surface waves from that event. This has now been observed in other large events. There are suggestions that other stress transfer mechanisms might be out there that led, for instance, to the Little Skull Mountain earthquake and the much later Hector Mine (M7.1) earthquake after Landers.
- Not as clearly stated, but clearly in the mindset of seismologists, was the idea that big earthquakes have one dominant style of motion. So while Landers was on several faults, they were all pretty much strike-slip faults, and the feeling was that they were connected at depth. But we then got the Kaikoura earthquake (2016, M7.8) (among others), which spectacularly lit up a large number of individual faults with wildly different styles of slip. Frankly, the Big Bear earthquake (M6.3) that shortly followed Landers but was on a totally separate fault with a very different orientation should have hinted that very complex earthquakes were possible.
So frankly, that a seismic zone with scattered scarps preserved in an alluviating environment turned out to mark a through-going fault is hardly a shock. GG thinks a better interview target would have been Egill Hauksson, who has studied the seismicity of the Coso region in particular (something Ross Stein had not done prior to this event), to see if he felt that this was “missed.”
Given all this, what are some of the under-appreciated hazards out there? After all, the Big One is supposed to be a rerun of the 1857 Ft. Tejon earthquake. GG thinks worse could be out there. You want a really big one? What if the Malibu Coast, Hollywood Hills, Raymond Hills and Sierra Madre faults all went as one event? They all are doing the same sort of thing, but hazard mappers consider each to be independent. And while that is probably true for the average surface rupturing earthquake (as, for instance, 1971 San Fernando was separate from the kinematically similar and adjacent Northridge earthquake), that is no guarantee. Maybe you wouldn’t exceed M8, but a rupture like that would pound LA like nothing else. Or maybe multiple segments of the Wasatch Fault go as one (though frankly even the one segment in Salt Lake City would be devastating). There are no end of partially buried, poorly studied structures across the whole of the Basin and Range. Lots of stuff could be hiding in the forests of the Cascades as well.
Basically, when we look at the Earth as geologists, we are seeing only the top surface of a deforming medium. That top surface is constantly being modified by other processes (mainly erosion, deposition and urbanization). Toss in that major earthquake faults are not razor-sharp planes penetrating the earth but a complex network of smaller faults that have coalesced in some manner, and you expect it to be hard to pick out all the big faults. Even with subsurface information (which is often quite deficient in these areas), faults can hide. Go farther east and it gets even hazier, as recurrence times get really long and hints of past activity hide from view. Frankly, there are probably some truly great misses out there; Ridgecrest really isn’t that far off the mark from what we might have expected.
Update 7/14/19. Things are steadily quieting down in this area, though there are still a lot of small (M<2.5) quakes just west of the rhyolite domes. This spot and the area near Little Cactus Flat to the north remain the most active areas outside of the original ruptures.
Update 7/11/19. While the number of quakes in this area is declining, there was a M4.3 that also had a large non-double-couple mechanism–according to Caltech. The USGS-NEIC also estimated a solution and got something much more like regular fault slip, which indicates that getting mechanisms for very shallow M4s can be tweaky. While more action is now farther north, those events look more fault-like–though those mechanisms are also from NEIC, so it could be that NEIC’s procedures tend toward double-couple solutions more than CIT’s do. And as an aside, it is a bit surprising how little activity there has been at Mammoth–it is an area that has had seismicity triggered by surface waves in the past, but it has remained fairly quiet this go-round.
Original post: One thing GG has kind of been looking for is whether the M7.1 Ridgecrest event is triggering things near the Coso volcanic field. And it seems there is something worth being concerned about going on.
Seismicity in this area is traditionally shallow, meaning above 5 km depth (Monastero et al., 2005). The tight cluster of orange dots includes 2 M4+ earthquakes. This area is at the west edge of a seismic discontinuity at about 5 km depth inferred to represent the top of a magma chamber (Wilson et al., 2003). While there has certainly been seismicity in this region before, given the proximity to fairly recent volcanic activity, one has to wonder if there is magma on the move. Supporting that are the focal mechanisms for the two M4 earthquakes, both of which have substantial non-double-couple components (indeed, the mechanism for one looks very much like a diking event). Given that all these events are being located in the top 2 km (probably relative to sea level, so the top 3 km of crust), this could get pretty interesting pretty fast.
As background, the central core of the Coso volcanic field is made up of silica-rich rhyolites that appear as blister-like bodies in the image above. Surrounding this core area, which overlies the seismically inferred magma body, are basaltic eruptions (like Red Cone, in the lower left corner). The troubling seismicity is directly on the road into the geothermal area from Coso Junction to the west.
An overview of the M7.1 with the first InSAR image of the 7.1 rupture is at Temblor.com. This also discusses seismicity in this area, but with less consideration of volcanic activity.
We’ve discussed isostasy a few times here, but today let’s stand back and ask the question, how do we determine what has led to the creation of isostatically supported topography? We will for today put aside the discussions of dynamic topography and just concern ourselves with isostatically supported topography, which seems likely to describe much of the US Cordillera. For this post, we’ll just focus on the crustal part of the problem, leaving the mantle for another day.
OK, first up is that isostasy means that the integral of density from the surface to some depth of compensation (usually somewhere in the asthenosphere) is constant. So how do we get at density at such great depths? At first blush you might think “gravity,” as that is the geophysical observable produced by mass. The problem is that gravity is non-unique: you can recreate any gravity field with a thin surface layer of varying density. Gravity gradients tell you the maximum depth at which an anomaly can lie, and the integral over a broad region tells you the total mass surplus or deficit relative to some reference. Those integrals support isostasy, but the gradients are tough to work with because isostasy is only thought to work well at wavelengths long enough that the strength of the lithosphere becomes irrelevant. So in essence you need to smooth gravity out to appropriate wavelengths–and once you do that, the depth limits in the raw gravity are pretty much gone.
So with gravity being relatively useless, where do we go? Keep in mind that we’ll be wanting to compare two columns to be able to discern what happened at one column relative to the other to produce a difference in elevation.
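Before doing that, it helps to recall the simplest (Airy) version of the column comparison, where extra elevation is balanced by a low-density crustal root displacing mantle. A back-of-the-envelope sketch, with generic textbook-style densities chosen purely for illustration:

```python
# Airy isostasy: equal mass in each column down to the compensation depth
# means elevation h is supported by a crustal root r = h * rho_c / (rho_m - rho_c).
# Density values below are illustrative, not measurements.
rho_c = 2800.0   # crustal density, kg/m^3
rho_m = 3300.0   # mantle density, kg/m^3

def airy_root(elevation_m, rho_crust=rho_c, rho_mantle=rho_m):
    """Crustal root (m) needed to support `elevation_m` of topography
    relative to a reference column at sea level."""
    return elevation_m * rho_crust / (rho_mantle - rho_crust)

# A 2 km high plateau implies roughly an 11 km root: small density
# contrasts between columns translate into large structural differences.
print(airy_root(2000.0))  # 11200.0
```

Of course, thickening the crust is only one way to change the balance–removing dense lower crust or lowering mantle density works too–which is why elevation alone can’t tell you which column changed, or how.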
Recently NSF’s EarthScope program office put out a media announcement with the top ten discoveries they attributed to the soon-to-end program. (EarthScope, for those unfamiliar with the program, originally had three main legs: the Transportable Array (TA) + Flex Array collection of seismometers, the Plate Boundary Observatory (PBO) network of GPS stations, and the San Andreas Fault Observatory at Depth (SAFOD), a drill hole through the fault). What struck GG about this collection was just how little we learned about tectonics, which was a selling point of sorts for the program prior to its start.
Now some of the “discoveries” are not discoveries at all–one listed is that there is a lot of open data. Folks, that was a *design*, not a discovery. A couple are so vague as to be pointless–North America is “under pressure” and there are “ups and downs” in drought–stuff we knew well before EarthScope, so these bullets give little insight into what refinements arose from the program. And then the use of LIDAR to look at displacements of the El Mayor-Cucapah earthquake was hardly a core EarthScope tool or goal, even if the program might have contributed funds. So the more substantive stuff might amount to 5 or 6 points.
Arguably PBO has more than delivered and SAFOD disappointed, but GG would like to consider the TA’s accomplishments–or non-accomplishments. TA-related “discoveries” in this list are actually a single imaging result and two technique developments (ambient noise tomography, which emerged largely by happy coincidence, and source back projection for earthquake slip, which is largely a continued growth of preexisting techniques). So in terms of learning about the earth, we are really looking at one result worthy of inclusion.
So FiveThirtyEight has a story about how inadequate hurricane intensity numbers (Saffir-Simpson scale categories) are. Basically the destructive potential of a hurricane is poorly linked to that number. But the funny thing in reading the piece is that you could substitute Richter magnitude for Saffir-Simpson scale and make almost no other changes and the article would sound about right. Richter magnitudes (as popularly understood; the numbers reported for events are usually moment magnitudes these days) tell you almost nothing about the destructive potential of an earthquake.
Just as with hurricanes, where an earthquake strikes is critical in determining its damage. Magnitude 8 earthquakes 600 km under western Brazil are barely even noticed, while a M5.9 in northern Haiti killed over a dozen people. The details can be amazingly important: a M6.3 earthquake in Christchurch devastated the city center and killed nearly 200, while the earlier M7.0 earthquake only a few miles away produced little damage and no fatalities.
Just as with hurricanes, the details of the earthquake will affect its ability to do damage. When an earthquake ruptures in one direction, damage will be greater in that direction than 180 degrees away. Another New Zealand quake, the 2016 M7.8 Kaikoura earthquake, ruptured from south to north, sparing areas closer to the epicenter but causing enough shaking in Wellington, across Cook Strait from the event, that several buildings had to be torn down. Toss in intrinsic variations in frequencies due to variations in stress drop and it is clear that a magnitude by itself doesn’t carry the whole story.
A popular pastime in southern California is guessing the magnitude of an earthquake solely from what was felt. GG recalls a radio news program years ago when there was an earthquake near GG, who felt the quake before the radio broadcaster did. Callers speculated on where and how large this was: “I’m in San Bernardino and it was a slow rolling event so probably on the San Andreas to the north” “It was a sharp event that must have been a magnitude 6” and so on. (In fact, when you are close you tend to get a very sharp movement from the P-waves, but farther away it is the surface wave train that produces a more rolling movement).
The Richter magnitude is about forty years older than the Saffir-Simpson scale, and as a result seismologists have had that much more time to try to clarify all the things that go into earthquake damage. Look at the earthquake.usgs.gov page for a recent large event and you will see far more than the magnitude. The PAGER page tries to estimate damage and deaths almost immediately after an event to help gauge the need for emergency assistance. Stories about the “Big One” that dominated California media for decades are being replaced with more nuanced stories highlighting the risk from faults through urban areas like the Malibu Coast/Hollywood Hills fault system or the Hayward Fault. And the interaction with the engineering community is far more sophisticated than 40 or 50 years ago, with power spectra and 50-year exceedance criteria being passed on from the seismological community.
And yet we get stories about the earthquake-proof house that can withstand “an earthquake registering up to 9.0 on the Richter scale”. Well, GG’s house survived a M9 earthquake–sure, it was across the globe, but the point is that distance and environment matter. Would these buildings make it if they sat right on a 20 m fault rupture? Doubtful. So surviving a M9 means nothing. Surviving some threshold of ground motion? That might be useful, but the public probably wouldn’t grasp a maximum acceleration of 2g as a useful number.
So good luck, meteorologists. Your best hope might be to scale the total kinetic energy in a hurricane to a level from 1 to 5, where you could add decimals. Oh wait, they’ve done that. So why isn’t this on TV and the web now?