So Howard Lee over at Ars Technica took a swing at how our understanding of global tectonics has changed over the past 40 or 50 years and wrote a lengthy article on it. It is full of quotes and assertions that don’t hang together very well, which makes a certain geophysicist kind of grumpy. It doesn’t seem that any of the scientists quoted said anything wrong, but the article’s assembly of their statements, which neither acknowledges the discrepancies nor fully grasps the techniques being used, can leave a reader with a sense of “WTF?”
Geophysical inverse problems present an interesting set of difficulties. First, they routinely suffer from being mixed-determined: some parameters are overdetermined by the available data while others are underdetermined. Second, they are usually highly non-linear, meaning it is easy to get trapped somewhere in model space you don’t want to be. The combination presents problems that aren’t as easily dealt with as is often assumed, in particular the difficulties posed by the use of damping and smoothing constraints in inversions.
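A toy example shows what damping does to a mixed-determined problem (all of the numbers below are invented for illustration): with no damping, the poorly constrained third parameter is wildly unstable against small data noise; with damping, it is stable but biased toward zero. That bias is the part that is often under-appreciated. A minimal sketch:

```python
import numpy as np

# Toy mixed-determined problem: 3 data, 3 model parameters, but the
# third parameter is barely sensed by the data (the 1e-4 coefficient).
G = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1e-4]])
m_true = np.array([1.0, 2.0, 3.0])
d = G @ m_true + np.array([0.01, -0.01, 0.005])  # small "measurement" noise

def damped_lsq(G, d, eps):
    """Tikhonov (damped) least squares: minimize |Gm - d|^2 + eps^2 |m|^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + eps**2 * np.eye(n), G.T @ d)

for eps in (0.0, 0.01, 0.1):
    print(eps, damped_lsq(G, d, eps))
```

With eps = 0 the noise in the third datum is amplified by a factor of 10,000 into the third parameter; with eps = 0.1 the well-determined parameters are still recovered nicely while the third collapses toward zero. The damped answer looks stable and reasonable, which is exactly why it can be mistaken for a measurement rather than a regularization artifact.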
One of the bread-and-butter things seismologists do is locate earthquakes. There are kind of two main flavors of this: one is global and the other is local/regional. GG doesn’t do global relocations (but will point out that depth of such locations often relies on the presence of depth phases like pP reflections from the surface or the relative strength of surface waves) but has done local event locations. And there are gotchas out there that often aren’t as appreciated as they should be because all too often data is pitched into an inversion code and the resulting output is accepted as correct.
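As a cartoon of what a local location code is doing, here is a grid search over a constant-velocity half-space (stations, velocity, and the source are all invented). Real codes use linearized iterations and 1-D or 3-D velocity models, but the misfit-minimization idea is the same, as is the depth/origin-time trade-off lurking when stations are all at the surface:

```python
import numpy as np

# Toy grid-search location: three surface stations, uniform P velocity.
v = 6.0  # km/s, assumed constant everywhere
stations = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 25.0]])  # x, y in km
true_xy, true_z = np.array([12.0, 8.0]), 7.0

def tt(src_xy, src_z):
    """Straight-ray travel times from a source to each station."""
    dist = np.sqrt(((stations - src_xy) ** 2).sum(axis=1) + src_z**2)
    return dist / v

obs = tt(true_xy, true_z)  # noise-free "observed" arrival times (origin t=0)

best, best_misfit = None, np.inf
for x in np.arange(0.0, 30.5, 0.5):
    for y in np.arange(0.0, 30.5, 0.5):
        for z in np.arange(0.0, 15.5, 0.5):
            pred = tt(np.array([x, y]), z)
            t0 = (obs - pred).mean()  # origin time absorbs the mean residual
            misfit = ((obs - pred - t0) ** 2).sum()
            if misfit < best_misfit:
                best, best_misfit = (x, y, z, t0), misfit
print(best)
```

Note that with only three stations, the origin time soaks up one degree of freedom, leaving essentially two independent constraints on three spatial unknowns; with noisy picks there is a whole ridge of nearly equivalent locations, which is one of the gotchas mentioned above.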
We’ll start with some simple things and move into somewhat more complex territory, but we’ll stop short of the really ugly problem of locating events in a 3-D structure that is itself determined in part by those same travel times.
“We’d like to think we know about all of the faults of that size and their prehistory, but here we missed it,” Dr. [Ross] Stein said.
“The geologists in this area are the very best — people aren’t asleep at the wheel,” he said. “But there are real opportunities for young scientists to come in and learn how to do this better.”–New York Times story on Ridgecrest earthquake
We missed it? As one who has worked in this area, GG didn’t feel that way, though he was never asked beforehand if a M7 was possible there. There were mapped scarps in very young alluvium along a pretty well established seismic lineament. That this could be one connected fault seemed pretty darn obvious, but close study was always a challenge due to the presence of the China Lake Naval Weapons Center. It even had a name–the Airport Lake fault zone. And frankly, there are many others like this kicking around in the west.
There is in point of fact a very long list of geoscientists “missing it” out there, including most prominently these:
- When GG was an undergraduate he was taught that all earthquakes in California with a magnitude above about 6 would produce ground rupture. This was then followed in short order by the Coalinga earthquake (1983, M6.7), the Whittier Narrows earthquake (1987, M5.9), the Loma Prieta earthquake (1989, M6.9), and the Northridge earthquake (1994, M6.7), none of which produced the kind of dramatic surface rupture expected. (While there was some surface deformation in Loma Prieta, it isn’t clear that any of it was from the main fault.) Frankly, the peculiar relation between the surface rupture and fault rupture of the 1952 Kern County (Arvin-Tehachapi) earthquake should have been a hint that surface rupture wasn’t a given.
- Seismic hazard assessments assumed that the biggest earthquake you could get associated with slip on a fault was related to the length of that fault. Then we got the Landers (1992 M7.3) earthquake, which ruptured several unconnected but similar faults. This should have been seen coming, though, as the Dixie Valley/Fairview Peak earthquakes in 1954 demonstrated much the same kind of behavior. A related misjudgment was that big faults were segmented and thus there was a maximum earthquake that could be inferred from past ruptures. Tohoku (M9.1, 2011) underscored that as a bad interpretation.
- Seismologists often would say that earthquakes don’t trigger distant earthquakes because the finite stress changes don’t go out that far. The Landers event triggered seismicity as much as 1250 km away, mainly (it seems) from the dynamic stresses associated with the surface waves from that event. This has now been observed in other large events. There are suggestions that other stress transfer mechanisms might be out there that led, for instance, to the Little Skull Mountain earthquake and the much later Hector Mine (M7.1) earthquake after Landers.
- Not as clearly stated, but clearly in the mindset of seismologists, was the notion that big earthquakes have one dominant style of motion. So while Landers was on several faults, they were all pretty much strike-slip faults and the feeling was that they were connected at depth. But we then got the Kaikoura earthquake (M7.8, 2016) (among others), which spectacularly lit up a large number of individual faults with wildly different styles of slip. Frankly, the Big Bear earthquake (M6.3) that shortly followed Landers on a completely separate fault with a very different orientation should have hinted that very complex earthquakes were possible.
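The fault-length thinking in the second bullet rested on empirical scaling regressions. A minimal sketch using the coefficients of the commonly cited Wells & Coppersmith (1994) all-slip-type fit (the segment lengths below are invented; the regression itself carries a lot of scatter, so treat the results as rough estimates, not predictions):

```python
import math

def wc94_magnitude(srl_km):
    """Moment magnitude from surface rupture length (km), using the
    Wells & Coppersmith (1994) all-slip-type regression:
        M = 5.08 + 1.16 * log10(SRL)
    Real events scatter by a few tenths of a unit about this line."""
    return 5.08 + 1.16 * math.log10(srl_km)

# Why linked ruptures matter: one 25 km fault vs. three of them going at once.
print(round(wc94_magnitude(25), 1))  # ≈ 6.7 for one segment alone
print(round(wc94_magnitude(75), 1))  # ≈ 7.3 if the segments rupture together
```

The jump from the single-segment estimate to the multi-segment one is roughly what separates a damaging local event from a regional disaster, which is why treating fault segments as independent caps on magnitude was such a consequential assumption.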
So frankly, that a seismic zone with scattered scarps preserved in an alluviating environment turned out to mark a through-going fault is hardly a shock. GG thinks a better interview target would have been Egill Hauksson, who has studied the seismicity of the Coso region in particular (something Ross Stein had not done prior to this event), to see whether he felt that this was “missed.”
Given all this, what are some of the under-appreciated hazards out there? After all, the Big One is supposed to be a rerun of the 1857 Ft. Tejon earthquake. GG thinks worse could be out there. You want a really big one? What if the Malibu Coast, Hollywood Hills, Raymond Hills and Sierra Madre faults all went as one event? They all are doing the same sort of thing, but hazard mappers consider each to be independent. And while that is probably true for the average surface rupturing earthquake (as, for instance, 1971 San Fernando was separate from the kinematically similar and adjacent Northridge earthquake), that is no guarantee. Maybe you wouldn’t exceed M8, but a rupture like that would pound LA like nothing else. Or maybe multiple segments of the Wasatch Fault go as one (though frankly even the one segment in Salt Lake City would be devastating). There are no end of partially buried, poorly studied structures across the whole of the Basin and Range. Lots of stuff could be hiding in the forests of the Cascades as well.
Basically, when we look as geologists at the Earth, we are seeing only the top surface of a deforming medium. That top surface is constantly being modified by other processes (mainly erosion, deposition and urbanization). Toss in that major earthquake faults are not razor-sharp planes penetrating the earth but are a complex creation of a network of smaller faults that have coalesced in some manner, and you expect it to be hard to pick out all the big faults. Even with subsurface information (which is often quite deficient in these areas), faults can hide. Go farther east and it gets even hazier as recurrence times get really long, so hints of past activity hide from view. Frankly, there are probably some truly great misses out there; Ridgecrest really isn’t that far off the mark from what we might have expected.
Update 7/14/19. Things are steadily quieting down in this area, though there are still a lot of small (M<2.5) quakes just west of the rhyolite domes. This spot and the area near Little Cactus Flat to the north remain the most active areas outside of the original ruptures.
Update 7/11/19. While the number of quakes in this area is declining, there was a M4.3 that also had a large non-double-couple mechanism, according to Caltech. The USGS-NEIC also estimated a solution and got something much more like regular fault slip, which indicates that getting mechanisms for very shallow M4s can be touchy. While more of the action is now farther north, those events look more fault-like; those mechanisms are also from NEIC, though, so it could be that NEIC’s procedures tend toward double-couple solutions more than CIT’s do. And as an aside, it is a bit surprising how little activity there has been at Mammoth: it is an area that has had seismicity triggered by surface waves in the past, but it has remained fairly quiet this go-round.
Original post: One thing GG has kind of been looking for is whether the M7.1 Ridgecrest event is triggering things near the Coso volcanic field. And it seems there is something worth being concerned about going on.
Seismicity in this area is traditionally shallow, meaning above 5 km depth (Monastero et al., 2005). The tight cluster of orange dots includes two M4+ earthquakes. This area is at the west edge of a seismic discontinuity at about 5 km depth inferred to represent the top of a magma chamber (Wilson et al., 2003). While there has certainly been seismicity in this region before, given the proximity to fairly recent volcanic activity, one has to wonder if there is magma on the move. Supporting that are the focal mechanisms of the two M4 earthquakes, both of which have substantial non-double-couple components (indeed, the mechanism for one looks very much like a diking event). Given that all these events are being located in the top 2 km (probably relative to sea level, so the top 3 km of crust), this could get pretty interesting pretty fast.
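For the curious, the “non-double-couple component” of a mechanism can be quantified directly from the moment tensor by the standard eigenvalue decomposition into isotropic, double-couple (DC), and CLVD parts. A minimal sketch (the example tensors below are invented textbook cases, not the actual Coso solutions):

```python
import numpy as np

def dc_clvd_iso(M):
    """Decompose a 3x3 symmetric moment tensor into its isotropic part
    and percent DC / percent CLVD, using the common recipe:
    epsilon = |e_min| / |e_max| of the deviatoric eigenvalues,
    %DC = 100 * (1 - 2*epsilon), %CLVD = 200 * epsilon."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    e_abs = sorted(abs(float(v)) for v in np.linalg.eigvalsh(dev))
    eps = e_abs[0] / e_abs[2] if e_abs[2] > 0 else 0.0
    return iso, 100.0 * (1.0 - 2.0 * eps), 200.0 * eps

# Pure strike-slip double couple: eigenvalues (1, -1, 0).
print(dc_clvd_iso(np.diag([1.0, -1.0, 0.0])))   # %DC ≈ 100

# Pure CLVD, the sort of pattern a diking/fluid event can produce.
print(dc_clvd_iso(np.diag([2.0, -1.0, -1.0])))  # %CLVD ≈ 100
```

A large CLVD or isotropic fraction is what makes a mechanism look “volcanic,” but as the 7/11 update above notes, for small shallow events these fractions are sensitive to the inversion procedure, so different agencies can get rather different answers from the same waveforms.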
As background, the central core of the Coso volcanic field is made up of silica-rich rhyolites that appear as blister-like bodies in the image above. Surrounding this core area, which overlies the seismically inferred magma body, are basaltic eruptions (like Red Cone, in the lower left corner). The troubling seismicity is directly on the road into the geothermal area from Coso Junction to the west.
An overview of the M7.1 with the first InSAR image of the 7.1 rupture is at Temblor.com. This also discusses seismicity in this area, but with less consideration of volcanic activity.
We’ve discussed isostasy a few times here, but today let’s stand back and ask the question, how do we determine what has led to the creation of isostatically supported topography? We will for today put aside the discussions of dynamic topography and just concern ourselves with isostatically supported topography, which seems likely to describe much of the US Cordillera. For this post, we’ll just focus on the crustal part of the problem, leaving the mantle for another day.
OK, first up is that isostasy means that the integral of density from the surface to some depth of compensation (usually somewhere in the asthenosphere) is constant. So how do we get at density at such great depths? At first blush you might think “gravity,” as that is the geophysical observable produced by mass. The problem is that gravity is non-unique: you can recreate any gravity field by having a thin surface layer varying in density. Gravity gradients tell you the maximum depth at which an anomaly can lie, and the integral over a broad region tells you the total mass surplus or deficit relative to some reference. Those integrals support isostasy, but the gradients are tough to work with because isostasy is only thought to work well at wavelengths long enough that the strength of the lithosphere becomes irrelevant. So in essence you need to smooth gravity out to appropriate wavelengths, and once you do that, the depth limits in the raw gravity are pretty much gone.
So with gravity being relatively useless, where do we go? Keep in mind that we’ll be wanting to compare two columns to be able to discern what happened at one column relative to the other to produce a difference in elevation.
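The two-column bookkeeping can be made concrete with a toy Airy-style calculation (the densities below are illustrative round numbers, not measurements): equal mass per unit area above the compensation depth means most of any added crust goes into a root, with only a fraction appearing as elevation.

```python
# Airy-style comparison of two crustal columns in isostatic balance.
# Illustrative densities only; real crust and mantle vary considerably.
rho_c, rho_m = 2800.0, 3300.0  # crust, mantle (kg/m^3)

def elevation_from_thickening(dc_km):
    """Extra elevation gained by adding dc_km of crust to a column,
    with the remainder taken up by a root: e = dc * (rho_m - rho_c) / rho_m."""
    return dc_km * (rho_m - rho_c) / rho_m

def root_from_thickening(dc_km):
    """Depth of the compensating root for the same added crust."""
    return dc_km * rho_c / rho_m

dc = 13.2  # km of crustal thickening (made-up value)
print(elevation_from_thickening(dc))  # ≈ 2.0 km of elevation
print(root_from_thickening(dc))       # ≈ 11.2 km of root
```

So with these numbers, roughly 13 km of crustal thickening buys only about 2 km of elevation, which is why comparing the crustal columns under two regions requires knowing crustal thickness and density rather well before you can say what produced an elevation difference.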
Recently NSF’s EarthScope program office put out a media announcement with the top ten discoveries they attributed to the soon-to-end program. (EarthScope, for those unfamiliar with the program, originally had three main legs: the Transportable Array (TA) + Flex Array collection of seismometers, the Plate Boundary Observatory (PBO) network of GPS stations, and the San Andreas Fault Observatory at Depth (SAFOD), a drill hole through the fault). What struck GG about this collection was just how little we learned about tectonics, which was a selling point of sorts for the program prior to its start.
Now some of the “discoveries” are not discoveries at all–one listed is that there is a lot of open data. Folks, that was a *design*, not a discovery. A couple are so vague as to be pointless–North America is “under pressure” and there are “ups and downs” in drought–stuff we knew well before EarthScope, so these bullets give little insight into what refinements arose from EarthScope. And the use of LIDAR to look at displacements of the El Mayor-Cucapah earthquake was hardly a core EarthScope tool or goal, even if the program might have contributed funds. So the more substantive stuff might amount to 5 or 6 points.
Arguably PBO has more than delivered and SAFOD disappointed, but GG would like to consider the TA’s accomplishments–or non-accomplishments. TA-related “discoveries” in this list are actually a single imaging result and two technique developments (ambient noise tomography, which emerged largely by happy coincidence, and source back projection for earthquake slip, which is largely a continued growth of preexisting techniques). So in terms of learning about the earth, we are really looking at one result worthy of inclusion.