We’ve discussed isostasy a few times here, but today let’s stand back and ask a basic question: how do we determine what has led to the creation of isostatically supported topography? For today we will put aside dynamic topography and concern ourselves only with isostatically supported topography, which seems likely to describe much of the US Cordillera. And we’ll focus on the crustal part of the problem, leaving the mantle for another day.
OK, first up: isostasy means that the integral of density from the surface down to some depth of compensation (usually somewhere in the asthenosphere) is the same in every column. So how do we get at density at such great depths? At first blush you might think “gravity,” as that is the geophysical observable produced by mass. The problem is that gravity is non-unique: you can recreate any gravity field with a thin surface layer of varying density. Gravity gradients tell you the maximum depth at which an anomaly can lie, and the integral over a broad region tells you the total mass surplus or deficit relative to some reference. Those integrals support isostasy, but the gradients are tough to work with because isostasy is only thought to work well at wavelengths long enough that the strength of the lithosphere becomes irrelevant. So in essence you need to smooth gravity out to those wavelengths–and once you do, the depth limits in the raw gravity are pretty much gone.
So with gravity being relatively useless, where do we go? Keep in mind that we’ll want to compare two columns so we can discern what happened at one relative to the other to produce a difference in elevation.
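To make that column comparison concrete, here is a minimal sketch (with made-up layer numbers, not values from this post) of an Airy-style balance: the integrated mass above the compensation depth must match between columns, so elevation differences follow directly from crustal thickness and density differences.

```python
# Illustrative sketch, not from the post: compare two isostatic columns.
RHO_MANTLE = 3300.0  # kg/m^3, assumed mantle/asthenosphere density

def airy_elevation_difference(layers_a, layers_b):
    """Each column is a list of (thickness_m, density_kg_m3) tuples.
    Returns the surface elevation of column A relative to column B,
    assuming both are compensated by mantle of density RHO_MANTLE."""
    def buoyant_height(layers):
        # A column "floats": its total thickness minus the thickness of
        # mantle displaced by its mass gives its height above the
        # compensation level.
        thickness = sum(t for t, _ in layers)
        mass_per_area = sum(t * rho for t, rho in layers)
        return thickness - mass_per_area / RHO_MANTLE
    return buoyant_height(layers_a) - buoyant_height(layers_b)

# 50 km of 2800 kg/m^3 crust stands ~2.3 km above 35 km of the same crust:
dz = airy_elevation_difference([(50e3, 2800.0)], [(35e3, 2800.0)])
```

Note the non-uniqueness problem lurking even here: thicker crust or lower-density crust can produce the same elevation difference, which is why elevation alone can’t tell you what happened.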
Recently NSF’s EarthScope program office put out a media announcement with the top ten discoveries they attributed to the soon-to-end program. (EarthScope, for those unfamiliar with the program, originally had three main legs: the Transportable Array (TA) + Flex Array collection of seismometers, the Plate Boundary Observatory (PBO) network of GPS stations, and the San Andreas Fault Observatory at Depth (SAFOD), a drill hole through the fault). What struck GG about this collection was just how little we learned about tectonics, which was a selling point of sorts for the program prior to its start.
Now some of the “discoveries” are not discoveries at all–one listed is that there is a lot of open data. Folks, that was a *design*, not a discovery. A couple are so vague as to be pointless–North America is “under pressure” and there are “ups and downs” in drought–stuff we knew well before EarthScope, so these bullets give little insight into what refinements arose from the program. And then the use of LIDAR to look at displacements from the El Mayor-Cucapah earthquake was hardly a core EarthScope tool or goal, even if the program might have contributed funds. So the more substantive stuff might amount to 5 or 6 points.
Arguably PBO has more than delivered and SAFOD disappointed, but GG would like to consider the TA’s accomplishments–or non-accomplishments. TA-related “discoveries” in this list are actually a single imaging result and two technique developments (ambient noise tomography, which emerged largely by happy coincidence, and source back projection for earthquake slip, which is largely a continued growth of preexisting techniques). So in terms of learning about the earth, we are really looking at one result worthy of inclusion.
So FiveThirtyEight has a story about how inadequate hurricane intensity numbers (Saffir-Simpson scale categories) are. Basically the destructive potential of a hurricane is poorly linked to that number. But the funny thing in reading the piece is that you could substitute Richter magnitude for Saffir-Simpson scale and make almost no other changes and the article would sound about right. Richter magnitudes (as popularly understood; the numbers reported for events are usually moment magnitudes these days) tell you almost nothing about the destructive potential of an earthquake.
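As an aside for readers who like the numbers behind this: moment magnitude comes from the seismic moment via the standard Hanks-Kanamori relation, and radiated energy grows so steeply with magnitude that a single number spans enormous ranges. A quick sketch (values illustrative):

```python
import math

# Moment magnitude (Mw) from seismic moment M0 in newton-meters,
# per the standard Hanks & Kanamori relation.
def moment_magnitude(m0_newton_meters):
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Radiated energy scales roughly as 10^(1.5*Mw), so each unit of
# magnitude is about a factor of 32 in energy--one reason the single
# number says so little about shaking at any particular site.
def energy_ratio(mw_big, mw_small):
    return 10 ** (1.5 * (mw_big - mw_small))
```

So a M7 releases roughly 32 times the energy of a M6, yet which one does more damage depends entirely on where and how it ruptures, as the examples below show.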
Just as with hurricanes, where an earthquake strikes is critical in determining its damage. Magnitude 8 earthquakes 600 km under western Brazil are barely even noticed, while a M5.9 in northern Haiti kills over a dozen. The details can be amazingly important: a M6.3 earthquake in Christchurch devastated the city center and killed nearly 200 but the earlier M7.0 earthquake only a few miles away produced little damage and no fatalities.
Just as with hurricanes, the details of the earthquake will affect its ability to do damage. When an earthquake ruptures in one direction, damage will be greater in that direction than 180 degrees away. Another New Zealand quake, the 2016 M7.8 Kaikoura earthquake, ruptured from south to north, sparing areas closer to the epicenter but causing enough shaking in Wellington, across Cook Strait from the event, that several buildings had to be torn down. Toss in intrinsic variations in frequencies due to variations in stress drop and it is clear that a magnitude by itself doesn’t carry the whole story.
A popular pastime in southern California is guessing the magnitude of an earthquake solely from what was felt. GG recalls a radio news program covering an earthquake years ago that GG felt before the broadcaster did. Callers speculated on where and how large it was: “I’m in San Bernardino and it was a slow rolling event so probably on the San Andreas to the north,” “It was a sharp event that must have been a magnitude 6,” and so on. (In fact, when you are close you tend to get a very sharp movement from the P-waves, but farther away it is the surface wave train that produces a more rolling motion.)
The Richter magnitude is about forty years older than the Saffir-Simpson scale, and as a result seismologists have had that much more time to try to clarify all the things that go into earthquake damage. Look up a recent large event on the earthquake.usgs.gov page and you see far more than the magnitude. The PAGER page tries to estimate damage and deaths almost immediately after an event to help gauge the need for emergency assistance. Stories about the “Big One” that dominated California media for decades are being replaced with more nuanced stories highlighting the risk from faults through urban areas like the Malibu Coast/Hollywood Hills fault system or the Hayward Fault. And the interaction with the engineering community is far more sophisticated than 40 or 50 years ago, with power spectra and 50-year exceedance criteria being passed on from the seismological community.
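Those exceedance criteria are usually framed probabilistically: assuming earthquakes arrive as a Poisson process, the familiar hazard-map numbers follow from a one-line formula. A sketch (the 2475-year return period is the standard textbook example, not a number from this post):

```python
import math

# Probability that ground motion with a given return period is exceeded
# at least once in a time window, assuming a Poisson process with
# annual rate 1/return_period.
def prob_exceedance(return_period_years, window_years=50.0):
    return 1.0 - math.exp(-window_years / return_period_years)

# A 2475-year return period gives the familiar "2% in 50 years" level
# used in building-code ground-motion maps.
p = prob_exceedance(2475.0)
```

This is the kind of number engineers can actually design to, unlike a bare magnitude.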
And yet we get stories about the earthquake-proof house that can withstand “an earthquake registering up to 9.0 on the Richter scale.” Well, GG’s house survived a M9 earthquake–sure, it was across the globe, but the point is that distance and environment matter. Would these buildings make it if right on a 20m fault rupture? Doubtful. So surviving “a M9” means nothing. Surviving some threshold of ground motion? That might be useful, but the public probably wouldn’t find a maximum acceleration of 2g a useful number.
So good luck meteorologists. Your best hope might be in scaling total kinetic energy in a hurricane to a level from 1 to 5, where you could add decimals. Oh wait, they’ve done that. So why isn’t this on TV and the web now?
Having just remembered the 1906 San Francisco earthquake brings to mind Harry Fielding Reid’s model of elastic rebound for earthquakes, developed from observations of that quake. The idea was that the earth’s surface moves slowly in opposite directions across a fault over a long time period, straining the rocks near the fault until a critical point is reached and the fault ruptures, allowing each side of the fault to “catch up” with the more distant parts of the earth’s surface.
Much later, when plate tectonics was developed, earth scientists could tell what the average velocities of plates were over a couple million years from analysis of magnetic anomalies on the seafloor. When space-based geodesy came along, first with VLBI and then with GPS, geodesists found that the plates are moving today at rates equal to those seen over millions of years. It seemed as though the earth ran at a smooth and even pace.
Combining these ideas revived a hope expressed about a hundred years ago: that faults would rupture like clockwork. Every so many years–the recurrence interval–a fault would rupture in what came to be called a characteristic earthquake. Ideally you could then predict the next earthquake if you knew when the last couple had happened.
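The clockwork scheme can be sketched in a few lines: average the intervals between past ruptures and extrapolate. (The dates below are hypothetical, not a real paleoseismic record.)

```python
# The "clockwork" hope: estimate a recurrence interval from past rupture
# dates and extrapolate to the next event. Real faults turned out to be
# far less regular than this simple scheme assumes.
def predict_next(rupture_years):
    intervals = [b - a for a, b in zip(rupture_years, rupture_years[1:])]
    recurrence = sum(intervals) / len(intervals)  # mean recurrence interval
    return rupture_years[-1] + recurrence

# Hypothetical rupture dates, purely for illustration:
next_quake = predict_next([1700, 1812, 1906])  # mean interval 103 yr -> 2009
```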
This ideal view of the earthquake world has gradually unravelled; a couple of observations in the past decade indicate that there really is something more variable in how geologic strain is created than the elastic rebound model and smooth plate motions would have suggested.
The New York Times has a very nice piece on the risks of high rises in San Francisco. And although the story focuses on San Francisco, the issues brought up apply as well to Los Angeles (which, for a very long time, forbade any building taller than 13 stories, the exception being city hall). [Later note: the LA Times also celebrated the 112th anniversary of the 18 April 1906 earthquake with a story on preparedness and a look at the potential for disaster on the Hayward Fault in the East Bay].
Some of the quotes in the New York Times article are priceless:
“Buildings falling on top of other buildings — that’s not going to happen,” Mr. Klemencic [the chief executive of Magnusson Klemencic Associates, designer of the tallest SF skyscraper] said.
Er, has he looked at what has happened?
(And those are just a few of the collapses in M6-ish EQs. Just wait for the right M7+, where the ground shaking will last longer).
GG has heard this kind of hubris before, and it is not comforting. Buildings do fall over, and if they are close enough together, they do fall over onto other buildings. All too often this is because the foundation is compromised by liquefaction–which is the very risk in the part of San Francisco where the building Mr. Klemencic’s firm designed sits–next to another building already tilting and sinking.
(Engineering certainty probably increases the closer to a CEO a person sits, but many Japanese were confident they would not see structural failures like California saw in the 1989 Loma Prieta or 1994 Northridge earthquakes until the 1995 Kobe earthquake proved them wrong. And Americans were able to design an overpass that failed not in just one earthquake, but two: the high overpass of the I-5/SR14/I-210 interchange failed in both the 1971 Sylmar quake and the 1994 Northridge earthquake.)
Mr. Macris [who led the planning board under four mayors] said the issue of seismic safety of high rises was “never a factor” in the redevelopment plans of the South of Market area.
Astounding. Look at the USGS liquefaction susceptibility map. The whole area is at extreme risk. How–HOW!–could seismic safety NOT be a factor in a community that as recently as 1989 saw much smaller structures in the Marina District destroyed by the Loma Prieta earthquake?
All this is amazing. There was the 1971 documentary “The City that Waits to Die,” about San Francisco’s disinterest in any kind of seismic safety. GG remembers seeing this long ago and being struck that San Franciscans were so oblivious to the obvious risk. This extends to how they interpret their own history: the 1906 quake is often buried under the subsequent fire, with fatalities both downplayed (probably more than 3000 died instead of the 498 claimed at the time) and tied to the fire (a more common urban problem, seemingly cured with brick buildings). All this was to make San Francisco appear safer to businesses and visitors from other places.
And if you want to feel safe in LA, don’t. The welding failure mentioned in the NY Times piece was recognized after the 1994 Northridge earthquake–but the failures were not remediated. The broken welds are not obvious in buildings that did not fail in that quake, but those buildings most likely will fail in the next one. (If you want the gory details, look at Tom Heaton’s notes from a class he taught.)
The overconfidence and denial evident in San Francisco’s construction habits are probably not limited to that jurisdiction. There will be deaths, and monetary damages could be devastating if the right quake is the next one to occur. Maybe San Francisco will get a solid warning shot across its bow if, say, the Hayward Fault on the east side of the Bay ruptures from north to south–shaking San Francisco enough to demonstrate that these problems are real and push the city to prepare better, while not producing the truly devastating outcome that seems possible.
In part one, we saw that there are often differences between seismic tomographies of an area, and the suggestion was made that on occasion a tomographer might choose to make a big deal about an anomaly that in fact is noise or an artifact (GG does have a paper in mind but thinks it was entirely an honest interpretation). Playing with significance criteria (or not even having some) could allow an unscrupulous seismologist a chance to make a paper seem to have a lot more impact than it deserves.
Yet this is not really where the worst potential for abuse lies.
The worst is when others use the tomographic models as input for some other purpose. At present, this is most likely in geodynamics, but no doubt there are other applications. Which model should you use? If you run your geodynamic model with several tomographies and one yields the exciting result you were wanting to see, what do you do? Hopefully you share all the results, but it would be easy not to and instead provide some after the fact explanation for why you chose that model.
Has this happened? GG has heard accusations.
It’s not like the community is unaware of differences. Thorsten Becker published a paper in 2012 showing that in the western U.S. seismic models were pretty similar except for amplitude–but “pretty similar” described correlation coefficients of 0.6-0.7. (That amplitude part is pretty important, BTW.) About the same time (but less explicitly addressed to the geodynamics modeling community), Gary Pavlis and coauthors compared models in the western U.S. and reached a similar conclusion. But this only provides a start; the key question is, just how sensitive are geodynamic results to the differences in seismic tomography?
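For those wondering what a correlation coefficient between tomographic models means operationally: sample both models at the same grid points and correlate the values. A minimal sketch (illustrative, not Becker’s actual procedure) also shows why amplitude differences slip right past this measure:

```python
# Pearson correlation between two models sampled at the same points.
# A value of ~0.6-0.7 sounds decent but leaves a lot of unexplained
# structure--and says nothing at all about amplitude differences.
def correlation(model_a, model_b):
    n = len(model_a)
    mean_a = sum(model_a) / n
    mean_b = sum(model_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(model_a, model_b))
    var_a = sum((a - mean_a) ** 2 for a in model_a)
    var_b = sum((b - mean_b) ** 2 for b in model_b)
    return cov / (var_a * var_b) ** 0.5

# Identical patterns with 2x amplitude still correlate perfectly--yet a
# geodynamic model driven by them would predict very different forces:
r = correlation([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # r = 1.0
```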
Frankly, earth science has faced issues like this for a long time as workers in one specialty had need of results from another. Usually this meant choosing between interpretations of some kind (that this volcanic rock is really sourced from the mantle, not the crust, or that this paleomagnetic result is good and that other one is bad). But the profusion of seismic models and their role as direct starting points for even more complex numerical modeling seems to pose a bigger challenge than radiometric dates or geologic maps, which never were so overabundant that you could imagine finding the one that worked best for your hypothesis. When you toss in comparable ambiguity about viscosity models in the earth, it can seem difficult to know just how robust the conclusions of a geodynamic model are.
Heaven help you if you are then picking between geodynamic models for anything–say, plate motion histories. You could be a victim of a double vp hack….
Maybe it’s just that February is finally ending, but GG has been navel gazing a bit after reading the exploits of some folks who don’t understand what science is really for but who get to portray scientists in real life. If you have the stomach for it, Buzzfeed’s review of Brian Wansink’s rather unpleasant history of p-hacking at levels rarely seen is worth a read. Or you can see Retraction Watch’s ongoing accumulation of his retractions and revisions.
Those of us in geophysics pat ourselves on the back and are quietly happy that we don’t have hundreds of independent variables to go fishing in to find something marginally significant. But maybe we have issues that, while not as unscrupulous, are a means of finding something publishable in a pile of dreck.
So let’s go vp-hacking. (And yes, we’ll get in the weeds a bit here).
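Before getting into the seismological weeds, the statistical version of the problem is easy to demonstrate: test enough unrelated noise variables and some will clear p < 0.05 by chance alone. A toy simulation (entirely synthetic data; the 0.44 threshold is roughly the |r| needed for p < 0.05 with 20 samples):

```python
import random

# Toy p-hacking demo on pure noise: correlate many random "variables"
# against a random target series and count spurious "significant" hits.
def spurious_hits(n_variables=100, n_samples=20, threshold=0.44, seed=0):
    rng = random.Random(seed)
    target = [rng.gauss(0, 1) for _ in range(n_samples)]
    hits = 0
    for _ in range(n_variables):
        x = [rng.gauss(0, 1) for _ in range(n_samples)]
        # Pearson correlation of this noise variable with the target noise
        mx = sum(x) / n_samples
        mt = sum(target) / n_samples
        cov = sum((a - mx) * (b - mt) for a, b in zip(x, target))
        vx = sum((a - mx) ** 2 for a in x)
        vt = sum((b - mt) ** 2 for b in target)
        r = cov / (vx * vt) ** 0.5
        hits += abs(r) > threshold  # "significant" purely by chance
    return hits  # expect ~5 of 100 at the 5% level
```

Swap “random variables” for “candidate tomographic models” and you have the shape of the vp-hacking worry: with enough models to choose from, one of them will flatter your hypothesis by chance.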