Having just remembered the 1906 San Francisco Earthquake brings to mind Harry Fielding Reid’s model of elastic rebound for earthquakes, developed from observations of that 1906 quake. The idea was that the earth’s surface slowly moved in opposite directions across a fault over a long period, straining the rocks near the fault until a critical point was reached; the strained rocks would then cause the fault to rupture, allowing each side of the fault to “catch up” with the more distant parts of the earth’s surface.
Much later, when plate tectonics was developed, earth scientists could tell what the average velocities of plates were over a couple million years from analysis of magnetic anomalies on the seafloor. When space-based geodesy came along, first with VLBI and then with GPS, geodesists found that the plates are moving today at a rate equal to that seen over millions of years. It seemed as though the earth ran at a smooth and even pace.
Combining these ideas suggested a hope expressed about a hundred years ago: that faults would rupture like clockwork. Every so many years, a span termed the recurrence interval, a fault would rupture in what came to be called a characteristic earthquake. Ideally you could then predict the next earthquake if you knew when the last couple had happened.
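The clockwork view can be caricatured in a few lines of code: estimate the recurrence interval as the mean gap between past events and extrapolate. This is a hypothetical illustration only; `predict_next` and the event dates are invented here, and, as the next paragraph notes, real faults turned out to be far less regular than this sketch assumes.

```python
# Hypothetical "clockwork" earthquake forecasting sketch:
# estimate a recurrence interval from past event dates and extrapolate.
# Real fault behavior is far messier than this caricature.

def predict_next(event_years):
    """Given years of past characteristic earthquakes on a fault,
    return the mean recurrence interval and the predicted next year."""
    gaps = [b - a for a, b in zip(event_years, event_years[1:])]
    interval = sum(gaps) / len(gaps)
    return interval, event_years[-1] + interval

# Made-up event dates for an imaginary fault:
interval, next_year = predict_next([1700, 1780, 1857, 1940])
print(interval, next_year)  # 80.0 2020.0
```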
This ideal view of the earthquake world has gradually unravelled, with a couple of observations in the past decade indicating that there really is something more variable in how geologic strain is created than the elastic rebound model and smooth plate motions would have suggested.
The New York Times has a very nice piece on the risks of high rises in San Francisco. And although the story focuses on San Francisco, the issues brought up apply as well to Los Angeles (which, for a very long time, forbade any building taller than 13 stories, the exception being city hall). [Later note: the LA Times also celebrated the 112th anniversary of the 18 April 1906 earthquake with a story on preparedness and a look at the potential for disaster on the Hayward Fault in the East Bay].
Some of the quotes in the New York Times article are priceless:
“Buildings falling on top of other buildings — that’s not going to happen,” Mr. Klemencic [the chief executive of Magnusson Klemencic Associates, designer of the tallest SF skyscraper] said.
Er, has he looked at what has happened?
(And those are just a few of the collapses in M6-ish EQs. Just wait for the right M7+, where the ground shaking will last longer).
GG has heard this kind of hubris before, and it is not comforting. Buildings do fall over, and if they are close enough together, they do fall over onto other buildings. All too often this is because the foundation is compromised by liquefaction–which is the very risk in the part of San Francisco where the building Mr. Klemencic’s firm designed sits–next to another building already tilting and sinking.
(Engineering certainty probably increases the closer to a CEO a person sits, but many Japanese were confident they would not see structural failures like California saw in the 1989 Loma Prieta or 1994 Northridge earthquakes until the 1995 Kobe earthquake proved them wrong. And Americans were able to design an overpass that failed not in just one earthquake, but two: the high overpass of the I-5/SR14/I-210 interchange failed in both the 1971 Sylmar quake and the 1994 Northridge earthquake.)
Mr. Macris [who led the planning board under four mayors] said the issue of seismic safety of high rises was “never a factor” in the redevelopment plans of the South of Market area.
Astounding. Look at the USGS liquefaction susceptibility map. The whole area is at extreme risk. How–HOW!–could seismic safety NOT be a factor in a community that as recently as 1989 saw much smaller structures in the Marina District destroyed by the Loma Prieta earthquake?
All this is amazing. Years ago, there was the 1971 documentary, “The City that Waits to Die,” about San Francisco’s disinterest in any kind of seismic safety. GG remembers seeing this long ago, and it was just amazing that San Franciscans were so oblivious to the obvious risk. This extends to how they interpret their own history: the 1906 quake is often buried under the subsequent fire, with the fatalities both downplayed (probably more than 3000 died instead of the 498 claimed at the time) and attributed to the fire (a more common urban problem seemingly cured with brick buildings). All this was to make San Francisco appear safer to businesses and visitors from other places.
And if you want to feel safe in LA, don’t. The welding failure mentioned in the NY Times piece was recognized from the 1994 Northridge earthquake–but the failures were not remediated. The broken welds are not obvious in buildings that did not fail in that quake, but most likely will be in the next one. (If you want the gory details, look at Tom Heaton’s notes from a class he taught).
The overconfidence and denial evident in the construction habits in San Francisco are probably not limited to that jurisdiction. There will be deaths, and monetary damages could be devastating, if the right quake is the next one to occur. Maybe San Francisco will get a solid warning shot across its bow if, say, the Hayward Fault on the east side of the Bay ruptures from north to south–shaking San Francisco enough to demonstrate that these problems are real, and prompting the city to prepare better, while not producing the truly devastating outcome that seems possible.
In part one, we saw that there are often differences between seismic tomographies of an area, and the suggestion was made that on occasion a tomographer might choose to make a big deal about an anomaly that in fact is noise or an artifact (GG does have a paper in mind but thinks it was entirely an honest interpretation). Playing with significance criteria (or not even having some) could allow an unscrupulous seismologist a chance to make a paper seem to have a lot more impact than it deserves.
Yet this is not really where the worst potential for abuse lies.
The worst is when others use the tomographic models as input for some other purpose. At present, this is most likely in geodynamics, but no doubt there are other applications. Which model should you use? If you run your geodynamic model with several tomographies and one yields the exciting result you were wanting to see, what do you do? Hopefully you share all the results, but it would be easy not to and instead provide some after the fact explanation for why you chose that model.
Has this happened? GG has heard accusations.
It’s not like the community is unaware of differences. Thorsten Becker published a paper in 2012 showing that in the western U.S., seismic models were pretty similar except for amplitude–but “pretty similar” described correlation coefficients of 0.6-0.7. (That amplitude part is pretty important, BTW). About the same time (but less explicitly addressing the geodynamics modeling community), Gary Pavlis and coauthors similarly compared models of the western U.S. and reached a similar conclusion. But this only provides a start; the key question is, just how sensitive are geodynamic results to the differences in seismic tomography?
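Comparisons like these boil down to computing a correlation coefficient between velocity-anomaly fields sampled at the same points. A minimal sketch, with toy anomaly values invented here (not the published models), shows why the amplitude caveat matters: correlation is blind to amplitude scaling, so two models can correlate near 1.0 yet imply very different density anomalies and buoyancy forces.

```python
import math

def correlation(model_a, model_b):
    """Pearson correlation between two tomographic models sampled
    at the same grid points (lists of velocity anomalies, e.g. dVs/Vs)."""
    n = len(model_a)
    mean_a = sum(model_a) / n
    mean_b = sum(model_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(model_a, model_b))
    var_a = sum((a - mean_a) ** 2 for a in model_a)
    var_b = sum((b - mean_b) ** 2 for b in model_b)
    return cov / math.sqrt(var_a * var_b)

# Toy anomaly fields: nearly the same pattern, roughly half the amplitude.
m1 = [1.0, -0.5, 0.2, -1.2, 0.8]
m2 = [0.5, -0.2, 0.15, -0.7, 0.35]
print(correlation(m1, m2))  # high correlation despite the amplitude mismatch
```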
Frankly, earth science has faced issues for a long time as workers in one specialty had need of results from another. Usually this meant choosing between interpretations of some kind (that volcanic is really sourced from the mantle, not the crust, or that paleomagnetism is good and this other is bad). But the profusion of seismic models and their role as direct starting points for even more complex numerical modeling seems to pose a bigger challenge than radiometric dates or geologic maps, which never were so overabundant that you could imagine finding the one that worked best for your hypothesis. When you toss in some equal ambiguity about viscosity models in the earth, it can seem difficult to know just how robust the conclusions of a geodynamic model are.
Heaven help you if you are then picking between geodynamic models for anything–say like plate motion histories. You could be a victim of a double vp hack….
Maybe it’s just that February is finally ending, but GG has been navel gazing a bit after reading the exploits of some folks who really don’t understand what science is really for but who get to portray scientists in real life. If you have the stomach for it, Buzzfeed’s review of Brian Wansink’s rather unpleasant history of p-hacking at levels rarely seen is worth a read. Or you can see Retraction Watch’s ongoing accumulation of his retractions and revisions.
Those of us in geophysics pat ourselves on the back and are quietly happy that we don’t have hundreds of independent variables to go fishing in to find something marginally significant. But maybe we have issues that, while not as unscrupulous, are a means of finding something publishable in a pile of dreck.
So let’s go vp-hacking. (And yes, we’ll get in the weeds a bit here).
In keeping with this end-of-the-year theme of what GG is doing wrong, some “crimes against science,” which, as Bob Sharp defined the term years ago, means doing some work of interest to the broader community and then not publishing it. (Thankfully, these aren’t the more serious offenses in the expanded criminal ledger GG proposed awhile back).
Now this isn’t an uncommon occurrence: students graduate with thesis chapters not quite ready for publication and discover that life beyond grad school doesn’t provide rewards for getting that stuff into journals. Some other times things just pile up enough that a paper isn’t completed when everything is handy, and it just gets harder to return to as time goes on.
So, in case anybody out there would benefit from some of this stuff, feel free to nudge GG to take some time and share, either informally or by actually publishing some of this. And if nobody seems interested, well, then maybe not much of a criminal act :-). Most of these are in some kind of manuscript form (there is other stuff that didn’t even get that far).
- Geologic map of the Alexander Hills and eastern China Lake basin. Yes, GG mapped while in grad school and actually handed over a copy of his map to Lauren Wright long ago, who included some of it in a never-published update to the SW Tecopa quad (now would be the Tecopa 7.5′ quad map). A lot of cool stuff–probably the eastern end of the early Garlock Fault interacting with some low-angle, basin-bottom faults and a pre-China Lake basin history not evident in published maps.
- Seismicity of the Hansel Valley region. GG feels really bad about this, as there were a lot of coauthors on the 1983 experiment, which was one of the densest deployments of seismometers in an extending area. The results are in GG’s PhD thesis but still might merit publication, as the data indicate how a low-angle normal fault might interact with ongoing seismic deformation.
- Magnetostratigraphy and some additional paleomag in the Lake Mead region. A collaborator dropped out and so the baton was dropped after a single paper. Some of the data is visible here.
- Paleomagnetic measurements in monoclines of the Colorado Plateau. Joya Tetreault’s thesis has this; substantial vertical-axis rotations exist in some folds (the Grand Hogback being the most dramatic), though the sampling is far less than ideal and some structures seem to make little sense.
- Paleomag and micropolar analysis of seismicity in the Coalinga area. Also part of Tetreault’s thesis. The micropolar work seemed to capture the bending component of folding in the seismicity while the paleomag suggested San Andreas-parallel shear within the fold limbs.
- Earthquakes in the southern Sierra located with the 1988 experiment. Jason Edwards, a CU BA graduate, did some of this work which was never carried farther. It seemed there were events under one of the Recent cinder cones in the s Golden Trout field as well as some deep events in the westernmost foothills of the southern Sierra.
- Geophysics of Panamint Valley and the Ivanpah Valley areas. These were datasets collected by the MIT Geophysics Course in 1987 and 1983, respectively. Both valleys present a major challenge because a large basement gravity gradient exists across each, complicating interpretation.
This is all in addition to various half-done projects still seeming to be active as well as datasets that never were fully exploited (for instance, data from a mixed broadband/short period array at Harrisburg Flat in Death Valley plus some more scattered instruments near Dante’s View, or our inability to get anything sensible out of array recordings of deep local events under the northern end of New Zealand’s South Island).
A year ago GG posted on the Kaikoura M7.8 earthquake with the title “Single quake slip partitioning”. With a year past, it seems a quick look at the literature that has appeared is in order. Was this diagnosis correct? In some work, it seems the answer is yes; in others, it seems no.
The most comprehensive overview is probably a paper by Kaiser et al. in Seismological Research Letters. This paper summarizes geologic, seismologic, geodetic, and engineering observations from this quake. They note that 13 separate mapped faults all ruptured together, more than was anticipated prior to the quake. It took about two minutes for things to unwind from south to north along this collection of faults, with substantial step-overs from one strand to the next. Most of the energy released came in two distinct jumps, one 20 seconds into the quake, the next about 70 seconds in.
But as to GG’s hypothesis of slip-partitioning during the quake, the interpretation of the slip history from high-frequency seismic data is no; the faulting was dominantly strike-slip to oblique-slip on land, though the authors do note a period during the rupture when they don’t really locate the source of seismic energy very well.
A second paper comes at this from a different angle.
UPDATE 2 11/22: GNS has assembled quite a lot of information, and the puzzlement deepens. It appears from the satellite and ground analysis that the bulk of the motion–up to 11 m of slip–was more nearly strike-slip and not the thrusting that appears in the focal mechanism (below). But the uplift of some areas of the coast by 6 meters (!) seems to suggest there is something more.
UPDATE 11/18: A considerable amount of information appeared in an article on stuff.co.nz. This includes a map from GNS showing where the faults that ruptured are, and a good deal of geodetic information.
Yesterday’s M7.5/7.8 Kaikoura earthquake in New Zealand is one of the more bizarre large earthquakes we have seen in some time. On the face of it, this appears to mostly be rupture of a subduction zone under the northeasternmost part of the South Island of New Zealand. But there is a lot of other stuff going on….
First, the main focal mechanism as reported by the USGS:
Now this beachball would suggest a fault dipping to the NW while paralleling the coast. But the fact that it looks as though a toddler was coloring outside the lines tells you that there is something more here.
Some of that became apparent when the New Zealand’s GNS Science group went looking to see if there was any slip on earthquake faults. This is what they found:
Rapid field reconnaissance indicates that multiple faults have ruptured:
- Kekerengu Fault at the coast – appears to have had up to 10m of slip
- Newly identified fault at Waipapa Bay
- Hope Fault – seaward segment – minor movement
- Hundalee Fault
I’ve tried to sketch these out from my copies of geologic maps of New Zealand:
(The base map is from Google).
This is where the other shoe drops. The Hope and Kekerengu faults are mapped as strike-slip. Now minor slip on the Hope Fault might not mean much, but 10m on the Kekerengu means there was a lot of slip (I’ve assumed above it is strike-slip, but perhaps there is a thrust component). Plus, the epicenter of the quake–where it started–is somewhere between Cheviot and Rotherham, well to the south (this is why initially this was called the Cheviot earthquake). Toss in a very odd slip history (the moment release was low for a minute and then things really broke) and you get the impression that a relatively small earthquake on an unnamed fault southeast of Rotherham started tripping things off to the north, which eventually tripped off a big rupture.
That big rupture probably is not on the map. It is likely offshore, in the very southern end of the Hikurangi Trench (which is in part responsible for the whale watching that is so popular at Kaikoura). This is the northeast trending thrust fault that the focal mechanism captured and is responsible for the large slip amounts found on the finite-fault map the USGS shares. This is probably also the reason for the ~1m uplift of the seashore at Kaikoura, which led to many photos of paua and crawfish out of the ocean (though uplift at the southwest end of the big strike-slip fault is also possible).
Presumably the large strike-slip faulting on the Hope and Kekerengu faults is what has contaminated the focal mechanism, making it a composite of complex motions instead of the clean double-couple. (Pure strike-slip faulting is seen in many aftershocks.) As such, it seems this earthquake might well have captured both major thrust motion on the subduction zone and strike-slip on the upper plate faults, a form of slip-partitioning in a single event that is quite striking.
It will be interesting to see how the seismological and geological analysis continues; the main seismological slip appears north of these faults and so there could well be more to be found. But rain is in the forecast, which tends to ruin the easiest of signals to see.