Archive | seismology

My Science Crimes

In keeping with this end-of-the-year theme of what GG is doing wrong, here are some “crimes against science,” which, as Bob Sharp defined them years ago, amount to doing work of interest to the broader community and then not publishing it. (Thankfully, these aren’t the more serious offenses in the expanded criminal ledger GG proposed a while back).

Now this isn’t an uncommon occurrence: students graduate with thesis chapters not quite ready for publication and discover that life beyond grad school doesn’t provide rewards for getting that stuff into journals.  Other times things just pile up enough that a paper isn’t completed while everything is handy, and it only gets harder to return to as time goes on.

So, in case anybody out there would benefit from some of this stuff, feel free to nudge GG to take some time and share, either informally or by actually publishing some of this.  And if nobody seems interested, well, then maybe not much of a criminal act :-). Most of these are in some kind of manuscript form (there is other stuff that didn’t even get that far).

  • Geologic map of the Alexander Hills and eastern China Lake basin. Yes, GG mapped while in grad school and actually handed over a copy of his map to Lauren Wright long ago, who included some of it in a never-published update to the SW Tecopa quad (now it would be the Tecopa 7.5′ quad map). A lot of cool stuff–probably the eastern end of the early Garlock Fault interacting with some low-angle, basin-bottom faults and a pre-China Lake basin history not evident in published maps.
  • Seismicity of the Hansel Valley region.  GG feels really bad about this, as there were a lot of coauthors on the 1983 experiment, which was one of the densest deployments of seismometers in an extending area.  The results are in GG’s PhD thesis but still might merit publication, as the data indicate how a low-angle normal fault might interact with ongoing seismic deformation.
  • Magnetostratigraphy and some additional paleomag in the Lake Mead region. A collaborator dropped out and so the baton was dropped after a single paper. Some of the data is visible here.
  • Paleomagnetic measurements in monoclines of the Colorado Plateau.  Joya Tetreault’s thesis has this; substantial vertical-axis rotations exist in some folds (the Grand Hogback being the most dramatic), though the sampling is far less than ideal and some structures seem to make little sense.
  • Paleomag and micropolar analysis of seismicity in the Coalinga area.  Also part of Tetreault’s thesis. The micropolar work seemed to capture the bending component of folding in the seismicity while the paleomag suggested San Andreas-parallel shear within the fold limbs.
  • Earthquakes in the southern Sierra located with the 1988 experiment. Jason Edwards, a CU BA graduate, did some of this work, which was never carried further. It seemed there were events under one of the Recent cinder cones in the southern Golden Trout field as well as some deep events in the westernmost foothills of the southern Sierra.
  • Geophysics of Panamint Valley and the Ivanpah Valley areas.  These were datasets collected by the MIT Geophysics Course in 1987 and 1983, respectively.  Both valleys present a major challenge because a large basement gravity gradient exists across them, complicating interpretation.

This is all in addition to various half-done projects that still seem to be active, as well as datasets that were never fully exploited (for instance, data from a mixed broadband/short-period array at Harrisburg Flat in Death Valley plus some more scattered instruments near Dante’s View, or our inability to get anything sensible out of array recordings of deep local events under the northern end of New Zealand’s South Island).

Kaikoura A Year Later

A year ago GG posted on the Kaikoura M7.8 earthquake with the title “Single quake slip partitioning”. With a year past, a quick look at the literature that has appeared seems in order.  Was this diagnosis correct?  Some of the work suggests the answer is yes; some suggests no.

The most comprehensive overview is probably a paper by Kaiser et al. in Seismological Research Letters. This paper summarizes geologic, seismologic, geodetic, and engineering observations from this quake. They note that 13 separate mapped faults all ruptured together, more than was anticipated prior to the quake.  It took about two minutes for things to unwind from south to north along this collection of faults, with substantial step-overs from one strand to the next. Most of the energy released came in two distinct bursts, one 20 seconds into the quake, the next about 70 seconds in.

Fault ruptures in the Kaikoura earthquake, from Kaiser et al., SRL, 2017.

But as to GG’s hypothesis of slip partitioning during the quake, the answer from the interpretation of the slip history from high-frequency seismic data is no; the faulting was dominantly strike-slip to oblique-slip on land, though the authors do note a stretch of the rupture during which the source of seismic energy is not located very well.

A second paper comes at this from a different angle.   Read More…

Single Quake Slip Partitioning?

UPDATE 2 11/22: GNS has assembled quite a lot of information, and the puzzlement deepens. It appears from the satellite and ground analysis that the bulk of the motion–up to 11 m of slip–was more nearly strike-slip and not the thrusting that appears in the focal mechanism (below). But the uplift of some areas of the coast by 6 meters (!) seems to suggest there is something more.

UPDATE 11/18: A considerable amount of information was put in an article on stuff.co.nz.  This includes a map from GNS showing where the faults that ruptured are, along with a good deal of geodetic information.

Yesterday’s M7.5/7.8 Kaikoura earthquake in New Zealand is one of the more bizarre large earthquakes we have seen in some time. On the face of it, this appears to mostly be rupture of a subduction zone under the northeasternmost part of the South Island of New Zealand. But there is a lot of other stuff going on….

First, the main focal mechanism as reported by the USGS:

USGS moment tensor for the Kaikoura earthquake.

Now this beachball would suggest a fault dipping to the NW while paralleling the coast. But the fact that it looks like a toddler couldn’t color inside the lines tells you that there is something more here.

Some of that became apparent when New Zealand’s GNS Science group went looking to see if there was any slip on faults at the surface.  This is what they found:

Rapid field reconnaissance indicates that multiple faults have ruptured:

  • Kekerengu Fault at the coast – appears to have had up to 10m of slip
  • Newly identified fault at Waipapa Bay
  • Hope Fault – seaward segment – minor movement
  • Hundalee Fault

I’ve tried to sketch these out from my copies of geologic maps of New Zealand:

Sketch map of the faults reported to have ruptured in the Kaikoura earthquake.

(The base map is from Google).

This is where the other shoe drops. The Hope and Kekerengu faults are mapped as strike-slip.  Now minor slip on the Hope Fault might not mean much, but 10m on the Kekerengu means there was a lot of slip (I’ve assumed above it is strike-slip, but perhaps there is a thrust component).  Plus, the epicenter of the quake–where it started–is somewhere between Cheviot and Rotherham, well to the south (this is why it was initially called the Cheviot earthquake). Toss in a very odd slip history (the moment release was low for a minute and then things really broke) and you get the impression that a relatively small earthquake on an unnamed fault southeast of Rotherham started tripping things off to the north, eventually setting off a big rupture.

That big rupture probably is not on the map.  It is likely offshore, at the very southern end of the Hikurangi Trench (which is in part responsible for the whale watching that is so popular at Kaikoura).  This is the northeast-trending thrust fault that the focal mechanism captured, and it is responsible for the large slip amounts found on the finite-fault map the USGS shares. This is probably also the reason for the ~1m uplift of the seashore at Kaikoura, which led to many photos of paua and crawfish out of the ocean (though uplift at the southwest end of the big strike-slip fault is also possible).

Presumably the large strike-slip faulting on the Hope and Kekerengu faults is what has contaminated the focal mechanism, making it a composite of complex motions instead of a clean double-couple. (Pure strike-slip faulting is seen in many aftershocks.) As such, it seems this earthquake might well have captured both major thrust motion on the subduction zone and strike-slip on the upper-plate faults, a form of slip partitioning in a single event that is quite striking.

It will be interesting to see how the seismological and geological analysis continues; the main seismological slip appears north of these faults and so there could well be more to be found.  But rain is in the forecast, which tends to ruin the easiest of signals to see.

Oklahoma Dreamin’

Back in September, Oklahoma had an M5.6.  Some of you might recall the difference of opinion between USGS scientist Dan McNamara, who expected continued seismicity, and Oklahoma Geological Survey director Jeremy Boak, who said “I’d be surprised if we had another 5.0 this year.”

Well, Director Boak hopefully was in the vicinity to be surprised in person by the M5.0 today that damaged buildings in Cushing, OK, site of the largest oil storage facility in the country (which at least apparently escaped any damage). Yeah, once more wishful thinking was trumped by actual scientific examination…. Increasingly it seems the branch McNamara has climbed out on is a really stout one, while the hopes of the Oklahoma injection operators rest on thin reeds.

At least nobody has died, but when you are evacuating a senior housing facility in the night and cancelling school, you know you are playing with fire.

And hey, we aren’t even done with 2016 yet.

The necessity of uncertainty: Part 2

OK, so error bars are good things, if you believe the last post. So what else is there?

Simply put, in many cases uncertainty is not a scalar but a matrix, and often a really big matrix. To know the absolute uncertainty on one point, you have to know how it covaries with other points. To be able to manipulate results, you really are helped by knowing the full set of covariances.

In tomography, for instance, it is a matrix the size of the model space for each point; how the uncertainty at one point covaries with other points matters. Consider for starters a trivial case: a block of rock with two parts, with a measurement of the travel time of a seismic wave through the whole block:

A seismic wave travels through two blocks, each of width w, with velocities v1 and v2.

The time for the seismic wave to travel through the blocks, t, is (w/v1) + (w/v2). Seismologists often work in slownesses, which are 1/velocity, so an equivalent expression is t = w*s1 + w*s2. Obviously with one equation and two unknowns, we cannot say much of anything about s1 or s2. But let’s say we have an idea of s1 (equivalently, of v1) and its uncertainty; then s2 = t/w - s1, and if we stick in that uncertainty for s1 we can get an uncertainty for s2. But here’s the catch: if s1 is actually higher than the value we estimated, then the value for s2 must be lower: they are not independent.

This might be slightly more apparent with numbers. If w=12 and t is 7, let s1 be 0.33 +/- 0.08. From this we get s2=0.25 +/- 0.08. But here’s the thing: if s1 is actually 0.25, then s2 must be 0.33 if t is perfectly known.  The errors are correlated–they covary. If s1 was estimated to be higher than its actual value, then s2 is lower.

OK, well how is that helpful? Imagine that somebody does the inversion and reports s1 = 0.33 +/- 0.08 and s2 = 0.25 +/- 0.08, and some other worker just needs the average slowness.  Well, they take the published numbers and get 0.29 +/- 0.06, where the uncertainty simply reflects the assumption that the individual uncertainties are independent–but they are not. The thing is, we know very precisely what the average slowness is: it is 7/24, or a bit above 0.29–and the uncertainty is near zero, not ~20%. Because the uncertainties are so highly (negatively) correlated–they covary–the uncertainty of the average is far smaller than that of the individual pieces.
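
To make that concrete, here is a minimal numerical sketch of the bookkeeping above, assuming t and w are known perfectly so that the errors in s1 and s2 are perfectly anti-correlated (the numbers are the ones used in this post):

```python
# Two-block example: propagate uncertainty onto the average slowness
# with and without the covariance between s1 and s2.
import numpy as np

w, t = 12.0, 7.0           # block width and total travel time (from the post)
sigma_s1 = 0.08            # uncertainty assumed for s1

# With t known exactly, s2 = t/w - s1, so cov(s1, s2) = -var(s1).
C = np.array([[ sigma_s1**2, -sigma_s1**2],
              [-sigma_s1**2,  sigma_s1**2]])

a = np.array([0.5, 0.5])   # averaging operator: (s1 + s2)/2

var_naive = np.sum((a * sigma_s1)**2)  # pretend s1 and s2 are independent
var_full  = a @ C @ a                  # keep the covariance

print("naive sigma on the average: ", np.sqrt(var_naive))  # ~0.057, about 20%
print("with covariance:            ", np.sqrt(var_full))   # 0: the average is t/(2w) exactly
```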

This exact logic can be applied to receiver functions (which, for those who haven’t seen them, are kind of a spike train representing P-to-S converted energy in a seismogram); long ago GG generated receiver functions with uncertainties.  Those uncertainties were quite large and indicated there were no signals outside of noise in the receiver function. But when a moving average was applied and the covariances were tracked, the uncertainties rapidly dwindled, revealing a number of significant signals that were actually present in the receiver functions.
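
A minimal sketch of that effect, using an assumed covariance structure (anti-correlated neighboring points) rather than a real receiver function: the variance of a moving average computed with the full covariance matrix comes out noticeably smaller than the independent-errors estimate.

```python
# Variance of a moving average when neighboring points have anti-correlated errors.
import numpy as np

n = 50
sigma = 0.2        # assumed per-point standard error
rho = -0.45        # assumed correlation between adjacent points

C = sigma**2 * np.eye(n)                       # point-by-point covariance matrix
for i in range(n - 1):
    C[i, i + 1] = C[i + 1, i] = rho * sigma**2

win = 5
A = np.zeros((n - win + 1, n))                 # moving-average operator
for i in range(n - win + 1):
    A[i, i:i + win] = 1.0 / win

C_avg = A @ C @ A.T                            # covariance of the smoothed trace

print("per-point sigma:        ", sigma)
print("smoothed, independent:  ", sigma / np.sqrt(win))    # ~0.089
print("smoothed, full covar.:  ", np.sqrt(C_avg[10, 10]))  # ~0.047
```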

Not surprisingly, similar issues exist in tomography. Say we have a tomographic model based on local earthquake travel times with nodes every 100 m.  Odds are pretty good that in parts of the model you can fit all the data just as well if you increase the wavespeed at one node and decrease it at an adjacent node. Your uncertainty at any one point might be awful. But if you want the average wavespeed over a volume of points, that might prove to be pretty robust as the covariances collapse.

Unfortunately the creation of these kinds of covariance matrices is not common (they fell out of favor in tomography, for instance, once fast and compact inversion schemes that never form the full matrices became standard, making the creation of these awkward and time consuming), but even if they were around, they are not trivially presented in a paper. The replacement in tomography has been “checkerboard tests” and spike tests, where a simple anomaly is introduced, observations from such an anomaly are calculated, and then those synthetic observations are inverted to see how well the inversion recovers the anomaly. But the degree to which anomalies are recovered depends on their geometry, and it is impossible to test all geometries, so in some cases the tests make it seem that the inversion is far worse than it is, and in some cases far better. (A test GG has never seen or done himself would be the opposite of the spike test: introducing a smooth gradient across a model and seeing if the inversion could catch it.  GG suspects there is a minimum gradient necessary to produce a signal.  Maybe something to try one day when there is spare time…)
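
A toy illustration of how strongly recovery depends on the test pattern and the ray geometry (a sketch, not any published test): if a grid is sampled only by horizontal and vertical rays, an even checkerboard produces no travel-time signal at all and is “recovered” as nothing, while a smooth ramp across the model comes back nearly intact.

```python
# Checkerboard vs. smooth gradient in a toy straight-ray "tomography".
import numpy as np

nx = ny = 8
npar = nx * ny

# Forward operator: each ray sums the slowness anomaly along one row or one column.
G = np.zeros((nx + ny, npar))
for i in range(ny):                      # horizontal rays
    G[i, i * nx:(i + 1) * nx] = 1.0
for j in range(nx):                      # vertical rays
    G[ny + j, j::nx] = 1.0

def recovered_fraction(m_true, damping=0.1):
    """Forward model noise-free data, invert by damped least squares,
    and return |m_est| / |m_true| as a crude recovery measure."""
    d = G @ m_true
    m_est = np.linalg.solve(G.T @ G + damping * np.eye(npar), G.T @ d)
    return np.linalg.norm(m_est) / np.linalg.norm(m_true)

jx, iy = np.meshgrid(np.arange(nx), np.arange(ny))
checker = np.where((jx + iy) % 2 == 0, 1.0, -1.0).ravel()  # +/-1 checkerboard
ramp = (jx / (nx - 1)).ravel() - 0.5                       # smooth gradient in x

print("checkerboard recovered:", recovered_fraction(checker))  # ~0: invisible to these rays
print("gradient recovered:    ", recovered_fraction(ramp))     # ~0.99
```

Real ray geometries are messier, but the point stands: a pattern the rays cannot see looks like poor resolution even when broader features would be recovered just fine.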

This isn’t limited to inversions of observations. Complex models often have internal dependencies that effectively produce covarying signals from a common error.  Of course, these can in theory be estimated from the underlying mathematics, but many modern models are so complex that it isn’t remotely obvious just how things are tangled up. For instance, dynamic topography models essentially depend upon a load and a rheology, but if both are being estimated from a seismic model, and the seismic model depends upon an a priori crustal model, then errors in the crustal model can bleed all the way through to topography in a fairly non-intuitive manner.

So GG suggests that a frontier associated with Big Data and Big Models and Big Computing is the understanding of, ahem, Big Uncertainty. Because without that understanding we are left with…yeah, you can see it coming, big uncertainty.

Sensitivity Testing (tap tap…1..2..3..)

No, this is not about being careful in what you say, or how quickly you jump if tapped on the shoulder. This is testing for how well an inversion can convince you of the presence of an anomaly.

Seismic tomography is one of those windows into the earth that is either a huge advance or a hall of mirrors.  The single greatest challenge is to show that some high- or low-velocity blob is real.  Sometimes you can do this by looking at raw travel-time residuals, but most of the stuff we are looking at these days is lost in the noise in raw data–it takes the blending of tons of data to get to the anomalies in question.  (Seismologists have been wading in big data for a while now).

Probably the most convincing test is some kind of sensitivity test (or, if you do the full matrix inversion properly, an a posteriori covariance or resolution matrix–but with the numbers of degrees of freedom in most tomographic studies, these are few and far between).  A simple form is a checkerboard, but let’s consider a better one, a hypothesis test. As we’ll see, there are unexpected pitfalls.
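
For what the full-matrix route looks like in its simplest form, here is a generic sketch with a made-up forward operator (not tied to any particular study): the model resolution matrix for damped least squares satisfies m_est = R m_true for noise-free data, so diagonal values near 1 mark parameters that are resolved independently, and off-diagonal structure shows how anomalies smear into their neighbors.

```python
# Resolution matrix R = (G^T G + lambda*I)^-1 G^T G for damped least squares.
import numpy as np

rng = np.random.default_rng(0)
nobs, npar = 40, 25
G = rng.normal(size=(nobs, npar))   # stand-in for a ray/sensitivity matrix
lam = 5.0                           # damping

GtG = G.T @ G
R = np.linalg.solve(GtG + lam * np.eye(npar), GtG)

print("best resolved parameter,  diag(R):", R.diagonal().max())
print("worst resolved parameter, diag(R):", R.diagonal().min())
```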

Read More…

Red-ite Rate Mistakes

Of late, folks have decided to term volumes of the mantle (in particular) “red-ite” and “blue-ite” to avoid over-interpreting such bodies as being hot or cold or made of some particular material.  Even so, the general assumption in the upper mantle is that red-ite is hot and blue-ite cold.  So what does this tell you about surface uplift rates?

Precisely nothing.

And yet there is quite a literature where the presence of a red blob in tomography is taken to mean that the overlying crust is rising, or a blue blob to mean that it is sinking.  This is nonsense for multiple reasons.  (GG is here refraining from identifying some guilty parties, but it shouldn’t be hard to find some.)

First, it would be the rate of change of buoyancy that would matter to start with.  A present-day hot body (say, for instance, a pluton) would be in isostatic balance (as much as the flexural strength of the lithosphere would allow).  If the pluton were simply sitting there, slowly cooling, little would happen until the thermal front from the pluton faded out enough that the whole volume of pluton and surrounding rock was losing heat. There would be no uplift; eventually there would even be subsidence, even as the pluton might remain somewhat hotter than its surroundings. For there to be uplift, the pluton either needs to get hotter or bigger. Seismic tomography has no temporal history; if you want to go there, you have to make a bunch of assumptions and then model processes.
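
A minimal sketch of the bookkeeping, assuming simple local isostasy (no flexure) and a thermal expansion coefficient α: elevation tracks the integrated thermal anomaly, so the uplift rate depends on the time derivative of that anomaly, which a tomographic snapshot does not constrain.

```latex
% Local-isostasy sketch: elevation anomaly from a thermal (density) anomaly,
% and its rate of change.  A tomographic image constrains (at best) \Delta T(z)
% at the present instant, not \partial \Delta T / \partial t.
\begin{align*}
  \Delta h(t) &\approx -\frac{1}{\rho_m}\int \Delta\rho(z,t)\,dz
               \;\approx\; \alpha \int \Delta T(z,t)\,dz ,\\
  \frac{d\,\Delta h}{dt} &\approx \alpha \int \frac{\partial\,\Delta T(z,t)}{\partial t}\,dz .
\end{align*}
```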

Second, the assumption of buoyancy for a red blob, while defensible, is hardly certain.

Third, there are processes that can interfere with the expression of a mantle anomaly’s buoyancy at the surface.  Several papers studying Rayleigh-Taylor instabilities have shown that the crust can flow in above a growing instability to produce uplift even as the anti-buoyant drip grows below.

Mistaking a rate for a level is a blunder that earns rapid and widespread opprobrium in economics; perhaps similar blunders should be called out in earth science.