Archive | seismology

Single Quake Slip Partitioning?

UPDATE 2 11/22: GNS has assembled quite a lot of information, and the puzzlement deepens. It appears from the satellite and ground analysis that the bulk of the motion–up to 11 m of slip–was more nearly strike-slip and not the thrusting that appears in the focal mechanism (below). But the uplift of some areas of the coast by 6 meters (!) seems to suggest there is something more.

UPDATE 11/18: A considerable amount of information was put into an article on stuff.co.nz.  This includes a map from GNS showing where the faults that ruptured are located, as well as a good deal of geodetic information.

Yesterday’s M7.5/7.8 Kaikoura earthquake in New Zealand is one of the more bizarre large earthquakes we have seen in some time. On the face of it, this appears to mostly be rupture of a subduction zone under the northeasternmost part of the South Island of New Zealand. But there is a lot of other stuff going on….

First, the main focal mechanism as reported by the USGS:

[Figure: USGS moment tensor ("beachball") for the Kaikoura earthquake]

Now this beachball would suggest a fault dipping to the NW while paralleling the coast. But its messy appearance–as though a toddler was not coloring within the lines–tells you that there is something more here.

Some of that became apparent when New Zealand’s GNS Science group went looking to see whether there was any slip on earthquake faults.  This is what they found:

Rapid field reconnaissance indicates that multiple faults have ruptured:

  • Kekerengu Fault at the coast – appears to have had up to 10m of slip
  • Newly identified fault at Waipapa Bay
  • Hope Fault – seaward segment – minor movement
  • Hundalee Fault

I’ve tried to sketch these out from my copies of geologic maps of New Zealand:

[Figure: sketch map of the faults that ruptured near Kaikoura]

(The base map is from Google).

This is where the other shoe drops. The Hope and Kekerengu faults are mapped as strike-slip.  Now minor slip on the Hope Fault might not mean much, but 10 m on the Kekerengu means there was a lot of slip (I’ve assumed above it is strike-slip, but perhaps there is a thrust component).  Plus, the epicenter of the quake–where it started–is somewhere between Cheviot and Rotherham, well to the south (this is why this was initially called the Cheviot earthquake). Toss in a very odd slip history (the moment release was low for a minute and then things really broke) and you get the impression that a relatively small earthquake on an unnamed fault southeast of Rotherham started tripping things off to the north, eventually triggering a big rupture.

That big rupture probably is not on the map.  It is likely offshore, in the very southern end of the Hikurangi Trench (which is in part responsible for the whale watching that is so popular at Kaikoura).  This is the northeast trending thrust fault that the focal mechanism captured and is responsible for the large slip amounts found on the finite-fault map the USGS shares. This is probably also the reason for the ~1m uplift of the seashore at Kaikoura, which led to many photos of paua and crawfish out of the ocean (though uplift at the southwest end of the big strike-slip fault is also possible).

Presumably the large strike-slip faulting on the Hope and Kekerengu faults is what has contaminated the focal mechanism, making it a composite of complex motions instead of the clean double-couple. (Pure strike-slip faulting is seen in many aftershocks.) As such, it seems this earthquake might well have captured both major thrust motion on the subduction zone and strike-slip on the upper plate faults, a form of slip-partitioning in a single event that is quite striking.

It will be interesting to see how the seismological and geological analysis continues; the main seismological slip appears to be north of these faults, and so there could well be more to be found.  But rain is in the forecast, which tends to ruin the signals that are easiest to see.

Oklahoma Dreamin’

Back in September, Oklahoma had a M5.6.  Some of you might recall the difference in opinion between USGS scientist Dan McNamara, who expected continued seismicity, and Oklahoma Geological Survey director Jeremy Boak, who said “I’d be surprised if we had another 5.0 this year.”

Well, Director Boak hopefully was in the vicinity to be surprised in person by the M5.0 today that damaged buildings in Cushing, OK, site of the largest oil storage facility in the country (which at least apparently escaped any damage). Yeah, once more wishful thinking was trumped by actual scientific examination…. Increasingly it seems the branch McNamara has climbed out on is a really stout one, while the hopes of the Oklahoma injection operators rest on thin reeds.

At least nobody has died, but when you are evacuating a senior housing facility in the night and cancelling school, you know you are playing with fire.

And hey, we aren’t even done with 2016 yet.

The necessity of uncertainty: Part 2

OK, so error bars are good things, if you believe the last post. So what else is there?

Simply put, in many cases uncertainty is not a scalar, it is a matrix, and often a really big matrix. To know the absolute uncertainty on one point, you have to know how it covaries with other points. To be able to manipulate results, you really are helped by knowing the full set of covariances.

In tomography, for instance, it is a matrix the size of the model space for each point; how the uncertainty at one point covaries with other points matters. Consider for starters a trivial case: a block of rock with two parts, with measurements made of the travel time of a seismic wave through the block:

[Figure: two-block model–a seismic wave crosses two blocks, each of width w, with velocities v1 and v2]

The time for the seismic wave to travel through the blocks, t, is (w/v1) + (w/v2). Seismologists often work in slownesses, which are 1/velocity, so an equivalent expression is t = w*s1 + w*s2. Obviously with one equation and two unknowns, we cannot say much of anything about s1 or s2. But let’s say we have an idea for s1 and an uncertainty; then we have s2 = t/w - s1, and if we stick in that uncertainty for s1 we can get an uncertainty for s2. But here’s the catch: if s1 is actually higher than the value we estimated, then the value for s2 must be lower: they are not independent.
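To spell that dependence out, here is the algebra (a minimal sketch, assuming t and w are known exactly):

t = w\,s_1 + w\,s_2 \quad\Longrightarrow\quad s_2 = \frac{t}{w} - s_1,

\delta s_2 = -\,\delta s_1 \quad\Longrightarrow\quad \sigma_{s_2} = \sigma_{s_1}, \qquad \mathrm{Cov}(s_1, s_2) = -\sigma_{s_1}^{2}, \qquad \rho_{s_1 s_2} = -1.

Any overestimate of s1 is exactly an underestimate of s2; it is the off-diagonal term of the covariance matrix that carries that information.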

This might be slightly more apparent with numbers. If w = 12 and t is 7, let s1 be 0.33 +/- 0.08. From this we get s2 = 0.25 +/- 0.08. But here’s the thing: if s1 is actually 0.25, then s2 must be 0.33 if t is perfectly known.  The errors are correlated (negatively, in this case)–they covary. If s1 was estimated to be higher than its actual value, then s2 is lower.

OK, well how is that helpful? Imagine that somebody does the inversion and reports s1 = 0.33 +/- 0.08 and s2 = 0.25 +/- 0.08, and some other worker just needs the average slowness.  Well, they take the published numbers and get 0.29 +/- 0.06, where the uncertainty simply reflects the assumption that the individual uncertainties are independent–but they are not. The thing is, we know very precisely what the average slowness is: it is 7/24, or a bit above 0.29–but its uncertainty is near zero, not ~20%. Because of the high correlation (or covariance) of the uncertainties, the uncertainty of the total is far smaller than that of the individual pieces.
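Here is a minimal numerical sketch of that propagation (Python, using only the numbers from the example above; the 2×2 covariance matrix is the thing being illustrated, not anything from a real study):

import numpy as np

# Two-block example from the text: w = 12, t = 7, s1 = 0.33 +/- 0.08
sigma = 0.08

# The average slowness is (s1 + s2)/2, i.e. a weighted sum a^T s with a = [0.5, 0.5]
a = np.array([0.5, 0.5])

# Case 1: treat the published uncertainties as independent (diagonal covariance only)
C_diag = np.diag([sigma**2, sigma**2])
print("assuming independence:", np.sqrt(a @ C_diag @ a))   # ~0.057, i.e. 0.29 +/- 0.06

# Case 2: keep the full covariance.  Because s2 = t/w - s1 exactly,
# the errors are perfectly anticorrelated: Cov(s1, s2) = -sigma^2.
C_full = np.array([[ sigma**2, -sigma**2],
                   [-sigma**2,  sigma**2]])
print("with covariance:", np.sqrt(a @ C_full @ a))          # 0.0 -- the average is pinned at 7/24

The diagonal-only calculation reproduces the 0.29 +/- 0.06 result; carrying the off-diagonal term collapses the uncertainty on the average to essentially zero, as it should, since the average is pinned at t/(2w) = 7/24.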

This exact logic can be applied to receiver functions (which, for those who haven’t seen them, are a kind of spike train representing P-to-S converted energy in a seismogram); long ago GG generated receiver functions with uncertainties.  These were quite large and indicated there were no signals outside of noise in the receiver function. But when you applied a moving average and kept track of these covariances, the uncertainties rapidly dwindled, revealing a number of significant signals that were actually present in the receiver functions.
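A toy version of that moving-average bookkeeping might look like the following (the covariance structure here is entirely made up for illustration; real receiver-function covariances would come from the estimation procedure itself):

import numpy as np

def moving_average_sigma(C, width):
    """1-sigma uncertainty of a boxcar moving average applied to a series
    whose errors have covariance matrix C (variance = a^T C a per window)."""
    n = C.shape[0]
    out = []
    for i in range(n - width + 1):
        a = np.zeros(n)
        a[i:i + width] = 1.0 / width          # boxcar weights
        out.append(np.sqrt(a @ C @ a))
    return np.array(out)

# Hypothetical covariance: unit variance, neighboring samples anticorrelated
n = 50
C = np.eye(n)
for i in range(n - 1):
    C[i, i + 1] = C[i + 1, i] = -0.45         # assumed value, purely for illustration

print(np.sqrt(C[0, 0]))                        # single-sample sigma: 1.0
print(moving_average_sigma(C, 5).mean())       # ~0.24: far smaller once covariance is carried along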

Not surprisingly, similar issues exist in tomography. Say we have a tomographic model based on local earthquake travel times with nodes every 100 m.  Odds are pretty good that in parts of the model you can fit all the data just as well if you increase the wavespeed at one node and decrease it at an adjacent node. Your uncertainty at any one point might be awful. But if you want the average wavespeed over a volume of points, that might prove to be pretty robust as the covariances collapse.
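The same point can be made with a cartoon damped least-squares problem: two adjacent nodes sampled by rays that only constrain their sum (everything below is invented for illustration, not any real tomography code):

import numpy as np

# Cartoon tomography: two adjacent nodes, and every ray crosses both equally,
# so the data constrain s1 + s2 but not the difference.
G = np.array([[1.0, 1.0],
              [1.0, 1.0]])                     # two (redundant) rays
Cd_inv = np.eye(2) / 0.01                      # assumed data variance of 0.01
damping = 1e-4

# Posterior model covariance for damped least squares: (G^T Cd^-1 G + damping*I)^-1
Cm = np.linalg.inv(G.T @ Cd_inv @ G + damping * np.eye(2))

print("sigma at each node:", np.sqrt(np.diag(Cm)))     # huge: the nodes trade off
a = np.array([0.5, 0.5])
print("sigma of their average:", np.sqrt(a @ Cm @ a))  # tiny: the average is well resolved

The per-node uncertainties are enormous because the nodes trade off against one another, yet their average is tightly constrained.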

Unfortunately, the creation of these kinds of covariance matrices is not common (they fell out of favor in tomography, for instance, once fast and compact inversion schemes made constructing them seem awkward and time consuming by comparison), and even if they were around, they are not trivially presented in a paper. The replacement in tomography has been “checkerboard tests” and spike tests, where a simple anomaly is introduced, observations from such an anomaly are calculated, and then these fake observations are inverted to see how well the inversion recovers the anomaly. But the degree to which anomalies are recovered depends on the geometry, and it is impossible to test all geometries, so in some cases the tests make it seem that the inversion is far worse than it is, and in some cases far better. (A test GG has never seen or done himself would be the opposite of the spike test: introducing a smooth gradient across a model and seeing if the inversion could catch it.  GG suspects there is a minimum gradient necessary to produce a signal.  Maybe something to try one day when there is spare time…)
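For what it is worth, a cartoon of that gradient test is easy to sketch (again, a made-up 1-D toy rather than a real tomography code; the point is just that regularization shrinks a recovered gradient, so a weak enough gradient disappears):

import numpy as np

rng = np.random.default_rng(0)

# Made-up 1-D model: 20 nodes of unit length, slowness = reference + gentle gradient
n = 20
x = np.arange(n)
s_ref = 0.30
grad_true = 0.001                              # assumed gradient per node
s_true = s_ref + grad_true * (x - x.mean())

# Rays are contiguous spans of nodes; travel time = sum of slownesses along the span
spans = [(i, j) for i in range(n) for j in range(i + 3, n + 1)]
G = np.zeros((len(spans), n))
for k, (i, j) in enumerate(spans):
    G[k, i:j] = 1.0
d = G @ s_true + rng.normal(0.0, 0.005, len(spans))    # noisy travel times

def invert(damping):
    """Damped least squares for the perturbation from the reference model."""
    rhs = G.T @ (d - G @ (s_ref * np.ones(n)))
    return s_ref + np.linalg.solve(G.T @ G + damping * np.eye(n), rhs)

for damping in (1.0, 100.0, 10000.0):
    grad_est = np.polyfit(x, invert(damping), 1)[0]    # slope of the recovered model
    print(f"damping = {damping:7.0f}   recovered gradient = {grad_est:+.5f} (true {grad_true:+.5f})")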

This isn’t limited to inversions of observations. Complex models often have internal dependencies that effectively produce covarying signals from a common error.  Of course, these can in theory be estimated from the underlying mathematics, but many modern models are so complex that it isn’t remotely obvious in many cases just how things are tangled up. For instance, dynamic topography models essentially depend upon a load and a rheology, but if both are being estimated from a seismic model and the seismic model depends upon an a priori crustal model, then errors in the crustal model can bleed all the way through to topography in a fairly non-intuitive manner.

So GG suggests that a frontier associated with Big Data and Big Models and Big Computing is the understanding of, ahem, Big Uncertainty. Because without that understanding we are left with…yeah, you can see it coming, big uncertainty.

Sensitivity Testing (tap tap…1..2..3..)

No, this is not about being careful in what you say, or how quickly you jump if tapped on the shoulder. This is testing for how well an inversion can convince you of the presence of an anomaly.

Seismic tomography is one of those windows into the earth that is either a huge advance or a hall of mirrors.  The single greatest challenge is to show that some high- or low-velocity blob is real.  Sometimes you can do this by looking at raw travel time residuals, but most of the stuff we are looking at these days is lost in the noise in the raw data–it takes the blending of tons of data to get to the anomalies in question.  (Seismologists have been wading in big data for a while now).

Probably the most convincing test is some kind of sensitivity test (or, if you do the full matrix inversion properly, an a posteriori covariance or resolution matrix–but with the numbers of degrees of freedom in most tomographic studies, these are few and far between).  A simple form is a checkerboard, but let’s consider a better one, a hypothesis test. As we’ll see, there are unexpected pitfalls.

Read More…

Red-ite Rate Mistakes

Of late, folks have decided to term volumes of the mantle (in particular) as being “red-ite” and “blue-ite” to avoid over-interpreting such bodies as being hot or cold or as being of some particular composition.  Even so, the general assumption in the upper mantle is that red-ite is hot and blue-ite cold.  So what does this tell you about surface uplift rates?

Precisely nothing.

And yet there is quite a literature where the presence of a red blob in tomography is taken to mean that overlying crust is rising, or a blue blob means it is sinking.  This is nonsense for multiple reasons.  (GG is here refraining from identifying some guilty parties, but it shouldn’t be hard to find some).

First, it would be the rate of change of buoyancy that would matter to start with.  A present-day hot body (say, for instance, a pluton) would be in isostatic balance (as much as the flexural strength of the lithosphere would allow).  If the pluton were simply sitting there, slowly cooling, little would happen until the thermal front from the pluton were to fade out enough that the whole volume of pluton and surrounding rock was losing heat. There would be no uplift; eventually there would even be subsidence, even as the pluton might remain somewhat hotter than its surroundings. For there to be uplift, the pluton either needs to get hotter or bigger. Seismic tomography has no temporal history; if you want to go there, you have to make a bunch of assumptions and then model processes.
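In schematic Airy-type form (generic symbols, written here only to make the rate argument explicit, not taken from any particular paper):

e \;\approx\; -\frac{1}{\rho_m} \int \Delta\rho(z)\, dz \qquad\Longrightarrow\qquad \frac{de}{dt} \;\approx\; -\frac{1}{\rho_m} \int \frac{\partial\, \Delta\rho}{\partial t}\, dz

A tomographic image at best constrains the density anomaly at one instant; uplift or subsidence requires its time derivative, which a single snapshot does not provide.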

Second, the assumption of buoyancy for a red blob, while defensible, is hardly certain.

Third, there are processes that can interfere with the expression of a mantle anomaly’s buoyancy at the surface.  Several papers studying Rayleigh-Taylor instabilities have shown that the crust can flow in above a growing instability to produce uplift even as the anti-buoyant drip grows below.

Mistaking a rate for a level value is a blunder that earns rapid and widespread opprobrium in economics; perhaps similar blunders should be called out in earth science.

The silver bullet that ricocheted

[W]e note that if [elastically accommodated grain-boundary sliding] were as ubiquitous as theory implies, then the interpretation of seismological observations of any hot, solid regions of Earth based on single crystal elasticity would require a significant revision. – Karato et al., 2015

This concluding sentence from a recent paper suggests that a lot of seismological interpretations out there are wrong.  Fully understanding what is going on is worthwhile but takes a bit of background. Unfortunately their press release is so tied up in knots that it hides what could be a really significant contribution.

One of the key elements in plate tectonics is, not surprisingly, plates.  While the bulk of the mantle convects as a viscous fluid, some of it near the surface cools enough to essentially remain undeformed.  This mantle tends to stay attached to the crust above it; it deforms more like an elastic material than a viscous one.  Together, that uppermost part of the mantle and the crust form the lithosphere.  And the lithosphere is basically where the plates are [let us set aside tectosphere arguments for today]. This paper in essence explores the failure of a promising approach to figuring out the thickness of the lithosphere, and in so doing it might undercut a fair amount of current understanding of the physical state of the shallow mantle.

Read More…