Rant: CRUST5.1, 2.0, 1.0

If you know what those things are (CRUST5.1, 2.0 and 1.0), then you probably know where this is going. And if not, well, hang on.

Quickly: these are models of Earth's crust within 5°x5° boxes (CRUST5.1), 2°x2° boxes (CRUST2.0) and 1°x1° boxes (CRUST1.0). Unlike software, lower numbers are the more recent versions. The models are an outgrowth of a compilation of seismic refraction lines at the USGS under the auspices of Walter Mooney; the later updates have been led by a group at IGPP in San Diego, with Gabi Laske taking the lead. Generally the most robust aspect of the dataset has been the estimate of crustal thickness, but the models also include sediment thickness; upper, middle and lower crust thickness; and seismic wavespeeds and density.

The model is highly popular in various geodynamic and some seismological circles, and it is easy to see why: the crust is highly heterogeneous, so to estimate signals from the mantle (seismic travel times, gravity, dynamic topography, sometimes heat production) you need to correct for the crust. When you are trying to solve a problem at global scale, you want a global dataset that allows you to remove the crust. Hence the popularity of this model.

OK, so why the rant?

The compilation of the seismic profiles used in building these models is a major task and would be a great contribution–if it were shared. So far as GG knows (he is waiting for somebody to point out the link he overlooked), the compilation itself is unpublished, apparently because of restrictions on sharing some of the data. The closest this information comes to appearing in print is a couple of early papers: Christensen and Mooney's 1995 JGR global analysis of continental crust and Mooney et al.'s 1998 JGR paper describing CRUST5.1. After that, users of the datasets are asked to cite two abstracts. So the massive dataset underlying these models is (apparently) invisible. The closest we get is a map of where CRUST5.1 data came from:

Locations of refraction profiles used in making CRUST5.1. From Mooney et al., JGR 1998.

Since we don’t know what went in, we aren’t too sure how things are combined.  Some areas are covered by multiple profiles within a given box: are the profiles within a box averaged? Is there some quality assignment? Are these areas simply assigned the generic structure used for that type of crust? There are examples of single profiles being interpreted quite differently by different researchers–how are these dealt with?

Even more magical is what happens where there is little or no data. Are there blank spots? Why, no. All areas are assigned to one of fourteen basic types in CRUST5.1 (CRUST1.0 claims something under 40 basic crustal types). Adjustments are made for variations in sediment thickness, ice thickness and water depth (and, GG suspects, for crustal thicknesses where known or inferred). A statistical estimate of the characteristics of each crustal type is made from areas where there is data; those values are then applied to similar crustal areas where there are no observations. In essence, if west Africa looks like the same kind of crust as northern Canada, it is assigned the same structure and those values are stuck into the model.

This presumes that there are no systematic differences between, say, North American cratons and African cratons. It assumes a uniformity within basic crustal types without really considering the variations within those types–variations that are quite clearly evident in the histograms in Mooney et al. 1998.

Well, you cry, we need to have something, right? Maybe so, but there are two points to make. First, even if you accept that this is the best we can do right now, we need to add some information on the uncertainty of each value. In places where there are multiple refraction lines, this is probably the standard deviation of the values from the relevant profiles. In places where there is no control, the uncertainty should probably be some multiple of the standard deviation associated with that crustal type–so, for instance, many of the crustal velocities shown by Mooney et al. have a standard deviation of 0.2-0.3 km/s, and reasonable uncertainties might be about 0.4-0.6 km/s. And ideally there would be an index showing the number of data points providing observations in that pixel.
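
To make that bookkeeping concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration–the data layout, the `pixel_vp_with_uncertainty` function and the numerical values are guesses at what such a product could report, not anything the CRUST models actually do:

```python
import numpy as np

# Assumed scatter for a crustal type with no direct control, of the order of
# the 0.2-0.3 km/s standard deviations in Mooney et al. (1998).
TYPE_SIGMA_VP = 0.25   # km/s
NO_DATA_FACTOR = 2.0   # inflates to ~0.5 km/s where nothing constrains the box

def pixel_vp_with_uncertainty(profile_vps, type_mean_vp):
    """Return (vp, sigma, n_profiles) for one pixel.

    profile_vps:  Vp values (km/s) from refraction profiles in this box
    type_mean_vp: fallback mean Vp for the pixel's assigned crustal type
    """
    n = len(profile_vps)
    if n >= 2:
        # Multiple profiles: report their mean and their actual scatter.
        return np.mean(profile_vps), np.std(profile_vps, ddof=1), n
    if n == 1:
        # One profile: the value is observed, but scatter comes from the type.
        return profile_vps[0], TYPE_SIGMA_VP, n
    # No data: generic type value with inflated uncertainty.
    return type_mean_vp, NO_DATA_FACTOR * TYPE_SIGMA_VP, 0

# Example: a box with two profiles vs. an empty box of the same crustal type.
print(pixel_vp_with_uncertainty([6.2, 6.6], 6.4))  # (6.4, ~0.28, 2)
print(pixel_vp_with_uncertainty([], 6.4))          # (6.4, 0.5, 0)
```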

How good is CRUST? We don't often get measures of it, but consider this comparison with receiver function Moho depths made by Gilbert (Geosphere, 2012):

Receiver function crustal thickness vs. CRUST2.0 crustal thickness in western U.S. From Gilbert, Geosphere, 2012.

CRUST2.0 thicknesses scatter over about 15 km for a given crustal thickness from receiver functions. However, the standard deviation Mooney et al. found for crustal thickness in an orogen was 10 km, so we'd expect a 2-sigma range of +/-20 km (!) around the mean to include 95% of all the points. Arguably we're doing better here than expected! It seems quite likely that nearly all the scatter here is consistent with the observations made by Mooney et al. The simple truth is that a single crustal type will have a lot of variation within it. Providing that information as part of CRUSTX.X seems appropriate.
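
As a quick back-of-the-envelope check (assuming Gaussian scatter and using the numbers quoted above):

```python
from scipy.stats import norm

sigma = 10.0   # km, Mooney et al. standard deviation for orogens
spread = 15.0  # km, roughly the scatter in the Gilbert (2012) comparison

# Fraction of a Gaussian expected within +/- spread of the mean.
frac = norm.cdf(spread / sigma) - norm.cdf(-spread / sigma)
print(f"expected within +/-{spread:.0f} km: {frac:.1%}")  # ~86.6%
```

So a ~15 km scatter is, if anything, a bit tighter than the within-type variability would predict.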

Second, if you use this as a correction for something else, you should be considering just how sensitive your results are to errors in CRUST1.0 (or whatever version you use). What happens if, instead of using the exact values provided, you draw values from the distributions evident in Mooney et al. for each value and redo your inversion or analysis? If your analysis really changes, best to be careful: your sensitivity to CRUST1.0 is too great for the analysis you want. A poorly explored aspect is the lateral covariance of deviations from the mean for different properties. That is, if one pixel has, say, an unusually thick crust, it may be quite likely that the neighboring pixel is also unusually thick. If that likelihood is higher than the original distribution would suggest, then you have to account for covariance in estimating error.
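
A minimal Monte Carlo sketch of that test, in Python. The model layout, the `crust_correction` stand-in and all the numbers are assumptions for illustration; your actual inversion goes where the correction is computed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a few CRUST pixels: thickness (km) and mean crustal Vp
# (km/s), with scatter read off the Mooney et al. (1998) histograms.
crust = {"thickness_km": np.array([35.0, 42.0, 50.0]),
         "vp_km_s":      np.array([6.3, 6.4, 6.5])}
sigma = {"thickness_km": 10.0,   # orogen-class scatter
         "vp_km_s":      0.25}

def crust_correction(model):
    # Hypothetical correction: vertical one-way P travel time through the crust.
    return model["thickness_km"] / model["vp_km_s"]

# Redo the "analysis" many times with values drawn from the distributions.
draws = [crust_correction({k: v + rng.normal(0.0, sigma[k], v.shape)
                           for k, v in crust.items()})
         for _ in range(1000)]
print("1-sigma spread of the correction (s):",
      np.round(np.std(draws, axis=0), 2))
```

If that spread rivals the signal you are after, your analysis is too sensitive to CRUST errors to support the conclusion you want.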

Is any of this done today? Maybe, but it doesn’t show up well in the literature.  Some workers will compare results from CRUST2.0 with a vanilla crust that is assumed to be in Airy isostatic equilibrium to get a handle on the magnitude of the effects from the crustal model, and that is a step in the right direction. But we know Airy isostasy is a poor approximation, so there is always a bit of a bias towards CRUST model results.
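
For reference, the "vanilla" Airy crust in such a comparison is easy to write down. A minimal sketch, with conventional density values that are assumptions here rather than anything tied to a CRUST release:

```python
def airy_thickness_km(elev_km, t0=35.0, rho_c=2800.0, rho_m=3300.0, rho_w=1030.0):
    """Crustal thickness (km) for an Airy-compensated crust.

    elev_km: surface elevation, negative for ocean depth (km)
    t0:      assumed reference crustal thickness at sea level (km)
    Densities (kg/m^3) are conventional textbook values.
    """
    if elev_km >= 0.0:
        # Land: a crustal root supports the topographic load.
        return t0 + elev_km * rho_m / (rho_m - rho_c)
    # Ocean: the lighter water column is balanced by thinner crust.
    return t0 + elev_km * (rho_m - rho_w) / (rho_m - rho_c)

# A 3 km high orogen implies ~54.8 km of crust; a 5 km deep ocean ~12.3 km.
print(round(airy_thickness_km(3.0), 1), round(airy_thickness_km(-5.0), 1))
```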

None of this gets at the last two parts of CRUSTX.X models: the shear velocities in CRUST5.1 were derived from a P-wave to S-wave empirical regression, as were the density values.  This basically means that the only remotely independent variable in CRUSTX.X is the P-wave velocity.  Paying much attention to the S-wave or density values is probably misplaced.
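
For the record, those empirical regressions are of the flavor of, e.g., Brocher's (2005, BSSA) polynomial fits–which the wish list below suggests as an alternative to Christensen and Mooney. A sketch of that style of conversion (the coefficients here are Brocher's published ones; CRUST5.1 used its own relations):

```python
def vs_from_vp(vp):
    """Brocher (2005, BSSA) 'regression fit': Vs (km/s) from Vp (km/s).

    Valid roughly for 1.5 < Vp < 8.5 km/s; one example of the kind of
    empirical Vp-to-Vs relation at issue, not the one CRUST5.1 used.
    """
    return (0.7858 - 1.2344 * vp + 0.7949 * vp**2
            - 0.1238 * vp**3 + 0.0064 * vp**4)

def density_from_vp(vp):
    """Brocher's (2005) polynomial fit to the Nafe-Drake curve (g/cm^3)."""
    return (1.6612 * vp - 0.4721 * vp**2 + 0.0671 * vp**3
            - 0.0043 * vp**4 + 0.000106 * vp**5)

# A middle-crust Vp of 6.5 km/s maps to Vs ~3.77 km/s and rho ~2.83 g/cm^3.
print(round(vs_from_vp(6.5), 2), round(density_from_vp(6.5), 2))
```

The point stands either way: Vs and density in these models are functions of Vp, not independent observations.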

Given the use of CRUSTX.X, perhaps it is something worth fixing.  If so, GG would like to offer some prescriptions for improving the situation:

  1. Make the raw data being used public.  Yes, there are some points that would vanish because some folks provided data with restrictions, but the inflow of new observations should overwhelm that.  Ideally a way for individuals to contribute their own observations would help.  Yeah, something like this is probably developing in IRIS or with the various NSF digital earth initiatives like EarthCube. Make it obviously linked to CRUST.
  2. Describe how the dataset is built.  Yeah, sausage factory, GG understands. But the Mooney et al. paper isn’t exceptionally clear on all points, and presumably some things are now being done differently in CRUST2.0 and CRUST1.0. If we know what is going on, we know what we can trust and what we shouldn’t trust.
  3. Quit distributing derived parameters with CRUST.  If this is a P-wave model, then just distribute it that way.  Providing tools to allow workers to convert P-velocities to other parameters is fine (and allows for experimentation in sensitivity there too, e.g., using Brocher instead of Christensen and Mooney), but unless separate constraints are used, let’s not make it seem as though there is more to this than there really is. With the rapid growth of shear wave crustal models, though, it is quite possible that an independent S-wave version of CRUST could be made.
  4. Provide uncertainties with CRUST releases. Don’t be afraid of large numbers–if we don’t know, we don’t know.
  5. Begin to research the spatial statistics of these values. Is there spatial coherence in the deviations? At what distance does it die off? Are there any negative covariances? You need (1) done for anybody to make headway with this.
  6. Develop tools to propagate uncertainties related to CRUSTX.X. For instance, some teleseismic body wave tomographers use CRUST2.0 to remove the crust. If we apply the uncertainties from (4) without covariances, we might blow up the uncertainties on our residual travel times. For many refraction profiles, for instance, the intercept time of the Pn branch is well constrained, which means that while lower crustal velocities and thicknesses might individually be uncertain, the travel time across the lower crust is pretty well constrained, and so the covariance between parameters can collapse the total uncertainty on some parameters like vertical travel time (a toy demonstration follows the list).
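
Here is that toy demonstration, a Monte Carlo in Python. All the numbers are made up for illustration: independent draws of lower-crust thickness and velocity give a broad spread of vertical travel times, while draws tied to a well-constrained intercept time keep the travel time tight even though each parameter alone stays uncertain:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed lower crust: ~15 +/- 3 km thick, Vp ~6.9 +/- 0.3 km/s.
h = rng.normal(15.0, 3.0, n)

# Case 1: velocity drawn independently of thickness.
v_indep = rng.normal(6.9, 0.3, n)
t_indep = h / v_indep  # vertical one-way travel time, s

# Case 2: refraction intercept times tie h and v together; mimic that by
# requiring each (h, v) pair to match a tightly constrained travel time.
t_data = rng.normal(15.0 / 6.9, 0.05, n)  # well-constrained time, s
v_corr = h / t_data                       # velocity consistent with h and t
t_corr = h / v_corr

print(f"sigma(v), covariant draws:   {np.std(v_corr):.2f} km/s")  # still large
print(f"sigma(t), independent draws: {np.std(t_indep):.2f} s")    # ~0.45 s
print(f"sigma(t), covariant draws:   {np.std(t_corr):.2f} s")     # ~0.05 s
```

The individual parameters remain poorly known, but their covariance leaves the quantity the data actually constrain–the travel time–tight.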

Easy? No.  Crust is an ugly mess and conveying how well or poorly we know it is hard work.  Clearly most researchers don’t want to reproduce the data gathering Mooney and associates did or Laske and associates (presumably) continue to do.

