The necessity of uncertainty: Part 1

GG was recently dismayed by student “error analyses” in some reports that simply amounted to “well, we could have made a mistake”.  As awful as these are, they are better than some of what is published in the professional literature these days.

We have so much data, so many big computers, so many clever coders that we can crunch and process huge datasets and then, in the end, the answer emerges. There it is, usually in blue and red, the world beneath our feet! Ta-da!

But wait. One big new model says the world at this point is red, but another says it is blue. Which is it? Why should we believe one or the other? All too often, a new model says nothing about why it is better or more believable than a previous model. In essence, what you want is an error bar. Good luck finding that in a typical tomography paper, or a numerical modeling paper. Error bars are out of fashion.

This is worth a little investigation…

Let’s start with something relatively uncontroversial: Bouguer gravity anomalies. Errors in raw gravity measurements are typically in the tens of microgals, well below the level of concern of all but the most detailed surveys. Making ties into absolute-gravity backbone surveys adds a bit more, but the big uncertainties come in through three corrections. First, there is the free-air correction, weighing in near 0.3 mGal/m. In the old days (which is still when most of the gravity data in use was collected), quality codes were assigned to points to reflect the quality of the elevation control. Surveyed benchmarks good to maybe 10 cm in elevation were the gold standard, elevations marked on topographic maps were good to maybe 30 cm, and elevations read off a topo map by interpolating between contours were near garbage (technically +/- half a contour interval, which is over 10 m in some rugged areas). The Bouguer correction is tied to the elevation and to an assumed density of rock in a slab beneath each station, typically 2.67 g/cc; it partially offsets the elevation error from the free-air term, but errors in the reduction density can be an issue. The final problem is the terrain correction, which accounts for the attraction of nearby mountains and the lack of same from nearby valleys. Traditionally (and again, most gravity data out there was acquired with these traditions) terrain corrections are thought to be good only to +/- 10%. (Today we can knock these errors way down with very high resolution topography, but little data has been so reduced.) Terrain corrections are often in the 5-10 mGal range in reasonably rough terrain but can hit ~100 mGal on local peaks, which have often been targets of gravity surveys because they tend to have benchmarks, a carryover from the days when all the surveying was done from peaks.
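To make that a bit more concrete, here is a minimal back-of-the-envelope sketch (not from any particular survey; the station numbers are made up) of how those error sources might be stacked in quadrature, assuming they are independent. The free-air gradient and slab factor are the standard reduction constants; everything else is illustrative.

```python
import math

# Standard reduction constants
FREE_AIR_GRADIENT = 0.3086   # mGal per meter of elevation
SLAB_FACTOR = 0.04193        # mGal per meter per (g/cm^3): the 2*pi*G slab term

def bouguer_sigma(elev_sigma_m, terrain_corr_mgal,
                  terrain_frac_sigma=0.10, density=2.67,
                  density_sigma=0.0, elevation_m=0.0,
                  meas_sigma_mgal=0.03):
    """Rough 1-sigma uncertainty of a Bouguer anomaly, combining the error
    sources discussed above in quadrature (assumes independent, roughly
    Gaussian errors -- a simplification)."""
    # Elevation error propagates through the combined free-air + slab gradient
    elev_term = (FREE_AIR_GRADIENT - SLAB_FACTOR * density) * elev_sigma_m
    # Terrain correction traditionally trusted only to ~ +/-10%
    terrain_term = terrain_frac_sigma * terrain_corr_mgal
    # Reduction-density error scales with station elevation
    density_term = SLAB_FACTOR * elevation_m * density_sigma
    return math.sqrt(meas_sigma_mgal**2 + elev_term**2 +
                     terrain_term**2 + density_term**2)

# Valley-floor station: surveyed benchmark (~10 cm), small terrain correction
print(bouguer_sigma(elev_sigma_m=0.1, terrain_corr_mgal=2.0))    # ~0.2 mGal
# Ridge-crest station: map-read elevation (~10 m), ~100 mGal terrain correction
print(bouguer_sigma(elev_sigma_m=10.0, terrain_corr_mgal=100.0)) # ~10 mGal
```

With a surveyed valley-floor station versus a peak station carrying a map-read elevation and a ~100 mGal terrain correction, the estimated uncertainty jumps from a few tenths of a mGal to around 10 mGal, the same order as the Panamint example that follows.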

So, if you are well-versed in gravity studies, you know to work with the raw data points rather than gridded databases, to look for those elevation quality codes, and to keep in mind the uncertainty from the terrain correction. Typically the reduction density isn’t as much of an issue because most models are relative to that value, so you simply add it back in to get your true densities. Given all this, an east-west cross section through, say, the Panamint Range might have very well defined points out in Death Valley with uncertainties under 1 mGal, but some points in the Panamint Range itself might carry uncertainties of 15 mGal, which is a pretty serious error. The good news is that most of the folks doing detailed gravity analyses know all this and take it into account (though without formal errors, unfortunately) in making their interpretations.

But what about when the system is open, when non-specialists use the data? Consider one of GG’s favorite whipping boys, Crust5.1 and its progeny. The model has cells where multiple seismic profiles provide a pretty robust estimate of crustal structure (though those individual models usually lack any formal uncertainty), but it also has cells with no direct seismic observations, where other criteria (geologic history, elevation, maybe the number of fast food restaurants) are used to assign a crustal structure. There is no grid of uncertainties, and indeed the underlying dataset has remained dark for far too long. Now geodynamicists need to remove the crustal contribution from gravity or other seismic observables; presumably it matters that these values vary from place to place, or else there would be no point in making the correction. But, of course, in many places the crustal model will be wrong. How wrong? Nobody knows. This information gets incorporated into geodynamic models and produces, say, estimates of dynamic topography (again, with no uncertainties). Say a geomorphologist then picks up those estimates to help understand erosion rates within an orogen. As no meaningful uncertainties have been passed along at any point, how could the geomorphologist test the robustness of their interpretation? You could try sticking in other random numbers, but do you know the distribution of the likely errors? It is likely that we have many studies built on such unstable foundations.
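If you did know, or were willing to guess, an error distribution, the test itself would be easy; the problem is that the guess does all the work. A toy Monte Carlo sketch, with entirely made-up numbers and a crude slab approximation standing in for the crustal correction, shows how an unstated Moho-depth error could dominate a “corrected” gravity value:

```python
import random

# Toy Monte Carlo: how an unstated error in a crustal model's Moho depth feeds
# into a "crust-corrected" gravity value. Every number here is illustrative.
SLAB_FACTOR = 0.04193        # mGal per m per (g/cm^3): simple slab approximation
DENSITY_CONTRAST = 0.4       # g/cm^3, assumed crust-mantle density contrast

def crust_corrected(observed_mgal, moho_km, reference_moho_km=35.0):
    """Strip a slab approximation of the crustal-root effect from an observation."""
    root_m = (moho_km - reference_moho_km) * 1000.0
    return observed_mgal + SLAB_FACTOR * DENSITY_CONTRAST * root_m

observed = -150.0            # mGal, invented observation
model_moho = 45.0            # km, taken from a crustal model with no stated error

# Suppose the model's Moho were really uncertain by +/- 5 km (who knows?):
trials = [crust_corrected(observed, random.gauss(model_moho, 5.0))
          for _ in range(10000)]
mean = sum(trials) / len(trials)
spread = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
# The spread dwarfs the measurement error entirely
print(f"corrected anomaly: {mean:.0f} +/- {spread:.0f} mGal")
```

The point is not the particular numbers; it is that without a defensible distribution for the crustal-model error, there is no way to say whether the downstream signal is real.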

Yeah, yeah, yeah, GG hears you mutter.  All well and good for some perfect life, but when have those uncertainties really changed anything? OK, let’s do plate reconstructions.

Once upon a time there were plate circuit reconstructions and “fixed hot spot” reconstructions, and lo, they would give different results. But when Joanne Stock and Peter Molnar finally worked out a means of propagating uncertainty through plate circuits, we could actually test the assumption that hot spots were all fixed. Not because the reconstructions disagreed, which they always had, but because you could show how much uncertainty was in the plate circuit reconstruction, and therefore that the hot spots had to have moved. Indeed, you could go further, as Stock and Molnar did, and show that some ocean floor in the Pacific had to have been created in a place currently occupied by continental crust: from a plate circuit reconstruction, they put a lower bound on the amount of extension that had to have occurred in the southwestern U.S. In both cases, without error bars you might attribute the discrepancies to any number of problems. With the error bars, you knew a lot of things that were not the cause of the discrepancy, leaving you with very few choices. In essence, if you want to be able to reject a hypothesis, you have to have error bars.
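The logic here does not require the full Stock-and-Molnar machinery of rotation covariances; the cartoon version is simply a comparison of a discrepancy against propagated uncertainty. The numbers below are invented purely to show the shape of the argument:

```python
import math

def consistent(predicted, observed, sigma_pred, sigma_obs, n_sigma=2.0):
    """Can the difference between a prediction and an observation be explained
    by their combined (assumed independent) uncertainties?"""
    combined = math.sqrt(sigma_pred**2 + sigma_obs**2)
    return abs(predicted - observed) <= n_sigma * combined

# Cartoon numbers: predicted vs. reconstructed hot spot position mismatch (km).
# The mismatch is the same in both cases; only the error bars differ.
print(consistent(0.0, 400.0, sigma_pred=500.0, sigma_obs=50.0))  # True: cannot reject fixity
print(consistent(0.0, 400.0, sigma_pred=100.0, sigma_obs=50.0))  # False: the hot spot must have moved
```

Same discrepancy, opposite conclusions; only the error bars tell you which applies.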

More later…

 
