Make things as simple as possible, but no simpler. Occam’s razor. Reductionist science lives on finding an underlying structure that accounts for the important differences in observations. If you can explain a bunch of observations with one rule, that beats having a special rule for each observation. But is this really a (or the) guiding principle of science?
Well, arguably the most parsimonious explanation for stuff is “God made it that way.” Why did we abandon such a universal explanation for everything? While today we look to science for explanations about why something happens (auroras, shooting stars, earthquakes, tsunamis), it feels like the origin of science was the more prosaic “what will happen if I do this?” Flinging things at enemies was a popular option in warfare for a long time, but the trial-and-error approach isn’t so wonderful if your enemy, seeing where you are firing from, is quicker to lob a shell back at you more precisely. Recognizing that there are rules that are quite predictable gives you an edge–you can get things done more efficiently or even do things you previously couldn’t do at all. You don’t need to answer “why is there gravity” to use a theory of gravity to do things like go to the moon.
So maybe science is being parsimonious while being able to predict things. Yet some theories look less than optimally parsimonious. The Standard Model of particle physics looks like something Rube Goldberg might have come up with. Is string theory really parsimonious? You get the feeling Occam’s Razor will draw blood on some pretty well-established theories.
Earth science really slams into these problems. Say you want a theory of how mountain ranges are created. You look today and see the Himalaya rising as India hits Asia. OK, maybe mountain ranges are made as two continents collide. Oh, but we have the Andes, too, and mountains in Alaska. Um, OK, well, mountains are made where two plates collide. OK, great. A fairly simple explanation that allows us to look for mountains. (We’ll put aside the places where plates collide and all we get is a few volcanoes.)
That explain all mountains? It does seem helpful for the Appalachians and Urals and Alps. How about the Sierra Nevada? Assuming the young Sierra story holds water (it is argued), the range has largely risen with the plates not colliding. Seems like trouble for our universal mountain-building theory. Or the ranges of the Basin and Range; why is all that going on? Sure seems distant from the plate boundary.
But then we have the Rockies about 1,000 km from the edge of a plate. Why are the Rockies there instead of where the plates were apparently colliding? Maybe a plate was scraping the bottom of North America. Maybe the Colorado Plateau was really strong. Maybe there was dynamic flow in the mantle. Maybe the Ancestral Rockies had set things up. How universal and parsimonious is our plates-colliding theory if we keep finding troublesome mountains?
In a weird way, earth science almost moves in the opposite direction of, say, particle physics. The physicists are looking for the one equation to rule them all; earth scientists are teasing out all the different ways Earth can do something. Parsimony in earth science is almost backwards from the way a lot of folks regard Occam’s Razor. We will hone an explanation to its bare essentials and then compare with all the examples we have. The ones it explains we can set aside. The ones it cannot we go on to investigate. There are two possibilities: our original explanation was wrong and focused on immaterial aspects, or there is more than one way to achieve some outcome. The great challenge in all this is to somehow sidestep the features that are not important while really nailing the ones that matter.
Consider the Rockies again. A fairly likely candidate for the same process is in South America, the Sierras Pampeanas. A paper some time ago pointed out that the geometry of these ranges (length and width) looked to be about the same as in the Rockies, and the bounding faults are reverse or thrust faults in both places. Is this then the key element that provides the insight into the origin of the Rockies? Some think so, but GG (and some others) have argued this is simply what happens when you squish an area in a continental interior with a thin cover of sedimentary rocks. Kind of like how you can’t really tell whether a nail was driven by a hammer or a nail gun; different tools can produce the same outcome. GG argues that it is the source of the compressional stress that we care about and that important differences between the Sierras Pampeanas and the Rockies cannot be dismissed. Which is really right? With so few possible candidates, it is hard to tell. Occam’s Razor has little effect when your choices are so few and potential confounding features are so widespread.
Parsimony is an important tool, but not really the be-all and end-all some make it out to be. There is a temptation to force discrepant cases into a theory’s box when you value parsimony over all. Sometimes it is the right call, sometimes not. Relying on Occam to answer the question can be a big mistake.
Five years ago GG pointed to a paper threatening the cherished assumption in petrology that the pressure recorded by minerals is equal to the overburden pressure. GG has never been comfortable with that assumption, and missed (until now) a paper that is far more comprehensive in its impact. And frankly, it is so blazingly obvious that GG is embarrassed that this has been under the radar for so long. The paper is Yamato, P., and Brun, J.P., 2017, Metamorphic record of catastrophic pressure drops in subduction zones: Nature Geoscience, v. 10, p. 46–50, doi: 10.1038/ngeo2852. The killer money figure is this:
All the dots in the top panel are peak pressures reported in the literature versus the subsequent nearly isothermal pressure drop also reported, where the circled points actually have that second pressure separately measured. The first thing is that this linear array makes no sense: it would almost require that rocks go down on a spring: the farther down they go, the more rapidly they bounce back up. You’d think some rocks would just stay down there and heat up and that the subsequent rise could well be independent of the journey down. The second thing is that this linear array makes perfect sense if you are looking at the difference between the pressure when the rocks are in horizontal compression versus horizontal extension, which is what the bottom panel is illustrating. In essence, if the vertical normal stress (σv) is constant, then at failure it is σ3 in compression but σ1 in extension. With pressure being an average of the stresses, you then get a massive pressure drop, greater if the rock is in the brittle regime (∆PFRIC) than in the ductile regime (∆PDUC). The authors estimate these curves as shown by the solid lines in the top panel, and it sure seems like the simplest explanation for these massive decompression events is simply that the stress field changed.
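One can sketch why the drop should scale with σv (and hence with depth). This is back-of-envelope only, not the authors’ exact formulation: it approximates σ2 as the mean of σ1 and σ3 and assumes a round friction coefficient of μ = 0.6 for optimally oriented brittle faults.

```latex
% Pressure as mean stress, with \sigma_2 \approx (\sigma_1+\sigma_3)/2:
P \approx \tfrac{1}{2}\,(\sigma_1 + \sigma_3)
% Coulomb frictional limit on the stress ratio at failure:
K = \left(\sqrt{\mu^2+1} + \mu\right)^2 \approx 3.1 \qquad (\mu = 0.6)
% Compression: \sigma_v = \sigma_3, so \sigma_1 = K\sigma_v at failure:
P_c \approx \tfrac{1}{2}\,(1 + K)\,\sigma_v
% Extension: \sigma_v = \sigma_1, so \sigma_3 = \sigma_v/K at failure:
P_e \approx \tfrac{1}{2}\,(1 + 1/K)\,\sigma_v
% Switching regimes at constant \sigma_v:
\Delta P = P_c - P_e \approx \tfrac{1}{2}\,(K - 1/K)\,\sigma_v \approx 1.4\,\sigma_v
```

The drop is proportional to σv, so deeper rocks record bigger drops: exactly the kind of linear array in the top panel.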
How this changes a tectonic interpretation of the geobarometry is illustrated in their Figure 4:
The black line in the left-hand graph is what has typically been interpreted to date; in the right-hand graph they correct for compressional and extensional stresses. Instead of a rock blasting its way to the surface and then stopping, in the right-hand panel the rock goes down and then comes back up, with the vertical axis now being lithostatic pressure.
Now this isn’t without a pile of caveats and potential flaws. First, at these depths there is no reason for the principal normal stresses to be aligned with the Earth’s surface, so this is a worst-case scenario. Second, it is a bit of a surprise that the points, which go all the way to over 4 GPa, seem to be in the brittle field. GG suspects that many of these rocks exhibit ductile features that would seem to contradict the inference of being in the brittle field. Third, a change in the stress field of this magnitude is pretty daunting and poses a challenge to the geodynamics community: how can stresses change that much? But if the rocks are sitting at roughly the same depth and temperature for a significant time, this might not be anything like the problem of near-isothermal decompression, which does have some severe time constraints. But regardless of the challenges, frankly this makes way more sense than rocks just springing back up to some level and sitting there.
There is a follow-up paper that more fully develops some formalisms for investigating this effect in general: Bauville, A., and Yamato, P., 2021, Pressure-to-Depth Conversion Models for Metamorphic Rocks: Derivation and Applications: Geochemistry, Geophysics, Geosystems, v. 22, article e2020GC009280, doi: 10.1029/2020GC009280.
Now this paper dealt with high pressure-low temperature rocks typically associated with subduction zones, and this strongly suggests that inferences of continental rocks going to 100 km depths are mistaken. But there are a whole bunch of rather similar looking curves that are not quite as dramatic but similarly difficult to understand without this mechanism. GG is referring to the widespread evidence for massive decompression of lower crustal rocks seen in the Sevier hinterland of Nevada, Utah, and southeastern California. (For instance, one can work outward from the overview of Hodges and Walker, GSA Bull., 1992.) This has long been a major mystery as shallow level extensional structures are largely missing. Many workers have noted that Miocene and younger Basin and Range extension has led to very deep basins being created, but equivalent Cretaceous and early Tertiary sedimentary piles are rare.
This brings us to a second paper that considers this problem in the metamorphic rocks of eastern Nevada: Zuza, A.V., Thorman, C.H., Henry, C.D., Levy, D.A., Dee, S., Long, S.P., Sandberg, C.A., and Soignard, E., 2020, Pulsed Mesozoic Deformation in the Cordilleran Hinterland and Evolution of the Nevadaplano: Insights from the Pequop Mountains, NE Nevada: Lithosphere, v. 2020, Article ID 8850336, doi: 10.2113/2020/8850336. On the basis of geologic mapping and new geochronological data, these workers conclude that both Cretaceous thickening and decompression are less significant in this area, possibly indicating that the geobarometry in the nearby Ruby and East Humboldt mountains has been affected by overpressure issues like that considered above. And when you toss in structural evidence in other core complexes for changes between shortening and extension (e.g., Wells, M.L., Hoisch, T.D., Cruz-Uribe, A.M., and Vervoort, J.D., 2012, Geodynamics of synconvergent extension and tectonic mode switching: Constraints from the Sevier-Laramide orogen: Tectonics, v. 31, TC1002, doi: 10.1029/2011TC002913) it seems that much of the geobarometry in the western U.S. is due for reexamination.
Overall, this feels like a liberation of sorts. The decompression problems had produced some imaginative solutions that might no longer be necessary (e.g., Wernicke, B.P., and Getty, S.R., 1997, Intracrustal subduction and gravity currents in the deep crust: Sm-Nd, Ar-Ar, and thermobarometric constraints from the Skagit Gneiss Complex, Washington: Geological Society of America Bulletin, v. 109, p. 1149–1166.). The next few years might see wholesale revision of what was going on in the Sevier hinterland.
One of the most popular explanations for the High Plains is that they were dragged upward by a buoyant body, probably in the upper mantle under the Rio Grande Rift. This is arguably the only late Miocene to Pliocene event one could plausibly associate with post-Ogallala Formation tilting. GG has tended to be dismissive of this but hasn’t been through the math. Now there must be a simple analysis somewhere in the literature, but GG isn’t seeing it, so let’s make a simple model and see what it takes to make it work. We’ll assume a north-south trending horizontal cylinder with some density contrast under an elastic plate represents the source of uplift (although many folks like a “broken” plate, the physics of such a boundary are inappropriate here). We’ll place the cylinder at a depth z and calculate the uplift and the gravity anomaly from this body. We’ll tweak these until we can fit the observations.
Now we have a little difficulty in that the modern topography is due to more than just the Rift: the sub-Ogallala unconformity reveals rather clearly that there were east-flowing streams when deposition began, meaning that topography back then was tilted to the east, though that potentially was very close in time to deposition. So that topography was presumably compensated by some mechanism that might be well distributed (e.g., variation in crustal thickness). Since the free-air anomaly across the Plains is near 0, the Bouguer anomaly for locally compensated topography should decrease by 0.112 mGal per meter of elevation. We’ll just add that to our theoretical models as needed.
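That 0.112 mGal/m number is just the infinite-slab (Bouguer) effect for a standard reduction density; a quick check, assuming the usual ρ = 2670 kg/m³:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0    # standard Bouguer reduction density, kg/m^3 (an assumption)

# Infinite-slab (Bouguer) effect per meter of topography; 1 mGal = 1e-5 m/s^2
slab_mgal = 2 * math.pi * G * rho * 1e5
print(f"{slab_mgal:.3f} mGal per meter of elevation")  # ~0.112
```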
The problem is that we don’t know how much topography we want to ascribe to the late Cenozoic Rift: one extreme view (seemingly that of Eaton, 1986, 1987, 2008) is that things were pretty flat prior to the Rift on an east-west profile, with major rivers going more or less directly to the coast to the south-southeast; another that there was some gradient, though much lower than today (e.g., McMillan et al., 2002). Let’s tackle both and see what we get. In both cases we will focus on the topography east from about 105°W and we’ll place the cylinder at 106°W, under the axis of the Rift.
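A minimal sketch of the forward problem: the gravity anomaly of an infinite horizontal cylinder plus the flexural uplift of an unbroken elastic plate over a buoyant line load. All the numbers (density contrast, radius, depth, elastic thickness) are placeholders to be tuned against observations, not results.

```python
import numpy as np

# --- placeholder parameters, to be tuned against observations ---
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
drho = -50.0           # density contrast of buoyant cylinder, kg/m^3
R = 20e3               # cylinder radius, m
z = 100e3              # cylinder depth, m
Te = 30e3              # effective elastic thickness, m
E, nu = 70e9, 0.25     # Young's modulus (Pa), Poisson's ratio
rho_m, g = 3300.0, 9.8 # mantle density (kg/m^3), gravity (m/s^2)

x = np.linspace(-500e3, 500e3, 1001)  # profile perpendicular to the cylinder

# Gravity anomaly of an infinite horizontal cylinder (line mass lam per length)
lam = drho * np.pi * R**2
g_mgal = 2 * G * lam * z / (x**2 + z**2) * 1e5  # 1 mGal = 1e-5 m/s^2

# Flexure of an unbroken plate under an upward line load P (N per meter)
D = E * Te**3 / (12 * (1 - nu**2))   # flexural rigidity
alpha = (4 * D / (rho_m * g)) ** 0.25  # flexural parameter
P = -drho * g * np.pi * R**2           # upward buoyancy force per unit length
xa = np.abs(x) / alpha
w = (P / (2 * alpha * rho_m * g)) * np.exp(-xa) * (np.cos(xa) + np.sin(xa))

print(f"peak uplift {w.max():.0f} m, peak gravity {g_mgal.min():.2f} mGal")
```

The game is then to adjust drho, R, z, and Te until both the tilt east of the Rift and the (near-zero) gravity signal are matched.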
Some years ago, Mt. Elbert jumped from 14,433′ to 14,440′ above sea level. Some Coloradans hope this might be enough to push it past Mt. Whitney in California, but that peak too rose up, from 14,494′ to 14,501′. The change was because of the shift from the National Geodetic Vertical Datum of 1929 (NGVD29) to the newer (and presumably more accurate) North America Vertical Datum of 1988 (NAVD88). While the change did nothing for the local relative elevations (Mt. Massive, for instance, remained 12′ lower than nearby Mt. Elbert), there were some relative changes at the scale of the country.
The datum can be thought of (and is usually stated as) where sea level would be, though technically this isn’t quite true. Thus a change in the datum came about because of a change in the underlying estimate of where sea level was. As you might well imagine, sea level is not easy to divine here in Colorado, but measurements with satellites as well as lots of surface surveys provided a lot of information.
However, it has turned out that there were some mistakes in NAVD88. Tide gauges on the West Coast were computed to be as much as 1.25 m above actual sea level. NGVD29 was tied to a number of tide gauges, but NAVD88 was tied in through a single gauge in eastern Canada. The result is that the datum differs from estimates of the geoid:
So what is the solution? A new datum, of course: the North American-Pacific Geopotential Datum of 2022 (NAPGD2022). And while this is still being finalized, we have some idea of what the new datum will look like:
A sharp eye might see that Mt. Whitney might decline by 0.75 m (2.5′) while Mt. Elbert might only go down 0.5 m (1.6′). So Whitney (maybe 14,499′) will still be safely above Elbert (maybe 14,439′). But all the T-shirts and little fake geodetic monuments you can buy will change….
At the same time, there are changes in the horizontal NAD83 datum (the new one is NATRF2022) of a meter or two; both new datums will now move with the continent as geodesy finally has to deal with plate tectonics, post-glacial rebound and other assorted changes…
P.S. this change won’t deprive Colorado of any of its 14ers…but while it would seem peak 13,001 is at risk of dropping out of the list of 13ers, that elevation was an NGVD29 elevation and so was probably more like 13,008 in NAVD88…
So Howard Lee over at Ars Technica took a swing at how our understanding of global tectonics has been changing over the past 40 or 50 years and wrote a lengthy article on it. It is full of quotes and assertions that really don’t hang together very well, making a certain geophysicist kind of grumpy. It doesn’t seem that any of the scientists quoted were really saying anything wrong, but the assembly in the article, which doesn’t seem to recognize the discrepancies nor fully master the techniques being used, can lead to a sense of “WTF?”
One of the bread-and-butter things seismologists do is locate earthquakes. There are kind of two main flavors of this: one is global and the other is local/regional. GG doesn’t do global relocations (but will point out that depth of such locations often relies on the presence of depth phases like pP reflections from the surface or the relative strength of surface waves) but has done local event locations. And there are gotchas out there that often aren’t as appreciated as they should be because all too often data is pitched into an inversion code and the resulting output is accepted as correct.
We’ll start with some simple things and move into somewhat complex stuff a bit but will stop short of the really ugly problems of locating events in a 3-D structure that is in part determined by those same travel times.
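To fix ideas, here is a toy version of local event location: a grid search over epicenter with the origin time solved at each node. Everything is invented for illustration (uniform half-space velocity, made-up station layout, noise-free picks); real codes iterate linearized inversions in layered or 3-D models.

```python
import numpy as np

v = 6.0  # assumed uniform P velocity, km/s

# Invented station layout (x, y in km) and a "true" epicenter + origin time
stations = np.array([[0, 0], [40, 5], [10, 35], [-25, 20], [30, -30]], float)
true_xy = np.array([12.0, 8.0])
true_t0 = 3.0

dist = np.hypot(stations[:, 0] - true_xy[0], stations[:, 1] - true_xy[1])
t_obs = true_t0 + dist / v  # synthetic, noise-free arrival times (s)

# Grid search over epicenter; origin time has a closed-form best fit per node
xs = np.arange(-50, 50.5, 0.5)
ys = np.arange(-50, 50.5, 0.5)
best = (None, np.inf)
for xg in xs:
    for yg in ys:
        d = np.hypot(stations[:, 0] - xg, stations[:, 1] - yg)
        t0 = np.mean(t_obs - d / v)  # best-fit origin time at this node
        rms = np.sqrt(np.mean((t_obs - t0 - d / v) ** 2))
        if rms < best[1]:
            best = ((xg, yg, t0), rms)

(xb, yb, t0b), rms = best
print(f"located at ({xb:.1f}, {yb:.1f}) km, t0 = {t0b:.2f} s, rms = {rms:.4f} s")
```

Even this toy shows the core trade-off: origin time and distance are entangled, which is one reason depth (not solved for here) is so often poorly constrained.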
So in the previous two installments, we reviewed ideas for how the High Plains got so high and some of the observations out there that bear on this question. Beyond satisfying some curiosity, what does this do for earth science? Why pay money to do this?
Let’s consider three outcomes: that the High Plains gained their elevation by the end of the Laramide orogeny (say, 40 Ma), that they gained their elevation after the deposition of the Ogallala Group (say about 5 Ma), and that they were high, went down, and rose again.
We’ve discussed isostasy a few times here, but today let’s stand back and ask the question, how do we determine what has led to the creation of isostatically supported topography? We will for today put aside the discussions of dynamic topography and just concern ourselves with isostatically supported topography, which seems likely to describe much of the US Cordillera. For this post, we’ll just focus on the crustal part of the problem, leaving the mantle for another day.
OK, first up is that isostasy means that the integral of density from the surface to some depth of compensation (usually somewhere in the asthenosphere) is constant. So how do we get at density at such great depths? At first blush you might think “gravity” as that is the geophysical observable produced by mass. The problem is that gravity is non-unique: you can recreate any gravity field by having a thin surface layer varying in density. Gravity gradients tell you the maximum depth at which an anomaly can lie, and the integral over a broad region tells you the total mass surplus or deficit relative to some reference. Those integrals support isostasy, but the gradients are tough to work with because isostasy is only thought to work well at long enough wavelengths that the strength of the lithosphere becomes irrelevant. So in essence you need to smooth gravity out to appropriate wavelengths–and once you do that, the depth limits in the raw gravity are pretty much gone.
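In symbols (nothing new here, just the standard statement), two columns A and B compensated at depth z_c satisfy an equal-mass condition, and the Airy end-member makes the trade-off concrete. The densities below are assumed round numbers for illustration:

```latex
% Isostatic balance between two columns down to the compensation depth z_c:
\int_0^{z_c} \rho_A(z)\,dz \;=\; \int_0^{z_c} \rho_B(z)\,dz
% Airy end-member: a crustal root of thickness r supports elevation e,
% for crustal density \rho_c and mantle density \rho_m:
\rho_c\, e = (\rho_m - \rho_c)\, r
\quad\Rightarrow\quad
e = \frac{\rho_m - \rho_c}{\rho_c}\, r \approx \frac{r}{5.6}
\qquad (\rho_c = 2800,\ \rho_m = 3300\ \mathrm{kg\,m^{-3}})
```

The point for what follows: very different ρ(z) profiles can satisfy the same balance, which is why extra observables beyond elevation are needed.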
So with gravity being relatively useless, where do we go? Keep in mind that we’ll be wanting to compare two columns to be able to discern what happened at one column relative to the other to produce a difference in elevation.
Consider for a moment the geoid, which is the difference in elevation between a reference spheroid and an equipotential. The geoid has lots of neat properties, among them being directly related to the gravitational potential energy in the lithosphere. It is sensitive to density variations at great depths and so can give us insight into deep earth processes. But there are some issues that casual readers of papers using geoid might want to be aware of.
Geoid has long been recognized as having a sensitivity to greater depths than gravity, but this is a mixed blessing as density variations far below the asthenosphere can affect the geoid, complicating a lithospheric interpretation. The most common approach is to filter the geoid to eliminate long wavelengths that are most sensitive to deep structure–but these same wavelengths are also sensitive to the difference between continents and oceans. In the western U.S., the look you get from the geoid depends on how you filter it. For instance, these are two images of the geoid, one as published in Jones et al., Nature, 1996, and the other with a different filter.
The clearest difference is at the right, where the solid zero line has moved a lot, but also note that the scale of the color bar has changed. It can be a bit hard to compare these, so another way of looking at it is to plot some points from each against each other:
The diagonal line would be where points would plot if both filters yielded the same values. Clearly the southern Rockies (SRM) pick up a lot of power in the degree and order 7-10 range compared with, say, the Sierra Nevada (SN). If interpreting this for potential energy, at D&O >7 taper to 11 the western Great Plains (GP) would have a positive GPE and would be expected to have normal faulting, but at D&O >10 taper to 15 it would be quite negative and you would expect to have compressional stresses and possible reverse faulting.
(Beyond the issues with the edge of the filter is the nature of the taper–a brute force cutoff can produce some artifacts you might not want to interpret.)
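A smooth taper is the usual remedy for that ringing. Here is a generic sketch (not any particular paper’s filter) of a per-degree weight in the “>7 taper to 11” style, to be applied to spherical-harmonic coefficients before synthesizing a map:

```python
import numpy as np

def highpass_taper(l, l_cut, l_full):
    """Per-degree weight: 0 up through l_cut, cosine ramp to 1 at l_full.
    A smooth alternative to a brute-force cutoff, which rings."""
    l = np.asarray(l, float)
    ramp = 0.5 * (1 - np.cos(np.pi * (l - l_cut) / (l_full - l_cut)))
    w = np.where(l <= l_cut, 0.0, np.where(l >= l_full, 1.0, ramp))
    return np.clip(w, 0.0, 1.0)

degrees = np.arange(0, 21)
w = highpass_taper(degrees, 7, 11)  # a "D&O >7 taper to 11" style filter
# multiply each degree-l coefficient set by w[l] before synthesizing the map
```

Different (l_cut, l_full) pairs reproduce the two maps compared above, which is the whole point: the answer depends on a choice the reader rarely sees.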
Anyways, what is the appropriate filter? There is no simple answer for three reasons. One is that the maximum depth you might care about probably varies across the region, so a filter that cuts off in the asthenosphere in one place might also cut off the lower lithosphere in another. Another is that there is significant shallow power in the longer wavelengths/lower orders: continent/ocean boundaries, for instance, have real power at low degrees and orders. So when you filter out the long wavelengths, you can be removing shallow signal as well as deep signal. The third is that the sensitivity with depth is gradational, so a filter won’t fully cut off greater depths unless there is a reduction in power from shallower ones.
(If you are wondering, in the paper we chose D&O 7-11 as the most appropriate filter for our purposes).
So be cautious when a filtered geoid is presented as a purely lithospheric signal, for it could be contaminated with deep sources or cutting off shallow ones.
Recently NSF’s EarthScope program office put out a media announcement with the top ten discoveries they attributed to the soon-to-end program. (EarthScope, for those unfamiliar with the program, originally had three main legs: the Transportable Array (TA) + Flex Array collection of seismometers, the Plate Boundary Observatory (PBO) network of GPS stations, and the San Andreas Fault Observatory at Depth (SAFOD), a drill hole through the fault). What struck GG about this collection was just how little we learned about tectonics, which was a selling point of sorts for the program prior to its start.
Now some of the “discoveries” are not discoveries at all–one listed is that there is a lot of open data. Folks, that was a *design*, not a discovery. A couple are so vague as to be pointless–North America is “under pressure” and there are “ups and downs” in drought–stuff we knew well before EarthScope, so these bullets give little insight into what refinements arose from EarthScope. And then the use of LIDAR to look at displacements of the El Mayor-Cucapah earthquake was hardly a core EarthScope tool or goal even as the program might have contributed funds. So the more substantive stuff might amount to 5 or 6 points.
Arguably PBO has more than delivered and SAFOD disappointed, but GG would like to consider the TA’s accomplishments–or non-accomplishments. TA-related “discoveries” in this list are actually a single imaging result and two technique developments (ambient noise tomography, which emerged largely by happy coincidence, and source back projection for earthquake slip, which is largely a continued growth of preexisting techniques). So in terms of learning about the earth, we are really looking at one result worthy of inclusion.