
Occam’s Cut

In the previous post, we discussed how Occam’s Razor is of little use in some arguments, leading to the principle of least astonishment. But here GG would like to suggest that the sheer immensity of geologic time means that Occam sometimes cuts us off from explanations we need to consider.

In this case, let’s talk Laramide. Orogeny, that is: the creation of the Southern Rocky Mountains between roughly 75 and 45 million years ago. The prevailing explanation is that the subducting ocean floor went down to only about 100 km or so and then flattened, interacting with the continent in a way that made mountains far from the plate edge. It is a nice, compact explanation.

The thing is, there are a lot of places where slabs today are flat and none of them produce anything of the scale of the Laramide Orogeny.  Closest are the Sierras Pampeanas in Argentina, which are far closer to the trench than the Laramide ranges were, among other difficulties. Even looking over past orogenies yields few plausible rivals–maybe the Alice Springs orogeny in Australia, or if you push things hard, perhaps the Atlas ranges in northern Africa. Or, of course, the Ancestral Rockies in almost the same place as the Laramide. But these are just as cryptic and far less common than all the events that created the Appalachians, or the Urals, or the Caledonides, or the bulk of the Alpine-Himalayan system.

Perhaps, when we encounter oddities in the past, we need to recognize that something unusual happened, meaning that Occam’s bias for parsimony might in fact be precisely the wrong bias. For instance, somebody walks up and says he will flip a coin ten times and it will come up heads every time.  He asks a passerby for a coin and then does as he says.  Parsimony says this was luck, but perhaps a better explanation is that it is a trick, involving either an accomplice or sleight-of-hand [scientists are suckers for sleight-of-hand, as the Amazing Randi often showed].
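The arithmetic behind that suspicion is easy to check; here is a minimal sketch in Python, assuming a fair coin (the fairness being exactly what the trick hypothesis doubts):

```python
# Chance that ten flips of a fair coin all come up heads
p = 0.5 ** 10
print(f"P(ten heads in a row) = {p:.5f}")  # ~0.00098, about 1 in 1024
```

A 1-in-1024 shot is not impossible, but it is rare enough that an explanation other than pure luck deserves a hearing.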

Given the number of times slabs probably have been flat, and given the far rarer production of mountain ranges far from the trench, maybe our bias for parsimony should be relaxed–odd and unusual results might demand more than a single cause. Maybe things were a bit Rube Goldberg-ish for a while. In a similar vein, some workers are arguing that the impact ending the Cretaceous was so effective not just because of its size but because of the sulfur-rich rocks it hit (this is in part a response to the failure of other impacts to cause extinctions, and to other extinction events seemingly lacking a coincident impact). Arguably something like this has emerged, or will, in explaining how one branch of the great apes led to humans despite many earlier animal lineages never reaching a similar end. We often focus on the positive outcome–the mountains made, the extinction that happened–and miss how often the simple explanation predicts something that didn’t happen (kind of like the old quip that the stock market predicted nine of the past five recessions). We don’t ask, why are there no mountains in Iowa, for instance; we ask, why are there mountains in Colorado? But perhaps we need to ask both.

Occam reminds us to be distrustful of overly complex explanations, but maybe we need to be careful not to demand too much simplicity. All theories will conflict with some observations in some way; there are always strange things that happen that are coincidences or results of unrelated phenomena.  This reality means that no theory will fit every possible observation; what’s more, we tend to accept more misfits for simpler theories (for instance, the half-space cooling model for ocean floor topography is widely accepted despite all the oceanic plateaus and seamounts one has to ignore to get a decent fit). Given that, we should wield the Razor more carefully lest we cut off our theoretical nose to spite our parsimonious face….

Oops update

GG asked readers whether or not an error in a figure drafted 8 years ago should be corrected and the answer was a resounding yes.  So the figures in question have been redrafted and will go to the journal shortly.  If you are curious, the “dynamic” version (uses Flash) can be found here.

To be clear, the intent is simply to remove a mistake in the past and not to update the figure to reflect how it might get drawn today. So if you find things you don’t agree with that reflect scholarship since 2010 or so, don’t expect a correction, but if there are other mistakes substantial enough that somebody might misinterpret things, let GG know.

Retraction Watch was apparently amused at the notion of polling the web to decide whether or not to update the figure and so ran a story on this episode. When the journal puts out the correction, a link will get posted here.

The Reign of Strain Isn’t Very Plain

Having just marked the anniversary of the 1906 San Francisco Earthquake brings to mind Harry Fielding Reid’s model of elastic rebound for earthquakes, developed from observations of that 1906 quake. The idea is that the earth’s surface moves slowly in opposite directions across a fault over a long time period, straining the rocks near the fault until a critical point is reached; the strained rocks then cause the fault to rupture, allowing each side of the fault to “catch up” with the more distant parts of the earth’s surface farther away.

Much later, when plate tectonics was developed, earth scientists could tell what the average velocities of the plates were over a couple million years from analysis of magnetic anomalies on the seafloor.  When space-based geodesy came along, first with VLBI and then with GPS, geodesists found that the plates are moving today at rates equal to those seen over millions of years.  It seemed as though the earth ran at a smooth and even pace.
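The arithmetic behind those seafloor rates is simple enough to sketch in a few lines of Python; the distance and age below are illustrative values, not measurements from any particular ridge:

```python
# Average plate rate from a seafloor magnetic anomaly: distance from the
# ridge to an anomaly of known age, divided by that age.
# These numbers are illustrative only.
anomaly_distance_km = 100.0  # ridge-perpendicular distance to the anomaly
anomaly_age_myr = 2.0        # age of the magnetic reversal that made it

rate_mm_per_yr = (anomaly_distance_km * 1e6) / (anomaly_age_myr * 1e6)
print(f"Average rate: {rate_mm_per_yr:.0f} mm/yr")  # 50 mm/yr
```

Rates like this, averaged over millions of years, are what matched the GPS velocities measured over mere years.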

Combining these ideas suggested a hope expressed about a hundred years ago: that faults would rupture like clockwork. Every so many years, a span termed the recurrence interval, a fault would rupture in what would be called a characteristic earthquake. Ideally, you could then predict the next earthquake if you knew when the last couple had happened.
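A minimal sketch of that clockwork logic in Python; the event years loosely echo the oft-cited Parkfield, California sequence, but treat the whole thing as a caricature rather than a method:

```python
# The "clockwork" hope in miniature: average the gaps between past ruptures
# and extrapolate to the next one.  Event years loosely follow the
# oft-cited Parkfield sequence; illustrative, not a forecasting tool.
event_years = [1857, 1881, 1901, 1922, 1934, 1966]

intervals = [b - a for a, b in zip(event_years, event_years[1:])]
recurrence = sum(intervals) / len(intervals)  # mean recurrence interval

print(f"Mean recurrence interval: {recurrence:.1f} years")
print(f"Clockwork forecast for next rupture: ~{event_years[-1] + recurrence:.0f}")
```

For Parkfield, this sort of averaging famously suggested a rupture around 1988; the quake arrived in 2004, which rather neatly illustrates the next paragraph.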

This ideal view of the earthquake world has gradually unravelled, with a couple of observations in the past decade indicating that there really is something more variable in how geologic strain accumulates than the elastic rebound model and smooth plate motions would suggest.


Oops

Well, GG has from time to time pointed out mistakes in graphics in papers, so it is only fair that he share a mistake in his own graphics pointed out by others. In Jones et al. (Geosphere, 2011), several panels of Figure 2 place the frontal thrust faults of the fold-thrust belt in the wrong place in Montana (to be clear, this is the upper left part of the full figure below):

[Figures: the corrected map detail (LaramideMistake) and the original Figure 2e (LaramideOrigFig2e)]

The black line with barbs was meant to represent the eastern limit of the fold-thrust belt, but GG apparently mistook a high-angle Laramide-style fault for a low-angle thrust connecting up to the Helena Salient.  The proper line would be close to the red line in the figure above.

So a question to any readers is, do you think this merits a published correction?

[Updated 10:30 AM MDT 4/7 to show the full extent of the figure; poll added 12:06 pm MDT 4/7]

Success Is a Failing Model

Why make a model? For engineers, models are ways to try things out: you know all the physics, you know the properties of the materials, but the thing you are making, maybe not so much.  A successful engineering model is one that behaves in desirable ways and, of course, accurately reproduces how a final structure works. In a sense, you play with a model to get an acceptable answer.

How about in science?  GG sometimes wonders, because the literature sometimes seems confused. From his perspective, a model offers two possible utilities: it can show that something you didn’t think could happen actually could happen, and it can show you situations where what you think you know isn’t adequate to explain what you observe. Or, more bluntly, models are useful when they give what seem to be unacceptable answers.

The strange thing is that some scientists seem to want to patch the model rather than celebrate the failure and explore what the failure means. As often as not, this is because the authors were heading somewhere else and the model failure was an annoyance that got in the way, but GG thinks that the failures are often the more interesting thing. To really show this, GG needs to show a couple of actual models, which means risking annoying the authors. Again. Guys, please don’t be offended.  After all, you got published (and for one of these, are extremely highly cited, so an obscure blog post isn’t going to threaten your reputation).

First, let’s take a recent Sierran paper by Cao and Paterson.  They made a fairly simple model of how a volcanic arc’s elevation should change as melt is added to the crust and erosion acts on the edifice.  They then plugged in their estimates of magma inputs. Now GG has serious concerns with the model and a few of the data points in the figure below, but that is beside the present point. Here they plot their model’s output (the solid colored line) against some observational points [a couple of which are, um, misplotted, but again, let’s just go with the flow here]:

[Figure: Cao and Paterson’s modeled arc elevation history compared with observations (CaoPatFig)]

The time scale runs from today at the left edge to 260 million years ago at the right.  The dashed line is apparently their intuitive curve to connect the points (it is never mentioned in the caption). What is exciting about this?  Well, the paper essentially says “hey, we predicted most of what happened!” (what they actually wrote was “The simulations capture the first-order Mesozoic–Cenozoic histories of crustal thickness, elevation and erosion…”)–but that is not the story.  The really cool thing is the vertically hatched area labeled “mismatch”. Basically, their model demands that things got quite high about 180 Ma, but the observations say that isn’t the case.

What the authors said is this: “Although we could tweak the model to make the simulation results more close to observations (e.g., set Jurassic extension event temporally slightly earlier and add more extensional strain in Early-Middle Jurassic), we don’t want to tune the model to observations since our model is simplified and one-dimensional and thus exact matches to observations are not expected.” Actually there are a lot more knobs to play with than extensional strain: there might have been better production of a high-density root than their model allowed, there might have been a strong signal from dynamic topography, there might be some bias in Jurassic pluton estimates…in essence, there is something we didn’t expect to be true.  This failure is far more interesting than the success.

A second example is from the highly cited paper by Lijun Liu and colleagues in 2008. Here they took seismic tomography and converted it to density contrasts (again, a step fraught with potential problems) and then ran a series of reverse convection runs, largely to see where a high-wavespeed anomaly now under the easternmost U.S. would have come from. The result? The anomaly thought to be the Farallon plate rises up to appear…under the western Atlantic Ocean. “Essentially, the present Farallon seismic anomaly is too far to the east to be simply connected to the Farallon-North American boundary in the Mesozoic, a result implicit in forward models.”

This is, again, a really spectacular result, especially as “this cannot be overcome either by varying the radial viscosity structure or by performing additional forward-adjoint iterations...” It means that the model, as envisioned by these authors, is missing something important. That, to GG, is the big news here, but it isn’t what the authors wanted to explore: they wanted to look at the evolution of dynamic topography and its role in the Western Interior Seaway. So they patched the model, introducing what they called a stress guide but what really looks like a sheet of Teflon on the bottom of North America, so that the anomaly would rise up in the right place, namely the west side of North America. While that evidently is a solution that can work (and makes a sort of testable hypothesis), it might not be the only one.  For instance, the slab might have been delayed in reaching the lower mantle as it passed through the transition zone near 660 km depth, meaning that the model either neglected those forces or underestimated them. Exploring all the possible solutions to this rather profound misfit of the model would have seemed the really cool thing to do.

Finally, a brief mention of probably the biggest model failure and its amazingly continued controversial life.  One of the most famous derivations is the calculation of the elevation of the sea floor from the age of the oceanic crust; the simplest model is that of a cooling half space, and it does a pretty good job of fitting ocean floor depths out to about 70 million years in age.  Beyond that, most workers find that the seafloor is too shallow:


North Pacific and North Atlantic bathymetry (dots with one standard deviation range indicated by envelope) by seafloor age from Stein and Stein, 1992. “HS” is a half-space cooling model and the other two are plate cooling models.
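For the curious, the half-space prediction is compact enough to compute directly. Here is a minimal sketch in Python using the commonly quoted Parsons and Sclater (1977) coefficients; treat the numbers as illustrative, since fits like Stein and Stein’s differ in detail:

```python
import math

# Textbook half-space cooling: seafloor deepens as the square root of age.
# Coefficients are the commonly quoted Parsons & Sclater (1977) values
# (~2500 m ridge depth, ~350 m per sqrt(Myr)); illustrative, not definitive.
def halfspace_depth_m(age_myr, ridge_depth_m=2500.0, coeff_m=350.0):
    """Predicted depth below sea level for seafloor of a given age."""
    return ridge_depth_m + coeff_m * math.sqrt(age_myr)

for age in (0, 10, 40, 70, 120, 160):
    print(f"{age:4d} Myr: {halfspace_depth_m(age):6.0f} m")
# Beyond ~70 Myr the observed seafloor sits shallower than this curve
# predicts -- the flattening that the plate models in the figure capture.
```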

This has spawned a fairly long list of papers seeking to explain the discrepancy (some resampling the data to argue that the original curve can fit, others using a cooling plate instead of a half space, others invoking the development of convective instabilities that cause the bottom of the plate to fall off, still others invoking some flavor of dynamic topography, and more). In this case, the failure of the model was the focus of the community; that this remains controversial is a bit of a surprise, but it goes to show how interesting a model’s failure can be.

Oily Pasts and Futures (book review)

Politics and industry make strange bedfellows.  Politics is often short-sighted, with most politicians locked into the next election, or even the next round of polls, but it is multifaceted.  Industry, on the other hand, can look over longer timespans but is narrowly focused (“can” is not always “does”). You might hope that the pair could produce public policy that was both broad and long-term, but the reality seems to combine the worst characteristics of each.

Nowhere is this more evident than in peering into the future of oil. Mason Inman’s recent biography of M. King Hubbert, The Oracle of Oil (Amazon link), provides a nice reminder of this interaction from an earlier time. Hubbert’s views on oil, which were framed with an eye towards a fully sustainable economy, conflicted with corporate and political motives. Corporations are in a specific business and like to hear that their future is bright, a disastrous approach when the future is changing (see Eastman Kodak’s fall as digital photography bankrupted its film business). Thus there is a tendency within a company both to develop rosy forecasts and to believe them (the more pessimistic will tend to leave).  Politicians want happy news about tomorrow–Cassandras don’t tend to get elected. So what happens when unhappy predictions are made?


My Science Crimes

In keeping with this end-of-the-year theme of what GG is doing wrong, here are some “crimes against science,” which, as Bob Sharp defined the term years ago, means doing some work of interest to the broader community and then not publishing it. (Thankfully, these aren’t the more serious offenses in the expanded criminal ledger GG proposed a while back.)

Now, this isn’t an uncommon occurrence: students graduate with thesis chapters not quite ready for publication and discover that life beyond grad school doesn’t provide rewards for getting that stuff into journals.  Other times, things just pile up enough that a paper isn’t completed while everything is handy, and it just gets harder to return to as time goes on.

So, in case anybody out there would benefit from some of this stuff, feel free to nudge GG to take some time and share, either informally or by actually publishing some of this.  And if nobody seems interested, well, then maybe not much of a criminal act :-). Most of these are in some kind of manuscript form (there is other stuff that didn’t even get that far).

  • Geologic map of the Alexander Hills and eastern China Lake basin. Yes, GG mapped while in grad school and actually handed over a copy of his map to Lauren Wright long ago, who included some of it in a never-published update to the SW Tecopa quad (now the Tecopa 7.5′ quad). A lot of cool stuff–probably the eastern end of the early Garlock Fault interacting with some low-angle, basin-bottom faults, and a pre-China Lake basin history not evident in published maps.
  • Seismicity of the Hansel Valley region.  GG feels really bad about this, as there were a lot of coauthors on the 1983 experiment, which was one of the densest deployments of seismometers in an extending area.  The results are in GG’s PhD thesis but still might merit publication, as the data indicate how a low-angle normal fault might interact with ongoing seismic deformation.
  • Magnetostratigraphy and some additional paleomag in the Lake Mead region. A collaborator dropped out and so the baton was dropped after a single paper. Some of the data is visible here.
  • Paleomagnetic measurements in monoclines of the Colorado Plateau.  Joya Tetreault’s thesis has this; substantial vertical-axis rotations exist in some folds (the Grand Hogback being the most dramatic), though the sampling is far less than ideal and some structures seem to make little sense.
  • Paleomag and micropolar analysis of seismicity in the Coalinga area.  Also part of Tetreault’s thesis. The micropolar work seemed to capture the bending component of folding in the seismicity while the paleomag suggested San Andreas-parallel shear within the fold limbs.
  • Earthquakes in the southern Sierra located with the 1988 experiment. Jason Edwards, a CU BA graduate, did some of this work, which was never carried farther. It seemed there were events under one of the Recent cinder cones in the southern Golden Trout field as well as some deep events in the westernmost foothills of the southern Sierra.
  • Geophysics of the Panamint Valley and Ivanpah Valley areas.  These were datasets collected by the MIT Geophysics Course in 1987 and 1983, respectively.  Both valleys present a major challenge because a large basement gravity gradient exists across each, complicating interpretation.

This is all in addition to various half-done projects still seeming to be active, as well as datasets that were never fully exploited (for instance, data from a mixed broadband/short-period array at Harrisburg Flat in Death Valley plus some more scattered instruments near Dante’s View, or our inability to get anything sensible out of array recordings of deep local events under the northern end of New Zealand’s South Island).