A piece in High Country News (update: now this is online) reminded GG about this, which he thought had faded away. Back in the late 1980s and into the 1990s, the Department of Energy was investigating the suitability of Yucca Mountain as a nuclear waste repository. As part of that effort, money was given to the state of Nevada to study the studies and do some work on its own. One thing that came out of this money was the idea that we could know how severe shaking was on a geologic timescale by looking for precariously balanced rocks. The initial work, conceived by Jim Brune, was to look at the precarious boulders in the vicinity of Yucca Mountain. Finding several, he marked them and pulled on one with a gauge until the rock toppled, providing a measure of the force needed to knock it down. Somewhat later, the 1992 M5.6 Little Skull Mountain earthquake failed to topple any of these rocks despite several small rockfalls being observed in the area (initially Brune reported that one rock had fallen, but then recalled that it was the one he had toppled himself).
From physical analysis (both analytical and experimental), Brune and colleagues estimated that many precarious boulders could not survive more than about 0.1-0.3 g of static horizontal acceleration, which they note would be lower than the actual variable accelerations in an earthquake. From the presence of such precarious rocks, they have argued that some historic earthquakes did not follow the traces many have assumed and that ground accelerations in some places were lower than anticipated. Because Brune and coworkers believe they have found paleoaccelerometers, considerable effort has gone into this project; for instance, Brune has a collection of over 12,000 photos of precarious boulders, and a dedicated workshop was held on the subject.
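Where does a number like 0.1-0.3 g come from? The standard quasi-static argument treats the boulder as a rigid block that rocks about the edge of its base: it begins to topple when horizontal acceleration exceeds g times the tangent of the angle from the rocking point up to the center of mass, roughly g times the (half-width / center-of-mass height) ratio. A minimal sketch, with entirely made-up boulder dimensions chosen only to land in the cited range:

```python
def quasi_static_toppling_accel(half_width_m: float, cm_height_m: float,
                                g: float = 9.81) -> float:
    """Horizontal acceleration (m/s^2) at which a rigid block starts to rock
    about its base edge: a = g * tan(alpha), where tan(alpha) is the
    horizontal offset of the rocking point from the center of mass (the
    half-width) divided by the center-of-mass height."""
    return g * (half_width_m / cm_height_m)

# Hypothetical slender boulder: center of mass 1.0 m above the rocking
# point, which sits 0.2 m to the side of the line through the center of mass.
a = quasi_static_toppling_accel(half_width_m=0.2, cm_height_m=1.0)
print(f"toppling threshold ~ {a / 9.81:.2f} g")  # -> toppling threshold ~ 0.20 g
```

As the post notes, real earthquake shaking is a variable, oscillating load, so this static threshold is only a crude lower-bound style estimate; short acceleration pulses above it need not actually bring the rock down.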
Sounds like a clever use of geology, no? Yet GG remains highly skeptical (as he was at the beginning, when he was at UNR, where Brune was working).
Why this skepticism? A number of reasons, ranging from the most prosaic to the more scientific.
First, if you’ve been in an earthquake, you probably have seen stuff topple. But you probably have also been surprised at what didn’t fall over. So some precarious rocks surviving an earthquake isn’t too shocking, especially given that events like the Little Skull Mountain earthquake produced rockfalls without toppling any precarious rocks.
Second, the target shifted, reflecting limitations in this approach. The initial idea was that the presence of old precarious rocks would tell you there had not been a large earthquake, but finding them within 15 km of the San Andreas fault meant that they could exist near some really large earthquakes. This shifted the interpretation to the less useful prospect of putting an upper bound on ground motion.
Third, precarious boulders are pretty much limited to bedrock exposures, which we already knew experience some of the lowest accelerations in an earthquake.
Fourth, how do you know if precarious boulders are absent (i.e., there was high ground motion)? This is far from trivial because you need a (fairly correct) model for the creation of precarious rocks and then you look for deviations from that model. The best attempt GG saw in the precarious rock literature compared the distribution of precarious rocks against non-precarious rocks to argue that there were too few precarious rocks in an area; this supposes that all other factors are equal, a difficult assertion to test without some concept of how such rocks are created.
Fifth, the dates on precarious rocks do not necessarily date when the rock became precarious. Setting aside the issue of dating desert varnish for the moment, the assumption in the precarious rock work is that boulders with a certain height-to-width ratio are precarious, yet this is obviously untrue if such rocks are surrounded by other rocks. Dating when a rock ceases to be covered at certain spots only provides some constraint on the time when it became precarious.
Sixth, the assumptions about precarious rock creation and destruction are too simplistic and lack constraints on important factors. Consider it this way: the number of precarious rocks per unit area at some time t (call it N(t)) represents the number of precarious rocks created by time t (call it C(t)) minus the number destroyed by time t (D(t)). Both creation and destruction have background components due to normal weathering processes (call them Cb(t) and Db(t)) and components caused by earthquakes, which only occur during the earthquakes (call them Ce(t) and De(t)). So we can write
N(t) = Cb(t) + Ce(t) – Db(t) – De(t)
Precarious rock research has focused strongly on the De(t) term and argued that Ce(t) is zero. There is an argument that most of the precarious rocks out there were formed by some 5000 years ago and so the modern rate of precarious rock creation is near zero. All of this seems unlikely: mass wasting in earthquakes would seem to allow for rocks previously embedded within a matrix of other rocks to be exposed, for instance, and it seems implausible that no new precarious rocks would be created. Many of these issues were noted in the conclusions in the 2011 Seismological Research Letters summary of the precarious rock workshop.
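The budget above can be made concrete with a toy calculation. In the sketch below, all rates and per-earthquake numbers are made up purely for illustration; the point is only that the same surviving count N(t) is consistent with very different earthquake histories once Ce(t) is allowed to be nonzero:

```python
# Toy budget for precarious-rock counts, following the post's
# N(t) = Cb(t) + Ce(t) - Db(t) - De(t).
# All rates and per-event numbers are invented for illustration.

def rock_count(t_yr, quake_times, cb_rate=0.002, db_rate=0.001,
               ce_per_quake=0.0, de_per_quake=5.0):
    """Precarious rocks per unit area at time t (years), assuming constant
    background creation/destruction rates and step changes at each quake."""
    n_quakes = sum(1 for tq in quake_times if tq <= t_yr)
    Cb = cb_rate * t_yr            # background creation
    Db = db_rate * t_yr            # background destruction (weathering)
    Ce = ce_per_quake * n_quakes   # quake-triggered creation (exhumation)
    De = de_per_quake * n_quakes   # quake-triggered toppling
    return Cb + Ce - Db - De

quakes = [2000, 6000]
# Usual assumption, Ce = 0: two quakes wipe out the population entirely.
print(rock_count(10000, quakes, ce_per_quake=0.0))
# If each quake also exposes a few new precarious rocks, some survive,
# and a nonzero count no longer rules out the same earthquake history.
print(rock_count(10000, quakes, ce_per_quake=3.0))
```

With these invented numbers, setting Ce to zero makes "no precarious rocks" look like evidence of two strong quakes, while a modest quake-triggered creation rate yields a healthy surviving population from the identical history, which is exactly the ambiguity the post complains about.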
Look, GG understands: this is a cute application of geology. And knowing about paleoseismicity is important. But this approach has so many unknowns that it seems very far from generating much useful information.