Archive | February 2016

Geologic Illustrations (Part 1?)

Something hit GG (well, figuratively; GG can still dodge the physical brickbats from unhappy students): we have made an effort to teach students about writing, but we haven’t made a similar effort to teach them how to make figures.

This struck home when GG realized that the discussion in a writing class centered on what was in an illustration in a paper. And the same thing typically happened in reading seminars: it was far more common to discuss what was shown in a figure than to struggle through some part of the text. And of course most scientific presentations now center on a PowerPoint (or equivalent) slide deck. What we show can outweigh what we say.

We need to instruct students on how to make figures that work.

So as a public service, we’ll start looking at problems in figures and alternatives as inspiration strikes. Now there are some nice books out there on scientific illustration; many of the points in such books apply to geoscience, so it’s likely we’ll repeat some. But the elements of time and space in geoscience add an extra degree of difficulty.

Today consider the simple choice of point size for a scatter plot of some kind.


We’ll not identify the authors simply because this problem is all over the literature and nobody really needs to be singled out (this particular example is quite old). So, here we have a map of values of something, larger symbols for bigger values (in this case, positive is black, negative is gray). Seems clear, right? So what is the problem?

The problem is that areas with no data look identical to areas with a well-determined value near zero. Your eye assigns more significance to the large points than to the smaller points. What is more, you have no idea which points are significantly different from zero and which are not. You might guess the big points are more significantly different from zero (but in parts of the image, you’d be wrong).

Now in this case, arguably, it is the stuff away from zero that matters for the authors’ point.  But there are plots out there where the zero points are every bit as significant as the non-zero points, and yet points are sized by their value (a common plot where this was employed for quite a while was the plot of travel-time anomalies by backazimuth and distance; zero does matter in these cases). Something similar is here:


At first blush you might think there were three points: a plus towards the bottom and two circles toward the top. Closer examination reveals a lot of small points scattered about, and then just what is inside that dashed circle? Are those microscopic dots data points or lakes or reproduction errors? In this plot, arguably, the zeroes, which are nearly invisible, are every bit as significant as the big points; in fact there is a decent chance they are more significant.

So when you think you need to use the size of symbols to convey some value beyond the points’ x-y position in the plot, be sure that smaller = less significant.  This might work well if plotting a resolution matrix, where smaller numbers are indeed less significant. But if all the points are equally significant–or their significance is unrelated to the size of the symbols you want to use–rethink what you are plotting.
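One way to follow that advice in practice: put a floor under the marker size so a well-determined zero still draws a visible dot, and let "no data" mean no marker at all. Here is a minimal sketch (assuming matplotlib and numpy; `symbol_area` and the random test data are purely illustrative, not from any paper discussed above):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen; no display needed
import matplotlib.pyplot as plt

def symbol_area(values, floor=20.0, scale=60.0):
    """Marker area (points^2) grows with |value| but never drops below
    `floor`, so near-zero measurements stay visible on the map.
    Locations with no data simply get no marker, so the two cases
    cannot be confused."""
    return floor + scale * np.abs(values)

# Hypothetical scattered measurements of some signed anomaly.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 40), rng.uniform(0, 10, 40)
v = rng.normal(0.0, 1.0, 40)

fig, ax = plt.subplots()
pos = v >= 0
# Filled black for positive, open gray for negative, echoing the map
# style described above, but every measured point gets at least a
# floor-sized symbol.
ax.scatter(x[pos], y[pos], s=symbol_area(v[pos]), c="black",
           label="positive")
ax.scatter(x[~pos], y[~pos], s=symbol_area(v[~pos]), facecolors="none",
           edgecolors="gray", label="negative")
ax.legend(loc="upper right")
fig.savefig("anomaly_map.png")
```

The floor is the design choice that matters: with `floor=0` a zero-valued point vanishes, which is exactly the failure mode in the plots above.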

Of course this points us at the general problem of 3-d plotting (plots where you want x-y-z all on the same plot).  We’ll have to consider that, but maybe next time we might discuss the great enemy of good plots: Excel.

The NSF Lottery…for real?

A recent article in eLife suggests that funding panel scores at NIH are basically unpredictive of future results: proposals that were very highly ranked produced publications with about the same impact as those that squeaked through the system. The authors suggest that a lottery system would be more appropriate. Many of us already call submissions to NSF programs a lottery; should we make this the actual practice?

Is this a fair evaluation of the system? Most probably, yes.

First consider the counterarguments: the metrics are bad, or not all scientific experiments will succeed. The metrics used were publication numbers and citations to those publications. Neither is the most compelling metric possible. Since all the grants presumably have money for publication, it would seem unlikely that there would be much variation there, given the emphasis by funding agencies on tangible outcomes. There is some hint of correlation in the citations, but citations can be off too (using non-self-citations would be a better start, actually). So it is possible that the more impactful science was getting higher marks as a proposal but isn’t well identified solely by citations, which might better reflect the publication tendencies of different subfields. However, when you start piling up all the science output, it seems likely that some fraction of the important stuff would be getting cited more than run-of-the-mill results.

So it seems quite likely that this paper is more or less on the mark.  Why should that be the case? Here there are lots of possibilities…

A radical notion

Seeing all the Sturm und Drang over appointing a new Supreme Court justice, GG would love to throw out an idea so radical that neither side has considered (or will consider) it.

Appoint a scientist.

At first you shudder: isn’t this a job for lawyers and judges? Well, not historically; there have been government administrators and politicians named to the court. This business of the court being only a bunch of lawyers who rose through the judicial ranks is rather new.

With the increasing number of topics where scientific judgment is valuable, a justice with a scientific background could bring a totally different perspective to many issues the court faces. There are even a few politicians with science backgrounds (Bruce Babbitt, former Secretary of the Interior and a former governor of Arizona, had geology and geophysics degrees, for instance)–though not many. Maybe you could tap somebody like Marcia McNutt, whose career has included running the U.S. Geological Survey, editing Science, and being an MIT professor.

Wouldn’t that set Washington on its ear?

Earthquakes are so passé…

Yeah, it is now time for Hollywood to move back to that old favorite, volcanoes! What is this? Well, it appears there will be a sequel to San Andreas that apparently takes the cast out to “when the notorious Ring of Fire in the Pacific Ocean erupts,” as the ScreenRant story says. Given the excesses that San Andreas had compared to older earthquake-driven disaster movies, just how over the top can this one go compared to, say, Volcano or Pompeii? Will the volcanoes erupt one by one, heading for some major metropolis like a string of firecrackers? Will the whole thing erupt and spawn a new moon? (Oh wait, that’s been done already). Will The Rock rescue people on a giant surfboard as he uses the successive tsunamis to race ahead to save distant relatives? Feel free to speculate; it will be a while before we find out what awaits…

Hair trigger earthquakes

One of the more perplexing problems in earth science is the connection between earthquakes. Forty years ago, if asked, seismologists would assert that one earthquake following another far away was just a coincidence. But how far away is “far away”? And within what time window? A recent paper in Nature Geoscience argues that some earthquakes cataloged as single events on single faults are in fact multiple events representing rupture on distinctly separate faults. (A press release is here if you can’t get the original paper.)

The earthquake that really reset a lot of thinking was the 1992 Landers, California earthquake. It was found that some areas far away (100s to ~1000 km) had swarms of small earthquakes start up just as the surface waves from the Landers quake passed through. Several decent-sized events also cropped up at some distance from the mainshock: the M6.7 Big Bear earthquake, some 20 km away and minutes later, and the M5.7 Little Skull Mountain earthquake, the next day and a few hundred kilometers away. The combination of seeing small earthquakes triggered by the dynamic stresses of a passing seismic wave with the increasing sense that large earthquakes were simply small earthquakes that failed to stop made it more palatable for scientists to connect distant events.

Unfortunately it still isn’t clear how you would trigger, say, the Little Skull Mountain event a full day later and much farther away than any static or dynamic stress change would seem to allow. And there are plenty of older earthquake sequences with similar questions. The 1954 Dixie Valley-Fairview Peak-Rainbow Mountain sequence started with a six-month lag from the Rainbow Mountain events to the Fairview Peak quake, but that was then followed all of four minutes later by the Dixie Valley quake. Although there is literature arguing that static stresses might be enough to explain some of the connections, there are enough marginal numbers to suspect we don’t fully understand this sequence either. It almost seems as if one earthquake can trip a sort of Rube Goldberg machine, launching some smaller events or changing fluid pressures that trip still more events, eventually leading to a second big event.

So what? Well, as the current paper points out, we have to be careful in estimating a maximum magnitude event: surface faults that seem disconnected might all rupture together, and over distances rather greater than we thought (something similar happened at a smaller scale in the Landers event, where some faults thought to be separate all ruptured together). This is particularly a concern in southern California, where a rat’s nest of faults on similar trends holds out the possibility of connected ruptures. One nightmare would be if the Malibu Coast fault links through Santa Monica, Beverly Hills and Hollywood to the Raymond Hill fault and possibly on to the frontal faults on the south side of the San Gabriel Mountains; such an event would be devastating and presently is not considered plausible. It also passes through some of the highest-priced real estate on the West Coast. In some ways, the 2011 Tohoku earthquake might be similar, as more sections of the subduction zone ruptured than many scientists expected. At the other extreme, detailed mapping of the slip distribution on faults from analysis of seismic and geodetic records has shown that slip along a fault can be quite variable. Basically, our old notion that the penny-shaped crack could provide all the insight we needed is pretty much dead.

So for now, we are warned that things might be worse than we currently guess. We don’t have a full handle on the interaction of different earthquakes and fault systems; this is where considerable effort is focused so that earthquake forecasts might better reflect what really could happen.

Notes from a social trail

GG has been on the Routeburn Track in New Zealand and wants to share little tidbits he couldn’t find before he left. So this is sort of a big aside from the usual course of things…

 You can look in the guidebooks for stuff like sights to see and distances on trails. But look here for the mundane.

Camp vs hut vs guided walk. On the Routeburn you can do all three (some other Great Walks have fewer options). Huts are a godsend in rain or cold weather and they have flush toilets (!), but bunk rooms are a communal noise machine: we had folks get up at 5 am in one hut, making a racket as they packed up. If your sleeping bag is warm, you will be hot, and for some the smell can be a bit much. But huts let you carry a lot less stuff (no tent, pad, stove, or fuel). Camping gives you some space from neighbors, and at least here you get sinks and a covered area to cook, so it is not quite as rough as much American backpacking. Guided walks are for those who wish to do this as a series of day hikes, their animal needs met at night by a full hotel experience. Of course there is a price to be paid for luxury built by helicopter in a national park…

What you can leave behind either way: dish soap and a sponge or scrubbie, a water purifier (well, maybe; more in a moment), and toilet paper. But bring a trash bag.

What you might question: water is a big one. DOC (Dept of Conservation) guys in Queenstown claimed water was from roofs and collected in cisterns. Not a chance at huts with flush toilets. Where is the water from? Streams or springs. Notes at huts say the water is usually ok but you could purify it if you like. DOC hut wardens might say there are no animals that might carry Giardia, but we saw deer and deer scat (do stoats carry Giardia?). One day DOC might regret this if they get something in their water. But right now they are basically using stream water.

About that “social trail” in the title: no, this does not refer to an unofficial trail but instead one where you constantly meet people. And the Routeburn is almost like a days-long international journey. As GG’s daughter noted, you could play language bingo, so many tongues are spoken. And as camping or huts concentrate folks, you almost always are chatting with people from across the globe. In fact, it often seems like New Zealanders are the least common people on the trail. Similarly, the level of backcountry experience varies a lot, from total neophytes to expert travelers.

A minor surprise was how rushed most folks were. Most do the Routeburn in 3 days but some in one (it is about 20 miles long). A lot of folks get to a hut well before sundown and just sit inside, which seems strange. (Better to be leisurely on the trail in that case).

As in many mountains, weather can be quite different on opposite sides of the track. The hut wardens post weather forecasts for their location, which might not help you to get somewhere. This can cause some confusion for the inexperienced (you can see a forecast for a sunny day and find on the other side of the divide that it is raining–best to be prepared for anything).

Anyway, just some info GG would have liked to have had beforehand.