Archive | October 2016

Crimes Against Science

Many years ago, GG was fortunate enough to take a field seminar class taught by Bob Sharp at Caltech (it is for him that “Mt. Sharp” on Mars is informally named). At one point, Bob discussed a conclusion reached by an unpublished PhD thesis and then opined that the failure of this to be published was a “crime against science.”

GG is wondering if it is time to expand the statutes a bit.

An interesting paper in the mill by Edwards and Roy is subtitled “Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition.” It expands on the general discomfort many of us have felt over a glut of incremental papers and a growing number of episodes of misbehavior, arguing that the kinds of criteria being used to incentivize scientists are fouling the scientific endeavor so badly that we will soon cross a line and science will cease to be seen as a valid and useful endeavor. GG would love to quote huge chunks of this paper; better to go and read it for yourself.

With that in mind, here are some proposed new statutes in the crimes-against-science law book:

Read More…

The Irony of Open Space

A letter writer to Boulder’s Daily Camera pointed out something that should be obvious but seems to escape many on the political left: preserving open space makes housing more expensive.  So if your priorities favor social justice over environmental preservation, you probably should be against purchases of open space.

It is unlikely that many (if any!) residents of Boulder regret decisions over the years to spend tax dollars buying open parcels of land.  It has enhanced the quality of life in Boulder.  But a clear side effect is to increase property values, both because Boulder is now that much nicer to live in and the amount of land available for housing is that much less.  If you are voting in your own self-interest, and if you are a property owner in Boulder, you should always be voting for the purchase of open space. Your property will gain value and the quality of life will stay about the same or improve.

Of course there are other options for lowering housing prices besides building on open space: you can increase density within the existing built-up area. Here in Boulder one option might be for the University to build enough high-density housing (apartments and dorms) for its 30,000 students so there isn’t such pressure on surrounding neighborhood rentals. You can remove height limits, you can remove limits on how many unrelated people can live in a house, etc.

A longer view, though, suggests that the open space dilemma might be resolved when population starts to decline. It is hard to imagine in growth-happy Colorado, but the demographics point to populations declining over time. When population pressures relax, how happy will the remaining residents of Boulder be if that open space isn’t there? You could tear down abandoned buildings and replace them with something more useful, but built-over open space is unlikely to be restored to a more nearly natural condition.

Battle of the Back-bulge

…Backbulge basin, that is.  The term is in common use in the stratigraphy/paleosedimentology community for sedimentary rocks deposited on the foreland side of a forebulge:

Cartoon of thrust belt and foredeep, DeCelles and Giles, Basin Res., 1996

There’s a little exaggeration in this cartoon, though…

Read More…

What does it take?

Recently The Daily Show had correspondent Jordan Klepper talk with some Trump supporters, and his discussions led him to ask them, “What would Donald Trump have to say for you to change your mind about supporting him?” Many answered that there wasn’t anything he could say that would change their minds. Arguably you could have done something similar with the most rabid of Bernie Sanders’s supporters, many of whom still deny that Clinton greatly out-polled Sanders in the primaries, so this is not necessarily a right-wing/left-wing kind of thing. Unfortunately, the same lack of logic seems present in confronting issues like climate change, GMOs, and vaccines.

Why bring this up here? Because there should always be the possibility that there is evidence that could change your mind. That is arguably the part of the “scientific method” that everybody should learn. Admittedly, at times it can be hard to pin down what, exactly, it might take to overturn well-established theories. For instance, what would it take to toss geological history as we now understand it and accept Noah’s Flood as literally true and the cause of all the geology we see? It is hard to comprehend the full list, but for starters there would need to be strong evidence that radiometric dating is wrong, that interpretation of geological facies is wrong, and that you can create angular and buttress unconformities with unconsolidated sediment. At the cutting edge, however, things get tidier. What would it take for GG to believe that the High Plains rose in the past 5 million years? Perhaps a mechanism with relevant observations to support it. The development of a robust paleoaltimeter showing such a result. Maybe a way to show that incision of the High Plains cannot be the product of changes in climate.

If closed-mindedness is a problem when it affects voters, it is an absolute plague when it infects legislators. The idea of a representative democracy is that the representatives will take the time to fully evaluate the relevant facts before deciding on a course of action; since it is their job, they should be able to understand issues more completely than their electorate. Ideally they should be able to communicate back to their electorate why they might be voting differently than their voters back home think they should vote. (Does this happen at all anymore?) In such a world, we would not be seeing arguments about the existence of human-caused climate change; we would be seeing arguments on how to address it (How much should we rely on natural gas as a bridge fuel? Should nuclear energy be a part of the mix? Is there a role for carbon capture? Carbon tax, or cap and trade?). Those kinds of arguments are quite amenable to compromise; denying factual evidence, on the other hand, is a stonewall.

And so, perhaps, one of the things we in the scientific world should emphasize is that we do change our minds when the evidence demands it. That, perhaps, is the greatest good we can do for the public at large, more than any research finding we might make.

The necessity of uncertainty: Part 2

OK, so error bars are good things, if you believe the last post. So what else is there?

Simply put, in many cases uncertainty is not a scalar: it is a matrix, and often a really big matrix. To know the absolute uncertainty on one point, you have to know how it covaries with other points. And to be able to manipulate results further, you really need to know the full set of covariances.

In tomography, for instance, the uncertainty at each point comes with covariances spanning the entire model space; how the uncertainty at one point covaries with other points matters. Consider for starters a trivial case: a block of rock with two parts, where all we measure is the travel time of a seismic wave through the whole block:

Cartoon of a seismic wave crossing two blocks, each of width w, with velocities v1 and v2

The time for the seismic wave to travel through the blocks, t, is (w/v1) + (w/v2). Seismologists often work in slownesses, which are 1/velocity, so an equivalent expression is t = w*s1 + w*s2. Obviously with one equation and two unknowns, we cannot say much of anything about s1 or s2. But let’s say we have an independent estimate of v1 (and so s1) with an uncertainty; then we have s2 = t/w - s1, and if we stick in that uncertainty for s1 we can get an uncertainty for s2. But here’s the catch: if s1 is actually higher than the value we estimated, then the value for s2 must be lower: they are not independent.

This might be slightly more apparent with numbers. If w=12 and t is 7, let s1 be 0.33 +/- 0.08. From this we get s2=0.25 +/- 0.08. But here’s the thing: if s1 is actually 0.25, then s2 must be 0.33 if t is perfectly known.  The errors are correlated–they covary. If s1 was estimated to be higher than its actual value, then s2 is lower.

OK, well how is that helpful? Imagine that somebody does the inversion and reports s1 = 0.33 +/- 0.08 and s2 = 0.25 +/- 0.08, and some other worker just needs the average slowness. Well, they take the published numbers and get 0.29 +/- 0.06, where the uncertainty simply reflects the assumption that the individual uncertainties are independent–but they are not. The thing is, we know very precisely what the average slowness is: it is t/2w = 7/24, or a bit above 0.29–but the uncertainty is near zero, not ~20%. Because of the high correlation (or covariance) of the uncertainties, the uncertainty of the total is far smaller than that of the individual pieces.
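A minimal numerical sketch of that point (Python/NumPy, using the numbers above and assuming, as in the text, that t and w are known exactly):

```python
import numpy as np

# Two-block example from above: t = w*s1 + w*s2, with t and w taken as exact.
w, t = 12.0, 7.0
s1_est, s1_sig = 0.33, 0.08

# Monte Carlo over the uncertainty in s1; s2 is whatever is left over.
rng = np.random.default_rng(0)
s1 = rng.normal(s1_est, s1_sig, 100_000)
s2 = t / w - s1                        # forced by the data: errors anti-correlate

print(np.corrcoef(s1, s2)[0, 1])       # ~ -1: perfectly anti-correlated
print(s2.mean(), s2.std())             # ~ 0.25 +/- 0.08, as in the text

# Average slowness, done two ways:
naive = np.sqrt(s1_sig**2 + s1_sig**2) / 2   # pretend the errors are independent: ~0.06
C = np.cov(np.vstack([s1, s2]))              # full 2x2 covariance matrix
a = np.array([0.5, 0.5])                     # averaging operator
proper = np.sqrt(max(a @ C @ a, 0.0))        # ~0: the covariance cancels the spread
print(naive, proper)
```

The a @ C @ a step is just linear error propagation through the averaging operator; the same bookkeeping shows up in the receiver-function and tomography examples below.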

This exact logic can be applied to receiver functions (which, for those who haven’t seen them, are kind of a spike train representing P-to-S converted energy in a seismogram); long ago GG generated receiver functions with uncertainties. Those uncertainties were quite large and indicated there were no signals outside of noise in the receiver function. But when you applied a moving average and kept track of the covariances, the uncertainties rapidly dwindled, revealing a number of significant signals that were actually present in the receiver functions.
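GG’s receiver-function code isn’t reproduced here, but the bookkeeping behind that moving average is ordinary linear error propagation: for any linear operator A applied to the trace, the covariance becomes A C A^T. A toy sketch with invented numbers (large per-sample errors that are strongly anti-correlated between neighboring samples):

```python
import numpy as np

# Toy "receiver function" uncertainties: n samples with big per-sample errors
# that are strongly anti-correlated between neighbors (numbers are invented).
n, sigma = 50, 0.5
C = sigma**2 * (np.eye(n) - 0.45 * (np.eye(n, k=1) + np.eye(n, k=-1)))

# A 5-point moving average written as a linear operator A.
A = np.zeros((n - 4, n))
for i in range(n - 4):
    A[i, i:i + 5] = 1.0 / 5.0

# Covariance of the smoothed trace: A C A^T.
C_smooth = A @ C @ A.T

print(sigma)                              # raw per-sample uncertainty: 0.5
print(sigma / np.sqrt(5))                 # what independence would predict: ~0.22
print(np.sqrt(np.diag(C_smooth)).mean())  # ~0.12 once the covariance is tracked
```

With independent errors the averaging only buys a factor of about 1/sqrt(5); tracking the (negative) covariances buys considerably more, which is how signals that were invisible in the raw uncertainties can become significant.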

Not surprisingly, similar issues exist in tomography. Say we have a tomographic model based on local earthquake travel times with nodes every 100 m. Odds are pretty good that in parts of the model, you can fit all the data just as well if you increase the wavespeed at one node and decrease it at an adjacent node. Your uncertainty at that point might be awful. But if you want the average wavespeed over a volume of points, that might prove to be pretty robust as the covariances collapse.

Unfortunately the creation of these kinds of covariance matrices is not common (they fell out of favor in tomography, for instance, once fast and compact inversion schemes made constructing them awkward and time consuming), and even if they were around, they are not trivially presented in a paper. The replacement in tomography has been “checkerboard tests” and spike tests, where a simple anomaly is introduced, observations from such an anomaly are calculated, and then these fake observations are inverted to see how well the inversion recovers the input anomaly. But the degree to which the anomalies are recovered depends on the geometry, and it is impossible to test all geometries, so in some cases the tests make it seem that the inversion is far worse than it is, and in some cases far better. (A test GG has never seen or done himself would be the opposite of the spike test: introducing a smooth gradient across a model and seeing if the inversion could catch it. GG suspects there is a minimum gradient necessary to produce a signal. Maybe something to try one day when there is spare time…)
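For what it’s worth, a toy version of such a gradient test is easy to sketch. Everything below is invented purely for illustration (a hypothetical 1-D “tomography” with random ray segments, a linear slowness gradient as the true model, and a routine damped least-squares inversion), so it is a sketch of the idea, not a real test:

```python
import numpy as np

# Hypothetical 1-D "tomography": 20 cells; each ray sums slowness over a
# random contiguous stretch of cells (a crude stand-in for real ray geometry).
rng = np.random.default_rng(1)
n_cells, n_rays = 20, 60
G = np.zeros((n_rays, n_cells))
for i in range(n_rays):
    a, b = sorted(rng.integers(0, n_cells, 2))
    G[i, a:b + 1] = 1.0

# True model: a smooth gradient across the model (the "anomaly" to recover).
m_true = np.linspace(0.0, 0.1, n_cells)
d = G @ m_true + rng.normal(0.0, 0.02, n_rays)   # synthetic travel-time data

# Damped least squares, a workhorse of many tomographic inversions.
lam = 5.0
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ d)

# Compare the input and recovered gradients (slopes across the model).
print(np.polyfit(np.arange(n_cells), m_true, 1)[0])
print(np.polyfit(np.arange(n_cells), m_est, 1)[0])
```

Because the damping pulls the model toward zero, the recovered slope generally comes back smaller than the one put in, and it shrinks further as the damping or the noise goes up; that is the flavor of the minimum-gradient effect GG suspects.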

This isn’t limited to inversions of observations. Complex models often have internal dependencies that effectively produce covarying signals from a common error.  Of course, these can in theory be estimated from the underlying mathematics, but many modern models are so complex that it isn’t remotely obvious in many cases just how things are tangled up. For instance, dynamic topography models essentially depend upon a load and a rheology, but if both are being estimated from a seismic model and the seismic model depends upon an a priori  crustal model, then errors in the crustal model can bleed all the way through to topography in a fairly non-intuitive manner.

So GG suggests that a frontier associated with Big Data and Big Models and Big Computing is the understanding of, ahem, Big Uncertainty. Because without that understanding we are left with…yeah, you can see it coming, big uncertainty.

Scientists’ motivations

OK, GG mentioned that the bad reference letter thing for female scientists was either #1 or #2 among the most depressing things GG has seen this week. Before some additional horror walks onto the public stage, let’s briefly mention the other candidate.

The Pew Research Center released a major poll about how Americans view climate change, and there are lots of positive things in it, from very strong support for solar and wind power to a fairly strong trust in scientists to saying climate scientists should have a major role in policy discussions about climate change.

No, this was the one that got GG’s goat:

Poll results on how scientists reach their conclusions. From the Pew Research Center.

This is a strange set of questions, but it says that Americans believe that research findings about climate change are, most of the time, influenced by a desire to advance a personal career. The best available evidence comes in at number 2. The scientists’ own political leanings come in at number 3.

Why is this depressing? This isn’t like some suppression of drug trials that cast a new drug in a bad light, where industry uses the conditions of the trial to prevent publication, or like the throwing of money at scientists whose views happen to coincide with industry’s–this is saying that a scientist sitting and trying to interpret their data will skew it in a manner most likely to reflect his or her politics and self-interest. Maybe this reflects Americans’ jaded views of their own behavior, but that just ain’t science.

Look, for a really long time the best financial play for a climate scientist would have been to downplay climate change, because there are fossil fuel companies with very deep pockets who might well be inclined to help keep a lab running. For a long time there was no industry constituency for clean power, and advocacy groups are not spending money on R&D. The insurance industry wants the best available information because climate outcomes are not political in that industry: to properly balance your risks and assets, you have to know the odds. Arguably the best way to make out financially was (and probably is) to work on unraveling medium-term weather and climate well enough to get an edge in various agricultural pursuits, including investing in crop futures. Competing for research grants is not a way to riches or even glory.

And research has shown that political allegiance dies out once you get within a scientific specialty; it is about the last thing that might be affecting scientific results.  Plenty of Republican climate scientists have said that global warming is real.

Oddly, concern for the best interests of the public is not a real factor either, nor should it be. Well, it might be a factor in advocating for undertaking a line of research, but not in determining the results of that research. “Hey, if we cook the numbers on this project, it will make everybody do the right thing!” is NOT how this works.

Look, scientists are human and so there will be all kinds of blunders and biases that will creep into scientific papers. But it isn’t because scientists as a whole manipulate results to bend to a political wind or for personal gain (indeed, it is unclear that the public has a clear idea of what personal gain might be for a research scientist); that was what happened in the old Soviet Union, where all kinds of goofy scientific ideas viewed as more compatible with Communist ideology were pushed forward to the exclusion of other notions (and the potential imprisonment of those who might disagree).

So while it is nice that climate scientists are more or less respected, it would be even better if the way they reach their conclusions were more apparent to more people.

Overthrowing the model

Recently we mentioned how you don’t want to mistake a model’s assumption for a result. A new paper in Science by Inbal et al. makes some claims about deformation in the mantle that are interesting, but it is something totally outside their field of view that makes this of interest here.

Back in the 1980s, after the Coalinga earthquake of 1983 showed that folds could pose a seismic hazard as much as surface faults, some researchers tried to see what kinds of hazardous faults might be hiding at depth. Tom Davis and Jay Namson, two consulting geologists, were particularly enthused and soon had a model for Southern California. When GG was a postdoc at Caltech, one of the authors came up to show us the model; it looked something like the version published in 1989:

SSW to NNE section across the Los Angeles Basin, Davis et al., JGR, 1989

It is hard to see (you can click here for a bigger version), but the area where the shaded horizon is deepest is under the Los Angeles Basin. The red highlight is where the trend of the Newport-Inglewood fault passes through, and below that is a detachment fault extending all the way from the San Gabriel Mountains on the right to offshore Palos Verdes on the left. The orange section in particular is of interest here, as it suggests that the Newport-Inglewood fault is cut at depth. At the time, this model was being presented as a serious threat to Los Angeles. When it was presented to us at Caltech, GG asked: why is that orange segment required? The short answer came down to this: the means by which the model was constructed require it, though after some hemming and hawing there was the admission that you could have two detachments, one rooting to the right and one to the left. Nevertheless, the single-detachment version is what was published.

How does a paper on faulting into the mantle come into this?

Read More…