Archive | June 2014

The death of isostasy?

One of the chestnuts of earth science for more than 100 years has been the concept of isostasy, which more or less holds that the outer part of the earth (the crust and mantle lithosphere) floats on the far less viscous asthenosphere. Although the concept was floated in 1855 by Pratt and Airy in seeking to explain geodetic discrepancies caused by the Himalaya in the great survey of India, the term was coined by USGS scientist Clarence Dutton in 1889, and the implications were brilliantly described (along with the basics of plate flexure) by G. K. Gilbert in 1890, much of the geological (and especially the geophysical) community had no use for isostasy until 1914-15, when Joseph Barrell published work advocating, from multiple lines of evidence, for an asthenosphere (in nine parts: quite the serial publication). So while we could be celebrating the centenary of physically based isostasy, in some ways we appear to be ready to bury it.  Why?  Consider this plot from Moucha (2008):

This plot shows the predicted dynamic topography of the western U.S.; the scale is in kilometers, so about 2 km of variation in topography in the U.S. would seem to be from dynamic effects (the paper’s title cites ‘mantle convection’ as the cause). Given that average elevations in the west only occasionally reach 3 km, this would seem to make isostasy moot; the mantle convection alluded to is shown extending down to the outer core, far below the asthenosphere.  The paper’s abstract makes it clear that the authors attribute much of the modern elevation of the Colorado Plateau (outlined above) to mantle convection and dynamic topography:

Herein we compute the viscous flow beneath North America that is driven by density anomalies inferred from joint seismic-geodynamic modeling. We find that the Colorado Plateau overlies a strong mantle upwelling that is coupled to the sinking Farallon slab, currently beneath the eastern United States. Consequently, the Colorado Plateau is currently a focused dynamic topography high within the western U.S. Cordillera.

So we’ve had a century of misdirection, right? Isostasy is dead, no?
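For readers who want the arithmetic behind the floating-crust picture, here is a minimal sketch of classic Airy compensation. The densities below are typical textbook values, not numbers from the post or from Moucha (2008), and the function name is just illustrative:

```python
# Airy isostasy: a column of elevation h (density rho_crust) is locally
# compensated by a crustal root of thickness r displacing denser mantle,
# so that h * rho_crust = r * (rho_mantle - rho_crust).

def airy_root(elevation_km, rho_crust=2800.0, rho_mantle=3300.0):
    """Crustal root thickness (km) needed to support a given elevation,
    assuming local (Airy) compensation and typical textbook densities."""
    return elevation_km * rho_crust / (rho_mantle - rho_crust)

# A 3 km high plateau would require roughly a 17 km crustal root:
print(round(airy_root(3.0), 1))  # 3 * 2800 / 500 = 16.8
```

The point of the balance is that, in a purely isostatic world, elevation is set by crustal thickness and density alone; dynamic topography is whatever elevation is left over after that bookkeeping is done.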

Read More…

Static “dynamic topography”, or what earth science would look like to Orwell…

Over the past decade, you’d think that topography had suddenly gotten hyperactive as the term “dynamic topography” became widespread, usually as a product of complex modeling of flow in the mantle. Just what is dynamic topography?  Try this quiz and see what you think; answer true if you think this is an example of dynamic topography, false if not:

  1. Subsidence caused by a subducting slab in the upper mantle
  2. Subsidence caused by the remains of a slab in the lower mantle
  3. Uplift caused by a mantle plume
  4. Uplift caused by extensional thinning of mantle lithosphere
  5. Subsidence caused by extensional thinning of the crust
  6. Subsidence caused by erosion
  7. Subsidence caused by slow cooling of an oceanic plate
  8. Uplift caused by melt depletion of mantle lithosphere
  9. Uplift caused by igneous intrusion into the crust

Read More…

Greeley is NOT the new Oklahoma…

Somewhat surprisingly, the Colorado Oil and Gas Conservation Commission has told the operator of the major injection well near the epicenter of the May 31 M 3.4 earthquake to shut down after an M 2.4 aftershock (some 50 additional, smaller aftershocks have also been located by the CU team running portable seismometers near the May 31 earthquake). This is unusual: most other areas of the country with disposal wells have had to suffer much larger earthquakes before industry and its overseers would consider the possibility that the injection wells were causing earthquakes; a similar controversy in the Trinidad area of Colorado remains unresolved.  What isn’t clear yet is whether the 20-day shutdown order is long enough to really see whether the well is associated with the seismicity.

Disrupted disrupting learning?

There is plenty to complain about in college tuition; it is high and going higher (something GG has personal experience with from the savings-account-depleting side of things). There are lots of discussions out there on why this is (the Washington Post’s WonkBlog ran a 10-part series on this in late 2013, and the book Why Does College Cost So Much? (summarized here) explores the numerous factors in play); let’s not go there today.  Let’s instead consider the cure being touted most strongly, MOOCs, and how they relate to other trends in higher ed.

MOOCs (Massively Open Online Courses) are seen as a way to essentially eliminate the majority of college educators (presumably, ideally, the expensive professors) by having a few superstar professors teach nearly everybody in college, which, of course, will finally bring the cost of college education under control.  In essence, this converts education from a service-type activity into a commodity, and the cost increases of commodities are well below those of personal services.

The funny thing is that this flies right in the face of the other trend affecting higher-ed science teaching, which is the move away from the “sage on the stage” toward student-led learning (“guide on the side”). (Yes, while this has been sweeping other parts of education for a long time, it seems to be a relatively recent addition to science classrooms in higher ed.) This is tied in with somewhat older educational strategies like presenting material in multiple ways (so a visual learner will get something, an aural learner something else, etc.).  All of these would seem to require even more instructor-student interaction, not less. In fact, one criterion for evaluating colleges is average class size, so it would seem that the colleges folks will most want to attend are apt to be those where the classes are smallest, which is where the costs will be highest.

These two models are the kinds of things that make GG grumpy just in contemplation. One model, at its extreme, might reduce all education to remote lectures and some whiz-bang computer-driven evaluations, while the other would have students in essence reproduce centuries of research and learning directly.  Neither seems an efficient and reliable means of educating students.

Why has this gotten so much attention? Is this really about a better way to educate students, or more of a cost-containment issue?

In some ways, what has changed the most in all this is the responsibility for learning.  Realistically, a student dedicated to learning material can learn it from a book, from a correspondence course, from a lecture course, from a hands-on course, from a MOOC, from audiotapes, etc.  And some people do this quite successfully.  The experience of some faculty GG knows is that the students who can succeed at distance learning/web courses are the kind who will succeed in a classroom.  This is showing up in the results of MOOCs, where the tiny fraction completing courses are, ironically, overwhelmingly those who already have a degree. Thus some MOOC proponents are shifting to advocating more of an assisted learning environment, where students would view MOOC lectures and access MOOC materials as at present but would also attend sessions where they would be assisted by, um, some learning professionals.

We used to call those sessions “recitation sections.”

All this suggests that MOOCs are likely to remain a fringe element in undergraduate education; they are more likely to be highly successful in professional education.

A recent New Yorker piece showed how the love of MOOCs is tied to an intellectually questionable quest for disruption for disruption’s sake. Toss in the financial quandary that many students and families find themselves in and you can see why the pressure for something like this becomes strong.  Is it wise? Keep in mind that a BA or BS is increasingly a necessity: those not attending college fall farther and farther behind. At current cost levels, it seems that demand for college education is inelastic.

GG cannot speak for other disciplines, but earth science often requires a lot of lab work: looking at rocks, going into the field to make measurements, mapping geology, running analyses on samples. None of these will work as MOOCs, and arguably they are already highly student-centered. Getting a bachelor’s in earth science isn’t going to happen through MOOCs or other remote learning environments; even a motivated learner would have trouble trying to learn to map without a Brunton, do optical mineralogy without a petrographic microscope, or explore environmental chemistry without a lab. Even at the intro level, seeing rocks and getting hands-on help in figuring out what you are looking at, local field trips, etc., are all important. And you know what?  These are not courses you can teach with several hundred students.

Some have gone so far as to advocate that student-determined courses of study be the norm: make your own degree (see the last paragraph of the Friedman column, for instance).  No more breadth requirements! Just take those classes that really make you feel good or seem relevant! GG recalls a poll our faculty did years ago for a self-study.  Our then-current students complained bitterly about the required math, physics, and chemistry classes.

What did our alums say were the most useful classes they took? Um, the required math, physics and chemistry courses.  Can’t wait to see those self-made degree programs turning out students that employers will walk away from…

So should we be disrupting university education for MOOCs and student-led learning? Considering that the U.S. in 2008 gained over 17 billion dollars from overseas students studying here, college education would seem to be a place where the U.S. is a global leader.  Why disrupt a successful model? Sure, explore improvements (and many faculty are looking hard at where MOOCs or student-led learning make sense), but wanting to dynamite the whole thing? (Even a defense of disruption, written in response to the New Yorker piece, comes across as half-hearted: ooh, we ‘disrupted’ university IT by going to the cloud.)

So while there is certainly a problem in the cost of college education vs. median family income, the solution may not be in the classroom as much as many hope.

The Wilderness Myth

Sierra reminds us to celebrate the 50th anniversary of the Wilderness Act. While keeping some parts of America free from roads and industrial development is something to celebrate, the concept underpinning this legislation is based on a fictional narrative, which would be amusing if it weren’t clouding the vision for managing these lands. Consider the legislation itself:

“A wilderness, in contrast with those areas where man and his own works dominate the landscape, is hereby recognized as an area where the earth and its community of life are untrammeled by man, where man himself is a visitor who does not remain. An area of wilderness is further defined to mean in this chapter an area of undeveloped Federal land retaining its primeval character and influence, without permanent improvements or human habitation, which is protected and managed so as to preserve its natural conditions and which (1) generally appears to have been affected primarily by the forces of nature, with the imprint of man’s work substantially unnoticeable; (2) has outstanding opportunities for solitude or a primitive and unconfined type of recreation; (3) has at least five thousand acres of land or is of sufficient size as to make practicable its preservation and use in an unimpaired condition; and (4) may also contain ecological, geological, or other features of scientific, educational, scenic, or historical value.” Public Law 88-577, 88th Congress, S. 4, September 3, 1964

Now that part about the imprint of man’s work being unnoticeable? Arguably there is no such place, except perhaps on the very rockiest of crags, where this is true.

Consider, for starters, Yosemite Valley (yes, it isn’t Wilderness, but it stands in nicely for less well known areas that are). When first visited by European Americans, the men of the Mariposa Battalion enjoyed riding their horses through the forest without obstruction. The large meadows and frequent vistas were enjoyed by early visitors to the valley, but those characteristics faded with time.  Why? Because the Ahwahnechee had been frequently burning the undergrowth, managing the valley to encourage growth of the foods they used and to reduce the opportunity for predators to attack them. This behavior was widespread in much of the Americas.  The flora and fauna seen by the first Europeans were shaped by Native American practices, which included hunting, setting fires, and harvesting (and sowing) various plants.

But this is hardly the most significant impact of humans in North America.  Consider, for instance, the Osage orange (aka the hedge apple), native to parts of east Texas and a bit of Oklahoma. This charming fruit isn’t eaten by anything (squirrels tear it apart to get at the seeds), yet the development of such large fruit is usually associated with something that eats it.  Also, in the geologic past this was a widespread plant.  What gives?

Daniel Janzen and Paul Martin proposed in 1982 that this sort of orphaned plant was an evolutionary anachronism, a plant with a specialization lacking modern purpose (think an appendix of the plant world). Why would this happen?  In this case, the idea is that there used to be a big animal that ate the fruit and so distributed the seeds. (Arguably this extends to some animals: pronghorn antelope can run far faster than any plausible predator today; was there a predator in the past that could push them hard?). And looking back in time, we see lots of big animals that could be candidates for a consumer of Osage Oranges.  In fact, there were some 30 or more species of large animal, creatures like mammoths, mastodons, cave bears, camels, saber-toothed cats, dire wolves, etc. that wandered the face of North America. Why are they no longer here?

Almost certainly the answer is the arrival of Homo sapiens. Megafauna were wiped out in the Americas, Australia, New Zealand, and Oceania almost in lockstep with the arrival of humans.  While the extinction of these animals in North America coincided with the end of the last Ice Age, there was no equivalent extinction of small animals, nor was there a comparable extinction event at the end of the numerous previous glacial episodes. Extinctions in Australia and New Zealand didn’t coincide with rapid climatic change. So although the means by which humans extinguished these animals remains controversial (some combination of direct predation, competition, collapse of keystone species, changes in ecosystems through the use of fire, and other, more long-shot possibilities like bringing pathogens to new areas), the guilt of humanity is very hard to escape.

What this means is that nearly every ecosystem in North America was beheaded some 10,000-13,000 years ago, so they are hardly “untrammeled by man.” These ecosystems have not yet reached any kind of stable equilibrium; they all carry the “imprint of man’s work,” both the long-term loss of the megafauna and the ongoing impact of human hunting, foraging, and burning. The myth of wilderness arose in part because Europeans had chosen to denigrate the significance of Indian life, in part because many Native populations had been crushed by disease before significant European contact, and in part to make it seem that Americans were taking possession of a vacant landscape. It reached its pinnacle in the mid-20th century partly because direct experience with original Native practices was so distant from American memory (many early Americans were well aware that American Indians had a major impact on the landscape) and partly because the significance of the megafauna extinction was not yet recognized.

Why is recognizing wilderness as a myth significant? Because it influences how we manage these lands.  And make no mistake, we do manage them even when some of the management decisions are to do nothing.  If there is no wilderness in the sense of an untrammeled nature in harmony with itself, what are the goals of having Wilderness?  Are we trying to remake the pre-Columbian America? So should we encourage traditional harvesting, burning, hunting? Are we trying to restore to a pre-human environment? If so, we need to replace those lost species, as has been suggested in the Pleistocene rewilding initiative. Is Wilderness a refuge for the species that survive? We should perhaps then manage these more as wildlife sanctuaries. Is this just a playground for humans to restore themselves away from industrial life?  Then perhaps some of the cosmetic restrictions on management should be lifted. The irony is that current management is, largely, dedicated to making something that never existed before: a natural environment nearly totally beheaded, missing not only the megafauna extinguished some hundred centuries ago, but the humans that replaced them.

How to receive a deadly sentence

OK, so you’ve already learned of the single-sentence proposal killer. You can imagine the response by the scientists making the proposal: “But you didn’t understand the significance! Can’t you see that this would change our understanding of XXX?”

Well, no, GG couldn’t. And that is precisely the message that the proposers needed to hear. If they truly believe that their work was maligned improperly, they need to see why they were not understood. What may be obvious to them was not obvious to a reader.

There are two facets to reviews: one is identifying mistakes; the other is identifying poor communication. Outright mistakes are typically greeted with a subdued “thanks for catching that.” But poor communication comes through to the reader as logical deficiency or inadequate support, and the review of it can sound argumentative. So all too often scientists get a review, fulminate over how unfair it was, and scheme to get around that meddling reviewer. And truth be told, there are times when a review is unfair. But the message the recipient of the review needs to get is that when the reviewer misunderstood something, it was probably because the author didn’t communicate clearly. So the first response (well, maybe second, after throwing the computer out the window or putting a fist through a wall) should be to look inward and see what you can do better as an author; only after you are satisfied you have done all you can should you allow for the possibility that the review was lazy, misguided, or mean-spirited.

(A third facet all too common in reviews is to squabble with the authors about the interpretation of results.  This is less an issue with proposals as there are no results, but it is where the strongest vitriol usually emerges.  GG’s approach in paper reviews is to set off such disagreements in their own area with the note that if the authors find the argument GG makes convincing, they can adjust their interpretations, but the presence of these disagreements themselves in no way disqualifies the paper from being published.  A review that demands rejection of a paper solely on disagreements over interpretations–i.e., not questioning the data, analysis, or presentation of the same–is an unfair review).

Criticism is easy to make and hard to take, especially when you have put years of effort into something. We train young scientists pretty well in the skills of knifing down work we think is subpar, but we do a lot less work on training them to make constructive criticism (after all, the authors aren’t usually in the room to hear how they should do things better) and even less effort on training them to listen to criticism and improve because of it. The net effect is to both encourage rather sharp-edged criticism and a knee-jerk argumentative response. This is too bad, because well-done reviews (even if very negative) are very helpful.  We’ll chat another day about what peer review for papers really should be, what it seems to be now, and why it is worth rescuing….

The Movie That Needs to be Made

GG hears they are making a sequel to “Independence Day”.  GG would like to suggest a title and plot for another feel-good movie, one where the world unites against a common enemy that is merciless, that will take away food and water, that will even destroy major cities.  One that uses our own weaknesses against us.

Title the movie “Energy Independence Day”.

Just like in WWII, when the Germans had their backs to the wall and made synthetic fuels to replace the fuels they no longer had.  When the British had their backs against the wall but used cleverness to devise means of diverting German bombers and cracking uncrackable codes. When the Americans were searching for a way to end the war and enlisted scientists from around the world (in essence) to build the ultimate bomb. Yes, in Energy Independence Day, teams of scientists will struggle against the clock to get fusion to actually work.  Maybe Jeff Goldblum is available to see the flaws in a current tokamak reactor and, against the wishes of an evil manager, will put a virus into the controlling code to make it work…. Masses of people will shift to patriotic bicycling after an energizing speech from the President, who risks being displaced from the White House by the rising tide….Wildcat drillers will shift to using their fracking technology for good (geothermal power) instead of in the service of the evil enemy, carbon dioxide….The economy revives as people around the world are put to work to battle this untiring enemy….  And, most shocking of all, Congress finally comes together and installs massive wind turbines in Washington to finally harness the hot air so generously generated by our elected representatives.

*sigh*

GG would really like to see that movie.  Or the real-life equivalent.