There is a survey circulating within AGU asking for suggestions for the most important questions or challenges facing geoscience. This is something of a regular exercise (NSF convenes meetings around similar questions), but GG wonders if it is a productive one.
First, most important to whom? If we are talking about the public at large, then you are almost certainly talking about geohazards, from climate change to hurricanes to tsunamis to earthquakes to landslides. Better predicting or mitigating these hazards is probably what society most wants. Close behind are some more traditional concerns like locating mineral deposits.
If these are the class of most important problems we should pursue, then it might well make sense to encourage scientists to focus on these. And for what it is worth, there is a lot of effort directed toward these ends.
But are these the most important questions at a more abstract level? A lot of the work on hazards isn’t addressing more general principles; it is applying specialized knowledge to particular situations. The basic physics of most landslides has been well known for a long time. The conditions that produce tsunamis are pretty well understood as well. So maybe there are things we really have no solid grasp of that might be worth getting at.
When you shift to this style of questioning, things necessarily break apart by discipline or study topic. Who is to say that determining the presence or absence of century-scale atmospheric oscillations is more or less important than resolving the physical state and composition of material near the core-mantle boundary? Is learning when the Andes rose up more important than learning when the Tibetan Plateau did? Or the Rocky Mountains? GG is at a loss; he makes his own calls, of course, but of the numerous issues in earth science that remain unclear, how would you choose a subset that really are the “most important”? And having confronted that ambiguity, what do you gain from answering the question?
Keeping in mind that abstract or non-directed science is funded because it produces unexpected insights that can be of great but unanticipated utility, how do you pick winners? GG is of a mind that trying to get some community to settle on a set of questions is probably not the most effective way of getting really juicy new knowledge. Having everybody pile on, say, calculating dynamic topography would probably produce far more chaos than insight while starving other experiments that might be just as valuable. And yet Congress might bridle at giving out money without some kind of master goal (perhaps this is why NASA has been rather successful in its probe initiatives: saying we are going to look for life on Mars or on Europa or Titan sounds sexy even if the probes also get to do a lot of other, less sexy, things).
If we sidestep Congress wanting some clear mileposts, what might be the most effective way to get somewhere? Probably a good way is in fact how many NSF programs work at present: on a case-by-case basis, proposal by proposal. If some proposal comes in that has nothing to do with the community’s wish list of problems but is well thought out and makes a good case that its problem is significant, why should it be rejected in favor of some crank-turning me-too middling thing that is pointed at that wish list? GG would say it shouldn’t. Committees are notorious for compromised and pasteurized repackaging of some advocates’ favorites (the old saw of a camel being a horse made by a committee comes to mind). So maybe we should bypass the group-think in making target lists and just try to follow the problems that really engage us. Some of us will choose well, which is the best we can hope for.
So, for instance, GG phrases his interests in the western U.S. as stating that this orogen is the largest non-collisional orogen on Earth. It is arguably the most poorly understood feature of its size. Does this make studies of this more important than, say, untangling the slip history of major faults in Southern California? Not necessarily–but it is better than saying that this research addresses point 1(b) section 4 of some summary document.
So the poll has been out for a while, and most folks review papers because the paper is in their specialty, they have the time, and the material looks interesting; other reasons rank much farther down the list. The least likely reasons to choose to review are that money is involved or that the journal is a non-profit. So if you want to know why Elsevier continues to rack up monster earnings, this would be part of the reason.
As one commenter pointed out, this may not get at the reasons people turn down a review request, though the likely ones seem to be “no time,” “this isn’t my specialty,” and (though probably not said out loud as often) “this looks pretty dull.” Probably not as visible in the poll’s list would be a conflict with the authors or with some aspect of the paper.
As it took GG 14 potential reviewers approached to get 2 reviews for one paper, he would like to know how we might increase the fraction of people who accept a review. Editors are already trying to ask people with the right expertise who are likely interested in the content; is there some way to help them make time? It seems hard to know, short of encouraging people to forget their personal lives…
The fourth-place reason to review suggests one possible pressure point: a fair number of folks choose to review for the journals they publish in. Maybe there is some penalty or reward that could be put in place to encourage some of these authors to find the time to review. Many journals already have awards for excellence in reviewing, so that carrot seems to have limited effect. There have been efforts to recognize reviewers in other ways: some journals publish a list of responsive or frequent reviewers, and there has been some thought of returning Geology to an old practice of including a sentence or two from a review as part of the published paper. As monetary credit seemed to carry little weight in the decision to review, maybe the reward would be access to an expedited review process…or a slow track for those who consistently find time to submit papers but somehow never have time to review.
Is this simply more evidence that the review system’s antiquated view of what reviewers should do is approaching utter collapse? GG isn’t sure yet but isn’t very optimistic….
Geophysical inverse problems have an interesting set of difficulties. First, they routinely suffer from mixed determination: some parameters are overdetermined by the available data, others underdetermined. Second, they are usually highly non-linear, meaning it is easy to get trapped somewhere in model space you don’t want to be. The combination presents some problems that aren’t as trivially solved as often assumed, in particular the difficulties posed by the use of damping and smoothing constraints in inversions.
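To make the damping issue concrete, here is a minimal sketch (not from the post; the two-parameter forward operator and data are invented for illustration) of a mixed-determined problem solved with Tikhonov-style damping. One parameter is well resolved by the data, the other barely at all, and the damping needed to stabilize the poorly resolved parameter also biases the well-resolved one:

```python
def damped_lsq_2x2(G, d, eps):
    """Solve min ||G m - d||^2 + eps^2 ||m||^2 for a 2x2 G by forming the
    normal equations (G^T G + eps^2 I) m = G^T d and using Cramer's rule."""
    a = G[0][0] ** 2 + G[1][0] ** 2 + eps ** 2
    b = G[0][0] * G[0][1] + G[1][0] * G[1][1]
    c = G[0][1] ** 2 + G[1][1] ** 2 + eps ** 2
    r0 = G[0][0] * d[0] + G[1][0] * d[1]
    r1 = G[0][1] * d[0] + G[1][1] * d[1]
    det = a * c - b * b
    return ((r0 * c - b * r1) / det, (a * r1 - b * r0) / det)

# Hypothetical mixed-determined setup: the second parameter's column of G
# is nearly zero, so the data barely constrain it.
G = [[1.0, 0.01],
     [1.0, 0.02]]
d = [2.0, 2.0]

m_light = damped_lsq_2x2(G, d, 1e-6)  # essentially the exact answer, (2, 0)
m_heavy = damped_lsq_2x2(G, d, 1.0)   # stabilized, but the first parameter shrinks
print(m_light, m_heavy)
```

With heavy damping the well-resolved parameter is dragged from 2 toward about 1.3 even though the fit to the data barely changes, which is the kind of non-obvious trade-off the paragraph above alludes to.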
GG is a bit frustrated having now asked 10 scientists to review a paper. At the moment there are 2 recently contacted with no answer, 5 outright “no”s and 3 no reply after a long time.
So this prompts GG to ask the question, how do you decide when to review a paper? Give your best answers below and maybe GG will figure out what he is doing wrong….
“We’d like to think we know about all of the faults of that size and their prehistory, but here we missed it,” Dr. [Ross] Stein said.
“The geologists in this area are the very best — people aren’t asleep at the wheel,” he said. “But there are real opportunities for young scientists to come in and learn how to do this better.”–New York Times story on Ridgecrest earthquake
We missed it? As one who has worked in this area, GG didn’t feel that way, though he was never asked beforehand if a M7 was possible there. There were mapped scarps in very young alluvium along a pretty well established seismic lineament. That this could be one connected fault seemed pretty darn obvious, but close study was always a challenge due to the presence of the China Lake Naval Weapons Center. It even had a name–the Airport Lake fault zone. And frankly, there are many others like this kicking around in the west.
There is in point of fact a very long list of geoscientists “missing it” out there, including most prominently these:
- When GG was an undergraduate he was taught that all earthquakes in California with a magnitude above about 6 would produce ground rupture. This was then followed in short order by the Coalinga earthquake (1983, M6.7), the Whittier Narrows earthquake (1987, M5.9), the Loma Prieta earthquake (1989, M6.9), and the Northridge earthquake (1994, M6.7), none of which produced the kind of dramatic surface rupture expected. (While there was some surface deformation in Loma Prieta, it isn’t clear that any of it was from the main fault). Frankly, the peculiar relation between the surface rupture and fault rupture of the 1952 Kern County (Arvin-Tehachapi) earthquake should have been a hint that surface rupture wasn’t a given.
- Seismic hazard assessments assumed that the biggest earthquake you could get associated with slip on a fault was related to the length of that fault. Then we got the Landers (1992 M7.3) earthquake, which ruptured several unconnected but similar faults. This should have been seen coming, though, as the Dixie Valley/Fairview Peak earthquakes in 1954 demonstrated much the same kind of behavior. A related misjudgment was that big faults were segmented and thus there was a maximum earthquake that could be inferred from past ruptures. Tohoku (M9.1, 2011) underscored that as a bad interpretation.
- Seismologists often would say that earthquakes don’t trigger distant earthquakes because the static stress changes don’t reach that far. The Landers event triggered seismicity as much as 1250 km away, apparently mainly from the dynamic stresses associated with that event’s surface waves. This has now been observed in other large events. There are suggestions that other stress transfer mechanisms might be out there that led, for instance, to the Little Skull Mountain earthquake and the much later Hector Mine (M7.1) earthquake after Landers.
- Not as clearly stated but clearly in the mindset of seismologists was that big earthquakes are of one dominant motion. So while Landers was on several faults, they were all pretty much strike-slip faults and the feeling was they were connected at depth. But we then got the Kaikoura earthquake (M7.8, 2016) (among others), which spectacularly lit up a large number of individual faults with wildly different styles of slip. Frankly, the Big Bear earthquake (M6.3) that shortly followed Landers but was on a totally separate fault with a very different orientation should have hinted that very complex earthquakes were possible.
So frankly, that a seismic zone with scattered scarps preserved in an alluviating environment turned out to mark a through-going fault is hardly a shock. GG thinks a better interview target would have been Egill Hauksson, who has studied the seismicity of the Coso region in particular (something Ross Stein had not prior to this event), to see if he felt that this was “missed.”
Given all this, what are some of the under-appreciated hazards out there? After all, the Big One is supposed to be a rerun of the 1857 Ft. Tejon earthquake. GG thinks worse could be out there. You want a really big one? What if the Malibu Coast, Hollywood Hills, Raymond Hills and Sierra Madre faults all went as one event? They are all doing the same sort of thing, but hazard mappers consider each to be independent. And while that is probably true for the average surface-rupturing earthquake (as, for instance, 1971 San Fernando was separate from the kinematically similar and adjacent Northridge earthquake), that is no guarantee. Maybe you wouldn’t exceed M8, but a rupture like that would pound LA like nothing else. Or maybe multiple segments of the Wasatch Fault go as one (though frankly even the one segment in Salt Lake City would be devastating). There is no end of partially buried, poorly studied structures across the whole of the Basin and Range. Lots of stuff could be hiding in the forests of the Cascades as well.
Basically, when we look as geologists at the Earth, we are seeing only the top surface of a deforming medium. That top surface is constantly being modified by other processes (mainly erosion, deposition and urbanization). Toss in that major earthquake faults are not razor-sharp planes penetrating the earth but complex creations of networks of smaller faults that have coalesced in some manner, and you expect it to be hard to pick out all the big faults. Even with subsurface information (which is often quite deficient in these areas), faults can hide. Go farther east and it gets even hazier as recurrence times get really long and hints of past activity hide from view. Frankly, there are probably some truly great misses out there; Ridgecrest really isn’t that far off the mark from what we might have expected.
(Minor update, GG added his own graphics to the end)
Another solstice, another reminder of the screwy ways that sunrise and sunset don’t align nicely with the solstice. GG has written on this a few times before, but is guilty of being lazy. A post over at Category Six (Weather Underground) examines the same misalignment but points to another web page where an effect GG overlooked–the variation in the rate of change of solar noon due simply to the Earth’s tilt–is brought in as significant. GG’s posts have focused on how the solar day varies because of the ellipticity of the Earth’s orbit, but these posts argue that the length of a solar day (high noon to high noon) is longer at both solstices, not just the December one.
So a quick check is to look at the changes in solar noon near each solstice; if it is just ellipticity of orbit, we’d expect long solar days near one solstice and shorter solar days near the other. Here in Boulder (using timeanddate.com), solar noon on June 1 is 12:58 pm (20 days before the solstice), on June 21 (solstice) it is 1:02 pm, and on July 11 it is 1:06 pm–so over those 40 days, the solar day was 8/40ths of a minute (or 12 seconds) longer than 24 hours, and it is quite symmetrical about the solstice. On Dec 1 it will be 11:50 am, on Dec 21 (solstice) it is 11:59 am, and on January 12 it will be 12:08 pm, so over the roughly forty days at the winter solstice solar noon drifted later by 18 minutes, meaning each solar day was 18/40 of a minute longer than 24 hours, or 27 seconds longer each day.
We can do the same calculation for equinoxes. Around the spring equinox it is 10 minutes shorter than 24 hours for the 40 days centered on the equinox, and in the fall it is 14 minutes shorter, so on average the solar day is short by 12 minutes/40 days or 18 seconds/day.
So we see that ellipticity accounts for the 15 seconds/day difference between the solstices (it is a bit larger if you compare perihelion and aphelion), but the roughly 38 seconds/day difference in solar-day length between solstices and equinoxes arises simply because the length of the solar day varies through the year even for a circular orbit, as discussed in the posts above.
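The arithmetic above is easy to check in a few lines; here is a quick sketch using the timeanddate.com solar-noon values quoted above, with times expressed as minutes relative to clock noon:

```python
def excess_seconds_per_day(noon_start_min, noon_end_min, n_days):
    """Average excess of the solar day over 24 h, in seconds per day,
    from the drift in solar noon over n_days."""
    return (noon_end_min - noon_start_min) * 60.0 / n_days

# Values quoted in the post for Boulder (40-day windows around each solstice)
june_solstice = excess_seconds_per_day(58, 66, 40)   # 12:58 pm -> 1:06 pm
dec_solstice = excess_seconds_per_day(-10, 8, 40)    # 11:50 am -> 12:08 pm

print(june_solstice)                  # 12.0 s/day longer than 24 h
print(dec_solstice)                   # 27.0 s/day longer than 24 h
print(dec_solstice - june_solstice)   # 15.0 s/day: the ellipticity effect
```

The equinox numbers work the same way (10 and 14 minutes short over 40 days give 15 and 21 seconds/day, averaging the 18 seconds/day quoted above).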
Earliest sunrise is 7 days before the summer solstice here in Boulder, but the earliest sunset comes 14 days before the winter solstice–that difference is due to ellipticity, but the average offset of about 10 days is due to the tilt.
So GG’s explanation was mostly touting the wrong feature. Oops. Sorry. Shows how attractive a nice (but wrong) story can be.
It took GG a bit to really get his head around this, so here is an alternate to the postings linked above. Imagine that the Earth is fixed with respect to the stars. If it went around the Sun in a circular orbit, then the Sun would move 1/365th of the way around the Earth each day. For simplicity, let’s pretend the year is 360 days long. So in one day, the Sun moves 1° along the circle describing the points directly under the Sun. In ten days, 10°. If the Earth’s pole pointed normal to its orbit, all would be well, because the circle of points below the Sun would be the equator: 10° along the circle is 10° of longitude everywhere.
But the Earth is tilted, so we might look at something like this if we look down from directly above the ecliptic (i.e., normal to the plane of Earth’s orbit):
The outer ring of spokes is where the Sun is every 10 days of our slightly fake 360 day year (assuming a circular orbit). The outer circle of the globe is again where the Sun is overhead–but now it is well south of the equator at the December solstice and well above it at the June solstice. But where the lines of longitude hit they no longer match up with the spokes. If you look closely at the 10 days from Dec 21 to Dec 31, you see the Sun moved over more than 10° of longitude:
This is saying that 10 solar days were longer than 10 “rotation” days. And if we look at the September equinox, we see the opposite trend: 10 solar days cover a smaller number of degrees of longitude (it is a bit harder as the lines of longitude are angled here):
Put another way, each solar day the subsolar point moves about 111 km along its circle (1/360th of Earth’s circumference). When the circle is near the tropics, though, one degree of longitude is about 111 km x cos(23°) = 102 km. The Sun, moving due west, overshoots its target and is shining down on a point about 9 km west of where it should be. Now the Earth rotates through 15° of longitude, or about 1665 km, per hour at the equator, so 9 km is 9/1665 x 60 minutes x 60 seconds, or about 19 seconds too far–which is exactly what we saw above from the sun tables.*
At the equinoxes, the path of the Sun overhead is angled about 23° from east-west as it crosses the equator. So the Sun moves to the west only 111 km x cos(23°) = 102 km, but at the equator one degree of longitude is 111 km, so the subsolar point is now 9 km east of where it belongs, and the solar day should be about 19 seconds shorter than the rotation day.
It is really simple once you get it, and GG is embarrassed that he overlooked this before and didn’t check the numbers the way he has done it above. Well, better late than never.
* OK, that was kind of an accident where approximations canceled. On an Earth with 365.25 days in a year, the Sun would move 40,070 / 365.25 = 109.7 km each day. At 23°N or S, 1/365.25th of the way around a line of latitude is 101.0 km, so the Sun overshoots by 8.7 km. At 23°N or S, the Earth’s surface moves at a rate of 1515 km/hr, so 8.7 km corresponds to 20.7 s, which would be the maximum movement of the Sun’s subsolar point relative to where it should be. There’s probably some care that should be taken with sidereal rotation rates and whatnot, but we’re pretty much there given the accuracy of the tables used above–they only give solar noon to within a minute.
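The footnote’s numbers can be reproduced directly. This sketch uses the same round values (a 40,070 km circumference, a 365.25-day year, the overhead circle at 23° latitude at solstice) and computes the rotation speed from the circumference rather than adopting the footnote’s 1515 km/hr figure, so the final lag differs from 20.7 s by a fraction of a second:

```python
import math

C = 40070.0      # circumference of the subsolar great circle, km (footnote's value)
YEAR = 365.25    # days in a year
LAT = math.radians(23.0)

sun_step = C / YEAR                   # daily westward step of the Sun, ~109.7 km
lat_step = C * math.cos(LAT) / YEAR   # 1/365.25 of the 23-degree latitude circle, ~101.0 km
overshoot = sun_step - lat_step       # ~8.7 km past where the subsolar point "should" be

speed_23 = C * math.cos(LAT) / 24.0   # surface rotation speed at 23 degrees, km/hr
lag_seconds = overshoot / speed_23 * 3600.0
print(round(lag_seconds, 1))          # ~20 s, consistent with the footnote
```

Given that the sun tables only report solar noon to the minute, agreement at the level of a second or two is as good as this back-of-the-envelope approach can claim.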
GG is hunting around for some information related to the little trainwreck series of posts, and has noticed some issues that bear on the broader business of (upbeat music cue here) Big Data.
Now Big Data comes in lots of flavors. Two leap to mind: satellite imagery and national health records. Much satellite imagery is collected regardless of immediate interest; it is then in the interest of the folks owning it that people find the parts relevant to themselves. So Digital Globe, for instance, would very much like to sell its suite of images of, say, croplands to folks who trade in commodity futures. NASA would very much like to have people write their Congressional representatives about how Landsat imagery allowed them to build a business. So these organizations will invest in the metadata needed to find the useful stuff. And since there is a *lot* of useful stuff, it falls into the category of Big Data.
Health data is a bit different and far enough from GG’s specializations that the gory details are only faintly visible. There is raw mortality and morbidity information that governments collect, and there are some large and broad ongoing survey studies like the Nurses’ Health Study that collect a lot of data without a really specific goal. Marry this with data collected on the environment, say pollution measurements made by EPA, and you have the basis for most epidemiological studies. This kind of cross-datatype style of data mining is also using a form of Big Data.
The funny thing in a way is that the earth sciences also collect big datasets, but their peculiarities show where cracks exist in the lands of Big Data. Let’s start with arguably the most successful of the big datasets, the collection of seismograms from all around the world. This started with the World-Wide Standardized Seismograph Network (WWSSN) in the 1960s. Although created to help monitor for nuclear tests, the data were available to the research community, albeit in awkward photographic form and as catalogs of earthquake locations. As instrumentation transitioned to digital formats, this was brought together into the Global Seismographic Network, archived by IRIS.
So far, so NASA-like. But there is an interesting sidelight to this: not only does the IRIS Data Management Center collect and provide all this standard data from permanent stations, it also archives temporary experiments. Now one prominent such experiment (EarthScope’s USArray) was also pretty standard in that it was an institutionally run set of instruments with no specific goal, but nearly all the rest were investigator-driven experiments. And this is where things get interesting.