Some years ago, Mt. Elbert jumped from 14,433′ to 14,440′ above sea level. Some Coloradans hope this might be enough to push it past Mt. Whitney in California, but that peak too rose up, from 14,494′ to 14,501′. The change was because of the shift from the National Geodetic Vertical Datum of 1929 (NGVD29) to the newer (and presumably more accurate) North America Vertical Datum of 1988 (NAVD88). While the change did nothing for the local relative elevations (Mt. Massive, for instance, remained 12′ lower than nearby Mt. Elbert), there were some relative changes at the scale of the country.
The datum can be thought of (and is usually stated as) where sea level would be, though technically this isn’t quite true. Thus a change in the datum came about because of a change in the underlying estimate of where sea level was. As you might well imagine, sea level is not easy to divine here in Colorado, but measurements with satellites as well as lots of surface surveys provided a lot of information.
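The bookkeeping being described can be sketched in a few lines. A GPS receiver measures height above a reference ellipsoid; the elevation people care about is the orthometric height above the datum (roughly "sea level"), and the two are related through the geoid undulation at that spot: H = h − N. The numbers below are illustrative, not surveyed values.

```python
# Orthometric height H (elevation above the datum) from ellipsoidal
# height h (what GPS measures) and geoid undulation N: H = h - N.
# Illustrative numbers only, not surveyed values.

def orthometric_height(h_ellipsoidal_m, geoid_undulation_m):
    """Orthometric height H = h - N, in meters."""
    return h_ellipsoidal_m - geoid_undulation_m

# Hypothetical Colorado summit: GPS reads 4385.0 m above the ellipsoid,
# and the geoid model puts the geoid 15.5 m below the ellipsoid (N = -15.5).
H = orthometric_height(4385.0, -15.5)
print(H)  # 4400.5
```

A change in the datum is, in effect, a change in the N being subtracted, which is why elevations move even though the mountains don't.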
However, it has turned out that there were some mistakes in NAVD88. Tide gauges on the West Coast were computed to sit as much as 1.25 m above actual sea level. NGVD29 was tied to a number of tide gauges, but NAVD88 was anchored to a single gauge in eastern Canada. The result is that the datum differs from estimates of the geoid:
So what is the solution? A new datum, of course: the North American-Pacific Geopotential Datum of 2022 (NAPGD2022). And while this is still being finalized, we have some idea of what the new datum will look like:
A sharp eye might notice that Mt. Whitney stands to decline by 0.75 m (2.5′) while Mt. Elbert may only go down 0.5 m (1.6′). So Whitney (maybe 14,499′) will still be safely above Elbert (14,439′?). But all the T-shirts and little fake geodetic monuments you can buy will change….
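The unit arithmetic behind those projected numbers can be sketched as follows; the 0.75 m and 0.5 m shifts are the rough estimates above (final NAPGD2022 values may differ), and depending on rounding Elbert lands around 14,438′–14,439′.

```python
# Convert the projected datum shifts (meters) into new elevations (feet).
# The shifts here are rough estimates from the draft datum, not final values.

FT_PER_M = 1 / 0.3048  # international foot is exactly 0.3048 m

def new_elevation_ft(old_elev_ft, projected_drop_m):
    """Subtract a projected datum shift (in m) from a NAVD88 elevation (in ft)."""
    return old_elev_ft - projected_drop_m * FT_PER_M

print(round(new_elevation_ft(14501, 0.75)))  # Mt. Whitney: 14499
print(round(new_elevation_ft(14440, 0.5)))   # Mt. Elbert: 14438
```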
At the same time, there are changes in the horizontal NAD83 datum (the new one is NATRF2022) of a meter or two; both new datums will now move with the continent as geodesy finally has to deal with plate tectonics, post-glacial rebound and other assorted changes…
P.S. this change won’t deprive Colorado of any of its 14ers…but while it would seem peak 13,001 is at risk of dropping out of the list of 13ers, that elevation was an NGVD29 elevation and so was probably more like 13,008 in NAVD88…
There is a survey circulating within AGU asking for suggestions for the most important questions or challenges facing geoscience. This is kind of a regular thing (NSF gathers meetings around similar questions), but GG wonders if this is a productive exercise.
First, most important to whom? If we are talking the public at large, then you are almost certainly talking about geohazards, from climate change to hurricanes to tsunamis to earthquakes to landslides. Better prediction or mitigation of these hazards is probably what society most wants. Close behind are some more traditional concerns like locating mineral deposits.
If these are the most important problems we should pursue, then it might well make sense to encourage scientists to focus on them. And for what it is worth, there is a lot of effort directed toward these ends.
But are these the most important questions at a more abstract level? A lot of the work on hazards isn’t addressing more general principles, it is applying specialized knowledge to particular situations. The basic physics of most landslides has been well known for a long time. The conditions that produce tsunamis are pretty well understood as well. So maybe there are things we really have no solid grasp of that might be worth getting at.
When you shift to this style of questioning, things necessarily break apart by discipline or study topic. Who is to say that determining the presence or absence of century-scale atmospheric oscillations is more or less important than resolving the physical state and composition of material near the core-mantle boundary? Is learning when the Andes rose up more important than learning when the Tibetan Plateau went up? Or the Rocky Mountains? GG is at a loss; he makes his own calls, of course, but of the numerous issues in earth science that remain unclear, how would you choose a subset that really are the “most important”? And having confronted that ambiguity, what do you gain from answering the question?
Keeping in mind that abstract or non-directed science is funded because it produces unexpected insights that can be of great but unanticipated utility, how do you pick winners? GG is of a mind that trying to get some community to settle on a set of questions is probably not the most effective way of getting really juicy new knowledge. Having everybody pile on, say, calculating dynamic topography would probably produce far more chaos than insight while starving other experiments that might be just as valuable. And yet Congress might bridle at giving out money without some kind of master goal (perhaps this is why NASA has been rather successful in its probe initiatives: saying we are going to look for life on Mars or on Europa or Titan sounds sexy even if the probes also get to do a lot of other, less sexy, things).
If we sidestep Congress wanting some clear mileposts, what might be the most effective way to get somewhere? Probably a good way is in fact how many NSF programs work at present: on a case-by-case basis, proposal by proposal. If some proposal comes in that has nothing to do with the community’s wish list of problems but is well thought out and makes a good case that its problem is significant, why should it be rejected in favor of some crank-turning me-too middling thing that is pointed at that wish list? GG would say it shouldn’t. Committees are notorious for compromised and pasteurized repackaging of some advocates’ favorites (the old saw of a camel being a horse made by a committee comes to mind). So maybe we should bypass the group-think in making target lists and just try to follow the problems that really engage us. Some of us will choose well, which is the best we can hope for.
So, for instance, GG phrases his interests in the western U.S. as stating that this orogen is the largest non-collisional orogen on Earth. It is arguably the most poorly understood feature of its size. Does this make studies of this more important than, say, untangling the slip history of major faults in Southern California? Not necessarily–but it is better than saying that this research addresses point 1(b) section 4 of some summary document.
On the grand scheme of things, this is way down the list, but as this blog is from time to time an outlet for a grumpy geophysicist, off we go!
Here in Boulder we have three main classes of travelers: those in automobiles, those on bicycles, and those on foot. (We will for the moment omit the other, growing classes of motorized skateboards, scooters, and e-bikes, as well as the rather small motorcycle population.) GG is in each group most weeks outside the snowy months and so gets to observe the behaviors of all three from multiple perspectives.
Now Boulder strives to be bike-friendly, building multipurpose paths, striping bike lanes on many streets, and occasionally putting up barriers between bike lanes and traffic. The city has signed on to Vision Zero, a program to eliminate traffic fatalities. Now overall most traffic fatalities are car crashes, but in Boulder, according to a city draft study, it is a near tie between those in cars seriously injured (65 from 2015-2017) and those on bicycles seriously injured (61 in the same time period). When you figure that the number of miles driven is probably a healthy factor of 10 or more greater than miles biked, it is clear that bicyclists are more at risk. And so it is hardly a surprise that while bikes are involved in 6% of the crashes in Boulder, they are involved in 39% of the ones producing serious injury or death. For this reason, the city is focused on getting drivers to behave better.
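The back-of-the-envelope risk comparison above can be made explicit; note that the factor-of-10 mileage ratio is this post's guess, not a measured figure.

```python
# Back-of-the-envelope per-mile risk from the city study's numbers.
# The factor-of-10 mileage ratio is an assumption, not a measured value.

car_serious = 65     # serious injuries to car occupants, 2015-2017
bike_serious = 61    # serious injuries to cyclists, same period
mileage_ratio = 10   # assumed ratio of car miles to bike miles

# Serious injuries per (arbitrary) unit of travel, cyclist relative to driver:
relative_risk = bike_serious / (car_serious / mileage_ratio)
print(round(relative_risk, 1))  # ~9.4x the per-mile risk, under these assumptions
```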
Too bad they have their eyes on the wrong group.
Recently an interview with NASA chief scientist Jim Green by The Sunday Telegraph led to a number of stories with titles like “the world may not be ready for the discovery” or “world is ‘not prepared’” or “Humans aren’t ready to accept there’s life on Mars”.
Um, really? Exactly what preparation do these folks think we need? I mean, will there be panic in the streets? “OMG, there are MICROBES in ROCKS on another planet MILLIONS of miles away! Let’s riot!” Do we need to take remedial biology classes? Will the Pope abandon Christianity? Is it time to upgrade our planetary defense systems? Should Trump’s Space Force be put on high alert? What exactly does Dr. Green fear?
(Frankly, GG is not remotely as optimistic as Dr. Green; to say “where there is water there is life” is not even accurate on earth–there is water in magma, gang, and not much in the way of life in that molten rock–and previously optimistic outlooks such as accompanied the original Viking lander proved to be misplaced. But whatever, could happen).
No, not the apocalypse or even the robot apocalypse but the end of days because…we learn something?
Probably one of the more bizarre op-eds to hit the New York Times was by philosophy professor Preston Greene, who warns us not to try a test to see if our reality is really just a computer simulation. His logic is that the simulation ceases to be useful once it realizes it is a simulation. Who knew philosophers were into comedy?
The basis for tests like the one disturbing Prof. Greene is that you can’t simulate the whole universe without…a spare universe. A rather daunting task. So the current logic is that any simulation would have approximations for more distant venues (and that Earth is the actual focus of study), and that somewhere those approximations would become apparent.
It doesn’t take much thought to question the internal logic and then the external logic here. The internal logic says that a self-realizing simulation would be terminated as it wouldn’t be useful. That all depends on what the point of the simulation would be. If it is to understand how civilizations react to learning they are simulations, then there is no risk. Or if it is to study anthropogenic climate change, it might not matter either. And in practice, it isn’t clear that being in a simulation really would change how you go about your life, so would such recognition matter? After all, people have lived their lives thinking that their path was ordained by God–how is this really different?
Frankly, if it were that important that the simulation remain unaware, you’d probably stick in some routine to warn of an impending test that might show this was a simulation, so you could fudge the results of the test.
On the external side, you learn that philosophers aren’t very familiar with running simulations. For a real study, you want to make simulations that are focused on something you are interested in studying; ideally you leave out all the other junk that doesn’t matter, even if you could put it in. Does the rise and fall of civilization really depend on getting the ratio of lutetium to hafnium right in zircon crystals that were eroded and redeposited several times over the past 3 billion years? That Pluto has a geologic history? That the Earth’s magnetic field reverses from time to time? To make the Earth that geologists see, you’d have to simulate the whole history of the planet down to the isotopic content of individual mineral grains over about 4 billion years. Why would you bother? There is an amazing amount of information in some of the most prosaic of materials that actually makes sense. Why go to the trouble of running a simulation for billions of years when you could just say “let’s start with these isotopic ratios everywhere at 5000 BC”?–particularly when you want to run “very many” simulations, which could be rather time consuming.
So any kind of intellectual investigation seems unlikely (too much irrelevant detail). Who would make such a total and complete simulation? Frankly, it would be some hobbyist who wanted to make a perfect simulation–probably in competition with another hobbyist.
It is hard to guess at the motivations of some quasi-descendants who could master the resources of a solar system just to make a simulation–maybe outright boredom. But the logic that there would be lots of these simulations, and therefore that the odds are low of any one individual being “real” instead of simulated, sits on a raft of pretty questionable assumptions.
One of the very first and most primary missions of the original U.S. Geological Survey and the preceding surveys was the creation of topographic maps. Many of the goals of the survey relied on having accurate base maps to work from. And so the Survey kept refining their skills, both by adopting newer techniques (e.g., moving from plane table and alidade to air photo stereogrammetry) and making larger scale maps (moving from 1:125,000 scale maps to 1:24,000 scale–the 7.5′ maps many outdoor users were familiar with).
Until around 2000, that is. About that time, the Survey decided that rapid updating of maps was more important than accuracy. This required either a lot more manpower or a shift to purely automated map making. With the proliferation of electronic means of using maps, sticking with printing vast volumes of paper maps also undercut the business model for map making. So instead the Survey began a program producing new maps called “US Topo” maps. There have been a lot of complaints about these maps, from missing mine openings to absent peak elevations, but we’ll want to look at the most central part of “Topo”: the topography. As we’ll see, the name might be unfortunate, as the topography may not be as useful as in the past.
Consider these parts of topo maps where GG was recently hiking:
On the left/above is the old map, with the new map on the right/below (in both cases this merges two maps; you might notice that the longitude marks have moved a bit; bigger versions that don’t sit side by side are at the bottom of this post). There are boatloads of differences. The steep cliffs and multiple ridge lines and gullies on the north side of Terra Tomah Mountain are richly imaged on the left but smoothed into oblivion on the right. But there are some more subtle differences that can make a big difference on the ground. Look north of Rock Lake at the 10,600′ contour. On the old map, the 10,600′ and 10,640′ contours are very close together, much closer than those above or below, indicating a short steep rise. Which GG can verify is there, having walked over it yesterday. On the new map everything from 10,520′ to 10,680′ has the same slope. This can be a critical piece of information when navigating in a forest. A less subtle piece of topography is the 10,400′ contour west of Rock Lake, where three dimples in the old map show three narrow gullies coming down the slope, two of which are actually likely candidates for a means to ascend the slope in question. These are absent from the US Topo map.
Well, it is summer time and so a good chance to look at some geology while vacationing. GG has a few iOS apps he likes to use while bouncing about and thought you might want to look into them. (As GG doesn’t use Android, it is harder to make suggestions, but some of these are on both platforms.) These are not the newest things out there, but sturdy applications that will work as needed. One of GG’s rules is that these apps have to be useful away from cell coverage.
Topo Maps (Philip Endecott) and Topo Maps for iPad. As a practical matter, this is GG’s go-to for topographic maps. The design is elegant: everything is nearby but can be hidden if you don’t want it. You can download maps in a simple step (that initial step may not be quite your initial guess, but once you do it, all is obvious). Can import and export waypoints (a neat trick is to make the iPhone a web server to move stuff around); mosaics adjacent quad maps pretty effectively. Knows about different datums and formats (DMS, decimal degrees, UTM, etc.). Can estimate what parts of the topo map are visible from a specified position (not perfect, as the underlying DEM is a bit coarse, but highly useful). While some might lament the continued use of the final generation of USGS paper maps (aka surveyed maps), the greater detail outside of urban areas (e.g., showing adits, shafts, buildings) and style of contouring are more helpful than the new computer-generated maps.
Maplets (Zaia Design). Two main benefits to this. One is that many handout-type maps have been scanned and are readily available. This includes various bike maps, trail maps, bus or subway maps, park brochures, etc. Having these on your phone can save you from unfolding some leaflet in a breeze, and the maps that are real maps (i.e., not artists views or diagrammatic maps) are likely GPS enabled so you can see where you are on the map. The second major benefit is that you can add your own map. GG has, for instance, added a lot of geologic maps in the southwestern U.S. to the Maplets library (when you do a search, try entering “Geol” in the filter edit field). Your map needs to be in a Mercator projection to be properly georeferenced (app uses the Google Maps modified Mercator projection), so not a trivial task, but for small areas or images you can manipulate, what a deal.
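For anyone wanting to prepare a map for that second use, the standard forward formulas for the spherical "Google Maps" Mercator projection (EPSG:3857, which is what the modified Mercator requirement refers to) can be sketched as follows; this is the projection math only, not the app's actual import pipeline.

```python
import math

# Forward spherical (Web) Mercator projection, EPSG:3857, the
# "Google Maps modified Mercator" form a map image needs to be in
# for georeferencing. Projection math only; illustrative sketch.

R = 6378137.0  # sphere radius used by Web Mercator, in meters

def web_mercator(lon_deg, lat_deg):
    """Project WGS84 longitude/latitude (degrees) to Web Mercator x, y (meters)."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Roughly Boulder, CO (40.015 N, 105.27 W):
x, y = web_mercator(-105.27, 40.015)
```

The practical upshot is that north-south distances stretch with latitude, so a scanned map in some other projection has to be warped into this one before its corners can be pinned to coordinates.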
AllTrails (AllTrails LLC) is a bit of a strange hybrid. In the pro version (which requires an annual fee) you can download trail maps and info to your phone and it can help you navigate while in the field. Trail reviews can be helpful, though occasionally it is frustrating to find the trail that you wanted to hike (there is a bias towards loop trips, so if a trail can be included as a loop it is apt to show up that way, in which case the trail name might not show up in a search). Because much of the database is from user contributions, trails can be more up-to-date than old topo maps.
MotionX-GPS (Fullpower Technologies). This is what GG uses when mapping out a trail or just wanting to know how far he has gone. The original release of this app was a real battery killer, but they’ve tamed the beast and GG has charted 20 mile hikes without killing his battery. Of the many map choices, the one based on the OpenStreetMap database (MotionX Terrain) is, in GG’s estimation, the most helpful, as the OpenStreetMap database is constantly updated (MotionX’s maps are a bit slower to catch the latest updates); note, however, that the topographic map base leaves something to be desired. It is also fairly straightforward to download a track file and put it into the OpenStreetMap editor, should you be a creator as well as a consumer of information. Although the interface occasionally puzzles (e.g., the “Download” button at the top of a screen when downloading maps for offline use), it is pretty usable after a short period of familiarization. Downloading maps is mildly annoying, in part because of the range of options; GG suggests first looking at the map of the area of interest while online, zooming in as far as you think is useful, and then noting the zoom number in the lower right; you won’t want to waste space on the higher-resolution versions of the images that are offered.
FlyoverCountry (Flyover Country Inc) is at its best when you have a window seat on a plane and have a view. In airplane mode, mobile phones now don’t disable their GPS chip, so if enough signal makes it to the device, it can show where you are. You can use satellite images, street maps, or terrain as the base map. Geologic overlays and point info can be downloaded prior to a flight or drive. Frankly, this is still awfully clunky: the geologic map units vary from too generic to be of help to too specific (e.g., in Alaska it would be helpful to know the terrane the rock unit belonged to). Sometimes it locks up or crashes (especially, it seems, when it first tries to open a map and you overwhelm it with commands). But it is pretty comprehensive, and it can be helpful even just to be able to see your position on a map while in flight. So it is worth a few minutes of your time to download a path and get it into your iPad or phone (tablets are better for this app).
Avenza Maps (Avenza Systems). Frankly, GG is less enamored of this than the other mapping apps above. First, most of the maps are for sale. Second, a lot of them are *huge* (one topo map shows as 208 MB). The one advantage of these maps is that they are PDFs, so if the original material was electronic, you can zoom in without pixelization. Several government organizations provide their maps this way, so for some of these (e.g., some US Forest Service maps), this might be the only way to get a map on your phone. The app itself is free, as are some maps, but expect to pay for the more unique maps.
LapseIt (Interactive Universe). OK, there are plenty of time lapse photography options out there, and maybe some are more feature rich, but GG has used this one to do a couple of time lapses of the solar eclipse in 2017. Can use auto aperture or fixed, choice of lapse rate pretty flexible. But feel free to look at the competition.
StraboSpot. OK, this one GG hasn’t played with, but this is looking to maybe realize the “iPad as mapboard” dream that many of us had once we saw a tablet computer. The software is still very much in development, and online datasets are sparse, but this is designed to be a data collection tool. It will be interesting to see how this evolves….
Hope that helps some of you to find your way out away from town. If you have your own excellent app suggestions, please list them in the comments.