
Masquerade Funhouse?

A thread on an AGU mail group has lately gone back and forth on whether peer review of proposals by U.S. federal agencies is fair. Some have asserted that retribution exists in the system, but many of those who have participated have argued it is about as fair as any other activity involving humans, downplaying the possibility of massive collusion to punish an individual. It would not surprise GG if on a few occasions some kind of retribution tipped the scales against a proposal, but it is far more likely in most cases that a combination of other factors doomed a proposal. What emerged in this thread was an interesting thought, namely the reemergence of the idea of a double-blind (or at least single-blind) review system.

One fundamental premise, as noted by one writer, is “past performance is no indication of future success.” Basically, somebody who has generated something good might well lay an egg, while somebody whose last project failed could be on to something good.  There are two issues here GG would like to contemplate: what does it mean to “succeed” and “fail,” and what components of an individual’s scientific reputation might be relevant.

First, failure is always possible.  In trying to gain knowledge previously inaccessible to humanity, a scientist is venturing into the unknown. Things not going as planned is not particularly unusual. But what does it mean to fail? Read More…

Groupscience

GG was a bit disturbed in reading an Atlantic story describing how PLOS Biology and a few other journals were relaxing the rules on publishing papers that were scooped by rivals publishing the same results earlier (once again, pointed out by Retraction Watch). Responses from scientists as quoted in the article included “This changes EVERYTHING” and “This will change the career trajectory for many a disillusioned scientist.”

Really? What is going on?

It isn’t that allowing parallel studies to reach publication is bad; what bothered GG was the perception that this rights an awful wrong that destroys many careers–that this is a BIG DEAL. From the way this is covered, you get the feeling that a lot of scientists are doing the exact same experiments with the exact same goals, but only the first to the finish line gets a publication. Now everybody gets a participation ribbon, right? If this truly reflects the state of biology these days, GG wonders if we have too many biologists. When considered in tandem with the “replication crisis” that also seems most focused in the biosciences, along with the substantial number of retractions and fraud in that field, it feels like we are staring at an entire discipline in crisis, but not for the reasons its practitioners perceive.

It’s not like other fields don’t see work getting scooped, but it is far too rare to merit the kinds of plaudits this PLOS Biology practice garnered (In fact, lines of research in earth science are often distinct enough that truly being scooped so as to make a study unpublishable inspires questions of academic theft). So GG has to wonder if (1) certain lines of research are simply far overpopulated with researchers who (2) rush to get incremental results for publication, and in that rush (3) the least careful (or (4) most dishonest) are the most rewarded, which means that a significant fraction of new results are, in fact (5) non-reproducible. Now maybe GG misunderstands biology (well, OK, no “maybe”)–perhaps the bar on being novel is much higher in bioscience such that a paper reaching the same result from a different pathway becomes unpublishable.  This is not the case in earth science: a paper determining, say, the slip history of an earthquake from seismological observations won’t preclude another paper getting the slip history from geodesy–or even other flavors of seismological data. Yes, you may have trouble getting your later geodesy paper in Science or Nature, but there won’t be barriers in discipline journals.

GG was recently reading M. King Hubbert’s 1963 GSA Presidential Address, and his complaint (he was an earlier grumpy geoscientist) feels relevant:

Within the university it is easily seen that such a system [of “publish or perish”] strongly favors the opportunist capable of grinding out scientific trivialities in large numbers, as opposed to the true scholar working on difficult and important problems whose solutions may require concentrated efforts extending over years or even decades. It took Kepler, working on the lifetime of astronomical observations of Tycho Brahe, 19 years to solve the puzzle of planetary motions, but the results were the now celebrated Keplerian Laws of Planetary Motion. Newton, with few intervening scientific publications, spent altogether some 20 years studying the mechanics of moving bodies before writing his great treatise Philosophiae Naturalis Principia Mathematica (1686) in which are derived the Newtonian Laws of Motion and the Law of Universal Gravitation. Twenty-two years of work, the last 11 essentially free of other writings, preceded Charles Darwin’s publication of On the Origin of Species by Means of Natural Selection (1859). How long could any of these men survive in an American university of today?*

If everyone is pursuing the same results with the same techniques over and over and over, it speaks to an absence of imagination, an unwillingness to explore, a pressure to indulge in groupthink–in essence, a desire to “grind out scientific trivialities in large numbers.” What all this noise and fury truly represents is not a terrible wrong committed against noble scientists, but a gross corruption of the goals of science and a massively unhelpful and misaligned reward structure. Letting the latecomers publish is hardly the big solution the Atlantic article claims; it is, in fact, helping to hide the main problem by rewarding the losers in a race that never should have been run.

*-An aside.  Of course, if we gave all faculty 20 years to publish something, for every Darwin or Newton, we’d have hundreds of deadwood research faculty.  Hubbert was in fact arguing that research as supported by the government should be moved out of the university.

De-Supplement

Perhaps the most bizarre aspect of the migration to electronic media is the continued use of one of the most annoying aspects of the original print-electronic split, namely the electronic supplement. Originally intended as a way to keep things like data tables from overwhelming print copies of journals, this appendage to publications has become more a scourge than a help.

The first attempt in the geosciences to do something like this was the ill-fated split of GSA Bulletin papers into a paper version and a microfiche version (ick). This actually devastated the journal as authors fled to safer harbors. As more and more papers are accepted with small stubs as the “article” and the main text as the “supplement,” journals seem hellbent on replaying GSA’s old blunder. The only difference is that this time there is no real excuse for doing so.

Basically, at this point, why would any supplement in text or pdf format be excluded from the electronic version of a paper? These files are often no larger than the “main” article, and increasingly these parts of papers are where the real nuts and bolts are that you need to evaluate a paper.

There might be reason to keep supplements for non-readable materials (binary files, mainly), but it is time to bury them for stuff that is human-readable. Call these things appendices, put them at the end of the paper, and include them directly when you go to download the article.  Enough of this business of realizing later, after you’ve pulled down the article and started to go through it thoroughly, that the guts are still sitting somewhere online.  Worse, all too often the outline of the logic is in the main paper while the supplement contains the pieces of the body, often tossed together in a disorganized heap that was spared any copy editing. Reassembling the true logic of the scientific work is harder than it would have been had the paper been written as (gasp!) a single, long paper.

Why are we continuing to allow paper journal formats to mangle our science?

Science Training: Pro vs. Amateur

See Logic of Science’s “Training to be a scientist: It’s not an indoctrination and it’s more than just reading”

The basic point is that becoming a scientist is more than reading a lot of science, and we in the science community sometimes forget that the lengthy process of learning what research is, how to do it, and how to evaluate the work done by others is not visible to those outside the community. This post is a nice reminder from someone in the midst of that training.

Wilderness Learning, Part 3

Lesson 2: Perseverance (part 2)

Well, having gotten into position in the backcountry to do seismology, and being well on the way to recovering from the setbacks of lost helpers and a blocked trail, we could now start doing what we had come for: deploying these bulky old seismometers.

We decided that bedrock on the valley wall would be our best choice for this first station [we missed a much easier exposure of bedrock just north of the ranger station area]; this meant we pulled out our packframes, lashed the gear to the frames, and struggled up the scree at the base of the west wall of the canyon.

Tom unpacking an LBS recorder

For the most part, things went reasonably well. We found a nice spot to dig in our seismometer (an L4-3D, if you are wondering), and the seismograph unpacked OK and would run. But then we finally ran up against a serious problem: we had to figure out what the time on the seismograph was. It had to be known to within about a hundredth of a second, or all our effort would be wasted, as the experiment (seismic tomography) required accurate times.

[As a reminder, the seismometer is the instrument that actually detects and amplifies ground motion, in this case producing a varying voltage on wires that go to the seismograph, which further amplifies the signal and records it, in this case on reel-to-reel tapes.]

Read More…

Must Good Science Proselytize?

“Publish or perish.” So goes the mantra supposedly defining academic life. The implication is that publication is enough. We can all agree that unpublished science is not going to have much of a chance of influencing the course of scientific thought, but is the act of publication really enough?

There are a number of instances of science rediscovering things. Perhaps one of the more familiar stories is that of Gregor Mendel, whose work, so goes the story, went unread by Darwin and unnoticed by others, leaving Mendel to die in obscurity, his work only appreciated years later when it was rediscovered. While this story is largely wrong, it does resemble some lesser-known cases. That blog post listing some of the rediscoveries concludes by saying, in essence, that this won’t happen in the future: our literature discovery tools are too good.

Nonsense. (At least for now). Read More…

Mapa Culpa

GG has been rather abrupt in some previous posts with some authors over choices made in making maps for publication, and some of those authors have been insulted, in part because GG seems to have mistaken a conscious choice for laziness or thoughtlessness. It’s not like GG is free of such mistakes himself, so today let’s review a few of GG’s boo-boos.

First up is one we discussed before, the use of gradient color:

Mantle topography from the Vs model of Shen et al., from Levandowski et al. (2014).

GG was the thesis advisor and second author here, and there are two main problems. One is the rainbow color map, which is widely panned for the challenges it poses to readers with some degree of color blindness. [Alternate color schemes are nicely described here, and you can test how images look to the color-blind here. The only good news for this figure is that a continuum of color is more decipherable than separated color blobs.] The other problem is that the colors are continuous. Can you tell -2.2 from -2.6? This probably would have been better with discrete color steps. But that’s probably not the complete answer:

This is from Jones et al., 1994, showing travel time residuals with a discrete color bar. On the left is the original; on the right, the same figure as viewed by a red-blind viewer (as created by Coblis). While the Levandowski et al. image could be deciphered because of its smoothly continuous colors, this figure becomes hopeless. (Plate 2 in Jones et al., 1994, is even worse, but GG can’t recover the original file to show this cleanly, and plate 3 is an utter disaster for red-green colorblind readers.) There are similar examples in later tomography papers. Frankly, using some of the better diverging color maps out there would be a better practice.
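For those making such figures, here is a minimal matplotlib sketch (not taken from any of the papers above; the gridded “anomaly” field is synthetic) of one way to combine a diverging color map with discrete color steps, which addresses both complaints: the bins are distinguishable, and the red-blue pairing is friendlier to red-green colorblind readers.

```python
# A minimal sketch (synthetic data, not from the papers above): plot a gridded
# field with a diverging color map broken into discrete steps via BoundaryNorm.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm

# Synthetic stand-in for a tomographic anomaly field.
x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
anomaly = 3.0 * np.exp(-(x**2 + y**2) / 4.0) - 1.5 * np.exp(-((x - 2)**2 + y**2))

# Discrete boundaries so a reader can tell, say, -2.2 from -2.6 at a glance.
levels = np.arange(-3.0, 3.5, 0.5)
cmap = plt.cm.RdBu_r                          # diverging map; avoids a red-green pairing
norm = BoundaryNorm(levels, ncolors=cmap.N)   # maps each bin to a single color

fig, ax = plt.subplots()
im = ax.pcolormesh(x, y, anomaly, cmap=cmap, norm=norm)
fig.colorbar(im, ax=ax, ticks=levels, label="anomaly (arbitrary units)")
plt.show()
```

ColorBrewer-style diverging schemes or matplotlib’s perceptually uniform maps (viridis and friends) are other reasonable starting points.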

But this post is inspired more by a comment made by Sara (Carena?) on an earlier snarky post: How about the projection used?   Read More…