Archive | academia

Why Not Review?

So the poll has been out for a while, and most folks review papers because the paper is in their specialty, they have time, and the material looks interesting; other reasons are much farther down the list. The least likely reasons to agree to review are that money is involved or that the journal is a non-profit. So if you want to know why Elsevier continues to rack up monster earnings, this would be part of the reason.

As one commentator pointed out, this may not capture the reasons why people turn down a review request, though it does seem likely that those reasons would be "no time," "this isn't my specialty," and (though probably not said out loud as much) "this looks pretty dull." One reason probably not well captured in the poll's list is a conflict with the authors or with some aspect of the paper.

As it took GG 14 approaches to potential reviewers to get 2 reviews for one paper, he would like to know how we might increase the fraction of people who accept a review. Editors already try to ask people with the right expertise who are likely to be interested in the content; is there some way to help them make time? It seems hard to know, short of encouraging people to forget their personal lives…
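Just to put crude numbers on that fraction, here is a minimal back-of-the-envelope sketch (the acceptance rates tried below are hypothetical, except for the 2-of-14 drawn from GG's experience above): an editor needing two reviews at a one-in-seven acceptance rate should expect to send about fourteen invitations.

```python
# Back-of-the-envelope: expected number of review invitations an editor must
# send to land a target number of reviews, assuming each invitation is
# accepted independently with some fixed probability. The rates below are
# illustrative; only 2/14 comes from GG's experience described above.

def expected_invitations(reviews_needed: int, acceptance_rate: float) -> float:
    """Expected invitations under a simple independent-acceptance model."""
    return reviews_needed / acceptance_rate

if __name__ == "__main__":
    reviews_needed = 2
    for rate in (2 / 14, 0.25, 0.5):
        print(f"acceptance rate {rate:.0%}: about "
              f"{expected_invitations(reviews_needed, rate):.0f} invitations")
```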

The fourth-place reason to review suggests one possible pressure point: a fair number of folks choose to review for the journals they publish in. Maybe there is some penalty or reward that could be put in place to encourage some of these authors to find the time to review. Many journals already have awards for excellence in reviewing, so that carrot seems to have limited effect. There have been efforts to recognize reviewers in other ways: some journals publish a list of responsive or frequent reviewers, and there has been some thought of returning Geology to an old practice of including a sentence or two from a review as part of the published paper. Since a monetary credit seemed to carry little weight in the decision to review, maybe the incentive would have to be access to some expedited review process…or a slow track for those who consistently find time to submit papers but somehow never have time for a review.

Is this simply more evidence that the review system’s antiquated view of what reviewers should do is approaching utter collapse? GG isn’t sure yet but isn’t very optimistic….

Why review? (Poll!)

GG is a bit frustrated, having now asked 10 scientists to review a paper. At the moment the tally is 2 recently contacted with no answer yet, 5 outright "no"s, and 3 who never replied despite plenty of time.

So this prompts GG to ask the question, how do you decide when to review a paper? Give your best answers below and maybe GG will figure out what he is doing wrong….

Educational Tension

GG has taught a lot of classes over the years and generally scores somewhat below the departmental average as measured by a questionnaire filled out by students in the last weeks of the term, here termed an FCQ (Faculty Course Questionnaire). Does this mean GG was a worse instructor?

It turns out that something relevant has shown up in studies of different forms of teaching. So-called active learning has been found in multiple studies to produce greater comprehension of material than a standard passive lecture. But active learning isn't as widespread as perhaps it should be, and part of the reason is that professors say their students don't like it. This has been borne out by a study showing both that students think they learn more from a lecture and that they actually learn more from an active-learning class. While there are many facets to this, part of it is that a well-constructed lecture is apt to sound so simple that students think they have mastered a concept, even though actually trying to apply that concept might reveal less mastery.

The point is that asking a class how much they think they learned is probably an exercise in self-deception. We already knew that such evaluations are tied to the mean grades in a class, and we have long suspected that personality plays an important role in student happiness with a course. None of that reflects actual success in teaching. The problem is that finding a tool suitable for measuring learning is hard. Physics in some ways has it easier: the concepts are quite clear, the material is pretty well circumscribed, and there are a lot of well-vetted physics learning inventories out there. In earth science the available tools are fewer and far less comprehensive.

So do lower FCQ scores mean GG is a less effective teacher?  We don’t know.  Quite possibly the answer is yes, but in not knowing we run the risk of keeping less effective instructors in classrooms and moving more effective ones out.

Going Peerless

Seems like every week or two, somebody is complaining about peer review: it is a barrier to scientific communication, it empowers gatekeepers, it destroys careers, etc. Now GG doesn't buy into that (baby and bathwater territory, in his view), but perhaps as an exercise, what would happen if we outlawed peer review?

So you write up some science and want it shared with other scientists.  Let’s consider your options: email, blog post, paper server, journal.

So to start with, you email your paper to all the colleagues in your field; how does that go? Probably your email looks like spam to a bunch of them, but maybe you have cultivated your connections well, and several colleagues read the paper and want to incorporate its results in their work. How should they cite it? Should they include your article along with theirs, a kind of blockchain sort of thing? Hmmm, this doesn't sound promising…
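Purely to illustrate that half-joking "blockchain sort of thing," here is a minimal sketch (everything in it is hypothetical) of citing-by-inclusion: each new write-up carries a content hash of the papers it builds on, so whoever receives it can verify exactly which versions were used.

```python
import hashlib
import json

def fingerprint(text: str) -> str:
    """Content hash standing in for a citable, immutable version of a paper."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def make_record(title: str, body: str, cited_hashes: list[str]) -> dict:
    """A paper 'record' that carries the hashes of the papers it cites,
    so anyone holding the record can check which versions it relied on."""
    record = {"title": title, "body": body, "cites": cited_hashes}
    record["hash"] = fingerprint(json.dumps(record, sort_keys=True))
    return record

# Hypothetical example: your emailed paper, then a colleague's follow-up.
yours = make_record("Emailed preprint", "Some science...", cited_hashes=[])
theirs = make_record("Follow-up study", "Builds on the above.", [yours["hash"]])
print(theirs["cites"][0] == yours["hash"])  # True: the chain of provenance is intact
```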


Mining the Data Dumps

GG is hunting around for some information related to the little trainwreck series of posts, and has noticed some issues that bear on the broader business of (upbeat music cue here) Big Data.

Now Big Data comes in lots of flavors. Two leap to mind: satellite imagery and national health records. Much satellite imagery is collected regardless of immediate interest; it is then in the interest of the folks who own it that people can find the parts relevant to them. So Digital Globe, for instance, would very much like to sell its suite of images of, say, croplands to folks who trade in commodity futures. NASA would very much like to have people write their Congressional representatives about how Landsat imagery allowed them to build a business. So these organizations will invest in the metadata needed to find the useful stuff. And since there is a *lot* of useful stuff, it falls into the category of Big Data.
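As a toy illustration of what that metadata investment buys, here is a minimal sketch (the catalog entries and field names are invented): the whole point of the metadata is to let a user whittle millions of scenes down to the handful covering their area and dates of interest.

```python
from datetime import date

# Hypothetical image-catalog entries; real catalogs hold millions of these,
# which is exactly why the owners invest in searchable metadata.
catalog = [
    {"scene_id": "IMG-001", "lat": 40.0, "lon": -105.3, "acquired": date(2019, 6, 1)},
    {"scene_id": "IMG-002", "lat": 36.5, "lon": -118.2, "acquired": date(2019, 6, 3)},
    {"scene_id": "IMG-003", "lat": 40.1, "lon": -105.1, "acquired": date(2019, 7, 9)},
]

def search(catalog, lat_range, lon_range, start, end):
    """Return scenes whose center falls inside the box and date window."""
    return [s for s in catalog
            if lat_range[0] <= s["lat"] <= lat_range[1]
            and lon_range[0] <= s["lon"] <= lon_range[1]
            and start <= s["acquired"] <= end]

# Illustrative query only: scenes near Boulder, Colorado in June 2019.
hits = search(catalog, (39.5, 40.5), (-106.0, -105.0), date(2019, 6, 1), date(2019, 6, 30))
print([s["scene_id"] for s in hits])  # ['IMG-001']
```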

Health data is a bit different and far enough from GG's specializations that the gory details are only faintly visible. There is raw mortality and morbidity information that governments collect, and there are some large and broad ongoing survey efforts, like the Nurses' Health Study, that collect a lot of data without a really specific goal. Marry this with data collected on the environment, say pollution measurements made by the EPA, and you have the basis for most epidemiological studies. This kind of cross-dataset data mining is another use of Big Data.
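Here is a minimal sketch of that "marry the datasets" step, with entirely hypothetical tables and column names; the epidemiological workhorse is often little more than a join of health outcomes onto exposure measurements by place and time.

```python
import pandas as pd

# Hypothetical county-month tables; real studies would draw on vital
# statistics, cohort studies, and EPA monitoring data.
mortality = pd.DataFrame({
    "county": ["A", "A", "B", "B"],
    "month": ["2019-01", "2019-02", "2019-01", "2019-02"],
    "deaths_per_100k": [62.0, 58.5, 71.2, 69.8],
})
pollution = pd.DataFrame({
    "county": ["A", "A", "B", "B"],
    "month": ["2019-01", "2019-02", "2019-01", "2019-02"],
    "pm25_ugm3": [8.1, 6.9, 12.4, 11.7],
})

# Join exposure onto outcome by place and time, then look for an association.
merged = mortality.merge(pollution, on=["county", "month"])
print(merged)
print(merged["deaths_per_100k"].corr(merged["pm25_ugm3"]))
```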

The funny thing in a way is that the earth sciences also collect big datasets, but their peculiarities show where cracks exist in the lands of Big Data. Let's start with arguably the most successful of the big datasets, the collection of seismograms from all around the world. This started with the World-Wide Standardized Seismograph Network (WWSSN) in the 1960s. Although created to help monitor for nuclear tests, the data was available to the research community, albeit in awkward photographic form plus catalogs of earthquake locations. As instrumentation transitioned to digital formats, this was brought together into the Global Seismographic Network, archived by IRIS.
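For a sense of how researchers actually tap that archive, here is a minimal sketch using ObsPy's FDSN client pointed at the IRIS web services; the station, channel, and time window are merely illustrative choices, assuming the usual publicly available data.

```python
# Minimal sketch: request an hour of broadband data from the IRIS DMC through
# the standard FDSN web services via ObsPy (pip install obspy; needs network
# access). Station, channel, and time below are illustrative choices only.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2019-07-06T03:19:53")  # around the 2019 Ridgecrest mainshock

# A permanent Global Seismographic Network station (network IU, station ANMO).
st = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 3600)
print(st)
```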

So far, so NASA-like. But there is an interesting sidelight to this: not only does the IRIS Data Management Center collect and provide all this standard data from permanent stations, it also archives temporary experiments. Now one prominent such experiment (EarthScope's USArray) was also pretty standard in that it was an institutionally run set of instruments with no specific goal, but nearly all the rest were investigator-driven experiments. And this is where things get interesting.


Post-Poster Blues

GG stumbled onto a story about remaking scientific posters, describing work by Mike Morrison, a psychology PhD student. His video on the weaknesses of scientific posters and his suggested solution is well worth watching. Many of the recommendations are classics, essentially boiling down to KISS (Keep It Simple, Stupid). GG is interested in digging into the origins of the problem described and how, in earth science, things might not be quite as amenable to his solution.

First up, how did we get to the poster hall of doom, anyways?

[Photo: part of the poster floor at the Fall 2016 American Geophysical Union conference.]

Posters are actually a fairly recent innovation (so the NPR story line about changing a "century" of conformity is nonsense). Professional meetings started as everybody getting together in a single room and, often, each reading their paper to the rest of the society (the early issues of the Bulletin of the Geological Society of America included not only the oral presentation but also the Q and A afterward). Splitting into multiple oral sessions followed in time. When posters first showed up at AGU in the 1970s, they were in a small room and were a definite sideshow (GSA came later). Some of these were presentations that people otherwise couldn't give (maybe they missed the meeting, or had late-breaking results that arrived too late for inclusion in the regular program), but some were materials that simply didn't lend themselves to oral presentation. Big seismic reflection profiles and detailed geologic maps were often such materials. "Posters" as seen today didn't really exist: printed materials were tacked up in whatever form was handy, and layouts were impressively fluid. So initially a lot of posters were things actually better shared in that format.


Generalists and the PhD

A PhD is somebody who gets to know more and more about less and less until he knows everything about nothing.

That bromide (a variant of others) gets passed along quite frequently about academics, and a new book by David Epstein seems to confirm the implication that super-specialization is not useful. As described in an excerpt in The Atlantic, when narrowly focused experts try to make predictions, they fail spectacularly in comparison to predictions made by generalists. One example is the conflicting forecasts of Paul Ehrlich’s “population bomb” versus the counter-prediction of continued economic improvement made by Julian Simon; both missed the mark in different ways, but both continued to double down on their forecasts. Following many others, Epstein compares the two groups to hedgehogs and foxes. So why on earth should we make hedgehog PhDs?

On its face, a PhD is generally trying to untie one small knot in our universe of knowledge. When did the Rio Grande Rift start extending? What is the power law exponent for sodic feldspar if deforming by dislocation creep? Just how many angels can dance on the head of a pin, anyways? If all we do is train somebody to continue, arrow-like, on that initial trajectory into some byzantine corner of human knowledge, then we have failed. So what then would be success?

Success should be learning how to identify problems worth solving that are solvable and then defining a course of action that will yield that solution. In short, a PhD should be an exercise in learning these skills and applying them in one place to demonstrate mastery. Why would this lead to deeply entrenched viewpoints seemingly unchangeable by evidence?
