No, this isn’t a Twilight Zone reference to cannibalism in the classroom. GG would like to discuss the least obvious facet of faculty life, the service component. As such, this is probably of little interest to non-faculty, but might be useful to those seeking the safe haven of academia for their career…
A comment on an earlier post got GG reflecting on just what counts as the professional literature. Some 20-30 years ago, things were pretty clear. Professional literature was what was published in journals and certain professional books (like AGU monographs and GSA special papers). These were reasonably well indexed and accessible to academics. Then there was the gray literature: stuff that was sort of out there. This included theses, field trip guides, meeting publications, and reports of various flavors. To some degree books were a little less than ideal. Finally there was proprietary stuff, things like industry-acquired reflection profiles and analyses that sometimes were allowed to see the light of day in some compromised form (e.g., location undisclosed). Although these are earth science materials, there are comparable things in other fields.
How is this holding up?
A rather interesting comment chain on the website of a social scientist got GG thinking about corrections. (The blog post and comments deal with how to address published mistakes, with comments ranging from “never contact the authors” to “of course you contact the authors”). In fact, GG has gotten into lukewarm water with a couple of folks for pointing out things in their published papers in this blog. Anyways, what merits a correction? And what merits making a stink when there is no correction?
Take, for instance, a map. GG can identify several instances where maps in papers were simply in error. In one case, the author had misaligned a published map he was copying from and put a bunch of isopachs far away from where they belonged. In another, an author was rather cavalier about his location map, which placed a sampling locality far away from its actual location. Now in each case the problem could be recognized (in the first, by looking at the original source that was cited in the caption, and in the second from data tables with coordinates). Neither of these errors has been corrected (and in one case, GG knows the author is aware of the problem). As in neither case does the error influence the interpretation in the paper, is this worthy of correction? Of a comment?
Let’s rise up a level.
Hot on the heels of the Nature paper complaining about reliance on bibliometric measures of success we have an Inside Higher Ed piece similarly bemoaning how simple metrics corrupt scientific endeavor.
And so what else showed up recently? Why, two new bibliometric measures! One, the Impact Quotient, frankly does nothing but replace one useless measure (the Impact Factor) with a highly correlated one. The other is the s-index, a measure of how often a worker cites his or her own work.
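To make concrete what a self-citation measure might look like, here is a toy sketch. The actual s-index formula is not spelled out above, so this is only an illustrative stand-in (the fraction of a researcher’s outgoing citations that point back at their own work); the function name and all the example data are hypothetical.

```python
# Toy stand-in for a self-citation metric in the spirit of the s-index.
# NOT the real s-index formula (which is not given here); just the
# fraction of references in a researcher's papers that cite themselves.

def self_citation_fraction(papers, author):
    """Fraction of references in `author`'s papers that cite `author`.

    papers: list of dicts with keys
      'authors' -- set of author names on the paper
      'cited'   -- list of author-name sets, one per reference
    """
    total = cited_self = 0
    for paper in papers:
        if author not in paper["authors"]:
            continue  # only count citations made in the author's own papers
        for ref_authors in paper["cited"]:
            total += 1
            if author in ref_authors:
                cited_self += 1
    return cited_self / total if total else 0.0

# Hypothetical example: Alice authored one paper with three references,
# two of which cite her own earlier work.
example_papers = [
    {"authors": {"Alice", "Bob"},
     "cited": [{"Alice"}, {"Carol"}, {"Alice", "Dan"}]},
    {"authors": {"Carol"}, "cited": [{"Alice"}]},  # not Alice's paper
]
```

Of course, any such single number is as gameable as the Impact Factor it would sit beside, which is rather the point of the complaints above.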
We are going from trying to figure out something new about how the world works to making sure that everybody knows that we found out something new about how the world works, with the potential that the “something” has become increasingly trivial….
GG has defended peer review a few times as a means of limiting the damage from flawed papers. It is a positive good for science despite its limitations. But we are quite possibly in the waning days of peer review.
What has inspired this dystopian view? GG is an associate editor and has been for many years, and it has gotten ridiculously hard to get reviews. The growth of multi-author papers means that the number of conflicted potential reviewers has grown, limiting the reviewer pool. More and more, potential reviewers are choosing to not even respond to a request for a review–which eats up more time than a simple “no”. Others are giving the quick “no,” which is better, but still extends the reviewing process. Others are agreeing, but then decide the task is more onerous than cleaning up after a sick dog; months can go by with no response or just a hurried “getting to it now”. Sometimes there is never a review. Meanwhile, authors justifiably fume at the long times their papers are in review. At some point, the system will simply break down: authors will opt for venues not requiring review or using some form of post-publication review.
There are two culprits: the tremendous volume of papers, and the increasing demands on reviewers’ time. The first is driven by a mindset that every grant must yield papers–and in some circles, at least a paper a year. Incrementalism drives identification of the least publishable unit. Toss in a growing trail of reviews as papers pinball down from the most prestigious journals to less desirable destinations and you have a seemingly unending stream of requests for review. The final straw is the decreasing availability of funds, which paradoxically drives an increasing number of proposals that, once again, demand review.
On the flip side are the demands on reviewers’ time. First, these are the same people writing all those proposals and papers, and that takes a lot of time. But as universities have tightened their belts, more menial tasks are foisted on the faculty, from dealing with routine administrative paperwork to emptying their trash cans. Also, there is more pressure for public outreach, which takes time. Not to mention the allure of social media like Twitter and Facebook and (um) blogs. (GG views blogging as recreational, FWIW).
The obvious solutions are unlikely. Odds are low that more money will soon fill the coffers at NSF, or that outreach components for grants will be reduced. Once a university operates without trash collection in offices, it is unlikely to restore it when it can instead invest in amenities to attract the shrinking number of undergraduates.
There is really only one knob in our control, and it is our expectations of what our colleagues should be doing. Ask “what impact has your science had?” rather than “how many papers did you publish last year?” and maybe we could stem the tide.
Of course, that might require reading your colleagues’ papers. Which (sigh) takes time….
Another day, another piece in the scientific literature arguing that we need to stop counting publications and instead focus on quality, a noble goal. The latest, in Nature, is focused more on the biomedical research literature, but the recommendations sound familiar:
Politicians must understand that job creation is not — and furthermore, should not be — a primary goal of the NIH or any other science-funding agency. Funds should be distributed on the basis of merit alone, not geopolitical considerations and interests. Institutions need to realign their mentality with their original academic mission, and reduce soft-money positions. Publishers should care less about publishing flashy stories and more about disseminating solid science. Individual scientists should emphasize excellence and rigour over stockpiling more and more papers and grants.
Quite the wish list. This paper is kind of interesting for two reasons. One is that it makes you ask, just who is counting? And two, it claims to identify the moment that the scientific endeavor went off the rails.
One of the surveys Pew Research Center runs regularly asks about the impact of different institutions on America. Not surprisingly, there are differences between Republicans and Democrats in views on things like the media, churches and labor unions. But the latest iteration of this survey had a bit of a surprise: the partisan divide suddenly yawned open and swallowed higher education. Republicans have turned against higher education in the past two years, making the partisan divide on education (at 36%) greater than for any other institution, including the much-maligned media.
In other words, seven years ago higher ed was thought to have a positive impact by 58% of Republicans and 65% of Democrats, and while that slowly diverged in the following 5 years, the big change was over the 2016 election cycle.
The news stories out of this suggest that this is a backlash against higher ed because of high tuition and debt or views that they are liberal strongholds. But really? All of that has been going on a long time.
There are a couple of possibilities here. One is that the members of the 2015 GOP who liked colleges were so turned off by the Trump campaign that they aren’t identifying that way any more, while the GOP attracted less well educated members of the Democratic party. After all, one of the great divides in the presidential vote in 2016 was on education. But then you might expect a sharp rise in the favorability of college among Democrats, and that number barely moved.
Another notion in the media is that colleges got dinged for making headlines about intolerance directed at right-wing speakers. But most of that postdated the election, while the sudden decline came well before it. Although accusations that college students are “snowflakes” have certainly increased, college free speech has long been viewed as questionable in conservative eyes.
No, it seems it was something a lot more specific, and GG would like to suggest it was, ironically, the arguments within the Democratic party about making college free.
Before you argue that such a program might be beneficial for a lot of Trump supporters, keep in mind that many of them oppose other government programs that might help them. Their objection, as often as not, is that you shouldn’t be a “taker.” Getting a free ride through college probably made college less of something you do for self-improvement and more of an entitlement.
Whatever the cause, this is not good for public institutions. If bashing colleges is in vogue, tuitions will rise where GOP candidates are successful–and ironically, they will rise most at schools that right now are most affordable. Is it really in the national best interest to make college even more of an elitist institution?