Fraud and Peer Review
A great deal of attention has focused on academic fraud in the wake of what appears to have been a spectacular case of data invention in a study of the effects of gay canvassers on the opinions of those opposed to gay marriage. It has inspired the New York Times to suggest that a craving for media attention is driving some bad science into journals like Science and Nature. Um, guys, you hadn’t noticed that already?
One suggestion being made is that peer review is broken. We’ve talked about some real problems with peer review before, and also discussed the bigger issues with letter journals (and revisited Carl Wunsch’s diatribe against the “near-tabloid” journals). So GG agrees that things are broken in short-paper-land, but does this case really show how they are broken?
The problem was not peer review per se. It was trust. This is why the Amazing Randi spent so many years showing naive scientists how easy it was to fool them. Read the detailed (and rather disturbing) account of just how this whole paper got unraveled and what you find is that there was trust between the second (and senior) author, Green, and the lead but junior author, LaCour, and then trust by reviewers and other scientists in Green. Obviously that trust was misplaced. Does that mean peer review is broken?
Well, yes and no. As Randi has shown, it is easy to fool scientists; they operate under the assumption that others are telling the truth. There is good reason for this: if you had to doubt the reliability of every other worker, you could build on nothing but your own work and work you could go back and verify. That can be an awfully narrow and tiny pedestal to stand on. So intentional fraud is hard to ferret out unless, as in this case, the work is so significant to your own that you try to build on it and discover the rotten core. To charge peer review with fixing this is to put a heavy burden on an already creaky structure. The folks who could really catch fraud in review are probably the academic competitors of the author, and then you run into other problems. (GG has turned down a couple of reviews precisely because the work in question was close enough to a project of his that, to behave ethically, he would have had to set his own work aside, something he actually did once, long ago.) Frankly, this would require putting some money on the table for reviewers to accept the burden of showing that new work could be fully verified.
But you do wonder about this trust business. How much responsibility should a senior author bear? Sometimes we are called in as experts on some aspect of a project, and we trust our colleagues to do their part of the paper correctly. Here it seemed a matter of calculated academic cover; a paper should get the same scrutiny regardless of the seniority or profile of the authors, but that seems not to have happened here. It does suggest that senior authors need to contemplate exactly when to stay off a paper and when to insist on seeing all the data.
At the end of the New York Magazine piece, David Broockman (the fellow who popped this particular bubble) opines a bit on what might prevent stuff like this:
Broockman has ideas about how to reform things. He thinks so-called “post-publication peer review” — a system which, to oversimplify a bit, makes it easier to evaluate the strength of previously published findings — has promise.
*Sigh*. First, arguably this *was* post-publication peer review, no? Second, this business of relabeling what has always gone on as something new really needs some perspective. Any science that others build upon is in many ways subject to post-publication review, and you aren’t going to get “post-publication review” of something nobody is using. About the only difference between science as it’s been practiced for many years and “post-publication review” is a webpage.