Review Writing 101
Having taken the decidedly minority position in favor of signed reviews of papers, GG feels obliged to discuss what, in his view, makes for a good review (mind you, that does not necessarily mean a review that favors publication). Some time ago we discussed how to digest getting a negative review. This is going to wander on a bit, so you are warned.
First off, a reviewer is not deciding if a paper should be accepted for publication or not.* The reviewer is usually asked to express an opinion on the suitability for publication, but the editor makes the final decision. So writing a brief review saying “this is garbage” and checking the “reject” box is worthless. You have to provide enough information for the editor as well as the author.
So what are we doing in review? This depends to some degree on the paper itself; most papers offer new data, new analysis, and new interpretations. We’ll discuss those separately. But overall we are seeing if the science done was adequately presented and logically complete. And while the point deserves a longer post, prepublication review is a place where serious errors can be fixed and positions changed most easily. Once in print (virtual or real), an author’s views tend to become far more rigid and errors far harder to remove. Spending the time as a reviewer to really help the author correctly convey their work is a great public service, for it should reduce the possibility of later workers having to struggle with errors or misinterpretations.
But a suggestion: ignore the abstract until you are done with everything else. Why? Because the abstract is increasingly being written to get attention and so might highlight the most controversial aspects of the work. You might get strongly predisposed to attack or favor the paper based on what is in the abstract. GG has actually seen a review that seemed to mainly be based on the abstract: the reviewer saw a conclusion he hated and proceeded to assume that the authors had made some of the same assumptions as some earlier papers on the topic (the authors had, in fact, done no such thing). You know what? The papers you agree with are the ones you need to review most deeply. It is easy to overlook weakness when you agree with the destination.
A second suggestion: Write the review as though the author were across the table from you. Even if you choose to remain anonymous to the author, you are not anonymous to the editor. Being a high-handed jerk will not help anybody, tempting as the poison pen might be. Be humble; after all, just as the author could well be wrong in some aspects of the paper, so you could be wrong, too.
OK, so let’s start with data. What should a reviewer look for in the data part of the paper? It used to be that all the data in a paper were right there on the page; these days the data behind a paper can be hard to find. Keeping in mind that science should, ideally, be reproducible, can the data be found somewhere? Seismological datasets are getting to be pretty standard (IRIS DMC), but other datasets might be in many different formats located in many different places. While EarthCube and other such initiatives might change the picture soon, right now the reality is that locating permanent homes for data can be hard. In many cases, it is possible to archive the data as a supplemental file with the journal.
Since you usually cannot see the data (at least completely), check the descriptions of it (well, “them,” technically). Is there enough information? For instance, are the geographical descriptions of the data appropriate? Do you know enough about how the data were collected to evaluate their appropriateness for the topic at hand? For instance, imagine that the topic is deformation of a shoreline. How were the shoreline points identified? What was the vertical control? What are the uncertainties on these measurements? If the paper answers these, then things are looking up. One clue that some work is needed is use of the word “data.” Within a paper, the data used should be some kind of measurements, or some aspect of those measurements; “data” is a hopelessly generic term really only of great use as a placeholder in discussions like the present one.
Next comes some kind of analysis. This can range everywhere from a simple x-y plot to modeling with a supercomputer. Ideally you would verify that the calculations made, the derivations made, etc., are correct. Lots of luck for most papers. Sometimes you can compare the number of points in plots to learn that something went amiss, or try to find points on a map. What you should be clear about in the review is what you couldn’t do: you couldn’t evaluate some technique because it is too far outside your knowledge base; you couldn’t determine if the analysis was done correctly because you couldn’t see some critical part, etc. This can help an editor–if one reviewer actually re-ran the numbers in a table and couldn’t make sense of them while the other reviewer didn’t try, it is pretty obvious which review to trust on this point. We are seeing the results of analyses less and less in papers, and it can be appropriate to ask for such results to be archived somewhere too. For instance, in seismology the raw waveforms are now archived at IRIS. A paper that determined the lithosphere-asthenosphere boundary (LAB) from Sp receiver functions might just show a cross section. But in fact there were a lot of time-consuming steps along the way: subsets of seismograms were identified, deconvolved, often corrected for incidence angle or other issues before being binned into the cross section, which might only show a fraction of the total bins. Asking for an archive of those receiver functions might allow others to work with them, or even, down the road, determine why one paper seemed to see the LAB and another did not.
Finally come the discussions and interpretations. This is usually where there is a lot of noise from reviewers–this observation was overlooked, that interpretation was falsified in this other paper, etc. etc. Oxen lie gored, truth violated, sacred cows made into mincemeat; a lot of drama here. Here is GG’s advice: If the paper presents, clearly and accurately, a new and useful dataset, it is publishable. If there is a new analytic technique or a novel application of a technique to a dataset that can be useful, it is publishable. The conclusions can be total hooey–it doesn’t matter. If there is a concrete contribution there, the paper should make its way out. The key is that the analysis and the data need to be presented clearly and fairly. Basically, handing the authors a few pages for their own advocacy is kind of the coin of the realm for having done something useful.
This isn’t to say that a reviewer should simply pass on the conclusions or discussion. An author can be greatly helped by seeing where there is controversy and where she or he might have gone wrong. Many times a complaint from a reviewer highlights a misunderstanding from poor phrasing or even text inadvertently deleted. And because too many readers are skimming for the big message, getting a conclusion fixed can help out in a broad way. But recognize that sometimes there are value judgements here. For instance, in the disagreements on Baja-British Columbia, some papers claimed their paleomag showed large transport and the conclusions strongly said so, while geologic papers would equally strongly argue that there could not have been such transport. Both sets of papers are valuable if they have contributed relevant observations; if readers can clearly see how the observations and analyses lead to the conclusions in the paper, they can judge for themselves how unique that interpretation is, or perhaps even divine a way to reconcile the different interpretations (remember, in analyzing papers, you must honor the observations but you can choose to ignore the interpretations).
At the end, come back to the abstract. Did the author fairly share sufficient detail on the substantive contribution, or is the abstract a polemic? If the latter, suggest changes to make it better reflect the existence of the useful stuff in the paper. A polemic abstract can hide a real contribution. Something like “Our data show that alien invasion led to the demise of the dinosaurs” might be enough to keep everybody away, but if the “data” was in fact some unusual isotopic anomaly, having that information in the abstract might allow other workers to build off of it.
Overall the goal as a reviewer should be to help the author to communicate what useful things he or she has done to the community at large. In some instances, this is pointing out that the present manuscript fails to do this to the degree that it shouldn’t be published, but suggesting what parts of the paper seem to have potential and why can help the author to regroup and reform the paper, perhaps for a more appropriate journal. The question in your mind should be, what would make this work reach its audience? Amazingly, sometimes the best thing can be rejection.
A final short note: there is also a class of papers–broadly including review papers and “idea” papers–that don’t fit into this mold so easily. They are trickier to review because the level of contribution is harder to identify. These are also dangerous papers because they can become community totems, papers that are cited as a stand-in for a deep understanding of some aspect of the science. Sometimes these need very special handling, as they are mostly discussions and conclusions.
* Yeah, that is the way it is supposed to work. On one paper where GG (as a reviewer) wrote a long and detailed deconstruction of a major flaw, the editor wrote back to say they would reject the paper if GG changed the recommendation from “major revision” to “reject.” GG did no such thing, in fact arguing that the paper would be far better were it kept within the journal and these problems fixed. The paper stayed, the problems were fixed well enough, and it was published.