The NSF Lottery…for real?

A recent article in eLife suggests that funding panel scores at NIH are essentially unpredictive of future results: proposals that were very highly ranked produced publications with about the same impact as those that barely squeaked through the system. The authors suggest that a lottery system would be more appropriate. Many of us already describe submitting to NSF programs as a lottery; should we make that the actual practice?

Is this a fair evaluation of the system? Most probably, yes.

First consider the counterarguments: the metrics are bad, or not all scientific experiments will succeed. The metrics used were publication counts and citations to those publications. Neither is the most compelling metric possible. Since all the grants presumably include money for publication, and given the emphasis by funding agencies on tangible outcomes, it seems unlikely that there would be much variation in publication counts. There is some hint of correlation in the citations, but citations can mislead too (excluding self-citations would be a better start, actually). So it is possible that the more impactful science was getting higher marks as a proposal but isn't well identified by citations alone, which might better reflect the publication tendencies of different subfields. However, when you pile up all the science output, it seems likely that some fraction of the important stuff would be cited more than run-of-the-mill results.

So it seems quite likely that this paper is more or less on the mark. Why should that be the case? There are lots of possibilities:

  1. Quality of proposal ≠ quality of science. A proposal that is hurried, sloppy, poorly laid out, insulting towards reviewers, contentious, or simply set in an ugly font is likely to be downgraded. None of those points matters to the actual science achieved. The flip side can be true too: a beautiful proposal that promises grand results addressing large questions might be unachievable.
  2. Proposers miss the point. Sometimes you'll see a proposal where the proposed work is exactly what needs to be done, but the claims for what it will accomplish are way off the mark. Reviewers are unlikely to divine the really good results that might come from funding it and so downgrade what could be an important piece of work. If the work is funded, odds are the investigator will figure out the point while doing it.
  3. Panels and reviewers nitpick. If you've ever served on a panel at NSF or NIH, you've seen this. This proposal used the wrong gravity reduction, that proposal is using an instrument that is a bit too old, another proposal doesn't explain how they reduce that observation…many times these complaints have little to do with the core elements of the science to be done. And yet proposals are downgraded because of them.
  4. Turf wars. Sometimes one school of thought about how particular studies should be conducted, or about what the big question is, dominates a panel; proposals from outside that school will suffer.
  5. Serendipity. Many important discoveries came when researchers were looking for something else. A small grant to calibrate the geomagnetic timescale led to the discovery that an asteroid wiped out the dinosaurs; the productivity of the grant was unrelated to the quality of the proposal precisely because the researcher found something better.
  6. Fluff. OK, that is harsh, but NSF now asks for educational and outreach components (“broader impacts”). These have nothing to do with the quality of the science, so to whatever degree they change the rankings of proposals, they lower the predictive ability of a panel summary.
  7. Prediction is hard. Maybe 40 years ago, a third or even half of the earth science proposals coming into NSF were flawed in objective ways. Panels would toss those out and fund most of what was left. Today? A proposal without a sound basis and plan of action is very rare. Picking winners from losers is now hard.

Some of these suggest ways to better examine why panels’ predictions are so poor. For instance, removing publications whose results bear little resemblance to the original grant goals might reveal a better correlation if serendipity is a key factor. In some cases, such an analysis might point to ways of improving the granting process.

Given all this, is a lottery such a bad idea? Well, maybe, but you’d have to take care, or some folks might just slap together some random text and submit it as their lottery ticket (the more you play, the more you win). The fact of the matter is that there is less money than there are ideas on which it could profitably be spent; allocating these scarce resources is a difficult task. It would be hard to walk away from using some judgment to separate out proposals; perhaps you’d run this like the NBA draft lottery, where more positive mail reviews would earn you more balls in the giant hopper.
