Is the NSF Lottery Getting to Be Too High-Risk?
Consider that the EAR division at NSF currently reports a 23% success rate for proposals submitted to GEO programs. Within GEO, the rate varies among individual programs, with some approaching 10%. Because the reported success rate counts partial support as a success, the fraction of money awarded to money requested is lower still. NSF does not separate facilities from investigator-driven proposals, but few facility proposals are outright turned down (SAFOD is about as close to a failed facility as anything), so the success rate for investigator-driven research is probably lower. Toss in various effective earmarks (things like historically research-underperforming states, etc.) and the rates for core investigator-driven science are probably down in the 1-in-6 to 1-in-10 range.
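The chain of discounts above can be made concrete with a back-of-envelope calculation. The 23% figure is from the text; the partial-funding and facilities/earmark set-aside factors below are purely illustrative assumptions, chosen only to show how modest corrections compound toward the 1-in-6 to 1-in-10 range:

```python
# Back-of-envelope sketch of the effective success rate for core,
# investigator-driven proposals. The adjustment factors are
# illustrative assumptions, not NSF figures.

reported_success = 0.23  # GEO-wide rate cited above

# Counting partial awards as successes inflates the rate; assume
# (illustratively) that only ~80% of "successes" amount to real funding.
full_funding_factor = 0.80

# Facilities and effective earmarks rarely fail, so they soak up part of
# the pool; assume they account for ~20-35% of awarded money.
for set_aside in (0.20, 0.35):
    effective = reported_success * full_funding_factor * (1 - set_aside)
    print(f"set-aside {set_aside:.0%}: effective rate ~ 1 in {1 / effective:.0f}")
```

With these assumed factors the effective rate lands around 1 in 7 to 1 in 8; harsher assumptions push it toward 1 in 10.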
Now figure how long a proposal takes to make. Sloppy, ill-prepared proposals of the sort common 30 or 40 years ago are now noncompetitive; you can’t just write out a neat idea and expect to see some bucks. Writing a proposal within a week is a challenge; 2-4 weeks is probably more in line with common experience (no doubt there are some PIs out there chuckling that they can crank these out in a day or two; well, congratulations. Not GG’s experience). Say you write proposals to fund a student for three years and yourself for a month or so in those three years, and say that your program is expected to support 3 students a year. These are not atypical numbers. If success is 1 in 6 and you need one successful proposal per year, you have to write 6 proposals a year. You cannot work on them while being supported by NSF, so no summer work on proposals if you actually have NSF summer funding. Your six proposals will eat up anywhere from 1.5 months if you are efficient to 6 months if you are really careful. If you are teaching faculty, that about uses up your time to do science.
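The workload arithmetic above can be sketched directly. The success rate and the weeks-per-proposal figures come from the text; the framing as a small calculation is just for illustration:

```python
# Sketch of the proposal-writing workload described above.

success_rate = 1 / 6             # one award per six submissions
awards_needed_per_year = 1       # one funded proposal needed per year
weeks_per_proposal_fast = 1.0    # "within a week is a challenge"
weeks_per_proposal_slow = 4.0    # "2-4 weeks is probably more in line"

proposals_per_year = awards_needed_per_year / success_rate  # 6 proposals

# Roughly 4 working weeks per month of effort.
months_fast = proposals_per_year * weeks_per_proposal_fast / 4  # ~1.5 months
months_slow = proposals_per_year * weeks_per_proposal_slow / 4  # ~6 months

print(f"{proposals_per_year:.0f} proposals/yr -> "
      f"{months_fast:.1f} to {months_slow:.1f} months of writing")
```

Six months of writing against a nine-month teaching appointment is essentially all of the time nominally left for research.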
(From this math it is easy to see how the biologists have ended up with their pyramid structure: the faculty only write grants and put their names on papers, while postdocs write papers based on research conducted by grad students. The potential in this system for cheating, plagiarism, and other misbehavior is high.)
Why refer to NSF as a lottery? Nearly every researcher can recount a panel summary that unfairly misread a proposal, or a mail review that shot down a proposal because of some competitive issue. Anticipating such problems is somewhere between hard and impossible; toss in the fact that attacking truly controversial (in a scientific sense) issues is more likely to elicit such responses and you learn the hard way that good proposals (in the sense of attacking worthwhile problems with appropriate or novel techniques) often get shot down. Pedestrian proposals with well-established techniques, while not generating excitement from reviewers, have a better shot at running the gauntlet.
Consider the old rule that Eldridge Moores is said to have had in editing Geology: he wanted papers that got one excellent review and one reject review. He apparently felt that such contentious papers were the ones worth highlighting for the community. There is an argument to be made that something similar might be appropriate for grants, but if program directors fear that funded proposals with one or two highly negative reviews will be second-guessed by politicians, how likely are we to see such a system?
Finally, ask: does this system encourage solid science? Arguably it encourages solid grantsmanship, which is hardly the same thing. This means rapid publication of whatever, it means proposals designed to minimize friction with reviewers, it means an ability to generate lots of proposals over short time windows; it basically means that groupthink is rewarded. Consider too the increasing lack of flexibility within grants. Walter Alvarez came up with the dinosaur extinction hypothesis while working on a paleomagnetism project. GG doubts he asked permission for this diversion; today, in theory and increasingly in practice, such a substantial shift in the focus of research would require approval of the program manager. The message? Keep your head down, do what you said you would do, don’t rock the boat too much. [There is an irony on the other side: science that turns out to confirm what we guessed we knew is harder to publish than speculative or erroneous results that seem to conflict with current knowledge, leading to a literature that is prone to oscillate from one extreme to another.]
It’s not clear that there is a solution. The easy money of 50 years ago won’t recur. The growth in the number of schools wanting to be research schools and not merely teaching schools is unlikely to reverse. Political considerations would discourage if not formally outlaw concentrating funding in a few places with strong histories of success. So success rates aren’t going to increase. Perhaps relaxing the expectation of research funding at some universities might slow the flood of proposals, but this isn’t likely either, as research funding is helping to keep some places afloat: although research moneys are not paying for academic-year salaries, they are paying for graduate and occasionally even some undergraduate tuition, which does pay for salaries. The best hope lies in reviewers and panelists and program managers really reexamining how they look at proposals: are trivialities distorting the view of the whole? Is the perfect the enemy of the good?