R01s in Decline

A letter in Science from H. George Mandel (GWU) and Elliot Vesell (PSU) starkly lays out data showing the decline of R01 funding.

From 2000 to 2007, the success rate for new Type 1 applications dropped from 20.3% to 7.2%. The average award per R01 looks to have dropped from $3.38M to $2.69M. For individual institutes, they report new application success rates of 5% for NCI and NIAID and 3% for NINDS. Oof.

For Type 2s (renewals) over the same period, the success rate halved from 53.0% to 25.2%, with the average award declining from $3.03M to $2.44M. The authors note that “For renewal applications, the decline means discontinuation of 75% of ongoing programs.” What a colossal waste of taxpayer investment.
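
(As an aside, the arithmetic behind these figures is easy to sanity-check. Below is a minimal back-of-the-envelope sketch in Python using only the numbers quoted from the letter above; the relative-decline calculations are my own check, not something the authors present.)

    # Figures quoted above from Mandel & Vesell (FY2000 -> FY2007)
    type1_success = (20.3, 7.2)    # new (Type 1) A0 success rate, %
    type2_success = (53.0, 25.2)   # renewal (Type 2) A0 success rate, %

    def relative_drop(before, after):
        """Decline expressed as a percentage of the starting value."""
        return 100.0 * (before - after) / before

    print(f"Type 1 success rate fell {relative_drop(*type1_success):.0f}%")  # ~65%
    print(f"Type 2 success rate fell {relative_drop(*type2_success):.0f}%")  # ~52%
    # Only 25.2% of renewal applications succeed, so roughly 75% of
    # ongoing programs are discontinued -- the authors' point above.
    print(f"Renewals not funded: {100 - type2_success[1]:.1f}%")             # 74.8%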

Now, these are rates for new (A0) applications. The authors note that:

For FY2007, first-time and second-time revisions have provided funding for an additional 1573 and 1272 grants, and $321.1 and $470.5 millions for new grants. For Type-2 amended applications, these numbers are 932 and 626, and $352.5 and $228.3 millions, respectively.

It will be interesting to see how the numbers shake out when the A2 falls off the table and other peer review and application policy changes become fully implemented. Still, I think Neuro-conservative has it right.

Mandel and Vesell also examined R01 funding as a proportion of total NIH funding and of course found a similar downward trend:

Since FY2000, R01 funding has suffered compared with overall funding, so that by FY2007 the deficiency reached almost $1.2 billion. Rectification of this progressive decline in R01 funding would provide about 3200 additional research grants.
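
(Their “about 3200 additional research grants” figure is also easy to check: dividing the $1.2 billion shortfall by 3200 grants gives roughly $375,000 per grant, which I read as an average annual R01 cost; that per-grant interpretation is mine, not something stated in the quote. A minimal sketch:)

    # Back-of-the-envelope check on the "about 3200 additional grants" claim
    r01_shortfall = 1.2e9       # R01 funding deficiency by FY2007, per the letter
    additional_grants = 3200    # grants the authors say that money would support
    per_grant = r01_shortfall / additional_grants
    print(f"${per_grant:,.0f} per grant")  # $375,000 -- plausibly one year of an average R01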

The next NIH Director will have some very hard choices to make, and one hopes the maintenance (restoration) of the R01 mechanism as the foundation of biomedical research in the US will be among his or her priorities, if need be at the expense of a few plush rest stops on the Roadmap.

This particular issue of Science, Clinical Trials and Tribulations, also has articles on the spiraling (upward) costs of conducting clinical trials, the ethical and scientific concerns raised by conducting trials overseas, the conduct of trials to promote rather than test drugs, moves to make clinical trial data more widely available, gains in the enrollment of women in clinical research, and the twists & turns of cholesterol research as a cautionary tale. Also of interest may be an editorial on the misuse of the impact factor and a story about a U Wash program to teach bioethics in secondary school classrooms.

19 Comments

  1. The next NIH Director will have some very hard choices to make, and one hopes the maintenance (restoration) of the R01 mechanism as the foundation of biomedical research in the US will be among his or her priorities, if need be at the expense of a few plush rest stops on the Roadmap.

    Fuck that shit! We need DEEP SEQUENCING and HIGH THROUGHPUT METABOLOMICS and MICROBIOMICS and SYSTEMS BLAHOMICS!!!!!! We need CROSSCUTTING PARADIGM-SHIFTING BENCH-TO-BEDSIDE megagrants!!!!! YYYYYRGGHHHHHHH!!!!!!!!!!!!

  2. whimple said

    Actually, the main problem as I see it is not the total amount of money the NIH is giving out, so much as it is the way in which that money is being progressively concentrated into a decreasing number of research institutions and a decreasing number of PIs. In times of stress, the rich get richer and the poor don’t just stay poor, they go broke.

    But major scientific advances come both from a diversity of ideas and from concentrated effort in specific areas. The current R01 mechanism fails to explicitly differentiate these two considerations, and consequently has become unbalanced in favor of concentrated effort (multiple R01’s per PI and non-modular-budget R01’s), at the expense of diversity of ideas.

    I think this could be solved by explicitly separating the money for Diversity of Ideas from the money for Concentrated Effort. Say we call these R01-DI’s and R01-CE’s.

    R01-DI’s (diversity of ideas) could be limited to one per PI and set at the current full modular budget. This is enough to pay the salaries of two staff members and a reasonable fraction of the PI’s salary, and to provide money for supplies. The R01-DI could provide a stable core of funding for many different labs, giving them the ability to generate pilot-level data for larger projects; it would encourage efficient use of monetary resources and generally keep the research pot cooking. I would suggest that the total R01-DI dollar pool be fixed and indexed to inflation, in order to confer some predictability to research institutions for their hiring policies.

    The labs headed by the major players and rising stars, or those doing expensive follow-up experiments, could then fight it out amongst themselves for a share of the R01-CE (concentrated effort) pool of dollars. This would allow the NIH to really put the focus on specific scientific areas, or on specific research institutions and/or specific investigators if the NIH so chooses. The R01-CE dollar pool could be allowed to fluctuate according to the largesse or parsimony of Congress at the time.

    This concept comes from Fish Farming 101… when you’re trying to produce a steady-state system for the generation of quality fish, you don’t just put all the fish in the same tank, because when food gets scarce, the big fish will just eat the little fish, and eventually you wind up with no fish at all.

  3. yuping wang said

    I was very upset and frustrated by the current review system …
    My last R21 grant got a score of 182. After revision, the proposal got “unscored” … I didn’t think that the scientific review officer did a good job.
    You spend a lot of time preparing a proposal, but who cares….
    We really need to revise the review system…

    Well, you’re not alone in that category (a reasonably hopeful A0 score becoming unscored at A1 … or an A1/A2 loss of score). There are lots of potential reasons, including a possible increase in triaged applications in that study section, such that an application could get a 180 one round but be unscored the next despite meriting the same “score” of 180. The bar just to get scored keeps dropping (remember, a lower score is better), especially since SROs knew the FY09 budget would be in limbo (& kept to un-supplemented FY08 levels) until next spring or summer. The planned institution of a provisional score (even for triaged applications) should at least help you know whether you were on the cusp or simply not in contention, and avoid some wasted resubmission effort. – writedit

  4. drugmonkey said

    ok, yuping wang, why do you think the SRO was deficient, and why do we need to revise the review system? do you have any reasons for thinking this other than the fact that your great grant didn’t get funded?

  5. drugmonkey said

    and really, writedit, you should know better than to repeat this nonsense. The dramatic changes in the proportion of applications getting funded unrevised and at the A1 and A2 stages make this sort of comparison meaningless. It may be that R01s are in decline, but this analysis can’t support such a conclusion.

  6. whimple said

    DM, what happened? You skip your meds today?

  7. BB said

    Yuping maybe got caught in the new triage system, wherein more than 50% of proposals are now triaged.
    But the gripe is valid: if you answer all the reviewers’ concerns and your reapp still goes from scored to triaged, isn’t that a failure on the part of CSR to ensure fair reviews? Or do we start from the premise that reviews are biased, and that study sections give better scores to friends/colleagues/people they’re afraid to piss off/fill in the blank?
    Why counsel any young person to go after a career in science?

  8. bikemonkey said

    If I can jump in here, BB, the review rules are quite clear on this. Revised apps are NOT to be benchmarked against prior scores. All apps are to be compared primarily within round. So score relative to prior score does not tell you anything about the quality or validity of the review.

  9. whimple said

    So score relative to prior score does not tell you anything about the quality or validity of the review.

    So… then what exactly does tell us about the quality or validity of the review?

  10. bikemonkey said

    what exactly does tell us about the quality or validity of the review?

    Some substantive point that has a nodding acquaintance with the realities of NIH grant review.

  11. neurowoman said

    Can you explain the data in the article a bit more? I don’t really get it. The authors show a decline in ‘unamended’ success rates, meaning first-time (A0) submissions; but shouldn’t we be more interested in eventual success rates (the chance you’ll eventually get the grant funded)? Will reducing the number of resubmissions from 2 to 1 mean that the A0 success rate will go up (forcing study sections not to push off new grants until A1 or A2), or that more people will have to write entirely new grants to get funded?

    And does the second table mean NIH has a pool of moolah for R01’s that they’re failing to award? If so, why don’t they just lower (raise?) the payline? What’s that about?

    On the first table, the authors are concerned not with ultimate success but with delay in success. Submit the A0 in Feb 08, get a score in July 08, submit the A1 in November 08, get a score in March 09, submit the A2 in July 09, get funded (maybe) in April 10 … 26 months and a lot of work since the initial submission, especially since you’ll have needed to collect more data and publish more papers the whole while. For Type 2s (renewals), PIs must struggle to keep a lab staffed, an animal colony maintained, or a clinic running all this time with limited or no funding. While a lot of grants do need tweaking to be award-worthy, the authors are concerned (I believe) that the funding of good science is being unnecessarily delayed and perhaps lost (along with good scientists).

    On the second table, the point is that the NIH has shifted funding away from R01s to other mechanisms and priorities. The authors suggest that if the proportion of the NIH budget devoted to R01 support had remained constant since 2000, ICs would have funded an additional $1.2B worth of R01s in 2007. However, this shift cannot be rectified easily given the NIH Reauthorization Act, which by law requires all ICs to pay into the Director’s Common Fund, slated to eventually comprise 5% of the total NIH budget (leaving them with less for traditional R01s). – writedit

  12. So… then what exactly does tell us about the quality or validity of the review?

    Actually, no information available to any individual applicant, nor any publicly available aggregate information, tells us anything about the validity of review as defined by the percentile at which a grant is scored.

  13. writedit said

    In Nature’s coverage of the A2 elimination, they report that the NIH estimates the number of applications submitted will drop by up to 5000 a year.

    Nature also reports on the giant sucking sound of the Roadmap epigenomics initiative, which is estimated to cost $190 million over the next 5 years, and the protest lodged by 8 scientists in a letter published in Science.

  14. Neuro-conservative said

    Why counsel any young person to go after a career in science?

    What are you talking about? Who would be so unspeakably cruel to a minor?

  15. BB said

    Bikemonkey, I’m back after being off for the holiday. You wrote: Revised apps are NOT to be benchmarked against prior scores. All apps are to be compared primarily within round. So score relative to prior score does not tell you anything about the quality or validity of the review.

    Tell that to the study section, please! They do, can, and will look at previous scores; more importantly, they do, can, and will comment on how well one has addressed the reviewers’ concerns in the resubmitted version.

  16. bikemonkey said

    They do, BB, but they are not supposed to. Commenting about how well one has addressed the prior critique has nothing to do with benchmarking the score. DM put up some of his lame blather, but got it mostly right…

    http://scienceblogs.com/drugmonkey/2008/10/the_score_on_your_nih_grant_re.php

  17. BB said

    So the take-home message is this: address all reviewer concerns, but don’t expect it to help your grant’s score in the end. However, not addressing concerns will most likely hurt.

  18. whimple said

    So the take-home message is this: address all reviewer concerns, but don’t expect it to help your grant’s score in the end. However, not addressing concerns will most likely hurt.

    I wonder if this is why CSR toyed with the idea of getting rid of all grant revisions and just making every submission a new grant without an “introduction” in which reviewer concerns from past rounds were addressed.

  19. bikemonkey said

    It is fair to expect your score to get better. It is not fair to claim something is wrong with the review if it does not.
