Fixing Near-Miss Grant Applications

So you have that priority score that is frustratingly close but no cigar. In fact, your program officer may have held out hope for last-minute Council support … but no. What to do?

Case Study:

R21 receives a priority score of 138, 11th percentile (think about this a moment, folks – 138 … 11th percentile?) … the concerns are incredibly minor, including a quibble over word choice (discussed!). Think: appropriate use of sex vs. gender. Not relevant to the science at all. However, this A0 is not funded.

The PI takes the Neuro-conservative zen view on resubmission (First, Do No Harm), including a clean, simple apology for the errant word usage. Remember, this is an R21 with only a 1-page introduction.

The A1 receives a priority score of 107, 0.1st percentile, with the following comments in the summary of discussion: “The proposal is significantly bolstered by the thorough responsiveness to minor concerns from the previous review and with the addition of new preliminary data. … the committee was unanimous in their high enthusiasm for this outstanding application.” Obviously.

So, the PI kept collecting data, did not change anything in the Specific Aims or, that I can recall, in the Background & Significance … but did streamline one set of experiments thanks to a recent innovation in the lab.

In this case, I don’t think the program officer was a fount of insight, but I would recommend starting there if not enough guidance for the resubmission is communicated in the resume & summary of discussion. Hopefully your program officer heard or heard of the discussion and can offer pointers for what to address. I’ve seen scores of summary statements for just about every grant mechanism and IC, though, and usually the opening discussion paragraph (for scored applications) throws the PI a bone for preparing an amended application.

Certainly a pristine, laser-focused introduction sans color commentary is critical. Citing new reports in the literature as appropriate (especially those by study section members) and, of course, tying in any new data you have (even better, a newly accepted or published manuscript) will show reviewers you are committed to seeing this work through in a sound, scientific manner. Appropriately integrating new data from the literature and/or your lab into the research design and methods (additional rationale for your approach, consideration as an alternative approach, insight into a potential pitfall or divergence in outcome, etc.) will further increase reviewer enthusiasm. If your IC (institute/center) has a new strategic plan or programmatic priority in line with your proposal, point this out as well. The key is not to open yourself up for potential target practice with the insertion of a new theory or premise or methodology. And, of course, not to get into a pissing match with the study section.

I’m sure comments from yuns will add a battery of great suggestions. I find it difficult to offer blanket advice on this sort of situation without knowing the application and its history. Grantsmanship instruction is available freely everywhere … the key is to get good input on your application specifically rather than try to fix it in a vacuum, no matter how many grant-writing workshops you’ve attended or websites you’ve clicked through. An army of one was never less likely to succeed than in the battle for grant funding.



  1. whimple said

    That’s great to say keep on collecting data, but who’s paying the bills for that? It seems clear to me that what needed fixing in the grant was the part that said “A0”: correcting that to read “A1” instead did the trick.

    The NIH doesn’t need to let study sections continue to play this game. Percentiling the A0’s together separately from the A1’s would clear out these kinds of shenanigans.

    I also don’t understand the whole “contact your program officer and hope they went to the study section and can tell you what really went on” paradigm. In line with making the process more transparent and accountable to stakeholders, the NIH could release an (anonymized) transcript of the study section deliberations (but of course that’s NEVER going to happen).

    Yes, you’re correct: the NIH will never release study section discussion transcripts, which is why it’s worth at least asking your program officer. However, as BB notes, they often don’t attend, and some are more helpful than others in providing insight into the unwritten tenor of the discussion. So, no, it’s not always helpful, and you should know your program officer well enough to know whether it’s worth a shot. I can attest that some can and will provide extremely valuable feedback on how to respond/resubmit. I’m sorry this has obviously not been your experience. Unfortunately, I suspect program officers need to be even more circumspect in these uncertain times, so such advice may dwindle further.

    Also, in this particular case, reading the A0 summary of discussion makes it clear the panel thought the grant should/would be awarded on initial submission – not that they felt it needed to pay its dues and come back as an A1. Hence the addition of the word “unanimous” to the A1 summary – just in case program had any confusion about the 107 score.

    And on the funding … other awards, start-up funds, intramural (institutional) pilot funds, negotiations with the chair … these are the sources of funds for collecting preliminary data for a new grant application. As PP notes, even the R21 mechanism is rarely funded without any preliminary data … as so elegantly confirmed in this case study, actually. – writedit

  2. BB said

    I’ve contacted my program officer for feedback on the discussion and had the following experiences: in the case of low to near-fundable scores, I’ve been told that everything was in the summary sheets. In the case of triaged proposals (with clearly dissenting opinions among reviewers), I’ve been told there was no discussion. More usual was to be told the program official did not attend the study section meeting.
    For my last proposal, I had to address pages of comments from 3 reviewers – in 1 page! I did so, and was criticized for editing down the reviewers’ comments so my introduction would fit on 1 page.
    I’m looking at Help Wanted ads even as I write my next proposal.

    Triaged proposals are almost never discussed. I have not heard of a case in which a reviewer prevailed upon the study section to discuss a triaged application, though room for such variance does exist. On the program officer referral to your summary statement, I’d recommend you have a disinterested person read it if you did not see any apparent clues to guide your resubmission. There really is something there, whether explicit or not. Of course, your program officer’s response is a tad worrisome, since one would hope this person would be a more proactive advocate for your success if your work were of interest/importance to his/her portfolio.

    Resubmitting triaged proposals is always a tricky venture since the individual critiques are often conflicting, sometimes to an extreme. Responding to critiques for an unscored application can be a further crap shoot since other weaknesses not spelled out in the pink sheets could also be in play (& left in the resubmission, possibly dooming it to a second triage with an entirely different set of weaknesses raised). Perhaps I should post a case study on this as well. -writedit

  3. BB said

    One thing I forgot to add is that with DoD proposals, all bets are off because study section members change with each grant cycle. So for that “close but no cigar” score, even your program officer can’t help you because a new roster of folks will be reviewing – and they are not obligated to look at previous critiques. Frustrating to the max.

    DoD applications are frustrating no matter what. Unless they’re funded. – writedit

  4. iGrrrl said

    Of course, your program officer’s response is a tad worrisome, since one would hope this person would be a more proactive advocate for your success if your work were of interest/importance to his/her portfolio.

    There’s the rub. This is a point I continually try to hammer home: no one is entitled to grant support from the federal government. The institutes have agendas, and sometimes congressionally-mandated areas of interest, yet I encounter faculty who have either no clue that this knowledge can help them, or no intention of applying it out of belief that it would taint their “pure” science.

    I’m cynical today, but it’s a cynical profession.

    So well said. Thanks, iGrrrl. – writedit

  5. BB said

    The institutes have agendas, and sometimes congressionally-mandated areas of interest, yet I encounter faculty who have either no clue that this knowledge can help them, or no intention of applying it out of belief that it would taint their “pure” science.
    Yes indeed, a very good point. My science is so “impure” (experimental cancer therapeutics for targeted molecules), it’s probably applied science (horrors!).

  6. drugmonkey said

    Triaged proposals are almost never discussed. I have not heard of a case in which a reviewer prevailed upon the study section to discuss a triaged application, though room for such variance does exist.

    I’m not entirely sure what you are saying here, but it sounds a little bit off. A few comments from my experience…

    1) about a week or so before the meeting, reviewers assign their preliminary scores. They may also nominate some applications for streamlining (or whatever they are calling it now), i.e., triage.

    2) The other two (typically) may or may not have assigned triage nominations to the same application.

    3) The SRO will have a target proportion of applications which are supposed to be triaged, in recent rounds 60%. As the initial score distributions are established, the mean scores are used to set the triage line.

    4) First order of business at the meeting is to sort out the triaged applications and make sure nobody objects to triaging each and every one.
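    A toy sketch of step 3 – deriving a triage line from the preliminary mean scores. This is purely an illustration under stated assumptions: the function name, the ranking rule, and the exact cutoff arithmetic are all made up here, and the SRO’s actual procedure may differ.

```python
def triage_line(mean_scores, triage_fraction=0.6):
    """Toy model: given reviewers' preliminary mean scores on the old
    NIH scale (100 = best, 500 = worst), return the cutoff score.
    Applications scoring worse than this cutoff fall into the
    streamlined (triaged) bottom 60%."""
    ranked = sorted(mean_scores)  # best (lowest) scores first
    discussed = int(round(len(ranked) * (1 - triage_fraction)))
    return ranked[discussed - 1]  # worst score still discussed

scores = [120, 135, 150, 180, 210, 250, 280, 310, 340, 400]
triage_line(scores)  # -> 180: the best 4 of these 10 apps are discussed
```

    The point of the sketch is only that the line emerges from the score distribution itself, which is why an application can land in the triage zone even when none of its assigned reviewers nominated it for streamlining.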

    I have seen situations in which two reviewers nominate an application for triage and the third insists on discussing it. Rare, but I have seen situations in which someone on the panel insists on discussing an app that the three assigned reviewers wish to triage. I have seen situations in which the reviewers did not intend to streamline but the average score was within the 60% triage zone. Mostly they go along, but sometimes one or more of them really wants to discuss the application.

    As a final note, it is possible for the panel to discuss an application and then decide to unscore it.

    I’ve seen all these things occur. Conversations with other people I know who review on other panels suggests these are not uncommon.

    Now it IS true that this is all supposed to be confidential with respect to specific applications. The applicant knows it was triaged or scored, of course, but not the details regarding how this outcome was achieved. So from the applicant’s side, they should be unaware whether their proposal was “saved from triage” by one or more reviewers…

    “Rare, but I have seen situations in which someone on the panel insists on discussing an app that the three assigned reviewers wish to triage.” – This is the situation to which I was referring (versus the whole triage process) … reviewers saying no, and someone else on the panel exercising their prerogative to discuss, though I see I screwed up in using the term “reviewer” instead of panel member. But the broader point is that PIs with unscored applications should not look for a “resume & summary of discussion” in their summary statement nor expect feedback from a program officer even if he/she did attend the panel. Interesting about the post-discussion unscoring, especially since I see scores in the upper 200s/low 300s (so I now wonder why they weren’t just unscored – though I’m happy they weren’t). – writedit

  7. drugmonkey said

    R21 receives a priority score of 138, 11th percentile (think about this a moment folks – 138 … 11th percentile?)

    What’s to think about? There is a 400-point range, running from 100 to 500, available for scoring. 11 pct of this is 44, so in a perfectly flat distribution a 144 should be 11th percentile. A 138 (9.5th percentile in a flat distribution) is not so far off, given the way sections struggle to get anywhere near a “flat” distribution (there is a tendency to stack scores up around the perceived funding line).
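    drugmonkey’s flat-distribution arithmetic can be checked with a two-line calculation (`flat_percentile` is a made-up helper for illustration, not anything NIH publishes):

```python
def flat_percentile(score, best=100, worst=500):
    """Percentile a priority score would earn if scores were spread
    perfectly evenly across the old NIH 100-500 scale."""
    return (score - best) * 100 / (worst - best)

flat_percentile(144)  # -> 11.0, the 11th percentile figure above
flat_percentile(138)  # -> 9.5
```

    Real percentiles diverge from this because they are computed against each section’s actual (clumpy) score distribution, not a uniform one.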

    I am comparing with priority scores from prior cycles in which the percentile would (likely) have been lower for that score based on the relative rank, number of applications, and clustering versus a theoretical “flat” distribution. And … I’m thinking, 11th percentile not getting funded? (this was a couple years ago and not NINDS) – writedit

  8. Another Biomedical Scientist said

    60%? Try 70% triage rates! I recently had an R01 (an A2) listed as unscored. I called my friendly program officer (who conveniently was a close friend from grad school and was therefore happy to give me the gory details). He explained that I had the highest rated unscored application – and that it was at the 30th percentile. Study section members just don’t see the point of discussing applications too far away from the funding line. Even for a new investigator A2…I was definitely far angrier after learning that a 30th percentile application wasn’t discussed than I was when I originally got the pink sheets (generally very positive and reasonable, with weaknesses where I recognized them that came down to “need more data” or alternatively “skip the highly innovative but more speculative aim”). I mean, with NIGMS new investigator funding then at 22%, there was at least a chance of it moving up during a full discussion, right? Enough that it was at least worth discussing?

    Then the equally irritating rub: a “new” application was submitted that took the previous application, rid it of the problematic aim and subaims, added a recommended aim (that made a lot of sense) and strengthened previous aims, and added a lot more data. But it was sent to a different study section (in spite of the cover letter) and scored (yeah!)…but at 46% (that is, substantially worse than the weaker previous application). It’s now been resubmitted (as an A1) and fairly forcefully directed to the preferred study section.

    Wait, why are new investigators frustrated?

  9. whimple said

    I feel your pain. From my perspective as a new investigator, the study sections are just filled with narrow-minded assholes operating in self-preservation mode. Unscoring your A2 essentially ties the hands of your program officer and prevents him/her from trying to get it funded in Council. It is incredibly frustrating. Novelty and innovation get crushed. All I see getting funded is rock-solid, absolutely certain-to-work, more of the same.

    You can (should?) appeal the study section assignment if you don’t like it.

    New investigators are frustrated because old investigators won’t get out of the way.

  10. whimple said

    Wow. That sounded pretty bitter I guess. Sorry about that.

  11. bikemonkey said

    it is important to realize that whimple’s comment reflects a not uncommon feeling. The system could stand asking why this is so. Justified or not.

    True, unfortunately enough, and I think these are definitely concerns/complaints worth airing. Very poignant. – writedit

  12. TeaHag said

    I’m participating in a grant which recently received a score of 264. In a study section that is triaging at 60-65%, that suggests a situation where two of the three reviewers had placed it firmly in the triage pile and one had reached in to pull it up for discussion. It was clear from the comments in the summary statement that this was the case. Reviewer #1 raved about significance (incl. clinical relevance), approved of the approach, schooled us for some overlooked controls, and was clear and nuanced in discussing the stronger/weaker aims, etc. (can you tell that I’d send them a thank-you card?). The remaining two reviewers took exception to a couple of the reagents being used, in a single paragraph of text, and threw in a gratuitous “fatal flaw” remark gratis.

    So, we’ve recently determined that the reagents in question were/are entirely appropriate for use (additional preliminary data and a recent publication from a competitor). So, logically, this should be simple, right? It’s an R01 competitive renewal from a highly productive PI (not me, lol) heading back in as an A1. The PI feels that all we need do is respond to the reviewers’ comments, basically answering the concerns of the first and rebutting the remaining two, and leave it at that. I’m worried that with a score of 264 sitting on the page, we’ll be expected to make much more significant changes to the grant in order to drag ourselves back into the fundable range. I’m paranoid enough to feel that if we get those reviewers again, the two who were so negative the first time won’t care to be told that they were wrong. I think they were happy to find something fatal the last time … and just because we seem to be dodging the bullet this time doesn’t mean that they won’t just pull out their bows and arrows.

  13. Frustrating said

    All the rules we learned about grant writing did not work. It is a bitter experience for young investigators.
    My first R01 was unscored last year. It took me a year to obtain more data and rewrite it, applying the best of everything I have learned from my own experience, including the K01 I obtained, and so on. Yesterday, I got the “unscored” verdict again.
    I totally agree with all of your comments above. Would the referees put themselves in our situation before judging us “not competitive”? I am sure there are thousands of us out there, working 100 hours per week on multiple tasks: lecturing, writing grants and papers, mentoring, reviewing for journals, even working at the bench like graduate students. Put it all together and we work for something like a dollar per hour. What is that for? Because we want to contribute, to make a difference, to improve lives. I wonder whether the people who judge us have gone through these stages in their own careers. Why are they so deaf to us?
    Regarding getting suggestions from the program officer, it is useless. Their job is supposed to be helping investigators, but in fact they treat investigators like trash and usually talk to us in an arrogant manner.
    Altogether, the process of reviewing grants through study sections does not work. We need change, or else thousands of young scientists will tearfully and bitterly leave the field, and it is not going to be a good example for young people to follow this stressful and unfair path.

    Wow. What a poignant message, Frustrating. I am of course sorry to hear about your R01, but even more about the manner in which your program officer has treated you and clearly the less-than-mentoring critiques in your summary statement. This process is meant to advance good science and scientists (through constructive criticism if necessary) – not shoot down applicants. The Great Zerhouni’s enthusiasm aside, I am concerned about how all the peer review changes might impact younger investigators, at least in the near term as reviewers and applicants learn their way around the new rules. – writedit
