Archive for Grantsmanship

Questions Answered & Discussions Held

Greetings – for those of you arriving at the blog via the main writedit link, please refer to the NIH Paylines & Resources and Discussion: NIH Scores-Paylines-Policy-Peer Review pages (at the top of the right column of this blog) to ask questions (and have them answered relatively quickly, if not same day), learn from the experiences of fellow researchers (especially timelines of grant application submission, review, and award), and discuss issues related to the NIH and NIH funding.

Although I am much less engaged with the NSF now than in the past, I am happy to consider queries about their grant process at the Discussion: All Things NSF page as well.

Also, I will be overhauling How the NIH Can Help You Get Funded, so if you have suggestions for what would be useful to cover, please feel free to comment here or contact me directly.

Thanks for all your support and contributions, and best wishes for success with your research and your grant applications!

FY13 Funding Trends

In working on the book, I was disappointed that we could not get funding trend data for more ICs (10 of 24).

CSR Scoring Recalibration

NIGMS has kindly publicly confirmed that CSR is recalibrating percentiles, having pushed SRGs to enforce the spreading of scores when reviewing Cycle III applications this past February and March.

4-Year R01s at NHLBI

Although a Congressional mandate has been in place for many years to keep the “average” length of RPG awards issued at 4 years, most ICs manage this by letting 2-year R03 and R21 awards offset some of the 5-year R01 awards. This is trickier at an IC such as NIGMS, which does not participate in the short-term mechanisms and so often “adjusts” R01s to 4 years, as do other ICs (e.g., NIBIB) – including NHLBI in past years. At the November 2012 Council meeting, the NHLBAC learned about NHLBI’s new fiscal policy on R01 project length:

The Institute’s longstanding practice was to adjust duration of R01s to achieve a 4 year average for research project grants. Applications that received the very best percentiles and those from Early Stage Investigators (ESIs) received awards for the full length of their Council-recommended project periods. The Institute has made a decision that beginning in FY 2014, it will fund competing, investigator-initiated R01s for 4 years. Exceptions include ESIs, applications with timelines that cannot be accomplished within 4 years, and AIDS projects (which have a separate appropriation). Researchers are encouraged to submit for review only applications with a project period of 4 years or less.

NHLBI dropped out of the R21 parent announcement and does not participate in the R03 parent announcement either, so this is not entirely a surprise, but the explicit request that PIs submit proposals limited to 4 or fewer years in duration is new.

Insider Advice from former NIGMS Director Jeremy Berg

Paul Knoepfler has posted a two-part interview with Jeremy Berg that should be of considerable interest to those who wander by MWEG. In Part I, Jeremy offers advice to grant applicants (“Submit the most carefully prepared applications that you can!”) and reflects on how applications are selected for funding (including the accompanying budget cuts). In Part II, Jeremy provides his perspective on NIH funding in the next 5 years (anywhere from tough to horrible, depending on what happens with sequestration), what he would use his magic wand to change (give feedback to reviewers), and what he learned as NIGMS Director (the good, the bad, & the ugly).

Thanks to Paul for taking the time and initiative to do this (in his vast spare time) and to Jeremy for sharing his insights with the research community!

NIH & NSF Efforts to Redistribute the Wealth

Last week, the NIH announced a pilot program in which IC Councils will conduct an extra review of competitively scored applications from PIs who currently receive $1.5M or more per year in total costs to determine if additional funds should be awarded (this roughly matches the long-standing NIGMS strategy of giving extra scrutiny to PIs receiving $750K or more in direct costs, assuming an average F&A rate of ~50%). The NIH is quick to note that this Special Council Review (SCR) does not represent a funding cap policy and that “some of the most productive investigators are leading significant research teams and programs that may require over $1.5 million/year of NIH awards to be sustained … [and] that some types of research, for example large complex clinical trials, may commonly trigger this review but may also be recommended for funding.” RFAs and big P program applications won’t receive extra review, and for multiple-PI/PD submissions, each of the PIs would need to exceed the $1.5M threshold. This pilot effort was inspired by the discussion on how the NIH can best manage its limited resources … the interactive slide on RPG funding per PI indicates that 6% of PIs receive $1.5M or more per year, representing 28% of the RPG budget.

This week, Science reported on the Big Pitch experiment at the NSF (Molecular and Cellular Biosciences Division) in which two different review panels reviewed two different presentations of the same research. One panel received the full traditional proposals, while the other assessed anonymous 2-page summaries that focused on the underlying concept rather than experimental detail. Only 3 out of 55 proposals (in this pilot, on climate change) were rated highly by both review groups, and 2 of these were funded; altogether, the NSF funded 3 projects selected exclusively through the 2-page proposal reviews and 5 through the full proposal reviews.

The experience of one of the anonymous 2-page awardees might ring true with many struggling PIs:

Shirley Taylor, an awardee during the evolution round of the Big Pitch, says a comparison of the reviews she got on the two versions of her proposal convinced her that anonymity had worked in her favor. An associate professor of microbiology at Virginia Commonwealth University in Richmond, Taylor had failed twice to win funding from the National Institutes of Health to study the role of an enzyme in modifying mitochondrial DNA.

Both times, she says, reviewers questioned the validity of her preliminary results because she had few publications to her credit. Some reviews of her full proposal to NSF expressed the same concern. Without a biographical sketch, Taylor says, reviewers of the anonymous proposal could “focus on the novelty of the science, and this is what allowed my proposal to be funded.”

The Big Pitch format could “remove bias and allow better support of smaller, innovative research groups that otherwise might be overlooked,” Taylor adds. “The current system is definitely a ‘buddy system’ where it’s not what you know but who you know, where you work, and where you publish. And the rich get richer.”

A second round of Big Pitch (evolution proposals) had similar results, and the NSF is considering adding another arm to the experiment in which a third panel of reviewers would receive both the short proposal and an abbreviated biosketch of the PI. They might also consider 4 rather than 2 pages for the concept proposal … and, apart from the anonymous review experiment, the MCB Division has limited the number of proposals a PI can submit, while Integrative Organismal Systems and Environmental Biology have implemented a pre-proposal policy (with full proposals submitted by invitation).

Small steps to address perceived inequity in funding decisions … looking forward to even more innovative, paradigm-shifting proposals.

More on the Impact-Criterion Score Correlation

This time by Sally Rockey on Rock Talk.

Jeremy Berg introduced the concept of correlating overall impact score with the individual criterion scores, first using NIGMS and then NIH-wide data.

Based on the 32,546 applications (of 54,727 submitted) that received overall impact scores in FY10, OER played with the numbers a bit more but came up with the same conclusions: Approach and then Significance drive Overall Impact scores.

For applications receiving numerical impact scores (about 60% of the total), we used multiple regression to create a descriptive model to predict impact scores using the applications’ criterion scores, while attempting to control for ten different “institutional” factors (e.g., whether the application was new, a renewal, or a resubmission). In the model, scores for the approach criterion had the largest regression weight, followed by criterion scores for significance, innovation, investigator, and environment. The same pattern of results was observed across multiple rounds of peer review and institute funding decisions.
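The descriptive model OER describes — multiple regression predicting overall impact scores from the five criterion scores — can be sketched on synthetic data. Everything below (the sample size, the "true" weights, the noise level) is invented purely for illustration and is not NIH data; it simply shows the mechanics of recovering regression weights with the reported ordering (approach largest, then significance, and so on).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # number of synthetic applications (illustrative only)

# Criterion scores on the NIH 1 (best) to 9 (worst) scale
criteria = ["approach", "significance", "innovation", "investigator", "environment"]
X = rng.integers(1, 10, size=(n, 5)).astype(float)

# Hypothetical "true" weights echoing the reported ordering:
# approach carries the most weight, then significance, etc.
true_w = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
impact = X @ true_w + rng.normal(0.0, 0.5, n)  # add noise

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, impact, rcond=None)

# The fitted weights (coef[1:]) should recover the assumed ordering
for name, w in zip(criteria, coef[1:]):
    print(f"{name:>12}: {w:+.3f}")
```

On this synthetic data, the recovered weights reproduce the assumed ordering, with approach carrying the largest regression weight — the same qualitative pattern OER reports for the real FY10 applications.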

She also notes, as can be seen in her figure, that scores for Approach showed the widest range, followed by Significance.

So, the work you propose doing better be important … and, more importantly, better be done right.

Surveying Peer Review Enhancements

In the midst of grant deadlines, writedit has been staring longingly at the psychiatric hospital up the hill, where a room with a view and a Valium drip sounds good about now, but has just enough time for a quick post to distract all of you with freshly assigned impact scores from obsessively searching for any hint of funding success … and those of you with stale impact scores from wondering again when paylines might be known.

The NIGMS Feedback Loop and Rock Talk both have current posts on OER survey data on Enhancing Peer Review. The Feedback Loop pulls out respondent assessment of the value of the individual criterion scores, a topic of recent interest to Director Berg at both the NIGMS and NIH-wide levels. Seems fewer than half of you feel the criterion scores are particularly helpful …

Peer Review Survey

The Comparative Assessment of Peer Review (who knew?), an NSF-funded project of the Center for the Study of Interdisciplinarity (who knew?) at the University of North Texas, has an online survey that you are all invited (and encouraged) to complete. The CAPR “examines the peer review process at 6 science agencies worldwide: NSF, NIH, NOAA, NSERC, the EU’s 7th Framework Programme, and the Dutch STW.”

Probably not entirely what you might expect, but still an interesting thought exercise with plenty of opportunity to enter free-text comments and input.

The project is also creating a digital repository for the aforementioned science agencies (the sorts of program & policy documents not easily found in one place) and examining the broader impacts criteria for NSF-funded research (other than their own).

And, speaking of peer review & broader impacts, for those of you familiar with the Rocket Boys story (and even more so for those of you who are not familiar with it!), I think you’ll enjoy this adorable little (3’32”) video from the NIH.

NIH-Wide Data on Impact & Criterion Score Correlations

Ask and ye shall receive … NIH-wide data on the correlation between individual review criterion scores and overall impact score, compliments of Jeremy Berg.

Correlation coefficients between the overall impact score and the five criterion scores for 32,608 NIH applications from the Fiscal Year 2010 October, January and May Council rounds

As he notes, the trends across the ICs mirror what he found at NIGMS. The NIH-wide data also include more mechanisms … whereas Jeremy analyzed 654 R01s from one cycle, these data include all RPG, research center, and SBIR/STTR applications over 3 cycles (Oct-Jan-May Councils). Not sure if we’ll get all his other lovely data at the NIH level, but we can dream. In the meantime, thanks so much for your leadership in disseminating the NIGMS and now these NIH-wide data, Dr. Berg.
