Update: The timeline for implementing specific policies is discussed here.
Update: In addition, NIAID reports that “This month, NIH will launch some pilots and try some changes, such as shortening the length of R01 and some other applications, developing a new scoring system, and giving applicants more useful feedback.” This month? June??? Yes! In the form of Transformative R01s …
Today the Great Zerhouni announced “Enhancements to Peer Review” of the sort that would not wind up in your spam folder. The Advisory Council to the Director met today, and Larry Tabak’s presentation to the Council on Enhancing Peer Review actually has some meat to it.
For Priority 1, Engaging the Best Reviewers, we have recommendations to spread a 12-session reviewer commitment over 4-6 years; allow “duty sharing by colleagues as appropriate”; establish a service requirement policy for certain classes of awards (Merit/Javits, Pioneer, Type 2 renewals with >$500K in direct costs, PIs on 3 or more R01 equivalents); rank proposals at the meeting’s conclusion (to provide feedback to study section members); allow reviewers who have served at a minimum of 18 full study section meetings (as a chartered member or equivalent) to apply for administrative supplements of up to $250K in total costs and/or request consideration for Merit/Javits awards on a competitive basis; and develop an NIH-wide standardized core curriculum based on best practices.
For Priority 2, Improve the Quality and Transparency of Reviews, we have – shockingly – a new 7-step scale (vs the 41-step scale currently in use) in which assigned reviewers provide individual scores (1-7) for each of the 5 review criteria (impact, investigator(s), innovation/originality, project plan/feasibility, environment) and a preliminary global score. Applicants who are streamlined would receive 5 scores, one for each criterion representing an average from all reviewers. For applications that are not streamlined, all study section members, based on the discussion of each criterion, will provide a global score of 1-7; after initial scoring, all proposals within relevant categories will be discussed as a group & ranked (ranking at the conclusion of the meeting then allows for “recalibration” of global scores).
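If the arithmetic of the proposed scoring is hard to picture, here is a minimal sketch in Python. The criterion labels, reviewer counts, and sample scores below are purely my own illustration of how the averaging might work, not anything NIH has actually specified.

```python
# Illustrative sketch of the proposed 1-7 scoring arithmetic.
# Criterion names and all numbers are hypothetical examples.

CRITERIA = ["impact", "investigators", "innovation", "project_plan", "environment"]

def criterion_averages(reviewer_scores):
    """Average each criterion's 1-7 scores across the assigned reviewers.

    reviewer_scores: list of dicts mapping criterion -> score (1-7).
    Returns criterion -> mean score: the five numbers a streamlined
    applicant would reportedly receive.
    """
    n = len(reviewer_scores)
    return {c: sum(r[c] for r in reviewer_scores) / n for c in CRITERIA}

def panel_global_score(global_scores):
    """Average the 1-7 global scores from all study section members,
    as would happen for a discussed (non-streamlined) application."""
    return sum(global_scores) / len(global_scores)

# Three assigned reviewers score a streamlined application:
reviews = [
    {"impact": 3, "investigators": 2, "innovation": 4, "project_plan": 3, "environment": 1},
    {"impact": 4, "investigators": 2, "innovation": 5, "project_plan": 4, "environment": 2},
    {"impact": 5, "investigators": 3, "innovation": 3, "project_plan": 5, "environment": 3},
]
avgs = criterion_averages(reviews)
print(avgs["impact"])  # 4.0 -- the average of 3, 4, and 5
print(panel_global_score([3, 4, 4, 5, 3, 4]))  # mean of six members' global scores
```

Note that ranking at the meeting’s end would then let the panel “recalibrate” these global scores, so the averages above would only be a starting point.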
The summary statement then would be realigned with the explicit rating criteria, with a template allowing a prescribed amount of space for each criterion. Optional fields would be available to reviewers who wish to provide additional advice (“mentoring”), including the helpful suggestion that the proposal not be resubmitted unless fundamentally revised as a new application.
The application itself would be shortened and redesigned as well: 12 pages for R01s, with “other mechanisms to be scaled appropriately”, and structured to align with the explicit review criteria. Appendices will be limited to 8 pages and will only be permitted “for specific information that is deemed critical on the basis of NIH-defined criteria (e.g., elements for a clinical trial or a large epidemiologic study)”.
For Priority 3, Ensure Balanced and Fair Reviews Across Scientific Fields and Career Stages, they talk about both early stage investigators (ESI) and new (to NIH) investigators, with an intent to “cluster review, discussion, scoring, and ranking of ESI within a study section, pilot percentiling ESI across all study sections, and … ensure that the number of fully discussed proposals from ESI is not disproportionately reduced”. Clinical research applications would enjoy the same clustering of “review, discussion, scoring, and ranking”. Reviews for more experienced investigators would “place equal emphasis on retrospective assessment of accomplishments and a prospective assessment of what is being proposed”.
“Transformative” research will be encouraged by expanding the Pioneer, EUREKA, and New Innovator awards until these comprise ~1% of R01-like awards. The Pioneer & Junior Pioneer (aka New Innovator) pot will be >$500M over 5 years, while the EUREKA pot will be >$100M over 5 years. These programs will be joined by a “new, investigator-initiated ‘transformative’ R01 pathway using the NIH Roadmap authority & funding” with >$250M over 5 years. (so, yeah, we’re talking close to $1B for these 4 programs)
Another component of Priority 3 seeks to reduce burden on applicants, reviewers, and NIH staff. Here, the goal is to reduce resubmissions both from applicants with a high likelihood of funding based on their A0 review (hallelujah!) and from applicants with low or no likelihood of funding based on their A0 review (thank you, straight talk express). This component also seeks to “rebalance success rates among A0, A1, and A2 submissions to increase system efficiency” and to include statistics on cumulative success rates as a function of score or percentile in the summary statement. This section is accompanied by some incredible figures, including a bar chart and table of unsolicited R01 applications funded by percentile and by status [A0, A1, A2] for 4 years [1998, 2004, 2005, 2006]; the bottom line is that “almost twice as many rounds of applications required today … though most are still ultimately not funded.” Cheery data, eh?
For Priority 4, Develop a Permanent Process for Continuous Review of Peer Review, suggestions include piloting 2-stage reviews (editorial board model) and “prebuttals” as well as high-bandwidth electronic review, different methods for ranking the relative merit of applications, and monitoring the performance of review. On the issue of percent effort, an alternative approach to requiring a minimum percent effort is suggested: applicants would be required to complete a subfield in the Environment section of the application in which they indicate whether they have NIH RPG (research project grant) support in excess of $1M at the time of anticipated funding. If so, they must justify why additional resources are needed.