Peer Review Advisory Committee Meeting

Update: Thus Spake Zerhouni … the projection of Age Distribution of PIs out to 2020 is amusing.

Update: The Chronicle of Higher Education has an article about NIH efforts to lure back senior reviewers, including an update on pilot peer review reform projects underway and a table at the end showing the percentage of assistant professors serving on CSR panels for 2002, 2005, and 2007. The numbers? 8%, 10%, 7%, respectively.

The Peer Review Advisory Committee (not to be confused with the Working Group of the Advisory Committee to the NIH Director on NIH Peer Review) met today, and the presentations seem to all be uploaded … except for the Great Zerhouni’s update on NIH Peer Review Enhancements. So, we have:

– Toni Scarpa on CSR Initiatives to Improve Peer Review (most bang for the buck re: content)

– Megan Columbus on Electronic Submission Update (transition to Adobe not until Dec 2008 – tentatively)

– Don Scheider on CSR Realignments (neuroscience case studies) & Clustering of Applications (orphan applications explained)

– Marion Mueller on Peer Review in Germany (leads with quote “Peer review is 50% garbage, 50% malice, and 10% good advice.”)

– Olivia Bartlett & Shamala Srinivas on Instantaneous Electronic Scoring of Multicomponent Applications (P01, P50, U19, U54)

Norka Ruiz Bravo has two slides comparing outcomes for women vs men and for new vs established investigators. For 2006, it seems 11% of women and 9% of men scored within the top 10% in CSR study sections. According to the May 2008 council meetings (nice trick to provide these data on April 30th), 13% of new investigators and 18% of established investigators had Type 1 applications scoring within the 20th percentile. Interestingly, the May councils in 2001 and 2002 showed this split at 15% vs 17% (new vs established), with the gap creeping up in May 2006 (14% vs 18%) and again in May 2007 (14% vs 19%).

10 Comments »

  1. whimple said

    Seems encouraging. I’d prefer that they tell us what percent of grants from women / new investigators are funded, and what percent from men / established investigators are funded.

  2. bikemonkey said

    The review data are probably tied to the Council round rather than the meeting itself. Scoring for the May 2008 council round was completed for the most part in feb-mar.

    The two Ruiz-Bravo slides make it even clearer that they know full well they are lying with stats on the New Investigator front. Why did they present male/female data for the top 10%ile if they didn’t realize this was the critical slice of the applications, hm? And if they know that, then they know that comparing the top 20th %ile for New Investigator apps is intentionally minimizing the problem with bias at the review stage.

    whimple, we need all of the data and then some, because it is of interest what is happening at the primary review stage (the scores / %iles) and also what Program is doing to “adjust” these distributions by picking up applications out of priority order. Program has been trumpeting some data showing New Investigator funded-application numbers that look okay/better. They completely gloss over the fact that this is being accomplished because of Program actions to adjust for the outcome of study sections. Of course you might say the outcome is the same whether the app scores within the hard fund line or gets picked up by Program, but I’d prefer that they just go ahead and “fix” the behavior at the study section level.
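
    To put a toy example on the cutoff point (entirely made-up numbers, not the CSR data): if New Investigator applications tend to land just outside the top 10%ile but still inside the top 20th, a slide built on the 20th-percentile slice will show a much smaller gap than one built on the 10th. A minimal sketch:

    ```python
    # Toy illustration (invented numbers, not CSR data): the same two groups can show
    # a large gap at the 10th-percentile cutoff and a small one at the 20th.

    # Fraction of each group's applications scoring in each percentile band.
    bands = {
        "new":         {"0-10%ile": 0.08, "10-20%ile": 0.10},
        "established": {"0-10%ile": 0.13, "10-20%ile": 0.07},
    }

    for group, frac in bands.items():
        top10 = frac["0-10%ile"]
        top20 = frac["0-10%ile"] + frac["10-20%ile"]
        print(f"{group}: {top10:.0%} within top 10%ile, {top20:.0%} within top 20%ile")

    # new: 8% within top 10%ile, 18% within top 20%ile
    # established: 13% within top 10%ile, 20% within top 20%ile
    # A 5-point gap at the 10th percentile shrinks to 2 points at the 20th, even though
    # nothing about the underlying score distributions has changed.
    ```

    Which is why the choice of slice matters. – writedit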

  3. Neuro-conservative said

    Thanks for these links, writedit. I think the most interesting slide is the first data slide from Scarpa (slide four in the deck). It shows that the total number of applications per year shot up as the doubling took effect, from 40,000 in 1998 to 80,000 now. For a brief second, I thought that this accounted for the drop in paylines from 20%ile to 10%ile. But of course, the budget has doubled as well. So if # applications has doubled, and total funding levels doubled, why the big squeeze? I am assuming that most applications are modular with about 250K direct — I think there is a slide somewhere that you once linked showing that the large majority of submissions are in the 200-250K range, and that this hasn’t changed much over time.
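
    The arithmetic of the squeeze is worth sketching. A back-of-the-envelope calculation, using the application counts above plus the rough average-award figures that come up further down this thread (all numbers approximate and purely illustrative, and assuming the share of the budget going to awards stayed roughly constant):

    ```python
    # Back-of-the-envelope payline arithmetic with round, illustrative numbers.
    # The fundable fraction of applications scales roughly as
    #   (budget for awards / average award cost) / number of applications.

    apps_1998, apps_now = 40_000, 80_000              # applications per year (Scarpa slide)
    budget_growth = 2.0                               # the doubling, in nominal dollars
    avg_award_1998, avg_award_now = 250_000, 365_000  # rough nominal average award size

    awards_growth = budget_growth / (avg_award_now / avg_award_1998)  # ~1.4x more awards
    apps_growth = apps_now / apps_1998                                # 2x more applications

    payline_1998 = 0.20                               # ~20th percentile payline back then
    payline_now = payline_1998 * awards_growth / apps_growth
    print(f"implied payline now: ~{payline_now:.0%}")  # ~14%ile
    ```

    Growth in award size against a doubled application pool takes the payline from the 20th percentile down to the low teens on its own; the post-doubling budgets losing ground to biomedical inflation (see the 1998-dollar figures further down) account for much of the rest. – writedit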

  4. BB said

    Did I read that correctly? 51% of grant proposals in Germany are funded and they have 524 reviewers?
    51% … imagine, we’d feel as if we were in Paradise with funding levels like that.

    Germany has been trying to lure top scientists, as was recently reiterated by the Humboldt Foundation president. The 51% rate is what Mueller’s slides show, but the research project award size is hard to gauge. However, the number of peer reviewers listed for 2002-2004 was 10,883 (one of the opening slides says approximately 10K), with <594 elected scientists filling roughly the role of program officers/SROs. Would have been an interesting talk to hear.

  5. VWXYNot? said

    Slightly off-topic, but I couldn’t find an email address to contact you privately. I posted yesterday about the effects that British English spelling might have on US reviewers and was wondering if you or your readers might have any comments? I think I’m right, but haven’t talked to anyone with experience on US review panels!

    http://vwxynot.blogspot.com/2008/05/divided-by-common-language.html

    Thanks

    CAE

    If you are consistent, which one presumes you would be, the un-American spellings should not count against you, particularly since you are applying from Canada. I cannot speak to the personal biases of all grant reviewers from all US funding agencies, but I don’t think you need to lose sleep over this. A quick search of CRISP (i.e., funded PHS applications) turns up tumour, centre, visualise, programme, and so on. Poor diction and faulty grammar would be a more significant stumbling block. You may need to be careful of word choice … not sure of a science example, but along the lines of biscuit vs cookie.

    In the reverse direction, you won’t see British spellings coming back on any summary statements (or you shouldn’t). NIH Scientific Review Officers monitor these to remove British spellings and other potential identifiers in written critiques so as to maintain the anonymity of reviewers (otherwise, the lone Brit or Aussie or Canadian etc. on the panel might stand out).

  6. JSinger said

    …the projection of Age Distribution of PIs out to 2020 is amusing.

    I could feel myself aging as I clicked through those slides!!!!

    For a brief second, I thought that this accounted for the drop in paylines from 20%ile to 10%ile. But of course, the budget has doubled as well. So if # applications has doubled, and total funding levels doubled, why the big squeeze?

    Engineering Science had a good discussion of that question. Grant size seems to have increased towards the start of the doubling.

    Interesting, now that you’ve prodded me to look. In 1998 (beginning of doubling), the NIH made 39,643 awards averaging $282,011 each. In 2003 (end of doubling), the NIH granted 52,587 awards averaging $415,821 ($351,795 in 1998 dollars). In 2007, the NIH made 52,635 awards averaging $403,986 ($291,476 in 1998 dollars). Eyeballing the slide of RPG (research project grant) award size (#10) in the NIH Data Book, it looks as though the average RPG award was about $250K in 1998, $340K in 2003, and about $365K in 2007.
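
    Treating those dollar figures as average cost per award (which is how they read against the Data Book slide), a quick sketch of the arithmetic behind the 1998-dollar conversions and the growth rates; all inputs are the figures quoted above, and the deflator is simply implied by each nominal/real pair:

    ```python
    # Sanity check on the 1998-dollar conversions above. All inputs are the figures
    # quoted in this comment; the deflator is implied by each nominal/real pair.

    figures = {
        # year: (number of awards, avg cost per award in nominal $, avg cost in 1998 $)
        1998: (39_643, 282_011, 282_011),
        2003: (52_587, 415_821, 351_795),
        2007: (52_635, 403_986, 291_476),
    }

    base_awards, base_cost, _ = figures[1998]
    for year, (awards, nominal, real) in figures.items():
        deflator = nominal / real   # implied cumulative biomedical inflation since 1998
        print(f"{year}: awards x{awards / base_awards:.2f}, "
              f"nominal $/award x{nominal / base_cost:.2f}, "
              f"real $/award x{real / base_cost:.2f}  (implied deflator {deflator:.2f})")

    # 1998: awards x1.00, nominal $/award x1.00, real $/award x1.00  (implied deflator 1.00)
    # 2003: awards x1.33, nominal $/award x1.47, real $/award x1.25  (implied deflator 1.18)
    # 2007: awards x1.33, nominal $/award x1.43, real $/award x1.03  (implied deflator 1.39)
    ```

    So in real terms the average award rose through the doubling and has since fallen back to roughly its 1998 value, while the number of awards is up only about a third against a doubling of the application count; that, in a nutshell, is the squeeze. – writedit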

  7. bikemonkey said

    And of course the difference in salary between an assistant professor and the NIH salary cap (which everyone seems to hit by their mid-to-late 50s, from what I can tell) is $80-$100K. Just another bennie of your aging PI population.

  8. […] know have this opinion so I’ll run with it. Everyone knows that something has to be done and NIH appears to be set to make a concrete commitment to make some changes that might get us back on the right track with a little help from congress and a new administration […]

  9. […] this is putting good applications in a holding pattern. [Update 05/07/08: I notice that writedit points to a powerpoint from the Great Zerhouni which includes (slide #57) a graph much like my example! […]

  10. […] 05/07/08: I notice that writedit points to a powerpoint from the Great Zerhouni which includes (slide #57) a graph much like my example! […]
