Pooling Peer Review

Two items in Nature journals consider the benefits of a consortium approach to peer review of journal manuscripts (already in existence) and grant applications (a modest proposal). Nature Neuroscience announced today that it is joining the Neuroscience Peer Review Consortium, which “reduces the overall reviewing workload of the community by allowing authors to continue the initial review process when their paper moves from one consortium journal to another, once the paper has been rejected or withdrawn from the first journal.” The Nature Neuroscience editorial describes the process, its voluntary nature, and its flexibility, and notes that the NPRC system will be evaluated at the end of the year … and on an ongoing basis at the journal’s blog, Action Potential.

Separately, in a letter to Nature, Dr. Noam Harel of Yale makes a modest proposal: a centralized grant proposal repository into which applications could be deposited at the PI’s leisure and which sponsors could search for interesting science to review and possibly fund (no doubt with some encouragement from depositing PIs). The research proposals would be made available only to sponsor agencies, and multiple sponsors interested in the same work could collaborate on a shared funding agreement. As a thought exercise, interesting. As something to actually implement …

And finally, while we’re pondering peer review, Gregory Cuppan, a managing principal at McCulley/Cuppan (which specializes in document development), contributes to the commentary on a prior thread discussing the Publishing Research Consortium survey data. Specifically, he notes that “most people have little or no formal training in the task of review” and would “be interested to know how many readers of this blog have actual formal training in the task of review (here I make a strong distinction from training for the task of editing).” He refers to a 1961 study by the Educational Testing Service in which 53 distinguished readers scored 300 college student papers, yet the median correlation among reviewer scores was only 0.31.
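
For readers curious how a figure like that 0.31 arises, here is a minimal sketch in Python, using entirely hypothetical scores (not the ETS data): correlate each pair of reviewers’ scores across the same set of papers, then report the median of those pairwise correlations.

```python
# Hypothetical illustration of a "median correlation among reviewer scores."
# The scores below are invented for demonstration; they are NOT the 1961 ETS data.
from itertools import combinations
from statistics import median

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Four hypothetical reviewers, each scoring the same six papers (1-9 scale).
scores = [
    [7, 5, 8, 3, 6, 4],
    [6, 7, 5, 4, 8, 3],
    [4, 6, 7, 5, 3, 8],
    [8, 4, 6, 5, 7, 2],
]

pairwise = [pearson(a, b) for a, b in combinations(scores, 2)]
print(f"median pairwise correlation = {median(pairwise):.2f}")
```

A median anywhere near 0.3 means that knowing one reviewer’s score tells you surprisingly little about what the next reviewer will say about the same paper.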

In a separate note to me, he also suggested we look at an article by Mayo et al. in the Journal of Clinical Epidemiology suggesting that traditional grant review processes and funding decisions suffer from a high degree of variability because too few reviewers are involved. The report presents empirical data from intramural review of pilot project applications at the McGill University Health Center Research Institute; applications were both ranked and scored (1-5 scale), with poor agreement between the two (kappa value of 0.36, 95% CI 0.02-0.70). The top-ranked proposals would have failed to meet the “payline” with varying probability depending on who was assigned to provide a scored review. The examined process does not translate directly to current NIH study section practice, but it lends credence to the recommendation (see pp 4-5 & 38-41) that chartered study section members (not ad hoc reviewers) rank scored proposals at the end of each meeting. Regarding Mr. Cuppan’s point that manuscript reviewers lack training, see pp 45-46 of the Enhancing Peer Review report, which recommends standardized training for reviewers, chairs, and administrators (officers).
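
For the statistically curious, here is a minimal sketch of what a kappa statistic measures, using invented data rather than anything from Mayo et al.: kappa compares the observed agreement between two classifications against the agreement expected by chance alone.

```python
# Hypothetical illustration of Cohen's kappa: (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is chance-expected agreement.
# The data below are invented for demonstration, not the Mayo et al. results.

def cohen_kappa(a, b):
    """Cohen's kappa for two binary classifications of the same items."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n               # marginal "fund" rates
    p_e = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Ten hypothetical applications, each classified fundable (1) or not (0)
# two ways: by rank order and by a 1-5 score collapsed at a payline cutoff.
by_rank  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
by_score = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

print(f"kappa = {cohen_kappa(by_rank, by_score):.2f}")  # 0 = chance, 1 = perfect
```

A kappa in the 0.3-0.4 range, like the 0.36 reported, means the two methods agree only modestly more often than chance would predict, which is exactly why the same proposal can live or die by which metric (or reviewer) it draws.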

11 Comments

  1. Gregory Cuppan said

    What got me to your blog was my review of the NIH Enhancing Peer Review report. I too noted the pp 45-46 recommendation regarding training, yet I find it curious that nothing is said about what would actually constitute useful review training. I suggest that what NIH really needs to do is develop a peer assessment process for grant proposals.

    Based on a reading of the papers on the limitations of review, one can readily conclude that grant funding may involve a considerable amount of happenstance. That is not a message members of this august community would want brought to the fore.

    To further my point about the limited value of review in identifying good work, you may want to look at these two papers. First: Science, 1981, Vol 214, Issue 4523, 881-886 [must buy article or view scanned pages via your institutional access – writedit], “Chance and consensus in peer review,” S. Cole, J. R. Cole, and G. A. Simon. The paper covers an experiment in which 150 proposals submitted to the National Science Foundation were evaluated independently by a new set of reviewers; the results indicate that getting a research grant depends to a significant extent on chance. The degree of disagreement within the population of eligible reviewers is such that whether or not a proposal is funded depends, in a large proportion of cases, upon which reviewers happen to be selected for it. No evidence of systematic bias in the selection of NSF reviewers was found. The following paper is also worth a quick review: Brain (2000) 123, 1964-1969, “Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?” Peter M. Rothwell and Christopher N. Martyn.

    One of the problems with peer review is the notion of review itself, with a false assumption about its value and the broad assumption that people somehow “get it”, as if we were magically endowed with the ability to engage effectively in the task of review, be it of grant proposals or draft articles for journals. Bottom line: as routinely practiced, review generally does little to bring the work under consideration into better focus. Most often review creates noise and dampens signal.

  2. bikemonkey said

    If by “members of this august community” you mean the applicants and peer members of NIH study sections, hell yes, many of us want these issues explicated.

  3. BB said

    Two things come to mind. First, Noam Harel’s idea is already in place at several grant consortia. I looked at one of them, the Gotham group, last year. One problem I have with grant consortia is the possible lack of confidentiality of one’s ideas. Because of that, I never submitted to the Gotham group in the end.
    The second comment I have is about training to review. I think a lot of us were trained to read papers critically through journal club; we know how to review a manuscript. But grants are different. Factoring in esoteric qualities like previous grant history, which member of Congress is pushing funding for a pet project, grant horse-trading, and the old boys’ network multiplies the complexities of the grant review process. Science might be the least of it in too many cases.
    At a grant writing course I attended 5 years ago, we had a mock study section. Maybe this is a way to train people for grant review. It certainly helped me focus my grant writing a lot!

    BB, I like the Journal Club point a lot … and research trainees in good labs should get a similar experience at weekly lab meetings where grant proposals in progress are discussed. (I hope this concept sounds familiar to at least a few folks out there.) Baby It’s Cold Outside has several for-credit courses in various Schools & departments in which grant applications are prepared over the course of a semester, with all students participating in the review/critique of each other’s narratives. Maybe one of these days I’ll try to write out what it is I do when I take apart and critique a grant application I’m asked to review. As I told DM, I blame this odd gift on a mild concussive injury on the rugby field. – writedit

  4. maxine said

    Nice post. There is a formalised peer-review training process at the BMJ, as described here (free access) by Trish Groves, deputy editor of the journal: http://blogs.nature.com/peer-to-peer/2006/06/quality_and_value_how_can_we_g.html
    and at the journal’s website.

    I like the point about matching articles with appropriate reviewers. This is a serious problem on the grant side as well, given the escalating number and diversity of grant applications versus the limited number of study section members, plus limits on what the SRO knows about each reviewer’s range & depth of expertise (standing member or ad hoc invitee). Thanks for the link and the note about this process at BMJ, Maxine. -writedit

  5. bikemonkey said

    But grants are different. Factoring in esoteric qualities like previous grant history, which member of Congress is pushing funding for a pet project, grant horse-trading, and the old boys’ network multiplies the complexities of the grant review process. Science might be the least of it in too many cases.

    BB, you sound suspiciously as though you are actually endorsing these factors as legitimate??????

  6. BB said

    No, no, bikemonkey, not at all.
    We hates them, to quote Gollum. I was trying to point out (and not succeeding, I see) that we come out of grad school ill-prepared to write grants well, or at least to write grants that satisfy study sections.

  7. Marilyn Mann said

    Hi everybody. I have a question. I was reading an article from Archives of Medical Science. I noticed that the abstract appeared to have an error in it. A reference to “LDL” should have said “HDL” (based on the relevant table and discussion in the article). I sent an email to the editor of the journal noting the error. He sent me a reply noting that there was still time to write a letter to the editor. I sent two emails to the corresponding author and did not receive a reply.

    I’m a lawyer, so I’m having trouble understanding the correct procedure here. Is it really necessary to write a letter to the editor to point out this kind of error? Seems like overkill. Why doesn’t the journal just make the correction?

    Thanks, Marilyn

    I have to admit, I am not familiar with the Archives of Medical Science. How the editor handles this stuff is his or her call. Most would print a correction in the next issue (plus post an online correction immediately). Corrections are up to the editor rather than the corresponding author, and one would hope he/she would have noticed the error and reported it to the journal as well. It seems this editor prefers to give you “credit” for correcting this error, hence the request for a letter to the editor. You’re right – it is odd. You’ve reported the error, and the editor (upon confirming the correction with the author) should change the electronic version of the article whether or not you submit a formal letter.

  8. Noam Harel said

    Hello,

    BB said:
    “Two things come to mind. First, Noam Harel’s idea is already in place at several grant consortia. I looked at one of them, the Gotham group, last year. One problem I have with grant consortia is the possible lack of confidentiality of one’s ideas. Because of that, I never submitted to the Gotham group in the end.”

    The Gotham Prize Foundation is one of my inspirations for suggesting a Centralized Proposal Repository (CPR); I duly credit it on my website. However, that is just one funding source and one (broad) topic. True, their proposals are open on the web for review and comment (and collaboration); the CPR, however, need not be any less confidential than the grant submission systems already in place. It would merely expose each proposal to a larger number of agencies and potential funders. It’s akin to a universal application for college or med school/grad school – it would make life a lot easier.

    Please see a more detailed version of the CPR proposal on my website at http://noamyharel.googlepages.com/universalproposalrepository

    Thanks! Noam

    Cool. Thanks for alerting us to your CPR proposal Website. I’m sure if DM picks this up over at Scienceblogs and/or The Scientist runs with the idea, you’ll get the scientific community buzzing. I hope you submitted these ideas during the Enhancing Peer Review comment period. I still need to read through the full CPR proposal, but I do like the concept – and maybe the federal agencies can break down enough silos, especially with the advent of grants.gov, to make this workable. I think RFAs and PASs (& some PARs) would still require targeted separate applications, but the IC PAs and parent PAs could simply serve as guides for research priorities as they currently do. – writedit

  9. drugmonkey said

    Now surely you are not equating my humble contributions with The Scientist! We don’t have that big an impact, geez.

    Noah’s got something up over at Action Potential … I guess I’d better take a whack at it … 🙂

  10. Noah Gray said

    With some modifications, there is a chance that this proposal could work. But I have my doubts, not about whether it is necessary or could be useful, but because of my reading of the scientific community. Feel free to view all of my concerns here.

    Thanks, Noah. I need to sort through your thoughtful coverage of this concept, but in the meantime, thanks too for the link to a great New Yorker cartoon observing the impact of grant funding. Another favorite cartoon, on not funding grants, is there, but not my all-time favorite grant-related New Yorker cartoon. Damn. – writedit

  11. […] would consider a central proposal repository such as that proposed by Noam Harel at Yale (as we discussed earlier). This way, the public could sort through what’s available and, if inspired, submit an […]
