Is Peer Review Broken?

So asks the cover story by Meredith Salisbury for Genome Technology.

She starts the ball rolling in her accompanying editorial:

Get three scientists together, and it’s almost a guarantee that the conversation will eventually turn toward the vagaries of the peer review process. Be it for winning grant funding or getting a paper published, this system of relying on a handful of fellow scientists to select the most promising and influential research shapes — at least to some degree — every single researcher’s career path.

And then she gets 3+ scientists together, with the tone set right off the bat by Ferric Fang:

“For something that is of and for scientists, the peer review process is very unscientific,” says Ferric Fang, a professor of laboratory medicine and microbiology at the University of Washington. Whether it’s for papers or grants, having just a handful of people review someone’s work is statistically unsound, he adds. “If these [reviews] were data that you generated in your lab, you would say, ‘I don’t know what the conclusion of this is.'”

And judging from the suggestions offered on the grant review process, efforts to enhance peer review at the NIH apparently haven't gone far enough. A sampling to get you over to Genome Technology for the full report:

One hope is that having a larger pool of reviewers could help reduce the impact of any individual review, says Fang. Under the current system, “one bad review can sink an application.”

Another take on the grant review system in general is that focus needs to shift away from today’s model of specific proposals for short-term periods. …

Lawrence [Peter Lawrence at the zoology department of the University of Cambridge] would prefer a system where reviewers considered the track record of the investigator more than the details of the new research proposal (with special dispensation for new investigators). …

According to Fang, this concept of awarding funds on a track record basis would also serve the purpose of weeding out people who are very skilled at writing proposals but are less competent at actually performing the science.

Oh no! What’s a writedit to do?! Well, I am the first to acknowledge that no amount of skilled grantsmanship can make up for poor science, so I think, to a certain extent, this last concern can be dispensed with.
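Fang's point about the size of the reviewer pool is, at bottom, an averaging argument, and a toy calculation makes it concrete. The sketch below is purely illustrative — the scores, pool sizes, and the use of a simple mean are my own assumptions, not anything from the article or from actual NIH scoring procedure. It shows how far one harsh outlier drags an otherwise excellent application's average on a 1–9 scale (1 being best) as the number of reviewers grows.

```python
# Hypothetical numbers only: how much a single harsh outlier review shifts an
# application's mean score on a 1-9 scale (1 = best, 9 = worst),
# for a small reviewer pool versus a larger one.

def mean_with_outlier(n_reviewers, fair_score=2.0, outlier_score=6.0):
    """Mean when all but one reviewer give fair_score and one gives outlier_score."""
    return ((n_reviewers - 1) * fair_score + outlier_score) / n_reviewers

for n in (3, 5, 10):
    mean = mean_with_outlier(n)
    print(f"{n:2d} reviewers: mean {mean:.2f} "
          f"(pulled {mean - 2.0:.2f} points by the one bad review)")
```

With three reviewers, the single harsh score moves the mean by more than a full point, which is roughly what "one bad review can sink an application" amounts to; spread over ten reviewers, the same outlier moves it by less than half a point.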

9 Comments »

  1. qaz said

    One of the biggest problems with the journal review process is the strange concept that scientific results pop down the chain. First, the author decides that his/her result is Nature/Science quality (aren’t they all?), fights with N/S and loses (taking several months, hopefully without being scooped), at which point the review process goes to the next journal down (e.g. Nature Neuroscience) and the process starts all over again, losing more time in the process. I have a colleague who has spent more than three years trying to publish his paper because he is convinced (rightly or wrongly) that he has to try every “high impact” journal before going to a “workhorse” journal.

    I’ve become very enamored of the new Frontiers in Neuroscience model where all papers are sent to the field-specific journal and reviews are supposed to help you make sure that your paper is the best it can be. From there, papers that the community thinks are deserving are popped up to the more general (more broadly read) journals.

  2. D said

    1) I wonder how often Ferric Fang reviews for NIH?
    2) I wonder how often he reads every grant (with which he has no conflict, of course) submitted to a study section? As far as I know, there is no rule that prevents unassigned reviewers from reading every grant submitted and providing written critiques.
    3) There is also no rule that prevents a reviewer from heavily weighting “track record” in assigning an Impact score. In fact, it happens quite regularly.

    So 4) I suggest that Dr. Fang be a rule breaker and do these two things next time he reviews for NIH. No one is stopping him.

  3. whimple said

    Fang’s concern is misplaced. What he should be worried about is competent scientists who write poor proposals going unfunded. This is of particular concern because “poor” can include “ahead of its time,” which would represent a major loss for science. Presumably the system corrects itself in Fang’s area of concern by not funding subsequent proposals from good proposal writers who can’t back it up.

  4. D said

    Although I am sure that Dr. Fang is a very busy man, he apparently has served as an NIH reviewer only once (as a mail-in reviewer for NIH ARRA grants) since he rotated off the old BMC study section in 2001. That makes his comments even more… I can think of lots of words, but I will use frustrating. He wants more reviewers but declines to volunteer his own time.

  5. Whimple,

    Fang is an excellent and entertaining writer. I like his editorials.

    What I am curious to know is what he specifically means by “track record basis”.
    My only reference is the set of descriptors from his editorial in Infection and Immunity, October 2009, “Important science, it’s all about SPIN”.

    SPIN stands for Sizeable (S), Practical (P), Integrated (I), New (N).

    I wonder why he would not prefer NIPS. Who knows whether he feels uncomfortable with the 30+ reviewers in a study section because there is not enough SPIN.

    If track records were used for peer reviewing grants, what might SPIN mean?

    SIZEABLE

    Number of publications? Should 20-30 papers/year be considered sizeable?

    Number and variety of grants? 2-3 R01s, 1 T32, 1-2 PO, or maybe 3-5 PO, 1-2 T32, 1-2 RCs.

    PRACTICAL

    Track records show sizeable science that is descriptive, because it shows what and where certain phenomena occur; and even though mechanistic science is virtually absent (the whys and hows require deep and audacious thinking), the steps toward therapeutics are strong.

    Track records show sizeable funding.

    INTEGRATED

    It depends on what integrated might mean. Integrated science could be incremental science, which would be consistent with the “sizeable and practical” parts of the SPIN acronym above. There might be a broader concept of “integrated science”.

    NEW

    A history of novelty in track records? Two possible answers.

    a) a lot
    b) anything but novelty

    The point is that yes, peer review needs to improve, and the task of improving it might never end. But one way to improve it for the benefit of science and scientists, at this particular time, is to devise a rigorous evaluation system to justify and account for sizeable funding. Non-competitive submissions need to be rigorously examined and evaluated for much more than number of publications.

    We are in an emergency situation where a lot of good, innovative science is triaged and/or not awarded due to “SIZEABLE TRACK RECORDS”.
    P.S. – Sorry, I forgot to tell you: great post!

  6. There is absolutely nothing wrong with the NIH peer review system, and all these attempts to “improve” it have been a totally counterproductive waste of time. The problem is that the NIH peer review system is being forced to do something that neither it nor *any* possible system of peer review can do: identify a top ten percent of applications that is “objectively better” than the second ten percent. The reason for this is that–as I have pointed out numerous times–there *exists* no objective distinction within the top 20% of applications that could possibly form the basis for drawing distinctions between the first and second 10%.

    Being forced over and over and over to perform this impossible task is what is so demoralizing about serving on study section right now.

  7. Manning said

    CPP: That is the most accurate assessment of NIH peer review that I have read. If everyone on this board were asked to read ten literary works and pick out the top 5, there would probably be consistency among the readers in selecting those top 5. Imagine how much variation there would be if all of us were asked to pick the single best one.
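Manning's analogy can be put into a quick simulation. The sketch below is a hypothetical illustration, not anything from the post or its comments: ten "works", five of them clearly stronger than the rest but nearly indistinguishable from one another, scored by thirty readers whose judgments carry independent noise. All of the numbers (quality gaps, noise level, reader count) are assumptions chosen only to show the asymmetry.

```python
import random

# Hypothetical illustration of Manning's analogy (all parameters are assumptions):
# two tiers of "works" -- a clearly stronger top tier whose members are nearly
# indistinguishable from one another -- scored by independent readers with noisy
# judgment. Readers tend to agree on the top-5 set but not on the single best item.

random.seed(0)

N_READERS = 30
NOISE = 0.5  # standard deviation of each reader's judgment error

# True quality: five ordinary works, five strong works separated from the rest
# by a wide margin but separated from each other only slightly.
true_quality = [0.0, 0.1, 0.2, 0.3, 0.4, 2.0, 2.05, 2.10, 2.15, 2.20]

def noisy_ranking():
    """One reader's ranking (best first): true quality plus Gaussian noise."""
    scores = [(q + random.gauss(0, NOISE), i) for i, q in enumerate(true_quality)]
    return [i for _, i in sorted(scores, reverse=True)]

rankings = [noisy_ranking() for _ in range(N_READERS)]

# Agreement on the single best work: share of readers naming the most common #1 pick.
top_picks = [r[0] for r in rankings]
modal_pick = max(set(top_picks), key=top_picks.count)
top1_agreement = top_picks.count(modal_pick) / N_READERS

# Agreement on the top-5 set: average pairwise overlap of readers' top-5 lists.
def top5_overlap(a, b):
    return len(set(a[:5]) & set(b[:5])) / 5

pairs = [(i, j) for i in range(N_READERS) for j in range(i + 1, N_READERS)]
top5_agreement = sum(top5_overlap(rankings[i], rankings[j]) for i, j in pairs) / len(pairs)

print(f"Readers naming the same single best work: {top1_agreement:.0%}")
print(f"Average overlap between readers' top-5 sets: {top5_agreement:.0%}")
```

Under these made-up parameters the readers' top-5 sets overlap heavily, while far fewer readers name the same single best work — which is the asymmetry CPP describes: a real distinction between the top tier and the rest, but little basis for ranking within it.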
