Payline Complexity Explained & CSR College of Reviewers Updated by NIAID

As always, the latest issue of NIAID Funding News is a treasure trove of information and good advice.

First, for the hundreds of you out there wondering why your IC hasn’t set a payline yet, NIAID reports, shockingly, that the trend toward score clustering has increased and explains how score clustering causes jumps in assigned percentiles. An impact score of 20 seems to be the sweet spot thus far: “In the first two review cycles of this fiscal year, approximately 3% of applications reviewed by CSR received a score of 20.” NIAID gives an example in which a score of 20 in one study section might land at the 9th percentile, with a score of 21 in turn at the 11th percentile (payline at the 10th percentile).
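To make the jump concrete, here is a minimal Python sketch with an entirely made-up score distribution; the formula used below, 100 × (rank − 0.5)/N against a base of the section's recent scores, is only a rough stand-in for NIH's actual percentile calculation:

```python
# A made-up base of 300 impact scores from a study section's last three
# rounds, clustered so that ~3% of applications sit at a score of 20.
from bisect import bisect_left

base = sorted(
    [10]*3 + [12]*4 + [14]*5 + [16]*5 + [18]*6 +   # 23 scores beat a 20
    [20]*9 +                                        # the ~3% cluster at 20
    [21]*2 + [22]*8 + [25]*20 + [30]*60 + [35]*60 +
    [40]*50 + [50]*40 + [60]*28                     # 300 scores in all
)

def percentile(score, base):
    """Rough NIH-style percentile: 100 * (mean rank - 0.5) / N."""
    n = len(base)
    better = bisect_left(base, score)   # applications scoring lower (better)
    ties = base.count(score)            # applications sharing this score
    mean_rank = better + (ties + 1) / 2 # midpoint rank among any ties
    return round(100 * (mean_rank - 0.5) / n)

for s in (19, 20, 21, 22):
    print(s, percentile(s, base))   # -> 19 8, 20 9, 21 11, 22 13
```

With 3% of the base piled on a single score, one point of impact score (20 → 21) leaps from the 9th to the 11th percentile, skipping right over a 10th-percentile payline.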

NIAID also includes an update on the CSR College of Reviewers (discussed here previously), including the current membership roster. As a reminder, these folks (“editorial board members”) will be asked to provide written-only reviews of up to 12 applications per year for 2 years as part of 2-stage reviews (the second stage being the face-to-face meetings of “editors”).

Lots of other good intel and advice, so be sure to scroll through the entire newsletter and, no matter which IC is “yours”, sign up for delivery to your very own digital mailbox.

7 Comments

  1. qaz said

    The new, coarsely quantized scoring system makes it much more likely that different applications receive identical scores, which is going to cause a lot of problems like the one described here.

    By the way, it also greatly enhances the ability of a single member of the study section (not necessarily someone who has actually read the proposal) to sink a proposal: I have observed several cases where all three reviewers rate the grant identically (let’s say “2”), giving an available range of… “2”. If one reviewer “votes out of range” (let’s say a “5”), then that proposal just got a score of 21. If the vast majority of grants that are going to get funded sit at 20, then that grant won’t get funded because one member of the study section was unhappy.
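    A quick sketch of the arithmetic (assuming a 30-member voting panel, which is my number, not anything from the newsletter, and the usual mean-of-votes × 10 final score):

    ```python
    # Final impact score = mean of all members' 1-9 votes, x 10, rounded.
    def impact_score(votes):
        return round(10 * sum(votes) / len(votes))

    panel = 30                               # hypothetical panel size
    unanimous = [2] * panel                  # everyone votes within the 2-2 range
    one_defector = [2] * (panel - 1) + [5]   # one member votes out of range

    print(impact_score(unanimous))     # -> 20
    print(impact_score(one_defector))  # -> 21: one vote past the cluster at 20
    ```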

    I HATE the new system. The old system had lots of problems, but the numbering/clustering of the scores was not one of those problems. The “clustering problem” of the old score system was an illusion that went away once you looked at it carefully. The clustering problem of the new score system is very real.

    • PY said

      I don’t see why the quantization of scores would increase the influence of a single member. In your cooked-up example the quantization makes the impact of a single reviewer more obvious, but on average nothing changes. In the old system your negative reviewer would have pushed the score down only a bit, but still below the cutoff.

      On the other hand, the quantization makes the arbitrariness of the scores more apparent and makes it easier for POs to have a say, for better or worse.

      The only alternative I see is ranking-based scoring.

  2. physician scientist said

    I was just on a study section where the range of the three reviewers was 3-5. One person who hadn’t read the application then decided to vote out of the range, giving it an 8. Two more jumped in at 7. None had read the application.

    • drugmonkey said

      how do you know they had not read the app? perhaps there was a glaring flaw that was apparent on a simple scan. perhaps they had previously reviewed a prior version?

      you are suggesting a type of random behavior that I’ve never seen personally…absent a bit of context.

      • physician scientist said

        one of them explicitly said that they hadn’t read the application but, based on what they were hearing, he/she would score it an 8.

  3. NAPS said

    The new scoring system is worse than the old one on many fronts. The only positive I hear frequently is the reduced reading and writing burden on reviewers. In the above-mentioned scenario of a fundable score/percentile (20/9th), not everyone in that cluster will be funded if the number of ties is large and there is not enough budget. Who then gets in? It becomes the decision of the PO and council. Do you think this is fairer than the old system, where the differential was made on decimal points? The argument against the old system has been “who is to say the one that gets the 10.1 percentile is better than the 10.2?” Absolutely true, but at least you know the differential is based on real numbers and not on some “undefined” human behavior beyond the study section level.

    I agree with qaz. I HATE the new system! (Speaking here as a reviewer and soon-to-be applicant.)

    As for reviewers voting out of range… I have seen non-assigned reviewers vote out of range without having read the entire grant CAREFULLY a couple of times. The funny thing is that reviewers are actually stupid enough to admit it (but in that “I know what I am talking about and I don’t need to read the whole damn grant” tone).

    • drugmonkey said

      I know what I am talking about and I don’t need to read the whole damn grant

      That’s me. Anyone who claims that review of a grant never, ever hinges on a subset of points, as opposed to a “CAREFUL” reading of the whole thing, is full of stuff and nonsense.

      Heck, you can come to the exact same analytic summary of a proposal as someone else and *still* assign very different scores because of relative emphasis on the criteria. That’s the deal, until and unless there is top-down specification and enforcement of how each criterion is to influence the gestalt score.
