Last week, a commenter on the RC1 thread asked for reflections on the two-stage peer review process, particularly the Editorial Boards:
Were you on an Editorial Board? I’d still love to hear more feedback on what the reviewers thought of those.
In fact, I’d love to hear more feedback on what applicants and reviewers felt about the Editorial Board review process in general, independent of the huge numbers, short time frames, and 1-2% success rate. Would you want regular study sections to use this process?
Other questions of interest to this individual (and others who have contacted me directly) include:
Was it rigorous? Did it seem like a waste of time? Were the scored apps of high quality? Did you feel rushed? Do you feel that better science got funded by the Editorial Board review process than if grants were picked by random lottery? Did the second level (after mail review) add anything?
Were new/unknown investigators at a disadvantage? Was science outside the interest/expertise of the Editorial Board members at a disadvantage?
I know some questions have come up about conflicts of interest among reviewers, which the NIH recently addressed in its Challenge Grant FAQ:
How were conflicts of interest managed for the Challenge reviews?
Given the volume of applications received and the compressed timeline for finishing the reviews, the NIH determined that it was necessary to recruit over 15,000 outstanding scientists to serve as mail reviewers (including some who would also be applicants). However, a Challenge applicant could only serve in the Challenge reviews as a mail reviewer and not as a study section member, and only for a study section(s) other than the one reviewing his/her application. Mail reviewers do not participate in the discussion or final scoring of the applications, and do not interact with other study section members.
Hmm. Except Editorial Board reviewers were asked to score applications based on the mail reviewer scores and critiques … though apparently most Editorial Board members felt they could not do so without looking at the original application … often leading to critiques of the mail reviewer critiques … and so on.
And heck, why stop at the special process used to review RC1s? How do reviewers (and, I suppose, applicants) feel about the new review, scoring, and critique procedures?
One Editorial Board member told me that on more than one application, the mail reviewers assigned very divergent scores even though their written critiques were in substantial agreement, suggesting the learning curve will be steep on the uniform assignment of scores. Perhaps the NIH could use these thousands of clusters of three naive (in terms of the scoring procedure) reviewers looking at the same application to analyze patterns of score assignment against the written comments. I know just the person to write the grant application to fund this …
And what about the plan for increased use of alternatives to in-person study section meetings, which is where many of these finer points would be addressed and, of course, where advocates speak out on behalf of specific applications?
Fire away, folks. The NIH needs all the feedback it can get.