Two items in Nature journals consider the benefits of a consortium approach to peer review, one for journal manuscripts (already in existence) and one for grant applications (a modest proposal). Nature Neuroscience announced today that it is joining the Neuroscience Peer Review Consortium, which “reduces the overall reviewing workload of the community by allowing authors to continue the initial review process when their paper moves from one consortium journal to another, once the paper has been rejected or withdrawn from the first journal.” The Nature Neuroscience editorial describes the process, its voluntary nature, and its flexibility, and notes that the NPRC system will be evaluated at the end of the year … and on an ongoing basis at the journal’s blog, Action Potential.
Separately and quite distinctly, in a letter to Nature, Dr. Noam Harel of Yale makes a modest proposal: a centralized grant proposal repository into which applications could be deposited at the PI’s leisure and which sponsors could search for interesting science to review and possibly fund (no doubt with some encouragement from depositing PIs). The research proposals would be made available only to sponsor agencies, and multiple sponsors interested in the same work could collaborate on a shared funding agreement. As a thought exercise, interesting. As something to actually implement …
And finally, while we’re pondering peer review, Gregory Cuppan, a managing principal at McCulley/Cuppan (a firm specializing in document development), contributes to the commentary on a prior thread discussing the Publishing Research Consortium survey data. Specifically, he notes that “most people have little or no formal training in the task of review” and would “be interested to know how many readers of this blog have actual formal training in the task of review (here I make a strong distinction from training for the task of editing).” He points to a 1961 Educational Testing Service study in which 53 distinguished reviewers read 300 college student papers, yet the median correlation among reviewer scores was only 0.31.
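That ETS figure is easier to appreciate with the arithmetic spelled out. Below is a minimal sketch, in Python, of how a median inter-reviewer correlation is computed: correlate every pair of reviewers’ scores across the papers, then take the median of those pairwise correlations. The score matrix here is random, hypothetical data rather than the ETS data, so the printed value will sit near zero instead of at 0.31.

```python
import itertools
import numpy as np

# Hypothetical stand-in for the 1961 ETS data: rows are papers,
# columns are reviewers (the real study was 300 papers x 53 reviewers).
rng = np.random.default_rng(0)
scores = rng.integers(1, 10, size=(300, 53)).astype(float)

# Pearson correlation for every pair of reviewers, across all papers.
pairwise = [
    np.corrcoef(scores[:, i], scores[:, j])[0, 1]
    for i, j in itertools.combinations(range(scores.shape[1]), 2)
]

# The summary statistic reported in the study: the median pairwise value.
print(f"median inter-reviewer correlation: {np.median(pairwise):.2f}")
```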
In a separate note to me, he also suggested we look at an article by Mayo et al. in the Journal of Clinical Epidemiology arguing that traditional grant review processes and funding decisions suffer from a high degree of variability because too few reviewers are involved. The report presents empirical data from intramural review of pilot project applications at McGill University Health Center Research Institute; applications were both ranked and scored (on a 1-5 scale), with poor agreement between the two methods (kappa value of 0.36, 95% CI 0.02-0.70). The top-ranked proposals would have failed to meet the “payline” with varying probability, depending on who was assigned to provide a scored review. The examined process does not translate directly to current NIH study section practice, but it lends credence to the recommendation (see pp 4-5 & 38-41 of NIH’s Enhancing Peer Review report) that chartered study section members (not ad hoc reviewers) rank scored proposals at the end of each meeting. And per Mr. Cuppan’s point that manuscript reviewers lack training, see pp 45-46 of the same report for recommendations on standardizing reviewer, chair, and administrator (officer) training.
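For readers unfamiliar with the statistic, kappa measures agreement beyond what chance alone would produce between two classifications of the same items. Here is a minimal sketch, assuming (hypothetically) that each of the McGill methods is reduced to a binary fund/no-fund call against a payline; the data are invented for illustration and do not reproduce the paper’s 0.36.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) classifications of the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.mean(a == b)                    # raw agreement rate
    p_a, p_b = a.mean(), b.mean()                 # marginal 'fund' rates
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical: 20 pilot applications, funded (1) or not (0),
# once by rank order (top half funded) and once by scored review.
by_rank  = [1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0]
by_score = [1,1,1,0,1,1,0,1,1,0,1,0,0,1,0,0,0,0,0,0]
print(f"kappa = {cohens_kappa(by_rank, by_score):.2f}")  # -> kappa = 0.50
```

A kappa of 1 means perfect agreement and 0 means agreement no better than chance, which puts the paper’s 0.36 squarely in the “fair” range.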