Peers Review Journal Peer Review

Stop the presses!

The Chronicle of Higher Education reports, stunningly, that “an overwhelming majority [of academic scientists] believe peer review in journals is necessary.”

I would have expected to read this at The Onion, not at The Chronicle.

The survey report, paid for by the Publishing Research Consortium, “found that the average peer review takes 80 days, that the average number of manuscripts each reviewer reads yearly is 8, and that each reviewer tends to spend 5 hours on a manuscript over the course of 3-4 weeks.” Which explains a lot …

Update: Perhaps these reviewers should read the book reviewed in JAMA this week, Peer Review and Manuscript Management in Scientific Journals: Guidelines for Good Practice.

Update: The Lancet comments both on the Publishing Research Consortium survey noted above and the egregious behavior of the NEJM reviewer of the meta-analysis of rosiglitazone’s cardiovascular risks (see comment below).


7 Comments

  1. bikemonkey said

    “over the course of 3-4 weeks.”

    A bit misleading. More like: “after the two-week interval has elapsed, we have to bug ’em 3 or 4 times, and then the reviewers get to work and dash off some crap in a half hour. Embarrassed by this lack of professionalism on their part, the average reviewer multiplies their actual value by 10 when responding to surveys.”

  2. The peer-review system clearly has problems, but when trying to solve them we always find ourselves between a rock and a hard place. Peer review is clearly needed, but in its current format it has been consistently shown to lack reproducibility, to be influenced by a multitude of frequently undisclosed conflicts of interest, and to be operationally challenging (finding reviewers is one of many problems).

    Perhaps we should be trying harder to find alternative models, but cultural change is always slow and painful. Here is a quick list of potential approaches:

    1. Reviews should be ongoing rather than happening at a single point in time — Web 2.0 technology does allow for that.

    2. Along the same lines as the suggestion above, perhaps articles should be modifiable over time, with references made to specific versions. This feature would probably make the process more dynamic and responsive to peer review.

    3. We should try different proxies to map the expertise of peer reviewers. If I am good within a certain clinical field but horrible with statistics, should my opinions on both topics get the same weight? Probably not …

    4. We should allow content to be displayed at different levels of detail, allowing for quick summaries along with in-depth information. Biomedical ontologies are here to stay and could support that, but their use is still lacking.

    Of course, all of this is much easier said than done.

    Thanks for a thoughtful start to the conversation, Dr. Pietrobon. We’ve touched on peer review in journals just a bit here – not nearly as much as authorship issues and peer review of grant applications. Journalogy might be an appropriate platform for an in-depth discussion, but Matt seems to have become preoccupied with other priorities since December. I expect our friends across the Pond will be revisiting this topic in a more substantial manner than has The Chronicle. – writedit

  3. writedit said

    Also, I added this as a comment under Failings of COI Policy, but one wonders whether certain issues are even routinely considered as needing to be fixed (since the underlying misconduct is so “obvious”). The thought of an NEJM reviewer first being assigned this manuscript despite having served on the steering committee of a GSK-sponsored clinical trial of Avandia and having earned money giving talks for the company, and then faxing the final manuscript to the manufacturer a week before the devastating news was made public, is not among the considerations that immediately arise when discussing journal peer review. I just cannot get past Haffner’s statement to Nature: “Why I sent it is a mystery … I don’t really understand it. I wasn’t feeling well. It was bad judgement.”

  4. […] Nature also includes a commentary on double-blind peer review of journal articles drawn from the Publishing Research Consortium survey results that “also highlight that 71% have confidence in double-blind peer review and that 56% prefer […]

  5. […] Research Ethics, Biomedical Writing/Editing Nature Nanotechnology continues the discussion of journal peer review with a summary of and commentary on the recent Publishing Research Consortium report. Letters to […]

  6. Gregory Cuppan said

    There are two real issues with peer review: the failure to account for the differences between the tasks of review and assessment, and the failure to provide truly meaningful guidance to “reviewers” on either task.

    Assessment is the act of evaluating or appraising a body of scientific work for rigor and merit.

    Review, on the other hand, is the act of examining the work for soundness of logic and for how effectively that logic is conveyed through the words and data in the body of the document.

    The fundamental problem rarely mentioned in discussions like this one is the simple fact that most people have little or no formal training in the task of review. I’d be interested to know how many readers of this blog have actual formal training in the task of review (here I make a strong distinction from training for the task of editing). I will venture to say the answer is none. We learn to review through experience and trial and error. The end result is that review tends to be a highly idiosyncratic activity in which we can rarely predict with any degree of certainty the outcomes of the peer review process.

    Work done in the early 1960s by the Educational Testing Service reinforces my point in abundant fashion. ETS published a paper in 1961, “Factors in Judgments of Writing Ability,” that used a review panel of 53 distinguished readers from six different fields to review 300 papers by 300 college-student authors. The “net-net” is that the study found the median correlation of reviewer scores for document quality to be a disappointing .31. Other studies have been done with “trained” reviewers, and the results are better, but the median correlation remains under .50.
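
    For readers unfamiliar with the statistic Mr. Cuppan cites, the ETS figure is the median of the Pearson correlations computed for every pair of reviewers across the papers they both scored. A minimal simulation sketch in Python (entirely made-up ratings, not the ETS data) shows how a weak shared quality signal produces a median inter-reviewer correlation in that neighborhood:

        # Hypothetical illustration, not the ETS data: median pairwise
        # correlation among reviewers' scores for the same set of papers.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        n_reviewers, n_papers = 10, 300

        # Each reviewer sees the same underlying quality plus a lot of
        # idiosyncratic noise (noise SD 1.5 vs. quality SD 1.0), which puts
        # the expected pairwise correlation near 1 / (1 + 1.5**2) ~= 0.31.
        quality = rng.normal(size=n_papers)
        scores = quality + 1.5 * rng.normal(size=(n_reviewers, n_papers))

        pairwise_r = [np.corrcoef(scores[i], scores[j])[0, 1]
                      for i, j in combinations(range(n_reviewers), 2)]
        print(f"median inter-reviewer correlation: {np.median(pairwise_r):.2f}")

    Dial the noise down (better guidance or training, in Mr. Cuppan’s terms) and the same calculation climbs toward the under-.50 figure he mentions for trained reviewers. – writedit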

  7. […] a managing principal at McCulley/Cuppan (which specializes in document development), adds some interesting commentary to a prior thread on the Publishing Research Consortium survey data. Specifically, he notes that […]
