Winner’s Curse: Why Sensational but Inaccurate Findings Get Published

This week’s edition of On the Media included a piece entitled “Bad Study Habits” that discussed a recent report in The Economist on why much published research is wrong:

In economic theory the winner’s curse refers to the idea that someone who places the winning bid in an auction may have paid too much. Consider, for example, bids to develop an oil field. Most of the offers are likely to cluster around the true value of the resource, so the highest bidder probably paid too much.

The same thing may be happening in scientific publishing, according to a new analysis. With so many scientific papers chasing so few pages in the most prestigious journals, the winners could be the ones most likely to oversell themselves—to trumpet dramatic or important results that later turn out to be false. This would produce a distorted picture of scientific knowledge, with less dramatic (but more accurate) results either relegated to obscure journals or left unpublished.
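The auction intuition above is easy to check with a quick simulation. This is an illustrative sketch, not from the article: the true value, number of bidders, and noise level are all assumed numbers, and each bid is modeled as an unbiased but noisy estimate of the field’s worth.

```python
import random

random.seed(1)
TRUE_VALUE = 100.0   # assumed true worth of the oil field
N_BIDDERS = 10       # assumed number of competing bidders
N_AUCTIONS = 10_000

overpaid = 0
winning_total = 0.0
for _ in range(N_AUCTIONS):
    # every bidder estimates the value without bias, just with noise
    bids = [random.gauss(TRUE_VALUE, 15) for _ in range(N_BIDDERS)]
    win = max(bids)  # the highest estimate wins the auction
    winning_total += win
    overpaid += win > TRUE_VALUE

print(f"average winning bid: {winning_total / N_AUCTIONS:.1f}")
print(f"winner overpaid in {overpaid / N_AUCTIONS:.0%} of auctions")
```

Even though no individual bidder is biased, selecting the maximum of several noisy estimates almost guarantees the winner overpays — which is exactly the selection effect the article maps onto journals picking the most dramatic results.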

Hmmm. Sounds familiar. Do say more, Dr. Ioannidis (who in 2005 published “Why Most Published Research Findings Are False” in PLoS Medicine … we won’t ask if he places his own data in this category):

… Dr Ioannidis and his colleagues argue that the reputations of the journals are pumped up by an artificial scarcity of the kind that keeps diamonds expensive. And such a scarcity, they suggest, can make it more likely that the leading journals will publish dramatic, but what may ultimately turn out to be incorrect, research.

Dr Ioannidis based his earlier argument about incorrect research partly on a study of 49 papers in leading journals that had been cited by more than 1,000 other scientists. They were, in other words, well-regarded research. But he found that, within only a few years, almost a third of the papers had been refuted by other studies. For the idea of the winner’s curse to hold, papers published in less-well-known journals should be more reliable; but that has not yet been established.

Like so many of us, Ioannidis would like to see a way for ALL research findings to be archived, both to offset the bias toward publishing positive data and to provide a way for investigators to avoid repeating experiments done by others or repeating flawed studies:

They suggest that, as the marginal cost of publishing a lot more material is minimal on the internet, all research that meets a certain quality threshold should be published online. Preference might even be given to studies that show negative results or those with the highest quality of study methods and interpretation, regardless of the results.

I think such a repository would be at least as valuable as PubMed Central (the NIH Public Access home for all publications arising from NIH-funded research). ClinicalTrials.gov now requires all registered trials to report their findings, the good, the bad, and the ugly, which will help even out the coverage of clinical research, but it covers nothing pre-clinical. Curating such a resource would be a reasonable, common-sense use of NIH Common Fund money … perhaps with the marquee journals chipping in some threshold triage/review service to help handle the load as community (of science) service.


  1. DrugMonkey said

    just because findings published in a paper have been “refuted” by subsequent work does not mean that the original paper was “false” or “incorrect research”.

    From The Economist: “The researchers are not suggesting fraud, just that the way scientific publishing works makes it more likely that incorrect findings end up in print. … The group’s more general argument is that scientific research is so difficult—the sample sizes must be big and the analysis rigorous—that most research may end up being wrong. And the “hotter” the field, the greater the competition is and the more likely it is that published research in top journals could be wrong.”

    From PLoS: “… a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”

    “Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results.”
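    The PLoS argument quoted above comes down to Bayes’ rule on the positive predictive value of a significant result. A minimal sketch, with illustrative numbers (the prior probability, statistical power, and alpha are assumptions, not figures from the paper):

    ```python
    def ppv(prior, power, alpha):
        """P(finding is true | result is statistically significant), by Bayes' rule."""
        true_positives = prior * power
        false_positives = (1 - prior) * alpha
        return true_positives / (true_positives + false_positives)

    # In a mature field where 1 in 10 tested hypotheses is true:
    print(round(ppv(prior=0.10, power=0.8, alpha=0.05), 2))  # 0.64
    # In a "hot" exploratory field where only 1 in 100 is true:
    print(round(ppv(prior=0.01, power=0.8, alpha=0.05), 2))  # 0.14
    ```

    With a low enough prior, most significant findings are false even with good power and no bias at all, which is the core of Ioannidis’s arithmetic; his paper adds bias and multiple competing teams on top of this.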

  2. They suggest that, as the marginal cost of publishing a lot more material is minimal on the internet, all research that meets a certain quality threshold should be published online.

    This exists: PLoS ONE. I am going to post about this later today.

  3. Lab Grab said

    This is an age-old problem and the reason why we have so many different journals and peer-review processes. At one point people felt 100 ideas on a topic of research was overwhelming. Now with RSS readers and news alerts we see 10,000 pieces of “Science News” a day. This volume increase really makes it noisy, and the sensational is easy to sell.

    I am one of those who think open access benefits everyone, but it’s a long way off.
