Addressing Underpublication of Clinical Trial Results

In the September issue of The Oncologist, Scott Ramsey and John Scoggins (both from Fred Hutchinson) report that fewer than one in five cancer clinical trials registered with ClinicalTrials.gov (17.9%) have been published in the peer-reviewed literature. Categories of studies with the lowest rates of publication included industry-sponsored (5.9%), non-randomized (4.4%), and terminated (3.4%) trials.

In examining the 357 trials with published results, the authors could judge whether the results were positive or negative for 341. The majority (64.5%) reported positive results, with Phase I trials most likely to report positive results (89.9%), followed by Phase IV (83.3%), Phase III (63.2%), and Phase II (53.6%). Among sponsor categories, NIH-sponsored trials were the most likely to report positive results (78.8%).

Previously, Richard Johnson and Kay Dickersin (both from Johns Hopkins) also raised the issue of publication bias against negative clinical trials in Nature Clinical Practice Neurology.

Fortunately, the FDA Amendments Act of 2007 has set in motion the expansion of ClinicalTrials.gov to include compulsory reporting of basic results. You can check out progress made on the basic results data entry test system, which is designed to capture the following information:

“(i) DEMOGRAPHIC AND BASELINE CHARACTERISTICS OF PATIENT SAMPLE.—A table of the demographic and baseline data collected overall and for each arm of the clinical trial to describe the patients who participated in the clinical trial, including the number of patients who dropped out of the clinical trial and the number of patients excluded from the analysis, if any.

(ii) PRIMARY AND SECONDARY OUTCOMES.—The primary and secondary outcome measures as submitted under paragraph (2)(A)(ii)(I)(ll), and a table of values for each of the primary and secondary outcome measures for each arm of the clinical trial, including the results of scientifically appropriate tests of the statistical significance of such outcome measures.

(iii) POINT OF CONTACT.—A point of contact for scientific information about the clinical trial results.

(iv) CERTAIN AGREEMENTS.—Whether there exists an agreement (other than an agreement solely to comply with applicable provisions of law protecting the privacy of participants) between the sponsor or its agent and the principal investigator (unless the sponsor is an employer of the principal investigator) that restricts in any manner the ability of the principal investigator, after the completion date of the trial, to discuss the results of the trial at a scientific meeting or any other public or private forum, or to publish in a scientific or academic journal information concerning the results of the trial.”
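As a rough sketch of what the four required elements above amount to as a data record (the field names here are my own invention, not the statute's language or the actual ClinicalTrials.gov schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ArmBaseline:
    """Element (i): demographic/baseline data for one arm of the trial."""
    arm_label: str
    enrolled: int
    dropped_out: int
    excluded_from_analysis: int

@dataclass
class OutcomeValue:
    """Element (ii): one primary or secondary outcome value for one arm."""
    measure: str
    arm_label: str
    value: float
    p_value: float  # from a "scientifically appropriate" significance test

@dataclass
class BasicResults:
    """Hypothetical basic-results record covering elements (i)-(iv)."""
    baseline: List[ArmBaseline]          # element (i)
    outcomes: List[OutcomeValue]         # element (ii)
    scientific_contact: str              # element (iii)
    restrictive_agreement_exists: bool   # element (iv)
```

Note that even this minimal structure forces sponsors to disclose dropouts, per-arm outcome values with significance tests, and any gag agreements with investigators — exactly the information a reader of a selectively published literature never sees.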

Indeed, in an accompanying commentary, James Doroshow notes that NCI is developing in parallel a complementary clinical trials database to catalogue administrative and outcome data for all studies performed at NCI-supported institutions (perhaps drawing from or building on the NCI’s excellent existing Cancer Research Portfolio database). Unlike the basic results reporting, the NCI database will include interim reports on accrual and outcomes. Thus, the oncology community will, within the next few years, have rapid access to safety and efficacy data from cancer clinical trials.

Further, the editors of The Oncologist, Gregory Curt and Bruce Chabner, indicate that they are considering whether to “undertake the publication of a peer-reviewed, searchable venue for these trials” in reference to “well-executed trials that fail to meet positive endpoints: ‘negative’ in a sense, but valuable nonetheless.” The editors invite readers to indicate their level of enthusiasm and support for such a venture.

The title of Doroshow’s commentary captures the urgency for action on this front, not only in the oncology community but among all clinical disciplines: Publishing Cancer Clinical Trial Results: A Scientific and Ethical Imperative.



  1. PhysioProf said

Another key issue is that this kind of selective underpublication of negative results badly skews the outcomes of meta-analyses.

Absolutely, PP. Good point, especially since the reporting in ClinicalTrials.gov and an NCI database won’t get picked up in meta-analyses or other data-mining of the literature – at least not until they figure out how to assess methodological quality based on the limited information available and in the absence of peer review. – writedit

  2. whimple said

There is also the issue of defining success as “positive” or “negative” when the real metric should be “worth it” or “not worth it”. “Positive” tends to mean “has any measurable benefit with statistical confidence of x”. This leads to incrementalism that really just isn’t very useful. Today, for example, I had a chat with a pharma rep (providing free lunch, as they do every day). He was selling a new drug for relapsed small cell lung cancer. I asked him if it works. He cheerfully informed me that it more than doubles the length of patient survival… from 9 weeks to 20 weeks. This isn’t helping anyone, except mainly the drug companies.

    On the other hand, it’s easy to identify the clinical trials that found treatment modalities that are “worth it”. These are the clinical trials that had to be terminated early because it became obvious that allowing the trial to continue was unfair to the control group not getting the treatment.

    More good points – and I love the drug rep anecdote. His study probably wouldn’t be published because his company wouldn’t want to be tied down by the limited conclusions that could be drawn as imposed by peer review. Your second insightful example, on the other hand, would likely be published – though well after the initial press releases and NYT story on the dramatic breaking of the blind based on early stopping rules. Is there an easy way to search for DSMB-halted trials based on overwhelming evidence of efficacy versus safety concerns? – writedit

  3. BB said

Isn’t it Iressa that adds 6 weeks to patient survival?

  4. […] so many of us, Ioannidis would like to see a way for ALL research findings to be archived, both to offset the bias toward publishing positive data and to provide a way for investigators to […]

  5. […] importance of publicly posting clinical trial data has many benefits, such as countering the publication bias against negative results and identifying investigators involved with industry-funded studies. However, Writedit had not […]

Tests with a negative result are still worthy of publication, if for no other reason than to prevent someone else from wasting their time doing the same test.
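PhysioProf’s point about skewed meta-analyses (comment 1) is easy to demonstrate with a toy simulation — the numbers below are illustrative, not drawn from any real trial set. If only “positive” (statistically significant) trials reach the literature, pooling the published estimates overstates the effect even when the drug does nothing at all:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # the drug actually does nothing
TRIAL_SE = 1.0      # standard error of each trial's effect estimate
N_TRIALS = 1000

# Each trial yields a noisy estimate of the (null) true effect.
estimates = [random.gauss(TRUE_EFFECT, TRIAL_SE) for _ in range(N_TRIALS)]

# Crude significance screen: a trial is "positive" (and gets published)
# only if its z-score exceeds 1.96 in the beneficial direction.
published = [e for e in estimates if e / TRIAL_SE > 1.96]

all_pooled = statistics.mean(estimates)        # near zero, as it should be
published_pooled = statistics.mean(published)  # badly inflated

print(f"Pooled effect, all trials:     {all_pooled:+.3f}")
print(f"Pooled effect, published only: {published_pooled:+.3f}")
print(f"Published {len(published)} of {N_TRIALS} trials")
```

A meta-analyst who sees only the published subset recovers a substantial “effect” from a drug with no effect at all — which is why registries of unpublished results matter to anyone pooling the literature.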
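On writedit’s question (comment 2) about searching for DSMB-halted trials by reason: there is no dedicated efficacy-versus-safety flag, so one crude approach — sketched here with an assumed free-text “why stopped” field and invented keyword lists, not any real ClinicalTrials.gov interface — is keyword screening of stop reasons:

```python
# Keyword lists are illustrative guesses; real screening would need
# curation and manual review of the matched records.
SAFETY_KEYWORDS = ("safety", "toxicity", "adverse event", "harm")
EFFICACY_KEYWORDS = ("efficacy", "benefit", "endpoint met early")

def classify_early_stop(why_stopped: str) -> str:
    """Crudely bucket a free-text stop reason as safety/efficacy/other."""
    text = why_stopped.lower()
    # Check safety first, since safety language tends to dominate when
    # both kinds of terms appear in one reason.
    if any(k in text for k in SAFETY_KEYWORDS):
        return "safety"
    if any(k in text for k in EFFICACY_KEYWORDS):
        return "efficacy"
    return "other"

examples = [
    "Stopped by DSMB for overwhelming efficacy at interim analysis",
    "Terminated due to excess cardiovascular toxicity",
    "Sponsor business decision",
]
for reason in examples:
    print(f"{reason!r} -> {classify_early_stop(reason)}")
```

Anything matched as “efficacy” would be a candidate for the trials whimple describes — stopped because withholding the treatment from controls became untenable.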
