The July issue of the American Journal of Medicine has an interesting report by Martin et al. that asks “Why Are Peer Review Outcomes Less Favorable for Clinical Science than for Basic Science Grant Applications?”
Jumping ahead, the answer is not entirely satisfying:
“current data suggest that nearly all of the difference in review outcomes for clinical and nonclinical applications is due to a failure to adequately address human subject protection requirements and to a lower rate of submission of competing continuation applications by clinical applicants.”
To arrive at this conclusion, the authors examined CSR-reviewed R01s from Oct 2000 through May 2004 (12 rounds), spanning a couple of years during the NIH budget doubling and a couple after (though none of the recent lean years). Clinical research applications were those identified as such on the face page (box checked for human subjects involvement), and they used the NIH definition of new investigator.
Aggregate data (new & experienced PIs, A0-A1-A2 submissions, Type 1 & Type 2 applications) show that 22.53% of all nonclinical R01s score within the 30th percentile compared with 17.85% of clinical R01s. (correction per Whimple’s comment below)
Meanwhile (an unfortunate pattern for clinical research applicants), 28.3% of nonclinical PIs submit Type 2 (competing renewal) applications compared with 20% of the clinical PI pool. Because renewals generally fare better in review than new applications, the overall success rate of clinical R01s takes a hit from this lower rate of competing renewals. The authors give a few possible explanations but suggest that empirical data are needed to document why this discrepancy occurs so that effective corrective measures can be taken. Indeed, the authors estimated that:
“the lower rate of submission of competing clinical applications contributes to approximately one half of the aggregate difference in peer review outcomes between clinical and nonclinical applications.”
So then there’s the issue of human subjects concerns, which do affect priority scores. Among all clinical R01 submissions, 14.8% did not adequately address human subjects protection requirements (no comparable data are given on whether nonclinical R01s using animals adequately addressed the vertebrate animal requirements). Overall, 19% of clinical R01s with no human subjects concerns scored within the 20th percentile, compared with 10% of those with human subjects concerns. The authors note that:
“approximately one-half of the observed differences in peer review outcomes for clinical versus basic research applications can be attributed to applicants failing to adequately address human subject concerns in their applications.”
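The paper’s actual calculation isn’t quoted here, but a rough back-of-envelope using the figures above illustrates the logic (the weighting is my own illustration, not the authors’ analysis):

0.852 × 19% + 0.148 × 10% ≈ 17.7%

In other words, if every clinical application had adequately addressed human subjects protections, roughly 19% rather than about 17.7% would have landed within the 20th percentile, a loss of about 1.3 percentage points attributable to human subjects concerns.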
CTSAs have an entire core devoted to regulatory knowledge (we have a regulatory compliance facilitator dedicated to helping PIs with the human subjects sections of their grant applications & their IRB protocols) and bundle clinical research ethics with the design, biostatistics, and ethics core. With all the reporting data NCRR is collecting, one would hope they could eventually analyze whether these transformative resources have made a difference in the funding of clinical R01s.