Update: As noted in Brendan’s comment below, open discussion of this issue continues at Nature Network. Please join in there! The Chronicle is also hosting a lively discussion of “the oath” … and JAMA has a nice summary of why even well-intentioned folks cannot overcome the unconscious bias caused by conflicts of interest, while The Lancet notes this commentary in covering a plagiarism case in the UK. Links to The Gallup Organization report on which this commentary is based can be found at ORI and below.
Update II: Nature has published letters from Bosch (exemplary standards in Croatia); Feder & Stewart (calculated dishonesty among senior researchers); Nussenzveig & Funchal (need for international ORI); Swazey (questions methodology); and the study authors (response to Swazey).
Nature reports a survey conducted by the Office of Research Integrity that, not surprisingly, finds that most misconduct goes unreported. Sandra Titus et al. found that the “2,212 researchers we surveyed observed 201 instances of likely misconduct over a three-year period. That’s 3 incidents per 100 researchers per year.” (By comparison, an average of only 24 institutional investigation reports are submitted to ORI each year.)
Titus et al. contacted 4,298 scientists holding NIH extramural research funds at 605 institutions.
In 2006, we asked participants to indicate the number of times they had observed suspected research misconduct in their own department in the past three academic years (2002–05). 2,212 scientists provided complete responses to questions concerning research misconduct (51% response rate). Of these, 192 scientists (8.7%) indicated that they had observed or had direct evidence of researchers in their own department committing one or more incidents of suspected research misconduct over the past three academic years. The 192 scientists described a total of 265 incidents.
Scientists were asked to indicate how they became aware of the possible misconduct and were told to report observations and not hearsay. … We used these descriptions to validate whether the observation met the federal definition of research misconduct [fabrication, falsification, plagiarism].
Two people independently coded and evaluated the 265 descriptions to determine whether each met the federal definition of research misconduct. In all, 64 reports (24% of the total) did not meet the threshold of the federal definition — which left 201 observations of potential misconduct made by 164 scientists (7.4%). These 201 misconduct observations included fabrication or falsification (60%) and plagiarism only (36%).
According to our respondents, 58% of the observed incidents had been reported to officials at their institutions.
The authors go on to extrapolate that annually “approximately 1,350 would have been reported [compared with the actual number of 24] whereas almost 1,000 could be assumed to go unreported to any official.”
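The arithmetic behind these figures is easy to verify; here is a quick sketch using only the numbers quoted in this post (the authors' assumed population of NIH-funded scientists is not reproduced here, so the check works backwards from their 1,350 figure):

```python
# Figures quoted from Titus et al. (via Nature) in this post
respondents = 2212                 # scientists with complete responses
described = 265                    # incidents described by 192 scientists
excluded = 64                      # reports not meeting the federal definition
incidents = described - excluded   # 201 observations of potential misconduct
years = 3                          # academic years 2002-05

# "3 incidents per 100 researchers per year"
rate_per_100_per_year = incidents / respondents / years * 100
print(round(rate_per_100_per_year, 1))  # -> 3.0

# 58% of observed incidents were said to be reported; if ~1,350 would be
# reported annually (the authors' extrapolation), the unreported remainder is:
reported_share = 0.58
unreported_est = 1350 / reported_share * (1 - reported_share)
print(round(unreported_est))  # -> 978, i.e. "almost 1,000" unreported
```

Both headline claims fall straight out of the survey counts, which is reassuring given how much of the commentary turns on the gap between ~1,350 expected reports and the 24 actually filed.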
As a fan of work on procedural and distributive justice by Brian Martinson, Melissa Anderson, et al., I was particularly pleased to see the accompanying editorial, Solutions – not scapegoats, address the need to examine the environment as well as the individual:
Meanwhile, misconduct investigations all too often focus solely on an individual offender, and fail to diagnose the environment that has allowed misconduct to flourish. Instead, institutions should seize the opportunity to learn from the experience, and to address the bigger questions. For example, did the atmosphere in the lab create the pressure to cut corners? Or did the intensity of the tenure chase contribute? One way to address such questions might be through internal departmental discussions, in which everyone is free to admit mistakes, and discuss how to fix the problems instead of apportioning the blame.
Indeed, the authors suggest six strategies for championing research integrity at an institutional level:
Adopt zero tolerance: “Social responsibility to the academic community and to the public who fund the research will be strengthened when it is apparent that an institution has a real commitment to integrity.”
Protect whistleblowers: “more than two-thirds of whistleblowers, in a Research Triangle Institute study, experienced at least one negative outcome as a direct result of their actions. Plus, 43% reported that institutions encouraged them to drop the allegation.”
Clarify how to report: Establish “a reporting system that clearly identifies the individuals to whom allegations should be brought, and establishing clear policies, procedures and guidelines related to misconduct and responsible conduct.”
Train the mentors: “Mentors specifically need to become more aware of their roles in establishing and maintaining research rules and minimizing opportunities to commit research misconduct. An institutional investment in building better mentors is an important vehicle to promoting research integrity.”
Use alternative mechanisms: “Auditing research records would be one such means. Mechanisms of review are needed to reduce deficient record keeping, improper protection of human or animal subjects or the utilization of questionable research behaviour.”
Model ethical behaviour: “Institutions successfully stop cheating, for example, when they have leaders who communicate what is acceptable behaviour, encourage faculty members and staff to follow the policies, develop fair and appropriate procedures for handling misconduct cases, focus on ways to develop and promote ethical behaviour, and provide clear deterrents that are communicated.”
In this same issue, Nature also cites the decision of the Ottawa Health Research Institute to suspend Kristin Roovers, whose case ORI closed and reported on last year.
Of course, the lively discussion of the Hellinga retraction is just that – a discussion of the retraction and the science involved, though I suspect we will eventually be discussing more formal issues of misconduct.