Science Retraction – Community at Work

The most current discussion of this topic can be found here.

This week in Science, a retraction due not to misconduct [see discussion below – an investigation could follow] but to the community of science working as it should:

We wish to retract our Report “Computational design of a biologically active enzyme” [MA Dwyer, LL Looger, HW Hellinga, Science 304, 1967 (2004)], which describes triose phosphate isomerase activity in a computationally redesigned ribose-binding protein (RBP) from E. coli. Dr. John P. Richard (Dept of Chemistry, Dept of Biochemistry, SUNY Buffalo), to whom we provided clones encoding the novoTIM activity, has brought to our attention that the triose phosphate isomerase activity observed in our reported preparations can be attributed to a wild-type TIM impurity–seen in preparations that use a continuous rather than stepwise imidazole gradient (as in the original paper) or that add a second sepharose column. Richard’s reanalysis has now also been confirmed by others in the Hellinga laboratory. The interpretations in the original report were based on lack of observed activity in mutant, engineered enzyme that bound substrate, but lacked catalytic residues. Variations in expression levels of designed proteins relative to the amount of contaminating endogenous protein might account for the pattern of observed activities that led to our erroneous conclusions. The in vivo experiments have not been reexamined.

We deeply regret that our report of a designed enzyme activity does not live up to closer scrutiny. Nevertheless, we remain optimistic that the problem of structure-based design of enzyme activity will be solved and that novel catalysts will be produced in conjunction with computationally based methods.

Mary A. Dwyer, Dept of Pharmacology and Cancer Biology, Duke Univ Med Ctr
Loren L. Looger, HHMI, Janelia Farm
Homme W. Hellinga, Dept of Biochemistry, Duke Univ Med Ctr



  1. David said

    The retraction does not indicate whether or not there was any misconduct. Remember “[t]he in vivo experiments have not been reexamined.”

    The retraction is not due to confirmed misconduct, but yes, of course … this carefully worded statement and the entire situation suggest at the very least a high likelihood of what Brian Martinson, Melissa Anderson, et al. would call “‘regular’ misbehavior.” -writedit

  2. designer said

There may not be clear evidence of misconduct, but the way in which the science was conducted is clearly inexcusable. It takes some arrogance to believe that an observed activity for a protein expressed in E. coli comes from one’s engineering rather than background contamination, especially if one is designing in an activity that DOES exist in E. coli. To have left any stone unturned in trying to eliminate the possibility that the TIM activity was from contaminating material is simple sloppiness. Here, it took another lab to do it. Hooray for protein design, eh?

Agreed 100%. More than 100%, if possible, especially given the players and their institutions. So many failings in the less-than-responsible conduct of research are spoken & unspoken here (again, “responsible” as in responsible to the rest of the community of science, not just avoiding the misconduct police). Can we possibly teach this behavior out of science? This ranks up there with the NEJM peer reviewer debacle. Though it is nice to see the community setting the record straight in this instance, such work should not have been published – or thought publishable by the investigators. -writedit

  3. writedit said

And for a non-biased, completely objective reflection on this situation from a past protege of Dr. Hellinga’s … there is Broken Science (plus the comment that no doubt inspired this gentle rebuke). PP, I sense a kindred spirit.

  4. concerned said

It may interest you to know that Homme W. Hellinga, in my opinion an arrogant, bombastic charlatan, and also the much-touted James B. Duke Professor of Biochemistry and recipient of the N.I.H. Pioneer Award, was forced to retract his seminal 2004 Science paper on computational enzyme design. The retraction is published in the Feb. 1, 2008 issue of Science. Maybe Hellinga should have done appropriate controls before he rushed to press. Of course, why be too picky when one can be catapulted to prominence on the basis of sexy data that no one bothers to double-check? Fame, NPR interviews, DARPA funding, and write-ups in the New York Times are surely better than the quiet obscurity of academic research.

    This is a rare modern example of the peer review process actually bringing down a jerk who peddles irreproducible data. Prof. Hellinga’s fame and funding will no longer protect him from the wrath of the many, many collaborators, students, and employees he has fucked over. I hope that they eat him alive.

    I thought that you might be amused.

  5. careful scientist said

    I see that one of the authors is now at Janelia Farm, likely as a reward for his participation in this paper. Does this individual deserve such prestige? Science has become like show business in that flashy results get one’s career advanced quickly. Unfortunately, the hard work of peeling the encrusted layers of artefact from the Truth beneath is seldom rewarded so prodigiously. At least the careful scientists of the world know that our results will stand the test of time.

    Very nicely put. Thanks so much – and thanks so much for being one of the careful scientists whom I am proud to support. – writedit

  6. In the Know said

To the best of my knowledge, the second author on the paper is there for the work he did on the program that they use to predict the mutations required in the starting protein. Not the most sound of reasons for an authorship, but I don’t think he had anything to do with the actual work. Situations like this provide a good reason to turn down these courtesy authorships, though.

    As for the first author, Nature just published an article that mentions that she was investigated for, and cleared of, research misconduct. Rumor has it that Duke is about to start investigating Homme Hellinga, though. There are rumors that the lead author had severe concerns about the data, but her concerns were overridden by the PI.

    I do have to say that there are other people responsible for this paper being published as it is: the reviewers. The shortcomings of this paper were obvious from the beginning, and if the reviewers had challenged these at the time of review, maybe this issue could have been caught before publication. It could have been one of those times where a critical review would have really saved somebody’s bacon. Somewhere, there are a couple of PIs very thankful that the review process is blind.

Thanks for this thoughtful, thorough comment, In the Know. I was troubled by the Dwyer “investigation” as described by Nature (which provides a clear & concise history of this saga) and will be interested to see how Duke handles this case in terms of focusing on actions & behaviors versus funding & prestige. One wonders if the reviewers of the forthcoming David Baker paper about to claim this victory have since gone back to re-review his manuscript. – writedit

  7. whimple said

To the best of my knowledge, the second author on the paper is there for the work he did on the program that they use to predict the mutations required in the starting protein. Not the most sound of reasons for an authorship, but I don’t think he had anything to do with the actual work. Situations like this provide a good reason to turn down these courtesy authorships, though.

    Sounds like plenty sufficient work for an authorship to me, or was the programmer an outside contractor?

  8. In the Know said

    No, he was also a graduate student. Again, if I understand it correctly, I believe he had performed this work for his own experiments, which were then published. I wasn’t close enough to the situation to know for sure, though. Perhaps he spent a significant amount of time working on the computer side of this specific project. Either way, I’m confident that he wasn’t involved in the bench work, which is where things got a little hairy. Perhaps my name should be ‘Sort of In the Know’…

  9. knowing is better than believing said

I have very serious objections to the explanation in the retraction, which simply doesn’t account for the data in the paper. It appears that one or more of the authors actually falsified data. I lay out my objections below:

1) How is it that the mutants that remove catalytic residues all show decreased activity relative to ‘wildtype’ NovoTIM if the activity is a contaminant from purification (figure 4)?
One would expect the contaminant to be present at similar concentrations in all preps, including the ‘wildtype’ NovoTIM prep. Instead, the contaminant would have to be present in correspondingly smaller amounts than in the ‘wt’-NovoTIM prep, reproducing exactly the pattern of activities expected from each mutation. This is extremely unlikely. In addition, the reported activities are displayed in ∆G units, which indicates that the activity differences were quite large (i.e., orders of magnitude, not the factors of 2 or 3 you might expect from a contamination problem).
This explanation is extremely hard to swallow. However, even if we grant the authors this highly improbable outcome, there is additional evidence of outright fraud.

2) The NovoTIM catalytic data does not match what would be expected from a contaminant.
The reported kcat and Km for NovoTIM are ~0.1 s-1 and ~150 µM, respectively, whereas wild-type TIM’s kcat and Km are ~3000 s-1 and 1.5 mM, respectively. While the reported kcats could be consistent with a contaminant, the apparent Kms are not. If the ‘NovoTIM activity’ were actually from contaminating wild-type TIM, then the Km would be at least 1.5 mM or higher. One could argue that the actual NovoTIM protein present could bind substrate but not perform catalysis, thus skewing the kinetic data. However, this would make the apparent Km go up, not down. The fact that the reported Km is an order of magnitude lower than that of wt-TIM is completely incompatible with the explanation of contaminating wild-type TIM.

3) The in vivo data is, of course, not at all explained by contaminating wt-TIM.
In the paper, the authors claim that they could complement a TIM-deficient strain of E. coli with a plasmid containing NovoTIM. This result cannot be explained at all by contaminating TIM, since the gene for TIM is supposedly knocked out in the E. coli strain they were using. In addition, the authors then go on to use this system to enhance the NovoTIM activity by selecting for NovoTIM variants with enhanced activity. This makes no sense given the authors’ explanation.
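The quantitative claims in points 1 and 2 above can be checked with back-of-the-envelope arithmetic. Here is a minimal Python sketch: the kinetic constants are the approximate values quoted in the comment, while the temperature, the ΔΔG value, and the contaminant concentration are invented purely for illustration, and the exp(ΔΔG/RT) relation is the usual transition-state-theory rule of thumb.

```python
import math

R, T = 1.987e-3, 298.0  # gas constant (kcal/mol/K) and an assumed temperature (K)

def fold_change(ddg_kcal_per_mol):
    """Rate ratio implied by a free-energy difference: ratio = exp(ddG / RT)."""
    return math.exp(ddg_kcal_per_mol / (R * T))

def mm_rate(kcat, km, e_total, s):
    """Michaelis-Menten initial rate: v = kcat * [E] * [S] / (Km + [S])."""
    return kcat * e_total * s / (km + s)

# Point 1: a ddG of ~2.7 kcal/mol already corresponds to a ~100-fold rate
# difference, far beyond the 2-3x one might expect from variable contamination.
print(round(fold_change(2.7)))  # 96

# Point 2: a wt-TIM contaminant (kcat ~3000 /s, Km ~1.5 mM = 1500 uM) reaches
# half of its maximal rate only at [S] = 1500 uM. At the reported apparent Km
# of ~150 uM it would be running at only ~9% of Vmax, so a saturation curve
# generated by the contaminant should show the contaminant's Km, not 150 uM.
e_cont = 1e-4           # uM, hypothetical trace of wt-TIM
vmax = 3000.0 * e_cont
print(mm_rate(3000.0, 1500.0, e_cont, 150.0) / vmax)  # ~0.091
```

The point of the sketch is only that the shape of the saturation curve is set by the contaminant's Km, independent of how much contaminant is present.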

Taken together, a large fraction of the data presented in the Dwyer paper is completely inconsistent with the authors’ explanation that the NovoTIM activity was from contaminating wt-TIM. Until the authors give an adequate explanation, the only conclusion I can come to is that the authors used fraudulent data. Perhaps even worse than the fraud is the putative coverup of this fraud.

Finally, “In the Know” claims that there were obvious holes in the data from the beginning, but this just doesn’t seem to be the case from my position. I read the paper and heard Hellinga give lectures, and the data seemed airtight (assuming, of course, that they had done the simple part of purifying away the endogenous TIM). I’d love to know what other evidence ‘In the Know’ has that this was wrong from the start (other than the lack of data about purification, of course).

  10. whimple said

    To be fair, you should refer to this as the “Dwyer, Looger, Hellinga paper”, not as simply the “Dwyer paper”.

  11. noblesse d'epee said

I suggest that the scientific community attempt to reproduce the data in Hellinga’s Protein Science papers [among them: Protein Sci. 2006 Aug;15(8):1936-44 and Protein Sci. 2005 Feb;14(2):284-91] on fluorescent biosensors. Among other things, note that the experiments in Table 1 of the first paper carry the following footnote: “determined from a single experiment.” Why is this important? Because former members of the Hellinga lab tell me that the biosensors in one or both papers only worked with certain batches of fluorophore. This is not in itself damning, but it should have been explained. More interesting, however, is that Hellinga lab members with direct knowledge of the experiments have, in the past few years, privately commented upon the extreme difficulty of reproducing these papers’ results. I’m told that “good” batches of fluorophore also gave interesting quenching results (e.g., in the buffer blank). This is all merely hearsay, but it suggests that the scientific community should try to verify these results. After all, they form the basis for Hellinga’s (possibly wasted) DARPA funding. Has anyone outside the Hellinga lab repeated these findings?

  12. knowing is better than believing said

Very true, whimple. I meant to write the Dwyer et al. paper, but didn’t proofread well enough to catch it. I guess that’s what I get for writing that in the middle of the night. Thanks for picking that up.

  13. In the Know said

    To Knowing is better than Believing,

    I agree with you that knowing is better than believing. I think all of this needs to be examined very carefully and the truth of the situation stated publicly, if for no other reason than to serve as a warning to others of where things can go wrong.

    To address my reactions to your points:
    1) Apparently the ‘catalytic mutants’ expressed at a different level than the ‘active’ NovoTIM. I could believe this, but there are other possibilities that I’ll explain below.

    2) I don’t have an explanation for why the kinetic properties reported differ from that of endogenous TIM. Off the cuff I could wonder if the solution conditions differ, or perhaps the TIM contaminant had to compete for binding with the NovoTIM (even if NovoTIM was catalytically dead, it may have still bound the substrate). I don’t have time right now to chase down these ideas, though.

    3) I agree; the in vivo data just can’t be right. I’m not familiar enough with TIM biology/genetics to suggest a plausible good-faith explanation for this.

In an earlier post I mentioned that the lead author had reservations about the data. This was because, as I understand it, the activity of the protein preps was inconsistent. Perhaps the complementation experiments were as well. As the story came to me, she wanted to purify the protein using different methods, different tags, etc., but wasn’t allowed to by the PI. Others who had left that lab told me that they were uncomfortable with Homme’s tendency to cherry-pick the data that he liked and ignore anything that didn’t agree with a pet theory or have the potential to make him famous. To be fair, I haven’t heard his side of the story, but I have heard the same story from multiple former lab members. It is also consistent with Noblesse D’epee’s observations in an earlier post.

So data may not have been manufactured for this paper, but data could have been readily ignored in the writing of the paper. Both are egregious; the only real difference between these two possibilities is that the lead author would be the prime suspect for manufacturing data, while either of them could have ignored data. Bear in mind that she has already been investigated and cleared of manufacturing data. Only a thorough investigation will reveal what really happened.

    As for my criticism of the review process, perhaps it is the result of the field I work in. My field is prone to artifactual results (but isn’t everybody’s?), so when I review a paper where somebody has reported that protein X has a novel activity unpredicted by sequence homologies, the burden of proof is very high. As presented, the data does not have holes in it, but there is plenty of space around the edges. I agree that showing the catalytically dead protein result would have been good proof, but when I saw that the protein had been purified by a very simple single column protocol after expression in a strain that also had endogenous TIM activity, it left a lot of room for improvement. Why wouldn’t one express this protein in a host strain deficient for this activity when this strain readily exists? This is the standard procedure in my field, and the leaders of the field won’t believe any data you generate unless you do this. I think there is a prevalent idea recently that only one control is ever needed to validate an experiment, especially in journals like Science and Nature where the timing of publication of a study is perhaps more important than the completeness of the study. The bottom line is this: I think when you are reviewing any paper, but especially one reporting something this novel and significant, it is your job to critically evaluate the work and ask everything reasonable of the authors to support the claim.

  14. karma said

As I understand it, Homme’s main competition, D. Baker’s lab, has a completely transparent program – everyone in the lab can see the source code, modify it, make it better, etc. Kudos to them for taking an academic approach to a similar problem, which has resulted in designs that actually work. In other words, they are COLLEGIAL – a word that certain members of the Duke Biochemistry Department do not understand. While many of the faculty in the Biochemistry Department ARE collegial and will undoubtedly receive unfair collateral damage from Homme’s shenanigans, there are others in the department who take an approach to science that is against the spirit of academics. The sad part is that this approach is completely unnecessary. Science is great, and it is best approached openly and collaboratively. As such, the administration at Duke would be crazy to give a core structural center to such notoriously uncollegial faculty. We can only hope that this event will spur inquiry into the shady science and ongoing mismanagement of the current “shared” X-ray crystallography resource at Duke.

  15. knowing is better than believing said

    In the Know,
    Thanks for the thoughtful reply.

    I also feel like I should clarify my position a bit. When I assert that there must have been some fraud going on, I don’t necessarily mean that data was fabricated (although that still seems like a possibility). Omitting conflicting data or performing such sloppy science that the results match a pre-made conclusion is also fraud.

Also, forgive me if I’m overly skeptical of Duke’s investigation into the possibility of Mary Dwyer fabricating data, but since the problems with this retraction run so deep, it seems hard to exclude this as a possibility without hearing evidence of innocence. Even if Homme pushed her to publish before she was truly comfortable, she should not have sent the paper in with her name on it before she trusted the data. She was, at the very least, complicit in publishing shoddy material.

    I also believe that Homme knew that there were massive problems with the data, but decided to publish this anyway. There definitely should be an investigation into the publications from his lab and his lab practices.

    That being said, I’d like to reply to your responses to my points:

1) Expression levels of the NovoTIM mutants cannot explain the observed results, since none of the NovoTIMs had activity to begin with. There could be 1000-fold differences in expression, but the activity would still be zero.
The only explanation that makes sense to me is that there were differing amounts of contaminating wt-TIM in each prep. However, the chance that the contaminating wt-TIM was most abundant in the wt-NovoTIM prep, less abundant in the three single-mutant preps, even less in the double mutants, and least in the triple mutant is staggeringly small.

2) If the solution conditions were different, then that is the sloppiest of sloppy science. I would hope the authors had the sense to keep them identical. However, this is the only explanation I can think of for this anomaly in the data.
If the NovoTIMs bound substrate, thereby competing with the contaminating wt-TIM, then the apparent Km would increase rather than decrease. In other words, it would take a larger amount of substrate to reach saturating enzyme activity.

3) I also have no idea how the authors could have botched the in vivo experiments. Perhaps they contaminated the TIM-deficient strain with another strain? Perhaps they screwed up the selection protocol (although that seems pretty darn hard to do; the selection appears quite trivial).
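The competitive-binding argument in point 2 above can be made concrete with a small numeric sketch. Only the wild-type Km (~1.5 mM) comes from the discussion; the amount of dead NovoTIM and its substrate Kd are hypothetical values chosen for illustration.

```python
def apparent_km(km_active, e_dead, kd_dead):
    """Total [S] at half-maximal rate when a catalytically dead protein also
    binds substrate. Free [S] must still equal the active enzyme's Km, so the
    substrate tied up by the dead protein must be added on top of it."""
    bound_at_half_vmax = e_dead * km_active / (kd_dead + km_active)
    return km_active + bound_at_half_vmax

KM_WT = 1500.0   # uM, wild-type TIM Km (~1.5 mM, from the discussion above)
E_DEAD = 100.0   # uM, hypothetical amount of catalytically dead NovoTIM
KD_DEAD = 150.0  # uM, hypothetical substrate Kd of the dead NovoTIM

# The measured (apparent) Km can only shift UP from 1500, never down toward 150:
print(apparent_km(KM_WT, E_DEAD, KD_DEAD))  # ~1590.9
```

Whatever values are chosen for the dead protein, the correction term is non-negative, which is the commenter's point: substrate sequestration raises the apparent Km, so it cannot explain a Km that is ten-fold lower than wild-type.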

  16. In the Know said

I agree that cherry-picking data to publish is just as bad as fabricating it. I think more people fall into this trap, though, because they find ways to rationalize the omissions, and publication pressures are what they are. I’m not condoning it in any way; I just think for these reasons it is more common, and I think it is what happened here.

I was actually somewhat surprised that Mary was cleared so quickly by Duke’s investigation. Homme is a real big fish at Duke, and they had just renovated part of a building and promoted him to the head of an institute in order to keep him at the University. When the news came out that the paper was crap and that Mary had been reported for misconduct, I feared that she would become the scapegoat. There is a natural predisposition to blame the lead author in these situations anyway, and if she were found guilty then Duke would keep its all-star professor’s reputation intact. I have heard that the investigation of her notebooks (and possibly e-mails?) has led to an investigation of Homme, but I don’t know if this is true. With the publicity, I think they would have to investigate Homme anyway.

    “Even if Homme pushed her to publish before she was truly comfortable, she should have not sent the paper in with her name on it before she trusted the data.”

What you are saying is technically true, but practically tough. If she had insisted that her name be removed from the paper, her career would have been over. If you have interacted with Homme, you will understand the truth of this statement. As it stood, his first reaction to finding out the activity was a contaminant, without any notice or questioning, was to accuse her of research misconduct and try to have her PhD revoked. Obviously, the way things have turned out isn’t much better, but I have sympathy for the situation she was in. If this is enlightening: the rumors of Homme’s practices have circulated around Duke long enough and pervasively enough that Mary still has the support of many of the faculty at Duke, including her current PI.

    If I had to bet on the outcome of this, I would put my $50 on them discovering that Homme picked and chose the data to shape the most persuasive paper that he could, ignoring data that didn’t fit. I think if there was any compelling evidence that Mary had done this, it would have been seized upon during her misconduct investigation. Hopefully when all the dust settles there will be some official public disclosure of what the investigations found.

    Unfortunately, misconduct investigations are not transparent and usually involve minimal disclosure of the process or even the full findings. For example, the Purdue bubble fusion saga remains ongoing behind closed doors – or so I assume in the absence of any formal announcement. With a persona like Hellinga, the Duke inquiry will no doubt be even less open short of Congressional hearings. This is a good thing, to protect the innocent and the whistleblowers, but one hopes it will progress to ORI, in which case we can anticipate a synopsis of the confirmed findings. – writedit

  17. […] … the temptation to look the other way in cases of “normal misbehavior” or worse so as to maintain their revenue stream would be just too great, […]

  18. noblesse d'epee said

‘Skip R – Go Right to D’ links to an interesting, well-written essay. Since I am a “basic sciences” (e.g., biochemistry) graduate student in the Duke University Medical Center, I directly observe the oversight problems that you mention in your last paragraph. DUMC’s response to the ongoing Homme Hellinga scandal (and the related problem of [Hellinga’s wife] Lorena Beese’s outrageously unethical mismanagement of the “shared” Duke X-ray crystallography center) will demonstrate whether its ‘revenue stream’ takes precedence over its ethics. Since DUMC is essentially a business masquerading as an educational institution, I suspect that conservation of revenue will trump all other considerations.

In fairness, I should concede that DUMC is a profitably-run business that provides superb patient care and excellent translational medical research. The problem, as so nicely described in ‘Skip R – Go Right to D’s post, is that the same administrators who effectively manage the treatment and development aspects of the institution possess an inadequate understanding of the basic sciences. Perhaps basic science research programs are better served under the auspices of Arts and Sciences colleges. A problem is that most Arts and Sciences programs are chronically under-funded; consequently, professors and grad students receive (on average) less remuneration than in medical school divisions and carry a significantly higher teaching burden. As a result, well-funded but academically isolated medical school research programs are populated with high-calibre faculty and students who sometimes find themselves pressured by their administrators to “find something medically or commercially relevant.” High-quality research is funded by outside entities, the hospital administration skims off its “indirect costs,” and a system is perpetuated wherein festering misconduct disasters are ignored.

  19. In defense of reviewers said

    In the know writes: I think when you are reviewing any paper, but especially one reporting something this novel and significant, it is your job to critically evaluate the work and ask everything reasonable of the authors to support the claim.

I have to disagree with your conclusion that the reviewers did not properly review the paper. It is impossible to reach this conclusion without knowing what went on during the review process. Very often reviewers make requests for additional data that are seen as unreasonable by the authors, and sometimes the editors side with the authors in these cases. The data, as presented in the published version of the paper, appear pretty airtight, and I can see a request for additional purifications being viewed as unreasonable (the lack of activity of the mutant NovoTIMs is at least a reasonable control for background contamination, given equivalent expression of the mutants). This is why the reason for the retraction that we have been given is so inadequate.

And no one has mentioned the 2003 Nature paper by the same authors (with Looger as first author). This one really needs to be examined, because it is much more difficult to reproduce (and anyway, look how long it took to reproduce a simple enzyme assay) and is, in some ways, more complex. Along with the other papers that have been mentioned, this one has to be a high priority for “closer scrutiny”.

  20. In the Know said

I agree, we don’t know what went on during the review process, so perhaps additional information was requested and rebutted by the authors. Also, the methods that they used to purify the proteins weren’t exactly put front and center. I do know that some of these concerns came up in discussions I was involved in amongst the grad students and postdocs before the paper was published (I was obviously at Duke during this period of time). I also know that frequently papers are routed to friendly reviewers via exclusion of all others, and I also know that some PIs don’t put much time into reviewing a paper thoroughly, since it isn’t an activity that garners either money or fame. I don’t have a better system in mind; I just know that this one doesn’t work quite as advertised.

I also agree that the retraction is inadequate, and as I have stated several times in previous posts, I think misconduct has occurred here and will be revealed by an investigation.

In other forums people have called for an examination of the 2003 Nature paper. Not really the project I would want to spend time and resources on, but I’m sure Homme’s personality has motivated others to take on the task.

  21. In defense of reviewers said

In the know writes: In other forums people have called for an examination of the 2003 Nature paper. Not really the project I would want to spend time and resources on, but I’m sure Homme’s personality has motivated others to take on the task.

This is a much bigger problem in science than peer review, in my opinion. Verification of results is a thankless job, and when labs do try to reproduce others’ results and fail, they rarely publish this information (for many reasons). The NIH should set up a lab on the intramural campus whose sole duty is verifying important results from the literature. At first thought, this could be under the direction of the Office of Research Integrity (which is hopefully looking into the Hellinga case). It would be money well spent.

  22. Dave said

    knowing is better than believing said,

“The NovoTIM catalytic data does not match what would be expected from a contaminant.
The reported kcat and Km for NovoTIM are ~0.1 s-1 and ~150 µM, respectively, whereas wild-type TIM’s kcat and Km are ~3000 s-1 and 1.5 mM, respectively.”

I was reading through the follow-up JMB paper (Malin Allert, Mary A. Dwyer and Homme W. Hellinga, “Local Encoding of Computationally Designed Enzyme Activity,” Journal of Molecular Biology, Volume 366, Issue 3, 23 February 2007, Pages 945-953) when I noticed a footnote on Table 1 that states, regarding the ecNovoTIM1.2 kinetics: “This KM value has been revised [to 7.1 mM] from the originally published value (0.18 mM),9 because the fit to the previously reported measurements was incorrect. The original kcat value of ecNovoTIM1.2 has not been revised.”

I will note that no error estimate was presented. In hindsight, the wild-type TIM Km [1.6 mM] and the revised Km from the ecNovoTIM1.2 preparation [7.1 mM] are much more consistent with the errors described in the retraction (among the others pointed out here). It is still worth asking whether any NovoTIM even binds the TIM substrate or product.

  23. knowing is better than believing said

    Thanks for pointing that out to me. That number is indeed more consistent with their explanation of a contaminant than their previously reported Km.

However, I find it odd that they could be so far off in their estimate of the Km (~50-fold). It is much easier to have an error in kcat than in Km (e.g., if the enzyme concentration in the assay is off). So how could they have a >50-fold error in the Km without a change in kcat? This seems shady to me.

Although this adjustment of their Km is more consistent with their explanation of contaminating TIM, it suggests that there were massive problems with scientific rigor, or fraud. Didn’t they check their assay with wt-TIM? If an incorrect fitting procedure was used, then they would have seen this when they repeated the assay on wt-TIM (i.e., their kinetic constants would be ~50-fold off from published, accepted values). And if an incorrect fitting procedure was indeed used, how is it that the original kcat is unchanged? Why don’t the authors show any of the original data? These problems just highlight the Hellinga group’s deficiencies in producing competent science.
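The oddity here can be made concrete with a little Michaelis-Menten arithmetic: if kcat really were unchanged, the original fit (Km 0.18 mM) and the revised fit (Km 7.1 mM) would predict low-substrate rates roughly 30-fold apart, so both cannot describe the same raw measurements well. A minimal sketch; the substrate concentration is an assumed value, not one from the papers.

```python
def mm_rate(kcat, km, s):
    """Michaelis-Menten rate per unit enzyme: v/[E] = kcat * [S] / (Km + [S])."""
    return kcat * s / (km + s)

S = 0.05  # mM, an assumed low assay substrate concentration (50 uM)

# Original fit (Km 0.18 mM) vs revised fit (Km 7.1 mM), kcat ~0.1 /s in both:
ratio = mm_rate(0.1, 0.18, S) / mm_rate(0.1, 7.1, S)
print(ratio)  # ~31
```

At substrate concentrations well below Km the rate is approximately (kcat/Km)·[E]·[S], so holding kcat fixed while moving Km 40-fold changes the predicted rates by a comparable factor, which is exactly the commenter's puzzle.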

  24. David said

    The second retraction has now been made official. It seems improbable that the pattern of wt TIM contamination could have occurred as reported for this second NovoTIM scaffold. Were the preps reproducibly contaminated? What is the variation between preps? Did all mutants express to the same level? So many questions…so few answers. (thanks for pointing this out, David, and along with many others, raising our awareness & understanding of the scientific issues involved here – writedit)

    RETRACTED: Local Encoding of Computationally Designed Enzyme Activity

    Malin Allert1, Mary A. Dwyer2 and Homme W. Hellinga1, 2
    1Department of Biochemistry, Box 3711, Duke University Medical Center, Durham, NC 27710, USA
    2Department of Pharmacology and Molecular Cancer Biology, Box 3711, Duke University Medical Center, Durham, NC 27710, USA
    Edited by F. Schmid. Available online 5 December 2006.

    This article has been retracted at the request of the authors and the Editor-in-Chief. Please see the Elsevier Policy on Article Withdrawal.

    Reason: The recently reported gain of triose phosphate isomerase (TIM) activity in a mutant ribose-binding protein (RBP) from Thermoanaerobacter tengcongensis (tteNovoTIM) through the transplantation of mutations from a computationally designed mutant Escherichia coli RBP (ecNovoTIM) is incorrect. Dr. John P. Richard (Department of Chemistry, Department of Biochemistry, The State University of New York), to whom we provided clones of the ecNovoTIM and tteNovoTIM mutants, has brought to our attention that the activities observed in our reported preparations of both ecNovoTIM and tteNovoTIM can be attributed to a wild-type TIM impurity which separates from the mutant RBP peaks in preparations that use a continuous rather than step-wise imidazole gradient (as used in the reported work), or that add a second Sepharose column. Richard’s reanalysis has been confirmed in the Hellinga laboratory. Our original interpretation of gain of activity in the tteNovoTIM mutant was based on the similarity of the observed TIM activity compared to the ecNovoTIM, and the apparent lack of activity in wild-type T. tengcongensis RBP purified under identical conditions as the engineered proteins. Unfortunately, the contaminating, endogenous TIM activity was not detected in this negative control.

    We deeply regret that our reports of a designed enzyme activity do not live up to closer scrutiny. We offer our sincere apologies to all researchers whose work was negatively impacted by these reports. We remain optimistic that the problem of structure-based design of enzyme activity will be solved and that novel catalysts will be produced rationally by computational methods.

  25. David said

    There are now two electronic letters in Science in response to the Hellinga Science retraction. One is by Jack Kirsch and the other by John Richard. They bring up all the issues pointed out on this blog.

  26. David said

    In a blog entry, Andrea Gawrylewski mentions this very blog. One very interesting section of Andrea’s blog entry that has not appeared online before is that “Shortly after the original paper appeared in Science in 2004, Hellinga went to give a seminar in Berkeley to present his new findings. [Jack] Kirsch said he brought up the issue with the Km and asked to see Hellinga’s data but never received it.” Yet another lesson to learn from this whole fiasco.

  27. austen said

    Here’s the new address for my coverage

  28. […] part of our lengthy discussion of the retraction of Hellinga’s 2004 Science paper (here and earlier here) and then his 2007 JMB paper … and his accusation of misconduct laid against his grad student […]
