
Can I believe the antibody design result?


I am trying to do antibody design with RosettaAntibodyDesign (RAbD), but some of my colleagues told me that they failed to obtain successful designs with it. They ran RosettaAntibodyDesign on the crystal structure of an antibody-antigen complex with a dG_separated of about -20 REU and got some designs with dG_separated of about -40 REU. However, when they tested these designed antibodies experimentally by ELISA and SPR, they found that the designs did not bind the antigen at all!

To what degree can we trust the results of RosettaAntibodyDesign?

Best regards,

Mon, 2020-11-30 03:29
Sunyp_IM

I'm a little biased, being a Rosetta developer, but I'd say the answer to "Can I trust the results of RosettaAntibodyDesign?" is "Yes", but with qualifications.

Computational protein design is hard. Even when it "works", that's often because the designers created a large number of potential designs, tested all of them, and found a few which did work among the bulk which didn't. Single-digit percent success rates are not uncommon. Success in the computer is no guarantee of success in the lab. The way to think of it is that the computational approach narrows things down relative to approaches which operate in a more random manner.

Some systems are more difficult to design than others, and there are various ways to try to improve your success percentages. The naive approach of just running the design algorithm over the system and taking what comes out only works in some cases. Often you'll need to incorporate more system-specific information into the design process. This frequently takes the form of post-design filtering steps, where you throw out potential designs which don't meet various design criteria (even though they seem to have decent binding energies).

It's hard to list all such filtering criteria a priori, because they often come down to details of the system of interest (what, in particular, counts as a "success" in your assays). General considerations for binding include looking at the results from tools like the InterfaceAnalyzer and checking whether your designs fall within the typical range found in similar native systems. You can also try redocking the designed complex to see if you can recapitulate the designed interface. I'd also highly recommend reading the literature on similar design efforts to see what sort of post-filtering other groups are doing; you can pick and choose based on what makes sense for your system of interest.
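For a concrete starting point, here's a minimal PyRosetta sketch of that kind of InterfaceAnalyzer check. The file name and the "HL_A" interface string are placeholders, not part of any fixed recipe; adjust them to your own chains:

```python
# Hedged sketch: run InterfaceAnalyzerMover on a designed antibody-antigen
# complex and report interface metrics. Assumes antibody chains H+L bind
# antigen chain A; "designed_complex.pdb" is a placeholder file name.
from pyrosetta import init, pose_from_pdb
from pyrosetta.rosetta.protocols.analysis import InterfaceAnalyzerMover

init()
pose = pose_from_pdb("designed_complex.pdb")

iam = InterfaceAnalyzerMover("HL_A")   # antibody (H+L) vs. antigen (A)
iam.set_pack_separated(True)           # repack the separated partners, as for dG_separated
iam.apply(pose)

print("dG_separated:", iam.get_interface_dG())
print("interface delta-SASA:", iam.get_interface_delta_sasa())
```

You can then compare those numbers against the ranges reported for native complexes similar to yours.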

But again, even with the post-filtering, you may need to test a large number of designs in order to find one which works. You may also need to do multiple rounds of computational design and testing, rolling what you learn from the wet-lab testing back into the computational design process to further improve it. (Many of the successful design results went through several rounds, changing the computational process based on what was seen in testing.)


Mon, 2020-11-30 08:24
rmoretti

Hello. First, protein design is notoriously difficult, and one-sided interface design even more so. How many designs did your colleague make and order? Was your colleague redesigning a protein or attempting de novo antibody design (which has still not been done)? We were able to redesign many antibodies, including, most recently, COV1 antibodies to bind and neutralize COV2. I would talk to your colleagues: even for a redesign, you still want around 20 designs per project (at least), and many times you will get weak binding and then need to optimize it further.

Were the computations run on a large cluster? How many decoys were created? Did your colleagues try to redesign every CDR, or were they selective in what they were designing? Did they try every length and cluster, or design within clusters? Are they designing H3? H3 has not been designed through graft-design, so if it's H3, it's also really, really challenging, and personally I would not attempt it without creating hundreds of structures in the lab.

Also, did your colleagues run a pareto-optimal relax on the native structure before design? If not, that would explain why you are seeing such drastic dG values (see the relax sketch below). Finally, what mode did your colleagues run? Did they cut off by total score in addition to dG_separated? These are questions that are important to ask when designing any protein.
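If it helps, here's a rough PyRosetta sketch of relaxing the native complex before design. This is a coordinate-constrained FastRelax in the spirit of the pareto-optimal protocol, not the exact published command line, and the file names are placeholders:

```python
# Hedged sketch: constrained relax of the native complex before RAbD,
# approximating the Nivon et al. pareto-optimal idea with FastRelax.
from pyrosetta import init, pose_from_pdb, get_fa_scorefxn
from pyrosetta.rosetta.protocols.relax import FastRelax
from pyrosetta.rosetta.core.scoring import ScoreType

init("-ex1 -ex2 -use_input_sc")              # extra rotamers, as in the published protocol
pose = pose_from_pdb("native_complex.pdb")   # placeholder input

scorefxn = get_fa_scorefxn()
scorefxn.set_weight(ScoreType.coordinate_constraint, 1.0)  # make the constraints count

fr = FastRelax()
fr.set_scorefxn(scorefxn)
fr.constrain_relax_to_start_coords(True)  # keep the backbone near the crystal coordinates
fr.apply(pose)
pose.dump_pdb("native_relaxed.pdb")
```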

-Jared


Protein Design Laboratory
Institute for Protein Innovation

Mon, 2020-11-30 09:38
jadolfbr

Hello Jared,

Thank you for your reply. I don't know many details of my colleagues' work, but their results made me realize that antibody design can be a real challenge. They also tried to redesign COV1 antibodies to bind and neutralize COV2, but have not been successful so far.

Actually, I am doing de novo antibody design. I have generated 5000 decoys with dG_separated ranging from +5 to -265 REU. People in this forum seem to think that a normal dG_separated should be within -20 to -50 REU, but I have more than 3000 decoys with dG_separated < -50 REU. I cannot tell whether these extremely low dG_separated values make sense, and whether I should experimentally test the top 20 decoys with dG_separated < -150 REU, or the top 20 decoys with dG_separated within -20 to -50 REU. I guess I should also take the total score into consideration. The total score of all the decoys ranges from -835 to 9228, so what is a proper cutoff value for total score? And are there any other important filters that must be considered? I really want to make this project work and need your suggestions.

Best regards


Mon, 2020-11-30 19:05
Sunyp_IM

Honestly, I wrote RAbD for my PhD, have helped guide the redesign of COV1 antibodies to bind and neutralize COV2 (for which we had MANY that work), and I would still not do de novo antibody design without trying a large library with yeast display - and even then, the challenge exists. 5000 decoys is still not quite enough, and 20 designs is too few for both a de novo design and a one-sided interface design. You want 20 designs for a redesign; you would want 200+ for a de novo design. It may still work, but I would be extremely cautious and have low expectations. I don't mean to take the thunder out of your project, but it's just a very, very hard feat right now.

As for dG_separated, that definitely seems a bit on the low end; I've never seen numbers that low. I would double-check the specific residues that appear to drive binding, and make sure that you relaxed both starting structures before starting RAbD. The pareto-optimal protocol by Nivon et al. in PLOS ONE is what I use. And I would not design H3. Maybe sequence design, but definitely not graft-design at the moment, unless you are testing a library with yeast display.

Sorry if this is a bit overwhelming, but I want to temper your expectations, as others have tried to do. The program can technically do what you want - I wrote it that way so that something like this could eventually be attempted - but I also know what has and hasn't actually been done in the field. De novo computational design of antibodies just hasn't been done yet, and given the normal challenges of even redesigning antibodies computationally, you have a very, very large challenge ahead of you that you may want to reconsider.

Mon, 2020-11-30 19:07
jadolfbr

Hi, jadolfbr,

Thanks. I will bear these warnings in mind and not set my expectations too high. But if it does work, it will be a great achievement for the program you wrote.

Since I already have this panel of decoys, I will have to test some of them - maybe 20; testing 200 is too hard in our lab. So the question now is how to pick out the 20 most promising decoys. I plan to filter them by dG_separated and total score, so as to get the 20 decoys with the lowest dG_separated among those with total score < -500 (roughly as in the sketch below). Do you have any suggestions on this?
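In case it's useful, this is roughly how I plan to do the filtering (plain Python; "score.sc" is a placeholder, and it assumes the score file has total_score, dG_separated, and description columns):

```python
# Hedged sketch of the two-filter selection: keep decoys with
# total_score < -500, then take the 20 most favorable by dG_separated.
# Assumes a standard whitespace-delimited Rosetta score file whose
# data lines start with "SCORE:".
with open("score.sc") as f:
    lines = [l.split() for l in f if l.startswith("SCORE:")]

header, data = lines[0], lines[1:]
rows = [dict(zip(header, vals)) for vals in data]

kept = [r for r in rows if float(r["total_score"]) < -500.0]
kept.sort(key=lambda r: float(r["dG_separated"]))   # most negative first

for r in kept[:20]:
    print(r["description"], r["total_score"], r["dG_separated"])
```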

Certainly I will keep trying to find ways to diagnose the very low dG_separated values, but I can assure you that both starting structures were relaxed before starting RAbD.

Best regards.

Mon, 2020-11-30 22:23
Sunyp_IM

Sorry for the slow reply. First, if you are using these for production runs, make sure you use relax as the mintype; the dGs should then be more in line with what a good dG looks like. A colleague and I found a bug in the InterfaceAnalyzerMover this week that has something to do with disulfides and gives wrong dG values when using mintype min. We have some ideas as to where it's coming from, but all of the experimental benchmarking was done using relax as the mintype. It's more expensive computationally, but IMO it gives better results. The new set of designs for COV2 used mintype min to get an idea of the cluster and length space available to the antibody (throwing out any super-low dG anomalies), and then used relax as the mintype afterwards.

As for the designs, the plan sounds fine, though I would be very careful if your designs include H3 designs. If you want to get around the dG issue for now, you can use RosettaScripts to virtualize each chain individually, storing the interface residues and repacking them, and then output each score using SimpleMetrics (a rough sketch of the same idea is below). If you are serious about de novo design, I would also think about getting a yeast display platform up and running; that way you could test thousands of designs for the same cost as testing 20...
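As a very rough illustration of the separate-and-rescore idea (a hand-rolled PyRosetta approximation, not the RosettaScripts/SimpleMetrics protocol described above; the jump number and file name are assumptions you'd need to check against your FoldTree):

```python
# Hedged sketch: approximate dG_separated by scoring the complex,
# translating the partners apart, repacking side chains, and rescoring.
from pyrosetta import init, pose_from_pdb, get_fa_scorefxn
from pyrosetta.rosetta.core.pack.task import TaskFactory
from pyrosetta.rosetta.core.pack.task.operation import RestrictToRepacking
from pyrosetta.rosetta.protocols.rigid import RigidBodyTransMover
from pyrosetta.rosetta.protocols.minimization_packing import PackRotamersMover

init()
pose = pose_from_pdb("design_01.pdb")   # placeholder decoy
scorefxn = get_fa_scorefxn()
bound_score = scorefxn(pose)

# Translate across the antibody-antigen jump (jump 1 is an assumption).
separated = pose.clone()
trans = RigidBodyTransMover(separated, 1)
trans.step_size(500.0)                  # move the partners far apart
trans.apply(separated)

# Repack side chains in the separated state (repack only, no design).
tf = TaskFactory()
tf.push_back(RestrictToRepacking())
task = tf.create_task_and_apply_taskoperations(separated)
PackRotamersMover(scorefxn, task).apply(separated)

print("approx dG_separated:", bound_score - scorefxn(separated))
```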

Sun, 2020-12-06 17:41
jadolfbr