I am trying to run the ddg_monomer high-res protocol on a protein. I am running this on a Linux system with 4 GB of RAM. Immediately after the protocol begins to run, RAM usage gets very high and my computer freezes. It remains frozen indefinitely. Is this normal? Usually when I run other Rosetta protocols, the computer is perfectly usable with the job running in the background.
I am using the default settings for the high-res protocol (row 16 of Kellogg et al.) from the documentation:
https://www.rosettacommons.org/docs/latest/application_documentation/analysis/ddg-monomer
I can post all my scripts and options if required, but maybe there is something more specific I can look at first.
It might help to see what options you're running with to see if there's other issues, but if you're running with the `-restore_pre_talaris_2013_behavior` option, you may want to try adding the `-analytical_etables true` option.
This option is on by default with the newer scorefunctions (talaris, ref), but off in the pre-talaris environment. With recent versions of Rosetta, the amount of memory needed when it is off has grown significantly, so it's worth turning it on.
I only used `-restore_pre_talaris_2013_behavior` with the `minimize_with_cst.static.linuxgccrelease`, but not with ddg_monomer. Here are all the options I am using.
minimize_with_cst
ddg_monomer
Here is an example mutfile I am using:

```
total 4
4
L 13 R
Q 17 E
Q 113 T
S 117 R
```
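For reference, the mutfile format that ddg_monomer expects (as I understand it from the documentation) is: a first line `total N` giving the total number of mutations across all blocks, then, for each block, a line with the number of mutations applied simultaneously in that block, followed by one `WT POS MUT` line per mutation (pose numbering, one-letter amino acid codes). My file above specifies one quadruple mutant; the same four substitutions scored one at a time would instead look like:

```
total 4
1
L 13 R
1
Q 17 E
1
Q 113 T
1
S 117 R
```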
The INPUTPDB file can be downloaded here: https://files.fm/f/eyxjb7pq
The constraint file INPUTCST is here: https://files.fm/u/k5jnvzvu
The option `-analytical_etables true` gives an error with ddg_monomer:
ERROR: Option matching -analytical_etables not found in command line top-level context
Note that the high RAM usage is during the ddg_monomer step.
Sorry, I misremembered the option name. The option is actually `-analytic_etable_evaluation true`.
If you're using score12, you will want to use the -restore_pre_talaris_2013_behavior flag. (Whether you actually want to use score12 rather than a more current scorefunction is another question.) You probably also want to match what you're doing with the pre-minimization with what you're doing for the ddg_monomer minimization step. (So both with score12 and -restore_pre_talaris_2013_behavior, or both with ref2015, etc.)
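A minimal sketch of what I mean by matching the two steps, assuming the score12 route (these are placeholder flags files, not the full Kellogg row 16 option set; with `-restore_pre_talaris_2013_behavior` score12 becomes the default scorefunction, so no explicit weights flag should be needed):

```
# flags common to BOTH the minimize_with_cst step and the ddg_monomer step
-restore_pre_talaris_2013_behavior
-analytic_etable_evaluation true
```

The per-step inputs (PDB, constraint file, mutfile, iterations, etc.) go in each step's own options file; the point is that the scorefunction-affecting flags above should be identical across both steps.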
I updated the options as follows:
It seems to be working now. Thanks.
Does the `-analytic_etable_evaluation` option change the results? I want to use a protocol that is as close as possible to the Kellogg 2011 paper.
There's going to be a *slight* difference between using and not using -analytic_etable_evaluation. When it's false, Rosetta uses an interpolated approximation (that's what's taking up all the memory - the lookup tables). It's close to the underlying function, but has slight variations. It's one of those things where we originally thought it would speed up energy evaluation, but it turned out the speedup is minor and certainly not worth the increase in memory (even before the memory usage exploded).
I would say that the differences are minor, and are comparable with other minor variations introduced due to code updates. If you want to *exactly* replicate the protocol, you really should be using the identical Rosetta version. If all you're looking for is a "functionally comparable" protocol, then I would say that using the -analytic_etable_evaluation option qualifies.
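To illustrate the kind of difference involved (this is a generic sketch, not Rosetta code: a Lennard-Jones-style pair energy standing in for Rosetta's etable terms, whose real functional forms differ):

```python
import numpy as np

def lj(r, sigma=3.4, eps=0.1):
    """Analytic Lennard-Jones pair energy (stand-in for an etable term)."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

# Precomputed lookup table with interpolation -- the strategy used when
# analytic evaluation is off. Rosetta stores such tables for many
# atom-type pairs, which is where the memory goes.
r_grid = np.linspace(1.0, 9.0, 2000)
e_grid = lj(r_grid)

def lj_interp(r):
    """Approximate the energy by linear interpolation into the table."""
    return np.interp(r, r_grid, e_grid)

r = 3.9  # an off-grid distance, in Angstroms
exact = lj(r)
approx = lj_interp(r)
print(f"analytic={exact:.6f}  interpolated={approx:.6f}  "
      f"|diff|={abs(exact - approx):.2e}")
```

The interpolated value tracks the analytic one closely but not exactly, which is why switching the option on can shift results slightly without changing them in any meaningful way.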