# ddg_monomer consumes massive RAM


I am trying to run the ddg_monomer high-resolution protocol on a protein, on a Linux system with 4 GB of RAM. Immediately after the protocol begins to run, RAM usage climbs very high and my computer freezes. It remains frozen indefinitely. Is this normal? Usually when I run other Rosetta protocols, the computer is perfectly usable with the job running in the background.

I am using all the default settings for the high-res protocol (row 16 of Kellogg et al.) from the documentation.

I can post all my scripts and options if required, but maybe there is something more specific I can look at first.

Category:
Post Situation:
Fri, 2018-03-09 17:02
cossio

It might help to see what options you're running with, in case there are other issues, but if you're running with the -restore_pre_talaris_2013_behavior option, you may want to try adding the -analytical_etables true option.

This option is on by default with the newer scorefunctions (talaris, ref), but off in the pre-talaris environment. With recent versions of Rosetta, the amount of memory needed with it off has grown significantly, so it's worth turning it on.
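To illustrate the trade-off being described here, below is a toy Python sketch of tabulated-plus-interpolated energy evaluation versus direct analytic evaluation. The Lennard-Jones function, grid size, and cutoff are illustrative assumptions, not Rosetta's actual etable code; in Rosetta the tables are also built per atom-type pair, which multiplies the memory cost.

```python
import numpy as np

# Toy pairwise energy (Lennard-Jones form) standing in for an etable term.
# sigma/epsilon values are illustrative only.
def lj_energy(r, sigma=3.4, epsilon=0.1):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# "Etable" mode: precompute a dense lookup table once, then interpolate.
# Memory scales with the number of samples (and, in Rosetta, with the
# number of atom-type pairs), which is what balloons RAM usage.
r_grid = np.linspace(1.0, 9.0, 200_000)   # 9 A cutoff, like -fa_max_dis 9.0
table = lj_energy(r_grid)

def tabulated_energy(r):
    return np.interp(r, r_grid, table)

# "Analytic" mode: just call the function; no table, no extra memory.
r = 3.8
print(f"analytic : {lj_energy(r):.6f}")
print(f"tabulated: {tabulated_energy(r):.6f}")
print(f"table size for ONE pair type: {table.nbytes / 1e6:.1f} MB")
```

The two modes agree to within the interpolation error of the grid; the difference is memory, not correctness.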

Mon, 2018-03-12 13:48
rmoretti

I only used -restore_pre_talaris_2013_behavior with the minimize_with_cst.static.linuxgccrelease, but not with ddg_monomer. Here are all the options I am using.

minimize_with_cst

```sh
minimize_with_cst.static.linuxgccrelease \
    -in:file:l "pre-minimize-pdb-list.txt" \
    -in:file:fullatom \
    -ignore_unrecognized_res \
    -fa_max_dis 9.0 \
    -database "$ROSETTA_HOME/main/database/" \
    -ddg::harmonic_ca_tether 0.5 \
    -score:weights pre_talaris_2013_standard \
    -restore_pre_talaris_2013_behavior \
    -ddg::constraint_weight 1.0 \
    -ddg::out_pdb_prefix min_cst_0.5 \
    -ddg::sc_min_only false \
    -score:patch "$ROSETTA_HOME/main/database/scoring/weights/score12.wts_patch" \
    -ignore_zero_occupancy false \
    > mincst.log
```

ddg_monomer

```sh
ddg_monomer.static.linuxgccrelease \
    -in:file:s "$INPUTPDB" \
    -ddg::mut_file "$MUTFILE" \
    -ddg:weight_file soft_rep_design \
    -ddg:minimization_scorefunction score12 \
    -database "$ROSETTA_HOME/main/database" \
    -fa_max_dis 9.0 \
    -ddg::iterations "$ITER" \
    -ddg::dump_pdbs true \
    -ignore_unrecognized_res \
    -ddg::local_opt_only false \
    -ddg::min_cst true \
    -constraints::cst_file "$INPUTCST" \
    -ddg::suppress_checkpointing true \
    -in::file::fullatom \
    -ddg::mean false \
    -ddg::min true \
    -ddg::sc_min_only false \
    -ddg::ramp_repulsive true \
    -mute all \
    -unmute core.optimization.LineMinimizer \
    -ddg::output_silent true
```

Here is an example mutfile I am using:

```
total 4
4
L 13 R
Q 17 E
Q 113 T
S 117 R
```

The INPUTPDB file can be downloaded here: https://files.fm/f/eyxjb7pq

The constraints file INPUTCST is here: https://files.fm/u/k5jnvzvu

Mon, 2018-03-12 14:00
cossio

The option -analytical_etables true gives an error with ddg_monomer:

```
ERROR: Option matching -analytical_etables not found in command line top-level context
```

Note that the high RAM usage is during the ddg_monomer step.

Mon, 2018-03-12 14:37
cossio

Sorry, I mis-remembered the option name. The option is actually -analytic_etable_evaluation true.

If you're using score12, you will want to use the -restore_pre_talaris_2013_behavior flag. (Whether you actually want to use score12 rather than a more current scorefunction is another question.)

You probably also want to match what you're doing in the pre-minimization with what you're doing in the ddg_monomer minimization step. (So both with score12 and -restore_pre_talaris_2013_behavior, or both with ref2015, etc.)

Mon, 2018-03-12 14:58
rmoretti

I updated the options as follows:

```sh
minimize_with_cst.static.linuxgccrelease \
    -in:file:l "pre-minimize-pdb-list.txt" \
    -in:file:fullatom \
    -ignore_unrecognized_res \
    -fa_max_dis 9.0 \
    -database "$ROSETTA_HOME/main/database/" \
    -ddg::harmonic_ca_tether 0.5 \
    -score:weights pre_talaris_2013_standard \
    -restore_pre_talaris_2013_behavior \
    -ddg::constraint_weight 1.0 \
    -ddg::out_pdb_prefix min_cst_0.5 \
    -ddg::sc_min_only false \
    -score:patch "$ROSETTA_HOME/main/database/scoring/weights/score12.wts_patch" \
    -ignore_zero_occupancy false \
    -analytic_etable_evaluation true \
    > mincst.log
```

```sh
ddg_monomer.static.linuxgccrelease \
    -in:file:s "$INPUTPDB" \
    -ddg::mut_file "$MUTFILE" \
    -ddg:weight_file soft_rep_design \
    -ddg:minimization_scorefunction pre_talaris_2013_standard \
    -restore_pre_talaris_2013_behavior \
    -ddg::minimization_patch "$ROSETTA_HOME/main/database/scoring/weights/score12.wts_patch" \
    -database "$ROSETTA_HOME/main/database" \
    -fa_max_dis 9.0 \
    -ddg::iterations "$ITER" \
    -ddg::dump_pdbs true \
    -ignore_unrecognized_res \
    -ddg::local_opt_only false \
    -ddg::min_cst true \
    -constraints::cst_file "$INPUTCST" \
    -ddg::suppress_checkpointing true \
    -in::file::fullatom \
    -ddg::mean false \
    -ddg::min true \
    -ddg::sc_min_only false \
    -ddg::ramp_repulsive true \
    -unmute core.optimization.LineMinimizer \
    -ddg::output_silent true \
    -analytic_etable_evaluation true
```
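As an aside, the mutfile format used above (a "total N" header followed by blocks that give a mutation count and then "WT position MUT" lines) is easy to sanity-check before a long run. The following parser is a sketch based on the format as it appears in this thread, not official Rosetta code:

```python
# Sanity-checker for a ddg_monomer-style mutfile:
#   total 4
#   4
#   L 13 R
#   ...
# Format inferred from this thread; hypothetical helper, not part of Rosetta.
def parse_mutfile(text):
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    assert lines[0].startswith("total "), "mutfile must start with 'total N'"
    total = int(lines[0].split()[1])
    blocks, i, seen = [], 1, 0
    while i < len(lines):
        n = int(lines[i])  # number of mutations in this block
        block = []
        for wt, pos, mut in (lines[j].split() for j in range(i + 1, i + 1 + n)):
            block.append((wt, int(pos), mut))  # e.g. ('L', 13, 'R')
        blocks.append(block)
        seen += n
        i += n + 1
    assert seen == total, f"header says {total} mutations, found {seen}"
    return blocks

example = """total 4
4
L 13 R
Q 17 E
Q 113 T
S 117 R
"""
print(parse_mutfile(example))
```

A mismatch between the "total" header and the listed mutations is a common way to get confusing ddg_monomer behavior, so failing fast here is cheap insurance.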



It seems to be working now. Thanks.

Thu, 2018-03-15 06:03
cossio

Does the -analytic_etable_evaluation option change the results? I want to use a protocol that is as close as possible to the Kellogg 2011 paper.

Thu, 2018-03-15 06:04
cossio

There's going to be a *slight* difference between using and not using -analytic_etable_evaluation. When it's false, Rosetta uses an interpolated approximation (the lookup tables are what's taking up all the memory). It's close to the underlying function, but has slight variations. It's one of those things where we originally thought it would speed up energy evaluation, but it turned out the speedup is minor and certainly not worth the increase in memory (even before the memory usage exploded).

I would say that the differences are minor, and comparable to other small variations introduced by code updates. If you want to *exactly* replicate the protocol, you really should be using the identical Rosetta version. If all you're looking for is a "functionally comparable" protocol, then I would say that using the -analytic_etable_evaluation option qualifies.
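The size of the table-versus-analytic discrepancy can be estimated in the same toy setting: sample the interpolated values midway between grid points, where linear interpolation error is largest. The function and grid spacing here are illustrative assumptions, not Rosetta's actual etable binning, but they show why the deviation is tiny compared to typical ddG magnitudes.

```python
import numpy as np

# Toy Lennard-Jones term standing in for an etable energy (illustrative
# parameters, not Rosetta's).
def lj_energy(r, sigma=3.4, epsilon=0.1):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Build a moderately coarse table, then probe midway between grid points,
# where linear interpolation deviates most from the analytic function.
grid = np.linspace(2.5, 9.0, 4_096)
table = lj_energy(grid)
midpoints = 0.5 * (grid[:-1] + grid[1:])
err = np.abs(np.interp(midpoints, grid, table) - lj_energy(midpoints))

print(f"max interpolation error: {err.max():.2e} energy units")
```

In this sketch the worst-case deviation is on the order of 1e-4 energy units, which is the sense in which the two evaluation modes are "functionally comparable".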

Mon, 2018-03-19 12:12
rmoretti