I'm trying to use the make_fragments.pl script, but I do not know where to find the launch_on_slave_strict.py script needed to run it in parallel, and launching the job serially doesn't seem feasible. Please see:
# This is an optional script that you can provide to launch picker and pdb2vall jobs on
# free nodes to run them in parallel. If left as an empty string, jobs will run serially.
# Make sure your script will run the command passed to it on another machine. If it is
# set up to run parallel jobs on the local machine, the local machine may run into CPU
# and memory issues when running multiple picker and pdb2vall jobs in parallel.
my $SLAVE_LAUNCHER = "/work/robetta/src/rosetta_server/python/launch_on_slave_strict.py";
Does someone have it?
Or is there another way to get around it?
Here is the original script.
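For context, the quoted comment says the launcher just needs to "run the command passed to it on another machine". Assuming the launcher simply receives the job command as its arguments (this is an assumption; the real Robetta script is not public here), a minimal, hypothetical stand-in could look like this:

```python
#!/usr/bin/env python
"""Hypothetical minimal stand-in for the missing launch_on_slave_strict.py.

This is only a sketch, not the real Robetta script: it runs the command
that make_fragments.pl passes to it on the LOCAL machine. To get real
parallelism you would replace the local call with a dispatch to a free
node (e.g. via ssh or your cluster scheduler); as the comment in
make_fragments.pl warns, running many picker/pdb2vall jobs locally can
cause CPU and memory problems.
"""
import subprocess
import sys


def launch(argv):
    # Run the command handed to us and propagate its exit status back
    # to the caller (make_fragments.pl checks the return code).
    return subprocess.call(argv)


if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(launch(sys.argv[1:]))
```

Point $SLAVE_LAUNCHER at a script like this only if you understand the trade-off: it satisfies the interface but gives no actual multi-node speedup until the local call is replaced with a remote dispatch.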
What about running it serially isn't feasible? I don't think fragment generation for a single job takes a huge amount of time (maybe a couple of hours at most?), and independent jobs can run independently without a wrapping script (I think).
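Concretely, per the comment in your quoted snippet, running serially just means leaving that variable as an empty string in make_fragments.pl:

```perl
# Empty string => picker and pdb2vall jobs run serially (no launcher needed).
my $SLAVE_LAUNCHER = "";
```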
That said, I've always found make_fragments.pl very difficult to work with, and I always use Robetta (http://robetta.bakerlab.org/) or the fragment picker (http://www.rosettacommons.org/manuals/archive/rosetta3.4_user_guide/dc/d...) instead.
I also find make_fragments.pl very hard to work with, but how is make_fragments.pl different from the fragment picker in terms of generating fragment files for structure modeling?
Robetta is a webserver that used to run make_fragments.pl itself, and probably still does.
make_fragments.pl is an ancient perl script for picking fragments. I think it relies on some fortran code (nnmake) along the way.
Rosetta3 has a fragment picker, which is C++ code that replaces make_fragments.pl. It's easier to work with and has a usable demo in the demos folder.
make_fragments.pl attempts to call external secondary structure prediction code; the fragility of the links to those other applications is 95% of the problem with using it. The new fragment picker just requires that you have the ss predictions already done and reads them in from files. It's much more robust, but it also requires you to do more work beforehand.
The new fragment picker lets you set weights, etc, during picking, to adapt the picking algorithm to your problem. I don't think make_fragments.pl does that.