Is it normal for cluster.py to occupy so much swap space? I am following this tutorial, and I had problems with the job filling the entire swap, at which point the program stopped running. I increased the swap to 54 GB, and it is still using almost all of it.
It has now been running for more than 22 hours, and it looks like it will fill all of the memory again and crash.
The command I am running is:
python /home/jrcf/rosetta/tools/protein_tools/scripts/clustering.py --silent=cluster_all.out --rosetta=/home/jrcf/rosetta/main/source/bin/cluster.linuxgccrelease --database=/home/jrcf/rosetta/main/database/ --options=cluster.options cluster_summary.txt cluster_histogram.txt
The old cluster application is known to perform miserably when handed a large number of structures: it keeps all of them in memory at once. So, yes, this is normal. You should filter down to just the top few percent of structures by energy and cluster only those.
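A minimal sketch of that pre-filtering step, assuming the standard silent-file layout (a SEQUENCE: line and a SCORE: column-header line, followed by per-decoy blocks that each begin with a SCORE: line whose second field is the total score and whose last field is the decoy tag). The function name and the 5% default are illustrative, not part of the Rosetta tools; check the column order in your own silent file before relying on this:

```python
def filter_silent(in_path, out_path, keep_fraction=0.05):
    """Write a silent file containing only the lowest-energy decoys."""
    with open(in_path) as f:
        lines = f.readlines()

    header, scores, blocks = [], {}, {}
    tag = None
    for line in lines:
        fields = line.split()
        if line.startswith("SCORE:") and len(fields) > 2:
            try:
                score = float(fields[1])  # 2nd field assumed to be total score
            except ValueError:
                header.append(line)       # the SCORE: column-name header line
                continue
            tag = fields[-1]              # decoy tag assumed to be last field
            scores[tag] = score
            blocks[tag] = [line]
        elif tag is None:
            header.append(line)           # SEQUENCE: line etc. before any decoy
        else:
            blocks[tag].append(line)      # coordinate lines for current decoy

    # Keep the lowest-scoring fraction (at least one decoy).
    n_keep = max(1, int(len(scores) * keep_fraction))
    keep = sorted(scores, key=scores.get)[:n_keep]

    with open(out_path, "w") as out:
        out.writelines(header)
        for t in keep:
            out.writelines(blocks[t])
    return keep
```

You would then point clustering.py at the filtered file (e.g. the output written to a new .out file) instead of cluster_all.out, so the cluster application only ever loads the small, low-energy subset.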
You could also look into Calibur for clustering instead. I know Calibur was recently added to Rosetta, but I don't know when that version will be released.
I am trying to use Calibur now.