I have a question about how atom coordinates are determined for predicted structures during the ab initio folding protocol, and how they relate to the PDB extracted from the output silent file.
I am currently working on developing a new score term, called ms_labeling, to be added to Rosetta. This term is based on experimental mass spec covalent labeling data. In short, the score is calculated by counting the number of neighboring atoms within a certain radius of each LYS. This atom count is compared against an input file containing the experimental data, and each LYS is scored either 0 or -1 based on its agreement with the input data (this is a deliberately rough model). I have working code that correctly counts the number of neighboring atoms, and I have implemented it in Rosetta, run ab initio folding, and calculated total Rosetta scores including my new term.

The problem I am running into is that for a few of the predicted structures, if I extract the PDB from the silent file and rescore the structure using either the ./score.linuxreleasegcc application or a simple application that I wrote, the ms_labeling score term gives different results. I don't understand how this is possible, since the term is based strictly on the atom count.
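For concreteness, the count-and-compare logic I described might be sketched like this (a minimal standalone illustration; `Atom`, `count_neighbors`, and `score_lys` are hypothetical names, not the actual Rosetta implementation or my real code):

```cpp
#include <cmath>
#include <vector>

// Minimal stand-in for an atom's Cartesian coordinates.
struct Atom { double x, y, z; };

// Count atoms strictly within `radius` of `center`, skipping the
// center atom itself (distance zero).
int count_neighbors(const Atom& center, const std::vector<Atom>& atoms, double radius) {
    int count = 0;
    const double r2 = radius * radius;
    for (const Atom& a : atoms) {
        const double dx = a.x - center.x;
        const double dy = a.y - center.y;
        const double dz = a.z - center.z;
        const double d2 = dx * dx + dy * dy + dz * dz;
        if (d2 > 0.0 && d2 <= r2) ++count;
    }
    return count;
}

// Score one LYS: -1 (a bonus) if the observed neighbor count agrees
// with the experimental expectation within some tolerance, else 0.
int score_lys(int observed_count, int expected_count, int tolerance) {
    return std::abs(observed_count - expected_count) <= tolerance ? -1 : 0;
}
```

The total ms_labeling score would then just be the sum of `score_lys` over all LYS residues listed in the input file. Since nothing here depends on anything but interatomic distances, any score difference between the in-protocol pose and the extracted PDB should imply the coordinates themselves differ.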
I'm wondering whether there is some subtle conformational change between the structure that gets scored during the ab initio calculation and the structure that is extracted from the output silent file. I've tried to find an answer to this, but haven't had much luck. Any insight would be very much appreciated!