AMBER Archive (2006)

Subject: RE: AMBER: checkoverlap doubt [was: Post-MD minimization]

Date: Tue Jun 20 2006 - 08:21:11 CDT

Thanks a lot, Dr. Ross and Dr. Carlos, for your help regarding the
minimization protocol. I'll implement your suggestions.

I have one more question. I would like to identify the structures from the MD
trajectory that have short contacts. When I use ptraj's checkoverlap command
to check for overlaps between atoms, how do I get the list of "flagged"
atoms? The output of my trajin script only says:
"PTRAJ: Successfully read in 10000 sets and processed 10000 sets.
       Dumping accumulated results (if any)"
Where are these results dumped, and how do I save them? (Unlike some other
commands, checkoverlap has no "out" option.)
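One possible workaround, sketched below under the assumption that checkoverlap reports flagged atom pairs on ptraj's standard output rather than to a named file: redirect ptraj's stdout to a log and filter it. The mask, file names, and grep pattern are illustrative assumptions, and the AMBER commands are left commented since they need an installation.

```shell
# Sketch only: since checkoverlap has no "out" keyword, capture
# everything ptraj prints and filter for the flagged pairs.
# Input file contents are assumptions based on the poster's setup.
cat > check_overlap.in <<'EOF'
trajin cga_gb_prod10ns_trj.crd
checkoverlap *
EOF
# Run ptraj, keeping its full output (needs an AMBER installation):
# $AMBERHOME/exe/ptraj prmtop check_overlap.in > overlap.log 2>&1
# grep -i overlap overlap.log > flagged_atoms.txt
```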

Thanks again.

 On Mon, 19 Jun 2006, Ross Walker wrote:

> > the production run (10000 structures). I used the following input script
> > to minimize each frame:
> > maxcyc = 20500, ncyc = 1000,
> > 1. The minimization is going very, very slowly on an Altix machine, using
> > 4 processors. In 48 wall-clock hours after submitting the job, only ~4600
> > frames (out of 10000) have been minimized. There are no other jobs
> > running on the machine. Could there be a problem with the system that
> > makes the minimization so slow?
> You don't say how long your 10ns simulation took, but at a 2fs timestep
> this 10ns simulation represents 5,000,000 MD steps. In your minimisation
> you are asking for 20,500 steps per frame; if every one of your 10,000
> structures took the full 20,500 steps, that is a total of 205,000,000
> steps, so you can expect the complete set of minimisations to take around
> 41 times longer than your 10ns MD simulation did. Is this roughly what you
> are seeing? If so, then nothing is wrong.
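The arithmetic behind that factor of 41 can be checked directly; a minimal sketch using the step counts quoted above:

```shell
# 10ns at a 2fs timestep = 5,000,000 MD steps.
# Worst case: 10,000 frames x 20,500 minimisation steps each.
md_steps=5000000
min_steps=$((20500 * 10000))
ratio=$((min_steps / md_steps))
echo "worst-case minimisation cost: ${ratio}x the MD run"
```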
> One might also question why you want to minimise the 10,000 structures at
> all. If you are looking to locate specific minima on the energy surface, is
> a gap of only 1ps between structure samples appropriate? And is it
> necessary for you to run such a huge number of minimisation steps? Do you
> really need to get that close to the minimum in each case?
> > 2. I would like to get the coordinates of the minimized structures. How
> > do I specify this in the sander.MPI command? Because, with reference to
> > previous discussions on the AMBER reflector, -x is the flag for reading
> > in the trajectory file. I also checked the trajene files of the test set
> > provided with AMBER, but in this set too I could not figure out where the
> > minimized structures are stored. In the minimization run that I am doing,
> > even the .rst file has not been created.
> The coordinates from a minimization get written to the restart file, as
> specified with the -r flag, so in your case:
> mpirun -np 4 $AMBERHOME/exe/sander.MPI -O -i -p
> ../../prmtop -o cga_gb_fnl_min.out -c ../cga_gb_prod10ns.rst -x
> ../cga_gb_prod10ns_trj.crd -r cga_gb_10ns_fmin.rst > out_cga_gb_fnl_min
> they go to cga_gb_10ns_fmin.rst. If you use the same name for all 10,000
> minimisations, that file just gets overwritten on every run, so you only
> keep the coordinates from the last calculation that ran. It is probably
> blank because the run was still in progress when you looked at it. (-x is
> not required in minimization.)
> Note also that by specifying -c ../cga_gb_prod10ns.rst as the input
> structure for the minimisation, assuming you used the same rst file in each
> case, you will have successfully run the same minimisation 10,000 times...
> There is no option to read frames from the coordinate file in minimisation.
> You will need to use ptraj to split your mdcrd file into 10,000
> sequentially numbered rst files, then script the minimisations so that each
> run picks up the next sequentially numbered input rst file and writes a
> unique (probably sequential) -r rst file. You can then post-process all the
> rst files produced by the minimisations with ptraj or ambpdb to obtain pdb
> files if you need them.
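The split-then-script workflow described above might look like the sketch below. The file names, frame count, and minimisation input `min.in` are assumptions; the AMBER invocations themselves are left commented since they need an installation. (ptraj's `trajout <name> restart` writes one numbered restart per frame: `<name>.1`, `<name>.2`, ...)

```shell
# Step 1: split the trajectory into sequentially numbered restart files.
cat > split.in <<'EOF'
trajin cga_gb_prod10ns_trj.crd
trajout frame restart
EOF
# $AMBERHOME/exe/ptraj ../../prmtop split.in

# Step 2: generate a driver that minimises each frame into its own
# uniquely numbered restart and output file.
cat > run_minimisations.sh <<'EOF'
#!/bin/sh
for i in $(seq 1 10000); do
  mpirun -np 4 $AMBERHOME/exe/sander.MPI -O -i min.in -p ../../prmtop \
      -c frame.$i -r min_frame.$i.rst -o min_frame.$i.out
done
EOF
chmod +x run_minimisations.sh
```

The resulting min_frame.*.rst files can then be fed to ambpdb or ptraj if pdb files are needed.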
> All the best
> Ross
> Ross Walker
> | HPC Consultant and Staff Scientist |
> | San Diego Supercomputer Center |
> | Tel: +1 858 822 0854 |
> | PGP Key available on request |
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
> -----------------------------------------------------------------------
> The AMBER Mail Reflector
> To post, send mail to
> To unsubscribe, send "unsubscribe amber" to
