AMBER Archive (2005)

Subject: Re: AMBER: GB/SA/LD parallel sander

From: Carlos Simmerling (carlos.simmerling_at_stonybrook.edu)
Date: Wed Jun 08 2005 - 15:09:59 CDT


here are my opinions...

Lwin, ThuZar wrote:

>Dear Amber users,
> I am doing simulation on a homo tetramer (2116 atoms) using Generalized Born solvation model (igb=5) with surface area. I use Langevin Dynamics to control the system temperature and amber 8 sander compiled for parallel processors. I run multiple simulations, each for 5 ns. Most of my simulations behaved as expected except for one run, in which the rmsd analysis showed significant deviation from my reference crystal structure. I tried to investigate the cause of the problem and found that if I take away one of the three (igb=5, Langevin Dynamic, Parallel Computing), the problem seems to have gone away. I am wondering what may have gone wrong in this particular simulation.
>
>
simulations on that time scale may be poorly converged and may also show
different time-dependent behavior for each run. if you change the random
number seed you may get different behavior. this means that LD/GB/parallel
does not have to be the source of the change even if the problem disappears
when you change one of them; it is just a different simulation. if those
caused the problem, wouldn't you expect to see it in more of the runs?
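One way to check seed dependence is to repeat the run with a different random seed. A minimal sketch of the &cntrl change, assuming the ig keyword (the random-number seed used by the Langevin thermostat in sander; the value 12345 below is just an arbitrary example):

```
 &cntrl
   ...                           ! keep your other settings as posted
   ntt = 3, gamma_ln = 0.5,
   ig = 12345,                   ! example: a different seed for each repeat run
 &end
```

If the kinking appears in some seeds and not others, that points to sampling/convergence rather than a bug in LD/GB/parallel.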

> When I looked at the structure at the end of this simulation, the helix in each monomer is forming a kink in similar fashion.
>
it is indeed interesting that ALL of the monomers changed in this run.
it sounds like you might want to do some more analysis of the energies,
etc., to investigate this.

>I wonder if each of the processors uses the same sequence of random numbers, causing the structural change in each of the four monomers to be very similar.
>
the random numbers are generated so that the same sequence is obtained
regardless of the number of CPUs used. it will not affect the results;
the test cases should give the same output no matter how many CPUs you use.

>Here is my input file for production run.
>
> &cntrl
> imin = 0,
> ntx = 5, irest = 1,
> ntpr = 500, ntwx = 500, ntwr = 50000,
> ntf = 2, ntb = 0, igb = 5, gbsa = 1, rgbmax = 30, saltcon=0.1,
> cut = 30.,
> nstlim = 2500000, dt = 0.002, nscm = 1000, nrespa = 1,
> temp0 = 300, tempi = 300, ntt = 3, gamma_ln=0.5,
> ntc = 2, tol = 0.00001,
> &end
>
>
>
the gamma value is pretty weak. also, using a cutoff (even a long one)
may not be the best approach. for systems this size, a PME run with
explicit solvent may actually run faster. I also tend to get better
stability with GB using dt=0.001 and nrespa=2.
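A minimal sketch of those suggested changes applied to your production input (nstlim is doubled here only as an assumption, to keep the same 5 ns length with the smaller time step; the gamma_ln value is an illustrative stronger coupling, not a tested recommendation for your system):

```
 &cntrl
   imin = 0, ntx = 5, irest = 1,
   ntpr = 500, ntwx = 500, ntwr = 50000,
   ntf = 2, ntb = 0, igb = 5, gbsa = 1, rgbmax = 30, saltcon = 0.1,
   cut = 30.,
   nstlim = 5000000, dt = 0.001, nscm = 1000, nrespa = 2,  ! smaller step, RESPA with GB
   temp0 = 300, tempi = 300, ntt = 3, gamma_ln = 2.0,      ! stronger collision frequency
   ntc = 2, tol = 0.00001,
 &end
```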

GB runs tend to be much more sensitive to careful equilibration than
runs in explicit solvent (especially with the low viscosity that you use).
That may be one source of the changes that you see (or perhaps it
is "real"; we don't know enough about your system to be able to say).
Maybe this is due to a difference between the crystal and solution
structures? It sounds like a challenging research problem.
overall, I don't see anything really wrong with the inputs.

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu