AMBER Archive (2008)
Subject: Re: AMBER: comparison of MD trajectories recorded with pmemd and sander
From: Robert Duke (rduke_at_email.unc.edu)
Hi Therese,
I often use sander for equilibration and pmemd for production, and consider this homogeneous. I think that is fine. The differences between the programs are comparable to the differences one would expect between different architectures or different levels of parallelism. The same would be true for running the same force field with another program, assuming all other things match (thermostat/barostat, etc.) and that the energies/forces have been validated to match sander.
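One quick way to do that validation for a given system (a minimal sketch; the input and file names are placeholders, and the grep pattern assumes the usual mdout energy block):

  # run a few steps with each engine from the same inputs
  mpirun -np 4 ${AMBERHOME}/exe/sander.MPI -O -i short.in -p prmtop -c start.rst -o sander.out
  mpirun -np 4 ${AMBERHOME}/exe/pmemd.MPI -O -i short.in -p prmtop -c start.rst -o pmemd.out
  # the first Etot/EKtot/EPtot lines should agree to near machine precision
  grep -m 1 "Etot" sander.out
  grep -m 1 "Etot" pmemd.out

If the step-1 energies agree, a force/energy mismatch between the two builds is unlikely to be a concern when mixing them in one protocol.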
On Wed, Dec 17, 2008 at 9:04 AM, Thérèse Malliavin <terez_at_pasteur.fr> wrote:
Dear Prof. Duke,
I am sorry for making you answer again questions you have already discussed in the past on the AMBER discussion list.
But I am concerned by the following problem. Molecular modeling studies are often based on the comparison of MD trajectories run under several conditions. In that way, two sander trajectories are recorded
Also, if one uses an additional trajectory recorded with CHARMM, GROMACS or NAMD using the AMBER force field, will the pmemd trajectory be "closer" to the sander trajectory than the CHARMM, GROMACS or NAMD trajectory?
If two trajectories are recorded with pmemd and sander starting from the same input, should we consider that they are no more different than
Another question: suppose that a trajectory was recorded using sander and pmemd alternately over different time intervals, in the following way: some ns with pmemd, then restart keeping velocities
I am sorry for insisting on these questions, but they are important for planning my future calculations. I hope I am not wasting too much of your time. Also, I realize that it is probably difficult to answer these questions except by doing tests on each studied system, but I am interested in your opinion on these points.
Best regards,
Therese Malliavin
On Tue, 16 Dec 2008, Robert Duke wrote:
Okay, this has been discussed a lot. PMEMD should replicate sander results for a couple of hundred steps at least, unless you have an unbelievably bad starting configuration with a couple of atoms on top of each other (in which case some of the force gradients are huge and the simulation is bad anyway).

However, the thing with MD is that there are on the order of millions, if not billions, of calculations per step, including additions, and addition of floating point numbers on computers is not truly associative - the order in which the additions are performed DOES matter, due to truncation in the floating point representation of the number. So if you have an algorithm that is different AT ALL, even in logically insignificant ways, there will be a rounding error, and due to the nature of MD this rounding error grows rather quickly. The main sources of difference between pmemd and sander are probably the following: 1) a different splining function for the erf() function in pmemd for some implementations (there is an optimization, and pmemd is actually more accurate than sander); 2) workload distribution differences when running in parallel (which affect which force additions occur within the limited precision of a 64 bit floating point number); and 3) differences in the order of force additions arising from differences in calculation and communication order.

The thing to note about rounding error: we are talking about a loss in precision down around 1e-17, I believe - rather small. The erf() splining errors are probably closer to 1e-11 - probably the lowest-precision transcendental we have - but the other transcendental functions are probably between these two numbers in precision (rough guess, I have not looked recently, and it will be machine-dependent).

Now, all this junk does not really matter, because your calculation is probably off by at least 1e-5 (actually much worse) given the precision of the force field parameterization, the fact that Coulomb's law does not really get the electrostatics just right, the fact that (substitute here the next force term generator) does not get its term just right, and so on. And the standard justification for not being disturbed by all this: the different errors just mean that you sample different parts of phase space, and if you run long enough, you will get it all (this last point is why I have labored so long to make pmemd fast). Run your system on some other software and you will see some more dramatic differences in phase space sampling... Heck, just change the cutoffs a bit, the fft grid densities, etc.

I have gone on and on about this stuff for the last several years on the amber reflector (see ambermd.org for links), probably hitting different high and low points - perhaps worth going back to look over if you want the complete discussion. I always jump on these questions, but I am sort of answering for Ross here because I am 3 hrs closer to Europe and he is hopefully still asleep ;-)
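To make the non-associativity concrete, here is a minimal sketch at the shell (using awk's double-precision arithmetic; the constants are chosen purely to make the truncation visible):

  # (0.1 + 1e16) - 1e16 loses the 0.1 to truncation,
  # while 0.1 + (1e16 - 1e16) keeps it exactly
  awk 'BEGIN { printf "%.17g\n", (0.1 + 1e16) - 1e16;
               printf "%.17g\n", 0.1 + (1e16 - 1e16) }'

The first line prints 0 and the second prints 0.1 - the same three numbers added in a different order. Multiply that by millions of additions per step and divergence between two differently-ordered runs is inevitable.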
----- Original Message -----
From: "Thérèse Malliavin" <terez_at_pasteur.fr>
Hi Ross,
Thank you for your mail. Finally, I tried to use AMBER 10 in place of
Thank you for your help,
Best regards,
Therese
On Mon, 15 Dec 2008, Ross Walker wrote:
Hi Therese,
First thing to check: PMEMD when built in parallel (which I assume you did)
Also I would make sure you do the following to run cleanly in your script:
export AMBERHOME=/foo/bar/amber10
Then you can nohup the entire script. You should probably make sure you kill
export DO_PARALLEL='mpirun -np 4'
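Putting the pieces above together, the top of such a script might look like this (the paths are placeholders, and the parallel test target is from memory of the Amber 10 layout, so check your test Makefile):

  export AMBERHOME=/foo/bar/amber10
  export PATH=$AMBERHOME/exe:$PATH
  # validate the parallel build once before any production runs
  export DO_PARALLEL='mpirun -np 4'
  cd $AMBERHOME/test && make test.parallel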
Good luck,
-----Original Message-----
Dear AMBER Netters,
I have a question about the use of PMEMD. It is probably a trivial
I am doing the parallel calculations with sander.MPI using a lamd daemon
. /Bis/shared/centos-3_x86_64/etc/custom.d/amber9_intel8.1_lam-
before starting the AMBER calculations. The typical command line for
mpirun -np 4 ${AMBERHOME}/exe/sander.MPI -O -i mdr1.in -o mdr1.out -inf
But if I replace sander.MPI with pmemd.MPI in the command line:
mpirun -np 4 ${AMBERHOME}/exe/pmemd.MPI -O -i mdr1.in -o mdr1.out -inf
I get an error saying that lamboot was not started.
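(For reference, LAM/MPI wants its daemons booted before mpirun will launch anything; the usual sequence, with hostfile standing in for a boot schema file, is:

  lamboot -v hostfile    # start the lamd daemons on the listed nodes
  mpirun -np 4 ${AMBERHOME}/exe/pmemd.MPI -O -i mdr1.in -o mdr1.out ...
  lamhalt                # shut the daemons down afterwards

Since sander.MPI runs fine with the same setup, the daemons are presumably up, which makes me wonder whether pmemd.MPI was linked against the same LAM installation.)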
I am trying to do these calculations on a 64-bit, 8-processor Linux machine,
Also, I am only using features which should exist in PMEMD according to
Do you have any idea what I could check, or where to find information to fix
Thank you in advance for your help,
Therese Malliavin