AMBER Archive (2008)

Subject: RE: AMBER: SANDER and PMEMD with openmpi. Failure 'make test.pmemd'

From: Ross Walker (ross_at_rosswalker.co.uk)
Date: Mon Jun 02 2008 - 12:52:33 CDT


Hi Francesco

> Great! Finally pmemd running with openmpi. All test PASSED, no diff file
> present.

Good to hear.
 
> I started chunk 6 from chunk 1-5 already carried out (for a large system
> in a POPC membrane) with sander.MPI using the command
>
> mpirun -np 8 $AMBERHOME/exe/pmemd -O -i prod.in ....(as with sander)

This should be fine - PMEMD is designed to be a drop-in replacement for
sander for a specific subset of functionality. If it doesn't support an
option you selected it should quit with an error message. Bob has put in an
incredible amount of work to make PMEMD completely compatible with sander,
so it is perfectly valid to use any combination of these codes, i.e. run
PMEMD, then restart with sander.MPI, then restart with PMEMD, etc. Quite why
someone would want to do that I don't know, but it is formally correct to do
so if one chooses.
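
For what it's worth, a minimal sketch of such a mixed restart chain (the
prmtop and restart filenames here are just placeholders, not taken from your
run) would be:

  # chunk 6 with PMEMD, restarting from the chunk 5 restart file
  mpirun -np 8 $AMBERHOME/exe/pmemd -O -i prod.in -p prmtop \
      -c prod5.rst -r prod6.rst -x prod6.mdcrd -o prod6.out

  # chunk 7 with sander.MPI, restarting from the PMEMD restart file
  mpirun -np 8 $AMBERHOME/exe/sander.MPI -O -i prod.in -p prmtop \
      -c prod6.rst -r prod7.rst -x prod7.mdcrd -o prod7.out

Since your prod.in has irest=1, ntx=5, each restart file carries the
coordinates and velocities forward, so the trajectory continues smoothly
across the switch between codes.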

> where prod.in reads:
>
> prod protein & ligand & membrane box80x80
> &cntrl
> imin=0, irest=1, ntx=5,
> nstlim=333334, dt=0.0015,
> cut=10, ntb=2, ntp=1, taup=2.0,
> ntc=2, ntf=2,
> ntpr=1000, ntwx=1000,
> ntt=3, gamma_ln=2.0,
> temp0=300.0,
> /
>
> It is running with 0.3% MEM usage (out of the 24GB available), while with
> sander it was 0.4%. Of course all cpus 100%.

This is normal - classical MD doesn't use much memory in the first place,
but generally PMEMD will use less than sander because it is spatially
decomposed, i.e. the coordinate/force arrays etc. are not duplicated on each
node, so the actual memory required by PMEMD per processor should drop as a
function of the number of processors.

> In view of pmemd and 8 cpus, do you see any improvement to the above
> prod.in?

Only that using Langevin dynamics (ntt=3) can limit scaling, although this
would be minor at 8 processors. Hence for large-scale production runs you
might want to switch to NVE after you have equilibrated things. To do this
effectively you may want to tighten dsum_tol and the SHAKE tolerance (tol)
by an order of magnitude or so to ensure good energy conservation. Other
than that things look good. Increasing ntwx will help improve performance a
little, especially if you have slow disk, but again at 8 CPUs you probably
won't notice any change. You could also probably use an 8 angstrom cutoff
for better performance, but that of course is your choice... Don't go below
8 angstroms...

You might also want to set iwrap=1 so you don't have to worry about waters
diffusing too far in long (>50 ns) simulations.
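
Putting those suggestions together, a sketch of what an NVE production input
might look like is below - treat the exact values as illustrative rather
than prescriptive, and note that dsum_tol lives in a separate &ewald
namelist while the SHAKE tolerance is the tol variable in &cntrl:

 NVE production, protein & ligand & membrane box80x80
 &cntrl
  imin=0, irest=1, ntx=5,
  nstlim=333334, dt=0.0015,
  cut=8.0, ntb=1, ntp=0,
  ntc=2, ntf=2, tol=0.000001,
  ntt=0,
  ntpr=1000, ntwx=5000, iwrap=1,
 /
 &ewald
  dsum_tol=0.000001,
 /

The thermostat is switched off (ntt=0), the barostat is dropped (ntb=1,
ntp=0) since NVE implies constant volume, and the tighter tol/dsum_tol
values are there purely to keep the energy drift down over long runs.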

> I noticed your suggestion some time ago as to run with pmemd in chunks,
> though now I want to check the speed with respect to sander (and I am at
> modest 8 processors).

Running in chunks is really just a safety feature plus a convenience for
handling the output. That is, if you run your simulation as a series of 2 or
5 ns chunks you limit the size of an individual mdcrd file, which makes it
easier to handle and also less likely to get corrupted. Also, if something
goes wrong you risk losing less data. Plus, if you keep all the intervening
restart files, at most you only have to rewind the length of a single chunk
if, say, you suddenly find things have diffused too far.

Exactly the same arguments apply to sander runs as well, so there is nothing
specific to pmemd that requires chunks. I just think it is more convenient
to do things this way.
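
For example, a chunked production run can be driven by a trivial script
along these lines (chunk numbers and filenames are placeholders):

  #!/bin/sh
  # run chunks 6 to 10, each restarting from the previous restart file
  prev=prod5.rst
  for i in 6 7 8 9 10; do
      mpirun -np 8 $AMBERHOME/exe/pmemd -O -i prod.in -p prmtop \
          -c $prev -r prod$i.rst -x prod$i.mdcrd -o prod$i.out
      prev=prod$i.rst
  done

Each chunk then leaves its own restart and mdcrd file, so a bad disk or a
crashed node only ever costs you one chunk's worth of trajectory.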

Good luck,
Ross

/\
\/
|\oss Walker

| Assistant Research Professor |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross_at_rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
      to majordomo_at_scripps.edu