AMBER Archive (2007)

Subject: Re: AMBER: installing amber 8 on 5 nodes cluster PC

From: Syed Tarique Moin (tarisyed_at_yahoo.com)
Date: Wed Jan 03 2007 - 12:25:45 CST


Kindly explain the following lines or commands
  ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
  (5) To build a parallel version, do the following:
  
  cd $AMBERHOME/src
  make clean
  ./configure -mpi ifort (as an example)
  make parallel
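  
  Before configuring, it can help to confirm that the compiler and MPI launcher are
  actually on your PATH and that AMBERHOME is set (a quick sanity check; substitute
  your own compiler and launcher names):
  
  which ifort          # the Fortran compiler passed to ./configure
  which mpirun         # the MPI job launcher
  echo $AMBERHOME      # should point at the top of the Amber 8 tree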
  
  To test the parallel programs, you first need to set the DO_PARALLEL environment
  variable as follows:
  
  cd $AMBERHOME/test
  setenv DO_PARALLEL 'mpirun -np 4'
  make test.parallel
  
  The integer is the number of processors; if your command for running MPI jobs is
  something other than mpirun (e.g., it is dmpirun on Tru64 Unix systems), use the
  command appropriate for your machine, as in the examples below.
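  For example, if your login shell is a Bourne-type shell (sh or bash) rather than
  csh/tcsh, the equivalent of the setenv command above is an export; on a Tru64
  system you would substitute dmpirun (the launcher name depends on your MPI
  installation):
  
  export DO_PARALLEL='mpirun -np 4'     # sh/bash equivalent of the setenv above
  setenv DO_PARALLEL 'dmpirun -np 4'    # csh/tcsh on a Tru64 Unix system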
  (6) If you are planning to run periodic, explicit-solvent simulations using the particle-mesh
  Ewald (PME) method, you will probably want to compile the pmemd program. See
  Chapter 6 for information on this.
  1.3.1. More information on parallel machines or clusters
  This section contains notes about the various parallel implementations supplied with the
  current release. Only sander and pmemd are parallel programs; all others are single-threaded.
  NOTE: Parallel machines and networks fail in unexpected ways. PLEASE check short parallel
  runs against a single-processor version of Amber before embarking on long parallel simulations!
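  One way to make such a check is to run the same short job serially and in parallel
  and compare the energies. The sketch below uses placeholder input files (mdin,
  prmtop, inpcrd) and assumes you saved a copy of the serial binary (here called
  sander.serial, a name chosen just for this example), since the serial and parallel
  builds install under the same name:
  
  $AMBERHOME/exe/sander.serial -O -i mdin -p prmtop -c inpcrd -o serial.out
  mpirun -np 4 $AMBERHOME/exe/sander -O -i mdin -p prmtop -c inpcrd -o parallel.out
  grep Etot serial.out parallel.out    # total energies should agree closely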
  The MPI (message passing) version was initially developed by James Vincent and Ken
  Merz, based on Amber 4.0 and, later, on an early prerelease version of 4.1 [18]. This
  version was optimized, integrated, and extended by James Vincent, Dave Case, Tom
  Cheatham, Scott Brozell, and Mike Crowley, with input from Thomas Huber, Asiri
  Nanayakkara, and Nathalie Godbout.
  The bonds, angles, dihedrals, SHAKE (only on bonds involving hydrogen), nonbonded
  energies and forces, pairlist creation, and integration steps are parallelized. The code is pure
  SPMD (single program multiple data) using a master/slave, replicated data model. Basically, the
  master node does all of the initial set-up and performs all the I/O. Depending on the version
  and/or what particular input options are chosen, either all the non-master nodes execute force() in
  parallel, or all nodes do both the forces and the dynamics in parallel. Communication is done to
  accumulate partial forces, update coordinates, etc.
  The MPI source code is generally wrapped with C preprocessor (cpp) directives:
  #ifdef MPI
  ...parallel sections with calls to MPI library routines...
  #endif
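  To see how such a guard behaves, put the following in a throwaway file, say
  guard_demo.F:
  
  #ifdef MPI
        call mpi_init(ierr)
  #endif
        end
  
  Then run the C preprocessor on it by hand; the guarded call survives only when
  -DMPI is given (a minimal sketch, assuming cpp is on your PATH):
  
  cpp -P -DMPI guard_demo.F    # keeps the mpi_init line
  cpp -P guard_demo.F          # strips the guarded section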
  If you plan on running with an MPI version and there is no pre-made configuration file, then you
  will need to modify the config.h file as follows:
  (1) Add '-DMPI' to the FPPFLAGS variable.
  (2) Add the include path for the (implementation-supplied)
  mpif.h file to the FPPFLAGS variable; for example:
  FPPFLAGS="-DMPI -I/usr/local/src/mpi/include"
  (3) Reference any necessary MPI libraries in the LOADLIB variable.
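  Putting steps (1)-(3) together, the edited config.h entries might look like the
  following (a sketch only; the /usr/local/mpich paths and the -lmpich library name
  are hypothetical and depend on where and how your MPI library was installed):
  
  FPPFLAGS="-DMPI -I/usr/local/mpich/include"
  LOADLIB="-L/usr/local/mpich/lib -lmpich"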
  For reasons we don't understand, some MPI implementations require a null file for stdin,
  even though sander doesn't take any input from there. This is true for some SGI and HP
  machines. If you have trouble getting going, try the following:
  mpirun -np <num-proc> sander [ options ] < /dev/null
  1.3.2. Installing Non-Standard Features
  The source files of some Amber programs contain multiple code paths. These code paths
  are guarded by directives to the C preprocessor. All Amber programs, regardless of source
  language, use the C preprocessor. The activation of non-standard features in guarded code
  paths can be controlled at build time via the -D preprocessor option. For example, to
  enable the use of a Lennard-Jones 10-12 potential with the sander program, the HAS_10_12
  preprocessor guard must be activated with -DHAS_10_12.
  To ease the installer's burden, we provide a hook into the build process. The hook is the
  environment variable AMBERBUILDFLAGS. For example, to build sander with -DHAS_10_12,
  assuming that a correct configuration file has already been created, do the following:
  cd $AMBERHOME/src/sander
  make clean
  make AMBERBUILDFLAGS='-DHAS_10_12' sander
  Note that AMBERBUILDFLAGS is accessed by all stages of the build process: preprocessing,
  compiling, and linking. In rare cases a stage may emit warnings about unknown options in
  AMBERBUILDFLAGS; these can usually be ignored.
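  For instance, because the variable reaches every stage, a preprocessor define and an
  extra include path can be combined in a single invocation (the path here is a
  hypothetical placeholder):
  
  cd $AMBERHOME/src/sander
  make clean
  make AMBERBUILDFLAGS='-DHAS_10_12 -I/opt/local/include' sander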
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Regards
  
  
  "David A. Case" <case_at_scripps.edu> wrote: On Wed, Jan 03, 2007, Syed Tarique Moin wrote:
>
> I require full installation guide of amber 8 on 5 nodes cluster PC.

As I have already written: please look at the installation section in the
users' manual. If you have questions, ask specific ones.

...dac

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu