Getting started with AMBER

Here are the basics for making sure your environment is set up properly to run the AMBER suite of programs in the Vanderbilt CSB. Please note that we currently support AMBER10 on Linux platforms (including SGI Altix), but only AMBER9 on the older SGI/MIPS platforms.

Setting your AMBER environment

AMBER requires modifications to your environment: specifically, setting the $AMBERHOME environment variable and adding an entry to your PATH. On Linux, AMBER9 and AMBER10 were compiled with version 9.1 of the Intel Fortran and C compilers and version 8.0 of the MKL performance libraries, so to use any AMBER9 or AMBER10 Linux binaries you must also have your environment set up for the Intel compilers and libraries. Here are example sbset commands that will do these things for you automatically:

On Linux (including SGI Altix):

% sbset amber10 intel9

On SGI/MIPS (AMBER10 is not supported on the older SGI/MIPS platform):

% sbset amber9

If you plan to use AMBER frequently, it's most convenient to place these commands in the Linux and/or IRIX section(s) of your ~/.cshrc file so they run automatically each time you log in.
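For example, a csh fragment like the following could go in ~/.cshrc (a sketch only: the exact `uname` output strings and sbset package names may differ on your systems, so adjust them to the versions you actually use):

```shell
# Select the appropriate AMBER setup for the platform we logged in to.
# (Sketch: adjust package names to the versions you actually run.)
if ( `uname -s` == "Linux" ) then
    sbset amber10 intel9        # Linux, including SGI Altix
else if ( `uname -s` =~ IRIX* ) then
    sbset amber9                # old SGI/MIPS; AMBER10 not supported
endif
```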

SGI MPI AMBER binaries on CSB SGI systems (mips and Altix/ia64).

Parallelized MPI versions of sander, pmemd, sander.LES and sander.PIMD are available. The parallel binary executable files are named with the .MPI extension (e.g. sander.MPI and pmemd.MPI). On SGI systems (both mips/IRIX and Itanium/Linux), these binaries were built to utilize the SGI MPI interface. To run one of these executables in parallel, use the following type of command:

% mpirun -np 2 $AMBERHOME/exe/pmemd.MPI <pmemd options>

MPICH2 AMBER binaries on CSB i686 and x86_64 Linux systems.

As on the SGI systems, parallelized MPI versions of sander, pmemd, sander.LES and sander.PIMD are available, and the parallel executables are named with the .MPI extension (e.g. sander.MPI and pmemd.MPI). Our parallel i686 and x86_64 versions of AMBER9 were built to utilize the MPICH2 message-passing interface, so to use any parallel i686 or x86_64 AMBER binaries you must also have your environment set up for MPICH2. Here is an example sbset command that will do this for you automatically:

% sbset amber10 intel9 mpich2-icc

MPICH2 works differently from earlier versions of MPICH and from SGI's MPI implementation: it requires you to first start a daemon called an "mpd", which coordinates all of the nodes that will participate in the calculation. The mpd is a persistent process, so once your calculation is finished you should shut it down.

The mpd can be set up in a variety of ways. For details, you should consult the MPICH2 user's guide. A simple example (a 2-CPU job running on a single host) follows:

  1. Create a basic ~/.mpd.conf with the proper file permissions:

    % echo "MPD_SECRETWORD=${user}$$" > ~/.mpd.conf
    % chmod 600 ~/.mpd.conf

  2. Start the mpd on the local host:

    % mpdboot

  3. Start your MPI program on 2 CPUs with mpiexec:

    % mpiexec -np 2 $AMBERHOME/bin/sander.MPI <sander options>

  4. Terminate the mpd once the calculation is finished:

    % mpdallexit
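Step 1 above can also be wrapped in a small Bourne-shell sketch that creates the configuration file with owner-only permissions in one shot (MPD_CONF_FILE is the environment variable mpd consults for an alternate config location; treat the details here as an illustration and see the MPICH2 user's guide for specifics):

```shell
#!/bin/sh
# Sketch: create the mpd configuration file with owner-only permissions.
# MPD_CONF_FILE lets mpd use an alternate location; default is ~/.mpd.conf.
conf="${MPD_CONF_FILE:-$HOME/.mpd.conf}"
if [ ! -f "$conf" ]; then
    umask 077                                 # new file is created mode 600
    echo "MPD_SECRETWORD=${USER:-amber}$$" > "$conf"
fi
chmod 600 "$conf"                             # enforce mode on existing files
```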

A more complicated example (a 4-CPU job running on two dual-CPU hosts) follows:

  1. Create a basic ~/.mpd.conf with the proper file permissions:

    % echo "MPD_SECRETWORD=${user}$$" > ~/.mpd.conf
    % chmod 600 ~/.mpd.conf

  2. Create an mpd.hosts file listing the machines you want to participate in an MPICH2 "ring":

    % echo "myhost1" > mpd.hosts
    % echo "myhost2" >> mpd.hosts
    % echo "myhost3" >> mpd.hosts
    % echo "myhost4" >> mpd.hosts

  3. Start the mpd on the first two hosts listed in the mpd.hosts file:

    % mpdboot -n 2

  4. Check to see that the mpd is started on hosts "myhost1" and "myhost2":

    % mpdtrace -l

  5. Start your MPI program on 4 CPUs with mpiexec:

    % mpiexec -np 4 $AMBERHOME/bin/sander.MPI <sander options>

  6. Terminate the mpd on all hosts, once the calculation is finished:

    % mpdallexit

Overwriting pre-existing AMBER output files

One thing to be aware of if you are just getting started with AMBER is that, by default, some of the programs do not allow you to overwrite files that already exist. This is a safety feature to keep you from accidentally deleting output files that took a lot of expensive computation to create.

You will usually run into this when running sander or pmemd. If you're restarting a run and some or all of the output files already exist from a previous attempt, the program will die with an error message that says something like:

Unit 6 Error on OPEN: mdout

To remedy this, either remove the file(s) or add the -O option to the sander or pmemd command line.
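If you prefer removing the stale files to passing -O, a helper like the following sketch can do it. The name clean_outputs is hypothetical, and the file list uses sander's common default output names; adjust both for your own runs:

```shell
#!/bin/sh
# Sketch: remove output files left over from a previous sander/pmemd attempt
# so a rerun will not die with "Unit 6 Error on OPEN".
clean_outputs() {
    # Common default output names; restrt/mdcrd appear only if requested.
    for f in mdout mdinfo restrt mdcrd; do
        rm -f "$f"
    done
}

clean_outputs
# Then rerun as usual, e.g.:
#   % sander -i mdin -o mdout -p prmtop -c inpcrd -r restrt
```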