AMBER Archive (2004)

Subject: AMBER: PMEMD and ifort 8 update

From: Robert Duke (rduke_at_email.unc.edu)
Date: Tue Apr 27 2004 - 13:08:05 CDT


Folks -
For those of you interested in using the Intel Fortran Compiler, version 8,
on Linux IA32 machines: Intel has finally, after roughly three months,
released a fix. The file to pick up from https://premier.intel.com is
l_fc_pc_8.0.046.tar.gz. Releases in the version 8 lineage earlier than this
one won't work. I of course can't predict what will happen with versions
after this one, so if you update later, beware!
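
To check which compiler build you actually have (a quick sanity check
against the 8.0.046 package; I won't vouch for the exact version string
Intel prints):

ifort -V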

This version of the compiler produces a PMEMD that is roughly 5-10% faster
than the PMEMD produced by ifc 7.1, and I have run the regressions (21
tests) on a dual Xeon, on 1 and 2 processors; all tests pass. However, when
Intel giveth, Intel tends to also take away. There are new issues with
getting pmemd to build and run with mpich. These are not insurmountable; it
is just disappointing that you have to do the workarounds. So here are the
details for pmemd 8 on Red Hat 9 or RHEL 3. This probably also applies to
Red Hat 8, but I don't have a test system to confirm.

First of all, a sample config.h for a uniprocessor build, followed by a
sample config.h for an mpich build, both for the Pentium 4 under Red Hat 9
or RHEL 3:

1 processor config.h:

#!/bin/csh -f
setenv PREPROCFLAGS "-DDIRFRC_VECT_OPT"

setenv CPP "/lib/cpp -traditional "

# -auto puts local variables on the stack (hence the stacksize notes
# below); -tpp7 and -xW target the Pentium 4 (SSE2); -mp1 restrains
# floating point optimizations that cost precision; -ip turns on
# single-file interprocedural optimization.
setenv OPT_LO "ifort -c -auto -tpp7 -mp1 -O0"
setenv OPT_MED "ifort -c -auto -tpp7 -mp1 -O2"
setenv OPT_HI "ifort -c -auto -tpp7 -xW -mp1 -ip -O3"

setenv LOAD "ifort"
# -limf and -lsvml pull in the Intel math and short vector math libraries.
setenv LOADLIB " -limf -lsvml "

mpich config.h:

#!/bin/csh -f
# Adjust MPICH_HOME to wherever your mpich installation lives.
setenv MPICH_HOME /opt/pkg/mpi
setenv MPICH_INCLUDE $MPICH_HOME/include
setenv MPICH_LIBDIR $MPICH_HOME/lib
setenv MPILIB "-L$MPICH_LIBDIR -lmpich"

setenv PREPROCFLAGS "-DMPI -DSLOW_NONBLOCKING_MPI -DDIRFRC_VECT_OPT"

setenv CPP "/lib/cpp -traditional -I$MPICH_INCLUDE"

# Same compiler options as the uniprocessor build above.
setenv OPT_LO "ifort -c -auto -tpp7 -mp1 -O0"
setenv OPT_MED "ifort -c -auto -tpp7 -mp1 -O2"
setenv OPT_HI "ifort -c -auto -tpp7 -xW -mp1 -ip -O3"

setenv LOAD "ifort"
setenv LOADLIB " -limf -lsvml $MPILIB"

Now for the additional caveats. With ifort 8, either on an IA32 chip or on
the Itanium, the executables produced use a lot more stack if you actually
use some of the more modern f90 capabilities (as pmemd does). Thus it is
important (on csh or tcsh) to do a "limit stacksize unlimited" in your
.login script (for sh and its variants I think you have to use ulimit, with
different syntax). In all past experience, this was only required in
.login. I don't know what has changed in ifort 8, but for mpich runs it is
now necessary to put the "limit stacksize unlimited" in .cshrc (which all
invocations of csh execute). This is very strange, because limits are
supposed to be inherited without any such action (kind of like environment
variables - next topic).

Also, in the past you needed to source the appropriate Intel Fortran
environment variable script in your .login (or .profile for sh-ish shells).
Well, for mpi executables built by ifort 8, you also need to source that
script in .cshrc (or .bashrc or whatever). It is probably sufficient to
just set LD_LIBRARY_PATH. An alternative is to put the Intel libraries path
in /etc/ld.so.conf, but you must then remember to run /sbin/ldconfig as
root.
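
To make that concrete, here is a sketch of the .cshrc additions. The
/opt/intel_fc_80 path is just the ifort 8 default install location; adjust
it to wherever the compiler actually lives on your system:

# in ~/.cshrc, so every csh invocation (including the remote shells that
# mpich spawns) picks these up:
limit stacksize unlimited
# source the ifort 8 environment script (path is installation-specific):
source /opt/intel_fc_80/bin/ifortvars.csh

And the system-wide alternative, done as root:

# append the Intel libraries directory to the loader config, then rebuild
# the cache (again, adjust the path to your install):
echo /opt/intel_fc_80/lib >> /etc/ld.so.conf
/sbin/ldconfig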

Thanks to David Konerding for help with all this; we were both testing
stuff yesterday. By the way, if you are still running pmemd 3.1 and want to
use it on these later Red Hat releases, please be sure to remove the
-static option from whatever MACHINE file you use (due to a static threads
library stack overflow issue, I believe - threads code is now used in all
builds because of the Intel libraries).
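
As a purely hypothetical illustration (the variable name and remaining
flags in your MACHINE file will differ), the change is just dropping
-static from the loader line:

# before:
setenv LOAD "ifc -static"
# after:
setenv LOAD "ifc"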

One additional note on building mpich 1.2.5.2 under ifort 8: it DOES work,
but it is fairly common to build it as root, and then it is important to
remember to set up the Intel Fortran compiler environment (source the
script in the root account). If you forget, you get the errors in the
construction of mpif.h that have been reported on the ifc developer's
forum: MPI_ADDRESS_KIND and MPI_OFFSET_KIND come out as 0, when they should
be 4 and 8 (contrary to what the forum says).
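
A sketch of the root-account sequence, again assuming the default ifort 8
install path (use ifortvars.sh instead for sh-ish shells):

# as root, before configuring and building mpich 1.2.5.2:
source /opt/intel_fc_80/bin/ifortvars.csh
# ...then run mpich's configure and make as usual. In the resulting
# mpif.h, MPI_ADDRESS_KIND should come out as 4 and MPI_OFFSET_KIND as 8,
# not 0.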

Sorry for the mess. It's not my fault! What you save in cash on Linux
systems, you pay back in other ways...

Regards - Bob Duke

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu