AMBER Archive (2004)
Subject: RE: AMBER: Problems with CPU scaling on cluster
From: Ross Walker (ross_at_rosswalker.co.uk)
Date: Mon Jul 12 2004 - 13:38:27 CDT
Dear Jacob,
> 1: 1246s
> 2: 755s
> 4: 440s
> 8: 355s
> 16: 439s
>
> As you can see, the walltime actually rises when going from 8
> to 16 CPUs.
This can sometimes happen, especially if your system is small or your cutoff
is short. Typically, the larger your system and the larger your cutoff, the
better the scaling. Are you sure your nodes aren't doing anything else while
this calculation is running? Are the 16 CPUs on a dedicated network? Are
they writing NFS data over this network at the same time? Are other machines
using the same switch?
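If in doubt, it is worth logging in to a couple of the nodes while the job
is running and watching CPU and network activity directly (assuming the
standard Linux tools are installed):

top                 # look for other jobs competing for CPU time
cat /proc/net/dev   # byte/packet counters; run it twice a few seconds apart
                    # to see how much traffic the MPI interface is carrying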
Also, the DNA tutorial is a very small system, designed to run on the
timescale of a tutorial, so it will not scale well in parallel. Try something
more appropriate, such as the Joint AMBER-CHARMM (JAC) benchmark, and see how
your timings compare to ours:
http://amber.scripps.edu/amber8.bench1.html
See $AMBERHOME/benchmarks for the files.
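For example, running the JAC benchmark on 8 CPUs would look something like
the following (the exact directory and file names are from memory and may
differ slightly between Amber versions):

cd $AMBERHOME/benchmarks/jac
mpirun -np 8 $AMBERHOME/exe/sander -O -i mdin -o mdout.8cpu -p prmtop -c inpcrd

The timing summary at the end of mdout.8cpu is what you compare against the
published numbers.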
> I've tested with more systems, and it seems that the more
> atoms that are in the system, the worse the scaling problems.
This is strange; normally, the more atoms you have, the better the scaling,
although each step obviously takes longer.
> With 4 CPUs, each of the processors is only around 65% utilized, and
> this figure drops drastically with the number of CPUs, consistent with
> the walltimes.
This is typical for communication over TCP/IP, since it incurs a lot of CPU
overhead. You don't say what your machines are: are they dual-CPU or
single-CPU? How much memory do they have? Also, if the CPUs are hyperthreaded,
you may want to turn hyperthreading off, as in my experience it often causes
problems with MPI scheduling.
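A quick way to check on Linux is to compare the number of logical CPUs the
kernel reports with the number of physical CPUs you actually installed:

grep -c ^processor /proc/cpuinfo   # logical CPUs seen by the kernel
grep ht /proc/cpuinfo              # 'ht' in the flags line = hyperthreading-capable

If the kernel sees twice as many processors as you physically have,
hyperthreading is enabled; it can normally be switched off in the BIOS.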
> Our cluster is a large number of Intel desktop machines joined on a
> gigabit ethernet. On a single machine, Amber runs at the
> speed expected at that clock speed. Does anyone have suggestions for
> improving the performance?
Gigabit ethernet will normally take you out to about 16 CPUs, if and only if
you have a top-quality switch. What is the make of your switch? Note that a
lot of cheap switches on the market are not what is known as 'non-blocking':
they may have, say, a 10 Gbps backplane for 24 ports, so it is not possible
for all ports to talk to each other at 1 Gbps at the same time. Although the
switch claims to be gigabit, an intensive MPI job such as sander can quickly
saturate it. Non-blocking switches are typically more expensive but will give
you much better performance. See
http://amber.ch.ic.ac.uk/archive/200304/0179.html for more info. Note that if
you start chaining switches together, the setup becomes significantly more
complex.
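To illustrate with the hypothetical 24-port/10 Gbps figures above:

24 ports x 1 Gbps x 2 (full duplex) = 48 Gbps worst-case demand
48 Gbps demand / 10 Gbps backplane  = roughly 5x oversubscribed

so under heavy all-to-all MPI traffic such a switch delivers only a fraction
of its nominal per-port bandwidth.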
Also, if you can route the NFS traffic over a separate network from the MPI
traffic, that would help.
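One common way to arrange this (a sketch only; the hostnames and addresses
here are made up) is to give each node a second interface on a private
network and list those names in the machinefile you pass to mpirun:

# /etc/hosts on each node
192.168.1.1   node01       # eth0: NFS and general traffic
10.0.0.1      node01-mpi   # eth1: dedicated MPI network

# machinefile for mpirun lists only the MPI-side names
node01-mpi
node02-mpi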
Some other things to try (if you have plenty of RAM on your machines) that
may or may not help: as root, run the following to raise the network buffers
from the default of 64 KB to 256 KB. It is probably best to put these lines
in /etc/rc.d/rc.local on each machine:
echo 262144 >/proc/sys/net/core/wmem_default   # default socket send buffer: 256 KB
echo 262144 >/proc/sys/net/core/rmem_default   # default socket receive buffer: 256 KB
echo 524288 >/proc/sys/net/core/wmem_max       # maximum send buffer: 512 KB
echo 524288 >/proc/sys/net/core/rmem_max       # maximum receive buffer: 512 KB
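If your distribution ships the sysctl utility, the equivalent commands are:

sysctl -w net.core.wmem_default=262144
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.wmem_max=524288
sysctl -w net.core.rmem_max=524288

and adding the same keys to /etc/sysctl.conf makes them persist across
reboots.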
Also, if you can, you should consider 64-bit gigabit network cards instead of
32-bit ones, although this will depend on what your motherboards support.
> 1: Would upgrading the cluster network to something with much lower
> latency give better performance?
Almost certainly, although it will be expensive. I have obtained 23x scaling
on 64 CPUs using Scali. Myrinet and Quadrics are also good, but again, very
expensive compared to gigabit.
> 2: Is the problem inherent to Amber7, and would upgrading to
> Amber8 help?
Amber 8 has better scaling than Amber 7, but not significantly better. If you
are doing periodic boundary calculations, you may want to use the PMEMD
module, which has much better scaling (although fewer features) than sander.
An old version is available at http://amber.scripps.edu/pmemd-get.html; the
latest version ships with Amber 8.
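PMEMD takes essentially the same command line as sander, so a parallel run
looks something like this (file names are just placeholders):

mpirun -np 16 $AMBERHOME/exe/pmemd -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt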
I hope this helps.
All the best
Ross
/\
\/
|\oss Walker
| Department of Molecular Biology TPC15 |
| The Scripps Research Institute |
| Tel:- +1 858 784 8889 | EMail:- ross_at_rosswalker.co.uk |
| http://www.rosswalker.co.uk/ | PGP Key available on request |
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu