AMBER Archive (2003)
Subject: Re: AMBER: PMEMD Performance on Beowulf systems
From: David E. Konerding (dekonerding_at_lbl.gov)
Date: Mon Dec 22 2003 - 09:53:20 CST
Viktor Hornak wrote:
>
> A7M266-D Dual Socket A Motherboard (AMD 762 Chipset). It has 3 32bit
33MHz PCI slots and 2 64/32bit 66/33MHz PCI slots. To get a noticeable
> speedup in networking, the gigabit card (Intel Pro1000) needs to be
> placed into 64bit 66MHz PCI slot.
>
A couple more notes:
1) Make sure the slot is configured in the BIOS for its maximum speed. For
example, our latest board has a 133MHz PCI slot, but the BIOS sets the
default to 100MHz.
2) Some chips in the Intel gigabit series are better than others.
For example, from our local networking guru:
>In general, copper GigE (especially the Intel onboard 82540EM) is a cheap
>chipset. That is, they do not work well -- they chew up too much CPU. Try
>to avoid Intel 82540/82541 GigE NICs.
>Intel 82545EM copper GigE is the only one that we found works well.
>
>Fiber NICs are usually better, but not always. Be careful what you want.
>
From what I can tell in the code (I'm not an MPI expert, just enough to get
into trouble), PMEMD makes more use of overlapping
communication and computation than AMBER (Robert & the AMBER gurus, let me
know if this is wrong). This is interesting
because, for example, the 82545EM can sustain gigabit speeds in our
machine (dual Xeon 2.8GHz on SuperMicro
X5DPL-iGM motherboard) while leaving 75% of one CPU and 100% of the other free.
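As an aside, "overlapping communication and computation" in MPI terms just
means posting non-blocking sends/receives, doing whatever work doesn't depend
on the remote data, and only waiting when that data is actually needed. A
generic sketch (not PMEMD's actual code; the buffer names and sizes here are
made up) would look something like:

#include <mpi.h>
#include <stdlib.h>

#define N 100000          /* arbitrary message length for illustration */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *sendbuf = calloc(N, sizeof(double));
    double *recvbuf = calloc(N, sizeof(double));
    double *local   = calloc(N, sizeof(double));

    int right = (rank + 1) % nprocs;            /* ring neighbors */
    int left  = (rank + nprocs - 1) % nprocs;

    /* Post the transfers first; the NIC can move data while the CPU
       keeps working on the part of the problem that is already local. */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Computation that does not depend on the incoming data. */
    for (int i = 0; i < N; i++)
        local[i] += 0.5 * i;

    /* Only block when the remote data is actually needed. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    free(sendbuf); free(recvbuf); free(local);
    MPI_Finalize();
    return 0;
}

The point is that while the Isend/Irecv are in flight the CPU stays busy,
which is also why a NIC that doesn't chew up CPU (see above) matters so much.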
The other interesting thing I noticed is that HyperThreading gave an unexpected benefit: if I run PMEMD or AMBER
with np=4 on a 2-processor (+2 HT logical processors) machine, I actually get a mild speedup relative to just 2 CPUs. This means, for example, that single
SMP boxes can run ~1 ns simulations in reasonable time, so we're not really using the gigabit interconnect much at all.
Also, some data about 10GbE was released at Supercomputing. There are suggestions that 10GbE makes a large difference in latency relative to 1GbE,
meaning there may be plain 10GbE-based interconnects that are competitive with the proprietary interconnects, but hopefully at a lower
cost. That, combined with InfiniBand interfaces on the north bridge, should definitely help a lot.
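For what it's worth, the usual way people compare interconnect latency
(1GbE vs. 10GbE, etc.) is a simple MPI ping-pong test; a rough sketch
(the iteration count here is an arbitrary choice) is:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    char byte = 0;
    const int iters = 10000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 0; }   /* needs two ranks */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {          /* bounce a 1-byte message back and forth */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)   /* half the round-trip time = one-way latency */
        printf("one-way latency ~ %g us\n", (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}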
Dave
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu