AMBER Archive (2004)

Subject: Re: AMBER: PMEMD scaling

From: Robert Duke (
Date: Sun Apr 25 2004 - 10:24:11 CDT

Lubos -
On the Quadrics interconnect on the Terascale system at PSC
(AlphaServers, 4 CPUs per node, I believe), PMEMD scales well out to about
128 processors (roughly 50% efficiency - off the top of my head, so don't
hold me to exact numbers). If your Quadrics hardware is equivalent, you will
probably do about as well. Scaling is a balance between interconnect speed
and CPU speed. As the CPUs get faster, they increase the demand on the
interconnect, so with faster CPUs you MAY not scale as well, but you will
get good throughput at the point where scaling starts dropping off. Vendors
tend not to build balanced systems with good interconnect speed, because it
is cheaper to improve CPU speed, and the bulk of the marketplace is
satisfied with faster CPUs. Also, scaling is better on larger problems
(more atoms) due to basic geometry considerations (the cutoff radius vs. the
size of the chunk of atoms any given processor owns).
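To make the geometry point concrete, here is a toy back-of-the-envelope sketch (my own illustration, not anything from PMEMD itself): assume a uniform-density cubic box split into one cube per processor, and compare the atoms a processor must import from within a cutoff of its boundary (the "halo") against the atoms it owns. The density, cutoff, and processor count below are all assumed example values.

```python
# Toy model of spatial-decomposition communication cost (illustrative only;
# this is NOT how PMEMD actually partitions work).

def halo_ratio(n_atoms, n_procs, cutoff, density=0.1):
    """Ratio of halo (communicated) atoms to owned atoms per processor.

    density: atoms per cubic Angstrom, assumed uniform (~0.1 for water).
    cutoff:  nonbonded cutoff radius in Angstroms.
    """
    box = (n_atoms / density) ** (1.0 / 3.0)    # edge of the whole box
    cell = box / n_procs ** (1.0 / 3.0)         # edge of one processor's cube
    owned = density * cell ** 3
    # Halo region: the owned cube grown by the cutoff on every side,
    # minus the owned cube itself.
    halo = density * ((cell + 2.0 * cutoff) ** 3 - cell ** 3)
    return halo / owned

# Same processor count and cutoff: the larger system has a smaller
# communication-to-computation ratio, so it keeps scaling further.
for n in (25_000, 250_000):
    print(n, round(halo_ratio(n, n_procs=128, cutoff=9.0), 2))
```

The ratio shrinks as the owned cube grows relative to the fixed cutoff shell, which is the surface-to-volume argument behind "bigger systems scale better."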
Regards - Bob
----- Original Message -----
From: "Lubos Vrbka" <>
To: <>
Sent: Sunday, April 25, 2004 10:07 AM
Subject: AMBER: PMEMD scaling

> Hi,
> I need to know what the theoretical PMEMD scaling (up to hundreds of
> processors) could be on an HP Linux cluster with 1.5 GHz Itanium2
> processors and a Quadrics QsNet interconnect. I was looking at the PMEMD
> release notes, but I am not sure which of the values would be appropriate
> for the configuration above.
> Can anyone give me a hint?
> Regards,
> --
> Lubos
> -----------------------------------------------------------------------
> The AMBER Mail Reflector
> To post, send mail to
> To unsubscribe, send "unsubscribe amber" to