AMBER Archive (2008)
Subject: Re: AMBER: massively parallel computation
From: Robert Duke (rduke_at_email.unc.edu)
Date: Tue May 20 2008 - 09:46:34 CDT
Yes, as Lars points out here, as we increase the atom count, our ability to
use more processors effectively does improve, and I would expect that on
good hardware like an IBM SP4/SP5 or Cray XT3/XT4 you would be able to
run a PME simulation effectively on 1K processors for systems somewhere in
the range of 500,000-999,999 atoms. Our current top end is 999,999 atoms due
to some file format issues; larger problems are going to be heck to do
significant work with at any rate - somebody better versed in stat mech than
I am could probably tell you how long a run to plan on, but a few nsec is not
going to give you adequate statistics... I have not done much benchmarking
yet on generalized Born in pmemd 10, but I did make changes there that
should increase the usable processor count for a given problem by roughly a
factor of 3 (so whereas some folks were running 1024-cpu jobs with GB, you
could probably go higher - I have not yet done the analysis to see what the
practical vs. theoretical upper limit really is now, though).
Regards - Bob
----- Original Message -----
From: "Lars Skjærven" <lars.skjarven_at_biomed.uib.no>
To: <amber_at_scripps.edu>
Sent: Tuesday, May 20, 2008 10:28 AM
Subject: Re: AMBER: massively parallel computation
>A quick comment: we observe increased performance on up to 1024 cores
> on our Cray XT4 for a system of ~600,000 atoms using pmemd. The
> scaling is not exactly linear that far, but I expect even better
> performance once we optimize the installation. So pmemd is most
> definitely suited for use on several hundreds of CPUs.
> LS
>
> On Tue, May 20, 2008 at 4:11 PM, Adrian Roitberg <roitberg_at_qtp.ufl.edu>
> wrote:
>> Oh man, you got yourself in trouble now ...
>> Amber and more particularly PMEMD already use hundreds of cpus VERY
>> efficiently !
>>
>> It might be time to post to this list the benchmarks showing how good
>> pmemd
>> really is against the other supposedly faster programs.
>>
>> Ross ?
>> Bob ?
>>
>> Cheers
>>
>> Adrian
>>
>>
>> Mingfeng Yang wrote:
>>>
>>> Recently, a few algorithms have been developed to enable massively
>>> parallel computation, which can efficiently use hundreds of CPUs
>>> simultaneously for MD simulation. For example, J Comput Chem 26:
>>> 1318–1328, 2005.
>>>
>>> Is there a plan to implement such algorithms in Amber/PMEMD? As
>>> computer clusters are getting cheaper and cheaper, cluster sizes keep
>>> expanding quickly as well. Such algorithms should be very helpful,
>>> indeed indispensable, to reach >ms scale simulations.
>>>
>>> Thanks,
>>> Mingfeng
>>>
>>
>> --
>> Dr. Adrian E. Roitberg
>> Associate Professor
>> Quantum Theory Project and Department of Chemistry
>>
>> University of Florida PHONE 352 392-6972
>> P.O. Box 118435 FAX 352 392-8722
>> Gainesville, FL 32611-8435 Email adrian_at_qtp.ufl.edu
>> ============================================================================
>>
>> "To announce that there must be no criticism of the president,
>> or that we are to stand by the president right or wrong,
>> is not only unpatriotic and servile, but is morally treasonable
>> to the American public."
>> -- Theodore Roosevelt
>> -----------------------------------------------------------------------
>> The AMBER Mail Reflector
>> To post, send mail to amber_at_scripps.edu
>> To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
>> to majordomo_at_scripps.edu
>>
>
>
>
> --
> mvh Lars Skjærven
> Institutt for Biomedisin
> Universitetet i Bergen
>