AMBER Archive (2008)
Subject: Re: AMBER: Support for intel mpi in pmemd 10; better support for intel MKL
From: Robert Duke (rduke_at_email.unc.edu)
Date: Thu Oct 02 2008 - 17:39:28 CDT
Francesco -
I don't know, not having access to such a machine and not being sure exactly
what you are using as a baseline from Ross, etc. I also have not played with
the various performance tweaks available from Intel MPI. I would do this, but
don't really have access to anything where I would anticipate a big
difference, or anything that folks are currently buying, for that matter. I
would not expect a huge difference in shared memory performance, but it is
entirely possible the Intel guys have been clever (I didn't see a gain with
shared memory on my setup, which is an older Intel dual-CPU machine, but I
also did not take time to work through all the performance options). I would
expect
that Klaus-Dieter may provide us with more details on other interconnects,
but I have only seen InfiniBand results so far. You can always get an eval
copy of the software from Intel and check it out on your specific machine
(but read the reference manual on tuning too...).
Regards - Bob
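
A note for anyone experimenting with those tuning options: with Intel MPI
releases of this era, device selection is typically controlled through
environment variables such as I_MPI_DEVICE. The sketch below is illustrative
only and assumes pmemd has already been built against Intel MPI; check the
variable names, values, and launcher syntax against the reference manual for
your installed version.

  # Illustrative only - confirm names/values in the Intel MPI reference manual.
  export I_MPI_DEVICE=rdssm   # combined shared memory + RDMA (InfiniBand)
  # export I_MPI_DEVICE=shm   # pure shared memory, e.g. single-node runs
  mpirun -n 8 $AMBERHOME/exe/pmemd -O -i mdin -o mdout -p prmtop -c inpcrd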
----- Original Message -----
From: "Francesco Pietra" <chiendarret_at_gmail.com>
To: <amber_at_scripps.edu>
Sent: Thursday, October 02, 2008 5:35 PM
Subject: Re: AMBER: Support for intel mpi in pmemd 10; better support for
intel MKL
> Thanks indeed. However, for a shared-memory AMD machine, is there an
> advantage with respect to the recipe furnished previously by Ross
> Walker?
>
> archive:
> From: Francesco Pietra <chiendarret.yahoo.com>
> Date: Sun Jun 01 2008 - 00:12:12 PDT
>
>
> If advantageous, I'll try to implement the new recipe.
>
> francesco
>
> On Thu, Oct 2, 2008 at 10:56 PM, Robert Duke <rduke_at_email.unc.edu> wrote:
>> Folks,
>> I have finally gotten around to checking out modifications to support
>> Intel MPI for pmemd 10 and generating a patch. These mods are also
>> supposed to better support configuration for the Intel MKL, though I have
>> not checked out the plusses and minuses compared to my last code. These
>> changes were first generated by Klaus-Dieter Oertel of Intel (thanks much,
>> Klaus-Dieter!), and I have tweaked them a bit, mostly for cosmetic issues.
>> I will have the patch posted to the Amber website, ambermd.org (not sure
>> how you "patch" a new file yet). Why use this stuff? At least as far as I
>> know, Intel MPI offers superior performance on InfiniBand, and possibly
>> other interconnects. I have not done extensive testing myself, as I only
>> had an evaluation license for a little while and was wrapped up doing
>> other stuff, and did not see much difference for gigabit ethernet, but I
>> also did not work on performance tuning - my goal was to ensure that the
>> patch worked. For InfiniBand, I have seen quite impressive numbers on
>> benchmarks run by Klaus-Dieter, and I believe Ross Walker is going to make
>> this info available.
>>
>> So, anyway, what's here and what to do?
>>
>> Attached are:
>> pmemd10.patch2
>> interconnect.intelmpi
>>
>> Take interconnect.intelmpi and move it to $AMBERHOME/src/pmemd/config_data.
>> Take pmemd10.patch2 and move it to $AMBERHOME. Then execute the command
>> "patch -p0 -N < pmemd10.patch2" from $AMBERHOME. You can then build for
>> Intel MPI by specifying intelmpi as the interconnect to configure.
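
Put concretely, the quoted steps amount to roughly the shell session below,
assuming the two attachments have been saved to your current directory. The
file names come from the message above; the platform and compiler arguments
to configure are placeholders to adapt to your own setup.

  # Drop the new interconnect definition where pmemd's configure looks for it.
  cp interconnect.intelmpi $AMBERHOME/src/pmemd/config_data/

  # Apply the source patch from the top of the Amber tree.
  cp pmemd10.patch2 $AMBERHOME/
  cd $AMBERHOME
  patch -p0 -N < pmemd10.patch2

  # Reconfigure and rebuild pmemd with intelmpi as the interconnect.  The
  # platform and compiler names here are examples only; use the ones that
  # match your machine and compiler installation.
  cd $AMBERHOME/src/pmemd
  ./configure linux_em64t ifort intelmpi
  make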
>>
>> The interconnect file should also work for pmemd 9, though the patch file
>> certainly won't (but is not necessary for the interconnect fix).
>>
>> Best Regards - Bob Duke
>>
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
to majordomo_at_scripps.edu