AMBER Archive (2009)

Subject: Re: [AMBER] Compiling and running NAB programs in parallel using MPI

From: Kaushik Raha (chemvishnu_at_gmail.com)
Date: Tue Jun 30 2009 - 09:48:11 CDT


Dr. Case,

I am very interested in using the LMOD package for conformational searches of
protein-ligand complexes, but at the moment these calculations seem very
expensive. LMOD appears to use xmin for the minimization part, so I was
interested in its parallelization. Do you have any other suggestions for
speeding up these calculations besides parallelization? For example, I have
noticed that in the lmod example for beta_secretase the GB cutoffs are very
liberal (cut=99, rgbmax=99). Shorter cutoffs would obviously speed up the
calculations, so I was wondering whether there is a physical reason for these
cutoffs with respect to an LMOD calculation in that example. I also ran into
problems running a gbsa calculation (gbsa=1, to include the non-polar part)
on protein-ligand complexes. The problem appears to be in the LCPO routine
for the non-standard ligand. The ligand parameters were calculated with
antechamber, and the complex runs fine with a plain GB calculation, so I was
wondering whether this is a known bug. Thanks in anticipation.
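
For reference, this is roughly how I am setting up the GB/GBSA part in nab
before the lmod/xmin step (a minimal sketch; the file names, the ntpr value,
and the variable names are placeholders from my own setup, not from the
beta_secretase example):

    molecule m;
    float xyz[ dynamic ], grad[ dynamic ];
    float energy;

    // load coordinates and the prmtop for the complex
    m = getpdb( "complex.pdb" );
    readparm( m, "complex.prmtop" );
    allocate xyz[ 3*m.natoms ];
    allocate grad[ 3*m.natoms ];
    setxyz_from_mol( m, NULL, xyz );

    // the liberal cutoffs from the example; gbsa=1 turns on the LCPO non-polar term
    mm_options( "ntpr=50, gb=1, cut=99.0, rgbmax=99.0, gbsa=1" );
    mme_init( m, NULL, "::Z", xyz, NULL );

    // single-point GB/SA energy; the lmod/xmin call would follow from here
    energy = mme( xyz, grad, 1 );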

Regards,
Kaushik

On Mon, Jun 29, 2009 at 8:19 AM, case <case_at_biomaps.rutgers.edu> wrote:

> On Fri, Jun 26, 2009, Kaushik Raha wrote:
> > Hi Dr. Case,
> >
> > > mpiinit() and mpifinalize() are not required -- this was an error in the
> > > printed version of the manual (from lulu.com), but is fixed in the
> > > documentation in AmberTools version 1.2.
> > >
> >
> > Thanks for the clarification.
> >
> >
> > >
> > > First, I'm not clear which version of NAB you are using, and would
> > > recommend
> > > upgrading to AmberTools 1.2 if you are not already doing that. (Your
> > > description makes me think you are not running the current version.)
> > >
> > > Second, I agree that the documentation for MPI is pretty sparse, and
> > > assumes you understand how MPI coding works. The mpirun program will
> > > indeed spawn off multiple copies of the same job. Division of work among
> > > processors is controlled by the mytaskid variable, or the get_mytaskid()
> > > function. So, there is no automatic parallelization -- the -mpi option
> > > just assists you in writing MPI programs.
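> > >
> > > As a minimal sketch of what that looks like inside a nab program (this
> > > assumes get_numtasks() as the companion call to get_mytaskid(); adapt it
> > > to however you obtain the task count):
> > >
> > >     int i, n;
> > >
> > >     n = 100;   // number of independent work items, e.g. conformations
> > >
> > >     // only the master task prints, so output is not repeated per process
> > >     if ( get_mytaskid() == 0 )
> > >         printf( "running on %d tasks\n", get_numtasks() );
> > >
> > >     // simple round-robin split of the work across the MPI tasks
> > >     for ( i = get_mytaskid() + 1; i <= n; i = i + get_numtasks() ) {
> > >         // ... task-local work on item i goes here ...
> > >     }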
> > >
> > > However, the nab energy routines *are* written for MPI, and I am
> > > surprised by the behavior you report, that messages from the nab energy
> > > routines are repeated n times. The code only prints energy results when
> > > get_mytaskid() == 0 (see sff.c or eff.c). The codes in amber10/test/nab
> > > (such as gbrna.nab) should work without modification with MPI, and should
> > > show speedups (although they are so short that you might not see it; see
> > > the programs in amber10/benchmarks/nab for longer examples).
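> > >
> > > To run them in parallel you would use your MPI launcher in the usual way,
> > > e.g. something like
> > >
> > >     mpirun -np 4 ./gbrna
> > >
> > > (the task count is arbitrary, and this assumes the test was compiled to
> > > an executable named gbrna; substitute whatever launcher your MPI
> > > installation provides).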
> > >
> > > ....hope this helps....dac
> > >
> >
> > I think it was a version issue. I compiled the 1.2 version and it seems to
> > have worked. I was able to run gbrna & gbrna_long in parallel, and they
> > scale up nicely with the number of processors. However, the speed-up does
> > not seem as obvious in other examples, for instance in the energy routines
> > that use xmin. So I was wondering whether xmin is also written for MPI?
>
> It's the energy routines that are parallelized, but they take most of the
> time, so xmin calculations should benefit as well. But I haven't done
> benchmarks in this area; I'm mainly relying on reports from Istvan
> Kolossvary.
>
> ...dac
>
>
_______________________________________________
AMBER mailing list
AMBER_at_ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber