AMBER Archive (2004)

Subject: Re: AMBER: recovering total force array

From: Carlos Simmerling (carlos_at_csb.sunysb.edu)
Date: Wed Mar 31 2004 - 15:39:18 CST


commsander is for AMBER8, so MPI_COMM_WORLD should
do it for AMBER7. That seems to work for me. Maybe you should
try a very small system on 2 cpus and have each process write the forces
before and after the allreduce to see what's happening.
Sorry I can't be more help; maybe someone else has done what you want.
carlos
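
As a concrete version of the check Carlos suggests, here is a minimal
standalone sketch (not sander code): each of two MPI processes fills a
small "force" array with its own values, prints it, sums it across ranks
with mpi_allreduce over MPI_COMM_WORLD, and prints the result. The array
size and fill values are arbitrary and stand in for 3*natom forces.

      program allreduce_check
c     Minimal two-process check: write a small "force" array before
c     and after MPI_ALLREDUCE so the summed result can be verified.
      implicit none
      include 'mpif.h'
c     n plays the role of 3*natom for a 2-atom toy system.
      integer n
      parameter (n = 6)
      double precision f(n), ftmp(n)
      integer ierr, myrank, i

      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, myrank, ierr)

c     Each rank holds a different partial "force" contribution.
      do i = 1, n
         f(i) = dble(myrank*100 + i)
      end do
      write(6,'(a,i2,a,6f8.1)') 'rank', myrank, ' before:', f

c     Sum the partial arrays across all ranks; every rank receives
c     the full total in ftmp.
      call mpi_allreduce(f, ftmp, n, MPI_DOUBLE_PRECISION,
     &                   MPI_SUM, MPI_COMM_WORLD, ierr)
      write(6,'(a,i2,a,6f8.1)') 'rank', myrank, ' after: ', ftmp

      call mpi_finalize(ierr)
      end

Compiled with an MPI Fortran wrapper and run on two processes, the
"after" lines on both ranks should show the element-wise sums of the
two "before" lines; if they don't, the problem is in the MPI setup
rather than in sander.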

----- Original Message -----
From: "Dave S Walker" <dswalker_at_darkwing.uoregon.edu>
To: <amber_at_scripps.edu>
Sent: Wednesday, March 31, 2004 4:29 PM
Subject: Re: AMBER: recovering total force array

> Carlos,
> My apologies for not catching that 35. I still don't know exactly
> what all the arguments in the call statement mean (besides 'f' and
> 'forcetmp', which are described in parallel.f), but this was the most
> recent form I used for mpi_allreduce, which didn't work:
>
> call mpi_allreduce(F,ftmp,3*natom,MPI_DOUBLE_PRECISION,
> + MPI_SUM,MPI_COMM_WORLD,ierr)
>
> Not knowing the size of ftmp I set it to have 3*natom elements (a guess,
> along with 3*natom+40, which I tried once after looking at locmem.f). I
> also have the call statement standing alone, which I thought would be okay
> seeing as it adds all values from all nodes and sends the results out to
> all nodes. I'm running AMBER7 (I should have pointed that out earlier). Is
> there a difference between MPI_COMM_WORLD and commsander that could cause
> my problem? The latter isn't recognized upon compilation and I would only
> try to make it as close to the former as possible if I kept it in! Thanks
> again for your time.
>
> dsw
>
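Put together, a hedged sketch of what this would look like in an AMBER7
runmd.f (untested, for illustration only): the scratch array must hold
at least 3*natom double-precision values, the reduction runs over
MPI_COMM_WORLD since commsander is not recognized in this build, the
call must be reached by every process because it is collective, and
only the master writes. The rank variable mytaskid and output unit 77
are assumptions for illustration.

c     ftmp is a scratch array dimensioned to at least 3*natom
c     double-precision values (allocate it or pass it in).
      call mpi_allreduce(f, ftmp, 3*natom, MPI_DOUBLE_PRECISION,
     +                   MPI_SUM, MPI_COMM_WORLD, ierr)

c     Write the summed forces from the master only, so the file is
c     written once rather than once per node.
      if (mytaskid .eq. 0) then
         do i = 1, natom
            write(77,'(3e16.8)') ftmp(3*i-2), ftmp(3*i-1), ftmp(3*i)
         end do
      end if
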
> On Tue, 30 Mar 2004, Carlos Simmerling wrote:
>
> > Dave,
> > I've done the collection of all forces using this:
> > call mpi_allreduce(f, forcetmp, 3*natom, &
> >      MPI_DOUBLE_PRECISION, mpi_sum, commsander, ierr)
> > It seems to work ok; have you tried it?
> >
> > Why did you use 35 in your allreduce?
> >
> > Carlos
> >
> > ----- Original Message -----
> > From: "Dave S Walker" <dswalker_at_darkwing.uoregon.edu>
> > To: <amber_at_scripps.edu>
> > Sent: Tuesday, March 30, 2004 8:51 PM
> > Subject: Re: AMBER: recovering total force array
> >
> >
> > > Hello all,
> > > Sorry for the vague description of my problem (see below). A few
> > > weeks ago I realized that the values for the force array are
> > > distributed between nodes before they're written over by coordinates
> > > in runmd.f. In my attempt to recover all of these values from the
> > > nodes I found myself stumbling into fdist() in parallel.f, and it was
> > > my poor attempts to call fdist() directly that caused all my problems.
> > > Looking closer at fdist() after getting David's response I
> > > realized that calling on mpi_allreduce() directly would recover all
> > > the values. I declared a "scratch space" variable named 'ftmp' and
> > > attempted to call on mpi_allreduce() from runmd.f:
> > >
> > > call mpi_allreduce(F,ftmp,35,MPI_DOUBLE_PRECISION,MPI_SUM,
> > > + MPI_COMM_WORLD,ierr)
> > >
> > > Sander compiles fine this way but the total force array still isn't
> > > recovered. I hope I have described my actions more clearly; any input
> > > into how I can resolve this matter would be greatly appreciated.
> > > Thanks again for your time.
> > >
> > > dsw
> > >
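For reference, the count argument is why the call above did not recover
the whole array: the third argument of MPI_ALLREDUCE is the number of
elements to reduce, so a hard-coded 35 sums only the first 35 values of
f. For the full force array it needs to be 3*natom. The Fortran binding
of the call, with the arguments as used in this thread:

c     call MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op,
c    &                   comm, ierror)
c
c     sendbuf  - local data                (f, this node's forces)
c     recvbuf  - result, on every rank     (ftmp, the summed forces)
c     count    - number of elements        (3*natom, not 35)
c     datatype - MPI_DOUBLE_PRECISION for sander's force array
c     op       - MPI_SUM
c     comm     - the communicator          (MPI_COMM_WORLD in AMBER7)
c     ierror   - integer return code
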
> > > On Fri, 12 Mar 2004, David A. Case wrote:
> > >
> > > > On Thu, Mar 11, 2004, Dave S Walker wrote:
> > > >
> > > > > I've been trying (with no success) to figure out how to output
> > > > > the total force array from runmd.f when running sander in
> > > > > parallel. It looks like fdist() in parallel.f would do the trick,
> > > > > but I haven't been able to access it without sander freezing on
> > > > > me when I run it (it compiles fine). Is there a simple way to
> > > > > resolve this problem?
> > > >
> > > > I don't think the mailing list is going to be of much help here,
> > > > given the limited amount of information you provide. It sounds like
> > > > you modified the code somehow(?), and then it freezes. This is not
> > > > much to go on.
> > > >
> > > > To output forces to a disk file, you will probably need an
> > > > mpi_allreduce inside fdist() to make sure that all the forces get
> > > > to the master processor, so that they can be written out. If you
> > > > have already done that, and it doesn't work, you can hone and
> > > > exercise your debugging skills :-)
> > > >
> > > > ...dac
> > > >
> > > > --
> > > >
> > > > ==================================================================
> > > > David A. Case | e-mail: case_at_scripps.edu
> > > > Dept. of Molecular Biology, TPC15 | fax: +1-858-784-8896
> > > > The Scripps Research Institute | phone: +1-858-784-9768
> > > > 10550 N. Torrey Pines Rd. | home page:
> > > > La Jolla CA 92037 USA | http://www.scripps.edu/case
> > > > ==================================================================
> > >
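A small variant of dac's suggestion, as a sketch under the same
assumptions as above (mytaskid as the MPI rank, unit 77 for output):
because only the master ends up writing the file, mpi_reduce, which
delivers the sum to the root process only, is sufficient here;
mpi_allreduce also works and additionally leaves the total on every
node.

      call mpi_reduce(f, ftmp, 3*natom, MPI_DOUBLE_PRECISION,
     +                MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      if (mytaskid .eq. 0) then
         write(77,'(6e13.5)') (ftmp(i), i = 1, 3*natom)
      end if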

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu