AMBER Archive (2007)

Subject: Re: AMBER: amber 9 - output of forces

From: Eric Shamay (eric.shamay_at_gmail.com)
Date: Mon Mar 26 2007 - 16:08:58 CST


After a bit of work with the source I've come to the following stopping
point:

In digging through the reflector a bit more I found DSW's message (and your
response) as follows:

------

> In runmd.f, I had been trying to replace the velocity array with the force
> array when making the call to CORPAC(). The forces (and perhaps the
> velocities and coordinates) in runmd.f I suspect are distributed among the
> nodes and I had been trying to use a reduce statement to recover all the
> forces before they're written over in Step 3. Doing this has failed to
> reproduce the array I get with 1 cpu.

Did you try just forcing the code to go through the "if( init==3 )" block of
code in subroutine fdist()? That should put all of the forces onto all nodes,
including the master node, where they can be printed out.
....dac
--------

I received the code from DSW, and it looks like a handful of very manageable
changes were enough to recover the force array. I adapted those changes,
noting that various pieces have been altered in the Amber 9 source: the
velocity-archive section in Amber 9 is much more terse and contains only a
single call to corpac(). I made the following change in runmd.f:
---------
      ! Velocity archive -> now the force archive, called from the master node
      if (master) then
         if (ivdump) call corpac(f,1,nrx,MDVEL_UNIT,loutfm)
      end if
---------
So, as in DSW's code, the call is made on the master node. Note also that I'm
passing in the 'f' force array rather than the velocities.
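
As a quick sanity check along the lines dac suggested (printing a few values
straight from runmd()), something like the following just before that corpac
call should show whether 'f' holds sensible numbers -- untested, and it
assumes an integer loop counter j is in scope:
---------
      if (master .and. ivdump) then
         ! untested: dump the first few force components to the output
         ! just before corpac writes the archive record
         write(6,'(a,6(1x,f13.7))') 'f(1:6) =', (f(j), j=1,6)
      end if
---------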

Beyond that, I changed dynlib.f to reflect the added precision I need:
-----------
      ! both branches widened to 7 digits after the decimal point
      ! (the stock format gives only 3)
      if (three_digit_fractional_part) then
         write(nf,'(9(1X,F12.7))') (r(j),j=istart,n)
      else
         write(nf,'(9(1X,F13.7))') (r(j),j=istart,n)
      end if
-----------
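
For reference, here's a tiny standalone check (not part of sander) of what
those two edit descriptors produce -- nine values per record, 13-14
characters per field, 7 digits after the decimal:
-----------
program fmt_check
   implicit none
   double precision :: r(9)
   integer :: j
   r = 1.2345678d0
   ! same edit descriptors as the modified corpac lines above
   write(*,'(9(1X,F12.7))') (r(j), j=1,9)
   write(*,'(9(1X,F13.7))') (r(j), j=1,9)
end program fmt_check
-----------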

And lastly, I modified parallel.f to force the 'if (init == 3)' code path by
commenting out the 'if (init /= 3)' section. The forced block itself is
unaltered:
-----------
         call trace_mpi('mpi_allreduce', &
               3*natom,'MPI_DOUBLE_PRECISION',mytaskid)
         call mpi_allreduce(f, forcetmp, 3*natom, &
               MPI_DOUBLE_PRECISION,mpi_sum,commsander,ierr)
         do i=1, 3*natom
            f(i) = forcetmp(i)
         end do
----------

The result of all this is the following:
* when running sander serially (no MPI) on one cpu, the force array is
recovered, but it prints with only 3 decimal places of precision, as if I
hadn't modified the dynlib.f code at all.

* when running in parallel (sander.MPI), the force values are not recovered
and the velocities are printed instead, but with the new formatting.

Together these tell me that the corpac call is being made in parallel runs,
but that I'm not doing something necessary with the MPI reduction of the
force array from the other nodes. I'm still digging around, but I'm afraid my
knowledge of the MPI infrastructure is poor. Which crucial step have I missed
in getting the force array into the corpac call? Or am I missing the whole
point, and corpac is never actually called for MPI runs?
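
For concreteness, the sort of thing I imagine is still missing is an explicit
reduction onto the master right before the archive write -- an untested
sketch, assuming a 3*natom scratch array like forcetmp and the commsander
communicator are reachable from runmd.f:
-----------
#ifdef MPI
      ! untested: sum the per-node force contributions onto the master so
      ! that corpac sees the complete force array
      call mpi_reduce(f, forcetmp, 3*natom, MPI_DOUBLE_PRECISION, &
            mpi_sum, 0, commsander, ierr)
      if (master) f(1:3*natom) = forcetmp(1:3*natom)
#endif
      if (master) then
         if (ivdump) call corpac(f,1,nrx,MDVEL_UNIT,loutfm)
      end if
-----------
(An allreduce, as in the forced fdist() block above, would do just as well; a
plain reduce onto the master is enough here since only the master writes the
archive.)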

~Eric Shamay

On 3/23/07, David A. Case <case_at_scripps.edu> wrote:
>
> On Fri, Mar 23, 2007, Eric Shamay wrote:
> >
> > if (ivdump) call corpac(f,1,nrx,MDVEL_UNIT,loutfm)
> >
> > the only change being that I'm feeding corpac my force array 'f' instead
> > of the velocities 'v'
> >
>
> sounds OK to me...of course you need to check that the results make sense,
> perhaps by printing a few values to stdout directly from runmd() itself.
>
> ...dac
>

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu