AMBER Archive (2007)

Subject: Re: AMBER: amber 9 - output of forces

From: Eric Shamay (eric.shamay_at_gmail.com)
Date: Tue Mar 27 2007 - 14:36:34 CST


For the sake of adding some concrete information about amber9 force output
to the mail archive, here is my final alteration to the sander source code.
This is something so fundamental that I'm surprised we don't yet have an
option to write a force file just as we do for coordinates and velocities.
It's a minor alteration, so I'm curious to hear why it isn't in sander
releases by now.

But I digress -

The overall idea is to take the force array and dump it where the velocity
archive used to go. I'm not personally interested in velocities at the
moment, so giving up the velocity output for the sake of writing out
forces is a quick and easy trade-off.

In runmd.f
-------------
I've moved the velocity archive section (which consists solely of a call to
corpac()) from step 9 to the very beginning of step 3, before anything is
done to alter the force array. Here is the altered code:

   !-------------------------------------------------------------------
   ! Step 3: update the positions, putting the "old" positions into F:
   !-------------------------------------------------------------------

   ! Force archive happens first - before we change out the old
   ! forces for positions ~ESS
   ! added 'master' switch
   ! added ntwv and nstep/onstep checking
   if (master) then
      if (ntwv>0) then
         if (mod(nstep,ntwv).eq.0 .and. onstep) then

            !call corpac(f,1,nrx,MDVEL_UNIT,loutfm)
            call corpac(f,1,nrx,13,loutfm)
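            ! (13 is presumably the literal value of MDVEL_UNIT, i.e.
            ! the Fortran unit the mdvel file is opened on)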

         end if
      end if
   end if

------------------
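
Since the patched block reuses the existing velocity-archive controls, the
force output is driven by the usual ntwv flag in mdin (ntwv = 100 below
makes the patched corpac() call fire every 100 steps) and ends up in
whatever file sander's -v argument names. A minimal sketch of a run, with
placeholder step counts and filenames rather than values from my actual
runs:

    force-output test run
    &cntrl
      imin = 0, nstlim = 50000, dt = 0.002,
      ntwv = 100,
    /

    mpirun -np 8 sander.MPI -O -i mdin -p prmtop -c inpcrd -o mdout -v frc.mdvel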

In parallel.f (subroutine fdist()) I've commented out the code (around line
320) to force an mpi_allreduce that gathers all the forces for archiving.
---------------

! if (init /= 3 &
!#ifdef PIMD
! .AND.ineb==0 ) then
!#else
! ) then
!#endif
! ---Do a distributed sum of the force array:

! call fsum(f,forcetmp)
!
! else

         ! Due to lack of parallelization in the initial parts
         ! of runmd in the init=3 case, the more efficient
         ! reduce_scatter needs to be replaced with an mpi_allreduce call;
         ! this is also required for NEB simulations so that all the
         ! forces are available for dot products

         call trace_mpi('mpi_allreduce', &
               3*natom,'MPI_DOUBLE_PRECISION',mytaskid)
         call mpi_allreduce(f, forcetmp, 3*natom, &
               MPI_DOUBLE_PRECISION,mpi_sum,commsander,ierr)
         do i=1, 3*natom
            f(i) = forcetmp(i)
         end do

! end if

   end if ! (mpi_orig)

   call trace_exit( 'fdist' )
   return
end subroutine fdist
--------------------------
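
For context on why forcing the allreduce path matters: reduce_scatter
leaves each MPI rank holding only its own slice of the summed force array,
while mpi_allreduce gives every rank (in particular the master, which does
the archiving) the complete array. Here is a standalone sketch of that
pattern; it's a toy program illustrating the MPI call, not sander code:

      program allreduce_demo
      implicit none
      include 'mpif.h'
      integer, parameter :: n = 9          ! stands in for 3*natom
      double precision :: f(n), ftmp(n)
      integer :: ierr, rank

      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)

      ! each rank starts with its partial contribution to the forces
      f = dble(rank + 1)

      ! sum across ranks and hand the full result to ALL ranks,
      ! unlike reduce_scatter, which leaves each rank only a slice
      call mpi_allreduce(f, ftmp, n, MPI_DOUBLE_PRECISION, &
            MPI_SUM, MPI_COMM_WORLD, ierr)
      f = ftmp

      if (rank == 0) print *, 'summed forces on rank 0:', f

      call mpi_finalize(ierr)
      end program allreduce_demo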

And lastly, in dynlib.f I modified the corpac() subroutine to change the
precision of the output.
---------------------------

      if (three_digit_fractional_part) then
         write(nf,'(9F12.7)') (r(j),j=istart,n)
      else
         write(nf,'(9F13.7)') (r(j),j=istart,n)
      end if
   end if

   return
end subroutine corpac
-----------------
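
With that format, each archived frame is just the 3*natom force components
written nine to a line, so post-processing is simple. A minimal reader
sketch, assuming the 9F13.7 branch is the one taken, a one-line title at
the top of the file as in the usual mdcrd/mdvel layout, and placeholder
values for the atom count and filename:

      program read_forces
      implicit none
      integer, parameter :: natom = 3000   ! placeholder: your atom count
      double precision :: f(3*natom)
      character(len=80) :: title
      integer :: ios, j, nframe

      open(13, file='frc.mdvel', status='old')   ! placeholder filename
      read(13,'(a80)') title                     ! one-line title record
      nframe = 0
      do
         ! format reversion reads 3*natom values across as many
         ! 9-values-per-line records as needed
         read(13,'(9F13.7)', iostat=ios) (f(j), j=1, 3*natom)
         if (ios /= 0) exit
         nframe = nframe + 1
         ! ... do something with the frame of forces in f ...
      end do
      close(13)
      print *, nframe, 'force frames read'
      end program read_forces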

After recompilation I have forces sent to wherever the mdvel argument (-v)
points in the call to sander.MPI. Compilation also works for the serial
sander. I haven't thoroughly tested this, but preliminary runs seem to be
doing fine. Any feedback is greatly appreciated, especially from someone
who knows the code and can spot a fundamental flaw in what I've done.

~Eric Shamay

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu