AMBER Archive (2009)

Subject: [AMBER] Problem with entropy calculation of bigger system using NMODE and NAB.

From: Marek Malý (maly_at_sci.ujep.cz)
Date: Tue Jan 20 2009 - 12:24:55 CST


Dear Amber community,

I am dealing with simulations of dendrimer/(RNA,DNA) complexes.
One of the main quantities of interest is, of course, the binding free
energy, including the entropic contribution. But I recently found out
that the entropy calculation for bigger systems (say around 10000 atoms)
is a really problematic task.

I tried both NMODE and NAB on my actual system (ca. 15000 atoms), but I
did not succeed, even after, in the case of NMODE, significantly
increasing all the constants in "sizes2.h" (including maxatom) and
recompiling NMODE.

I always got the same system response:

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
nmode 000000000043227C Unknown Unknown Unknown
nmode 00000000004336B9 Unknown Unknown Unknown
nmode 0000000000404E48 Unknown Unknown Unknown
nmode 0000000000403922 Unknown Unknown Unknown
libc.so.6 00002B520BD45374 Unknown Unknown Unknown
nmode 0000000000403869 Unknown Unknown Unknown
         /opt/amber/exe/nmode -O -i nmode_com.in -o nmode_com.1.out -c sanmin_com.1.restrt -p ./g7C_DNAds.prmtop not running properly

This is in spite of the fact that I am working on "Altix XE 310" nodes
with Intel Xeon quad-core 5365 CPUs: each node has 8 cores and 16 GB of
shared memory (64-bit OS, 64-bit processors).

I verified, using a simple C routine, "alokuj", which just allocates a
given amount of memory, that a single process can allocate as much
memory as is available on the node at the given moment (usually about
14 GB).

    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
   566 mmaly 20 0 10.4g 10g 888 R 83 65.7 0:20.74 alokuj.out

 From the above it is clear that 10 GB (RES column) of the total request
of 10.4 GB (VIRT column) is actually resident in the RAM of the node for
my single alokuj.out process.
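
For completeness, here is a minimal sketch of the kind of test routine I
mean (a reconstruction of mine, not the exact alokuj source; the key
point is that memory only shows up in the RES column once it is actually
written to):

/* alokuj.c -- minimal sketch of an allocation test (my reconstruction;
 * the original routine is not reproduced here).
 * Usage: ./alokuj.out <gigabytes> */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    size_t gb    = (argc > 1) ? (size_t) atol(argv[1]) : 10;
    size_t bytes = gb * 1024UL * 1024UL * 1024UL;

    char *p = malloc(bytes);
    if (p == NULL) {
        fprintf(stderr, "malloc of %zu GB failed\n", gb);
        return 1;
    }
    /* A bare malloc() only reserves address space (the VIRT column);
     * the pages become resident (RES) only when they are touched. */
    memset(p, 1, bytes);

    printf("allocated and touched %zu GB; check top now\n", gb);
    sleep(60);   /* keep the process alive for monitoring */
    free(p);
    return 0;
}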

Here is the most important part of the system monitoring during my NMODE
attempts with the 15000-atom system (sampled every 0.02 s).

  PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32266 mmaly 20 0 660m 483m 4012 R 83 3.0 0:45.09 sander
#
32266 mmaly 20 0 660m 483m 4012 S 0 3.0 0:45.09 sander
#
32266 mmaly 20 0 600m 346m 4020 R 0 2.2 0:45.09 sander
#
32266 mmaly 20 0 130m 12m 4088 S 84 0.1 0:45.11 sander
#
32269 mmaly 20 0 37088 4784 2196 S 0 0.0 0:00.00 nmode
#
32269 mmaly 20 0 38116 6004 2364 R 0 0.0 0:00.00 nmode
#
32269 mmaly 20 0 8836m 6164 2392 R 83 0.0 0:00.02 nmode
#
32269 mmaly 20 0 8836m 6892 2400 R 83 0.0 0:00.04 nmode
#
32269 mmaly 20 0 8836m 7532 2404 R 167 0.0 0:00.08 nmode
#
32269 mmaly 20 0 8836m 8180 2404 R 83 0.0 0:00.10 nmode
#
32269 mmaly 20 0 8836m 7828 2432 D 83 0.0 0:00.12 nmode
#
32269 mmaly 20 0 8835m 8896 2580 S 83 0.1 0:00.14 nmode
#
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
nmode 000000000043227C Unknown Unknown Unknown
nmode 00000000004336B9 Unknown Unknown Unknown
nmode 0000000000404E48 Unknown Unknown Unknown
nmode 0000000000403922 Unknown Unknown Unknown
libc.so.6 00002B520BD45374 Unknown Unknown Unknown
nmode 0000000000403869 Unknown Unknown Unknown
....

 From the above rows it is clear that during the sander minimisation
660 MB is requested (VIRT) and 483 MB is actually resident in RAM (RES).
When the mm_pbsa script switches from the sander minimisation to the
nmode analysis, the total memory request is 8.84 GB, but almost none of
it ever becomes resident: the process dies while RES is still only about
9 MB. (As the alokuj sketch above illustrates, pages are counted in RES
only once they are written to, so a small RES right after a large
allocation is normal in itself; the segfault apparently occurs as soon
as nmode starts to use that memory.)

The problem is the same whether I run NMODE through the mm_pbsa script
or with the direct command:

/opt/amber/exe/nmode -O -i nmode_com.in -o nmode_com.1.out -c sanmin_com.1.restrt -p ./g7C_DNAds.prmtop

If anybody has an idea where the problem could be, I would appreciate it
a lot.
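
One possibility I cannot rule out myself (this is only a guess): the
"forrtl" traceback suggests the Intel Fortran compiler, and on x86-64 a
binary whose static data (e.g. the big arrays dimensioned via sizes2.h)
exceeds 2 GB must be built with the medium memory model, otherwise it
can crash with exactly this kind of SIGSEGV. If anyone can confirm,
rebuilding nmode with something along these lines might be worth a try
(the exact place to put the flags in the AMBER build files will differ):

ifort -mcmodel=medium -shared-intel ...   (compile and link flags for nmode)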

So I switched to NAB, but there is an analogous problem, which ends with
the statement:

"allocation failure in vector: nh = 2159739729"

Here is the whole output from NAB:

############################### NAB OUTPUT ###############################
Reading parm file (g7C_DNAds.prmtop)
title:

         mm_options: cut=999.
         mm_options: ntpr=1
         mm_options: nsnb=99999
         mm_options: diel=C
         mm_options: gb=0
         mm_options: dielc=1.0
       iter      Total        bad        vdW      elect   nonpolar    genBorn       frms
ff:       0  672691.69   40992.92    8545.93  623152.83       0.00       0.00   3.82e+01
ff:       1  672691.69   40992.92    8545.93  623152.83       0.00       0.00   3.82e+01
ff:       2  672691.68   40992.92    8545.93  623152.83       0.00       0.00   3.82e+01
         mm_options: ntpr=1
allocation failure in vector: nh = 2159739729

#####################################################################################################

Here is the input file:

############################# NAB INPUT FILE #############################

    molecule m;

  float x[46473], fret;

  m = getpdb( "g7C_DNAds_fin.pdb");

  readparm( m, "g7C_DNAds.prmtop");

  getxyz( "g7C_DNAds.inpcrd", 15491, x );

  mm_options( "cut=999., ntpr=1, nsnb=99999, diel=C, gb=0, dielc=1.0" );

  mme_init( m, NULL, "::Z", x, NULL);

  setxyz_from_mol( m, NULL, x );

  // conjugate gradient minimization
   conjgrad(x, 3*m.natoms, fret, mme, 0.1, 0.001, 2 );

  // Newton-Raphson minimization
   mm_options( "ntpr=1" );
   newton( x, 3*m.natoms, fret, mme, mme2, 0.00000001, 0.0, 2 );

// get the normal modes:
   nmode( x, 3*m.natoms, mme2, 0, 0, 0.0, 0.0, 0);

##########################################################################################################

And here again is the top memory scan (sampled every 0.02 s).

top - 15:43:07 up 34 days, 8:41, 2 users, load average: 8.58, 8.15, 8.01
Tasks: 176 total, 3 running, 173 sleeping, 0 stopped, 0 zombie
Cpu(s): 100.0% us, 0.0% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0%
si
Mem: 16461092k total, 2845656k used, 13615436k free, 5008k buffers
Swap: 0k total, 0k used, 0k free, 1695072k cached

   PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
  5215 mmaly 20 0 1885m 931m 1148 R 83 5.8 0:43.14 makej

  5215 mmaly 20 0 1885m 931m 1148 R 83 5.8 0:43.16 makej

  5215 mmaly 20 0 1885m 931m 1148 R 0 5.8 0:43.16 makej

  5215 mmaly 20 0 1885m 931m 1148 R 125 5.8 0:43.19 makej

  5215 mmaly 20 0 0 0 0 R 42 0.0 0:43.20 makej

  5215 mmaly 20 0 0 0 0 R 83 0.0 0:43.22 makej

  5215 mmaly 20 0 0 0 0 S 0 0.0 0:43.22 makej

 From the above it is clear that the conjugate gradient minimisation
(just 2 steps) finishes without problems and that the total memory
requirement is 1.9 GB (VIRT), of which 930 MB is resident in the RAM of
the node. But unfortunately the calculation crashes with "allocation
failure in vector: nh = 2159739729" when it reaches the Newton-Raphson
minimisation part. From the above it can also clearly be seen that the
memory available on the node at that moment was 13.5 GB.
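
One observation that may be relevant (my own arithmetic, so please check
it): nh = 2159739729 = 46473^2 = (3 x 15491)^2, i.e. exactly the number
of elements of the full 3N x 3N Hessian of my 15491-atom complex. If NAB
tries to allocate that as a single vector of doubles, the request is
about 17.3 GB (16.1 GiB), more than the 16 GB the whole node has; the
element count also exceeds the 32-bit signed integer maximum of
2147483647, so 32-bit index arithmetic would overflow on top of that.
A tiny check:

/* hessian_size.c -- back-of-the-envelope check of the NAB failure,
 * assuming (my assumption) that the full 3N x 3N Hessian is stored
 * as one vector of doubles. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    long long natoms = 15491;             /* atoms in g7C_DNAds       */
    long long n3     = 3 * natoms;        /* 46473 degrees of freedom */
    long long nh     = n3 * n3;           /* Hessian elements         */
    double    gib    = (double) nh * sizeof(double)
                       / (1024.0 * 1024.0 * 1024.0);

    printf("nh      = %lld (failure message says 2159739729)\n", nh);
    printf("memory  = %.1f GiB of doubles (the node has 16 GB)\n", gib);
    printf("nh > INT_MAX (%d)? %s\n", INT_MAX, nh > INT_MAX ? "yes" : "no");
    return 0;
}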

Of course I tried to perform the same calculation without the
Newton-Raphson minimisation part, but the result is the same:

Reading parm file (g7C_DNAds.prmtop)
title:

         mm_options: cut=999.
         mm_options: ntpr=1
         mm_options: nsnb=99999
         mm_options: diel=C
         mm_options: gb=0
         mm_options: dielc=1.0
       iter      Total        bad        vdW      elect   nonpolar    genBorn       frms
ff:       0  672691.69   40992.92    8545.93  623152.83       0.00       0.00   3.82e+01
ff:       1  672691.69   40992.92    8545.93  623152.83       0.00       0.00   3.82e+01
ff:       2  672691.68   40992.92    8545.93  623152.83       0.00       0.00   3.82e+01
         mm_options: ntpr=1
allocation failure in vector: nh = 2159739729

So even if the results obtained with NAB are a little more optimistic
(no segmentation fault), in the end I have to say that neither NMODE nor
NAB currently solves my scientific problem.

I should say that before posting this contribution I spent a lot of time
reading the relevant threads in the AMBER mailing list, but I found only
a few people with a problem similar to mine, and no ideas for a
solution.

So, to finish, I have two general questions:

#1 - Regarding my particular memory problems: does anyone have an idea
what the core of my troubles could be? This is just to understand why
things do not work the way they (at least theoretically) should.

#2 - Regarding the calculation of the entropy of big systems in general:

A) Is it even sensible to try to calculate the entropy of a whole system
composed of 15000 atoms? Could someone, for example, roughly estimate
the necessary time (on a 3 GHz processor) for the normal mode analysis
alone, since I assume that 99% of the minimisation can be done in
parallel (using sander)? (I know I can estimate it myself using several
smaller systems - I am working on it, but again have some problems :)) )
My own crude scaling sketch follows question B below.

B) If that is hopeless (the normal mode analysis of such a system could
simply take far too long), are there any alternatives for estimating,
even if less precisely, the entropy change of the receptor-ligand system
(between the bound and unbound states)? Does AMBER provide any such
alternative(s)? Or is it possible with some alternative software? (There
will probably be a problem with force field dependence: if one
calculates the enthalpy contribution to the binding free energy with a
given force field, it is presumably necessary to use the same force
field for the entropy calculation, even in alternative software.)
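
Here is the crude estimate promised above (all assumptions are mine: a
dense eigensolver needing roughly (3N)^3 floating-point operations, and
an effective sustained rate of ~1 Gflop/s on one 3 GHz core):

/* nmode_time.c -- crude time estimate for the normal mode analysis.
 * Assumptions (mine, not from the AMBER docs): dense eigensolver,
 * ~(3N)^3 flops, ~1 Gflop/s sustained on a single 3 GHz core. */
#include <stdio.h>

int main(void)
{
    long long natoms = 15491;                   /* my complex               */
    long long n3     = 3 * natoms;              /* 46473 degrees of freedom */
    double    flops  = (double) n3 * n3 * n3;   /* ~(3N)^3                  */
    double    hours  = flops / 1.0e9 / 3600.0;  /* at ~1 Gflop/s            */

    printf("3N = %lld: ~%.1e flops, ~%.0f hours at 1 Gflop/s\n",
           n3, flops, hours);
    return 0;
}

This prints roughly a day of CPU time for my complex, on top of the
~16 GiB Hessian from the check above, which suggests to me that memory
rather than time is the immediate obstacle at 15000 atoms.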

     Any idea (or relevant link) would be highly appreciated. I hope I
am not the only one in this community who would be pleased by some
sensible solutions...

Here, for eventual testing purposes, are my files (complex + NAB file):

http://physics.ujep.cz/~mmaly/AMBER/2009_01_20/

    Thanks in advance for any reasonable ideas!

                  Marek

-- 
This message was created with Opera's revolutionary e-mail client:
http://www.opera.com/mail/

_______________________________________________ AMBER mailing list AMBER_at_ambermd.org http://lists.ambermd.org/mailman/listinfo/amber