AMBER Archive (2002)

Subject: Re: allocation memory error

From: Ioana Cozmuta (ioana_at_nas.nasa.gov)
Date: Tue Oct 22 2002 - 16:52:56 CDT


Hi amber users,

Just to follow up on the problem I had last week. The memory-allocation
failure originated from the fact that amber7 is by default built with the
-n32 compiler option, which provides enough address space for small-to-medium
problems and small numbers of processors.
I recompiled the code after replacing -n32 with -64 in the MACHINE file and
reran the cases that failed before. This time they succeeded.
I was advised that in all cases where the number of MPI processes is
above 100, it is a good choice to build amber with a 64-bit address space.
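For anyone hitting the same error, here is a back-of-the-envelope sketch of
why the allocation cannot succeed under -n32. It assumes the ipairs list
stores 4-byte integers (an assumption, not checked against the sander
source); the pair count is taken from the error message quoted below.

```python
# Why a ~600-million-entry pair list does not fit in a 32-bit (-n32)
# address space. Assumes 4-byte integers for ipairs (assumption).

npairs = 604_076_661             # from the sander error message below
bytes_per_int = 4                # assumed INTEGER*4 storage
needed = npairs * bytes_per_int  # bytes needed for ipairs alone

addr_limit_32 = 2**31            # ~2 GiB addressable under -n32

print(f"ipairs needs ~{needed / 2**30:.2f} GiB")
print(f"-n32 limit  ~{addr_limit_32 / 2**30:.2f} GiB")
print("fits under -n32?", needed < addr_limit_32)
```

The list alone needs roughly 2.25 GiB, which already exceeds the ~2 GiB a
32-bit address space can map, regardless of how much physical memory the
machine has; hence the switch to -64.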

Thanks,
Ioana

On Fri, 18 Oct 2002, David A. Case wrote:

> On Fri, Oct 18, 2002, Ioana Cozmuta wrote:
> >
> > I'm getting the following error when running a minimization in sander.
> >
> > | Flags: SGIFFT MEM_ALLOC MPI RLE ROWAT HAS_FTN_ERFC
> > | NONPERIODIC ntb=0 and igb=0: Setting up nonperiodic simulation
> > | New format PARM file being parsed.
> > | Version = 1.000 Date = 10/07/02 Time = 12:55:44
> > NATOM = 34759 NTYPES = 16 NBONH = 18344 MBONA = 16779
> > NTHETH = 36099 MTHETA = 22715 NPHIH = 68670 MPHIA = 41972
> > NHPARM = 0 NPARM = 0 NNB = 181415 NRES = 2869
> > NBONA = 16779 NTHETA = 22715 NPHIA = 41972 NUMBND = 43
> > NUMANG = 88 NPTRA = 40 NATYP = 29 NPHB = 1
> > IFBOX = 0 NMXRS = 24 IFCAP = 0 NEXTRA = 0
> >
> > Failed to allocate memory for ipairs: 604076661
> >
> > The last suggestion was to find a cluster with at least 600 MB of physical
> > memory. I am running this job on an Origin cluster with 200 MB of memory
> > per CPU. I've tried running this job on an increased number of processors,
> > from 8 up to 128, but it does not seem to make any difference.
>
> With an infinite cutoff, you will need space for a very large number of
> non-bonded pairs (600 million in this case). Even if you find enough
> memory, it seems to me your simulations will be extremely slow. Do you
> really need such a large cutoff?
>
> For nonperiodic systems, the memory requirements (per node) are the same
> no matter how many processors you ask for. I'm cc-ing this to Mike Crowley,
> who wrote the nonperiodic list-builder, because I don't understand why that
> should be.
>
> Mike: in locmem.f:
>
> c
> c --- cap at maximum possible number of pairs:
> c
>       natom_float = natom
>       n2_float = natom_float*(natom_float-1.d0)/2.d0
>       if( maxpr_float .gt. n2_float ) maxpr_float = n2_float
> c
> c --- check that MAXPR fits into 32 bit integer:
> c
>       if( maxpr_float .lt. 2.147d9 ) then
>          MAXPR = maxpr_float
>       else
>          write(6,'(a,e12.2)' )
>      $        'Unreasonably large value for MAXPR: ',maxpr_float
>          call mexit(6,1)
>       end if
> #ifdef MPI
>       if(periodic.eq.1)MAXPR = MAXPR/numtasks
> #endif
>
> Why is it that we only divide MAXPR by numtasks for periodic simulations?
> (I could examine the code, but hoping you will remember right away).
>
> ..thx....dac
>
> --
>
> ==================================================================
> David A. Case | e-mail: case_at_scripps.edu
> Dept. of Molecular Biology, TPC15 | fax: +1-858-784-8896
> The Scripps Research Institute | phone: +1-858-784-9768
> 10550 N. Torrey Pines Rd. | home page:
> La Jolla CA 92037 USA | http://www.scripps.edu/case
> ==================================================================
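The numbers in the thread above are self-consistent: with NATOM = 34759 and
no cutoff, the all-pairs cap natom*(natom-1)/2 from locmem.f reproduces the
ipairs figure in the error message exactly. A minimal sketch of that
arithmetic (mirroring the quoted Fortran, here in Python for illustration):

```python
# Reproduce the MAXPR cap from locmem.f for this system.
natom = 34_759                 # NATOM from the prmtop summary above
n2 = natom * (natom - 1) // 2  # cap at maximum possible number of pairs

print(n2)                      # matches "Failed to allocate memory
                               # for ipairs: 604076661"
print(n2 < 2.147e9)            # passes the 32-bit-integer sanity check
```

Since the value passes the 2.147d9 check, the failure is not integer
overflow in MAXPR but simply that the resulting allocation exceeds the
32-bit (-n32) address space.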