AMBER Archive (2009)
Subject: Re: [AMBER] Problem build pmemd
From: Giorgos Lamprinidis (lamprinidis_at_pharm.uoa.gr)
Date: Wed Mar 18 2009 - 08:28:06 CDT
Dear members,
I am working with Mr Georgiadis from the University of Athens, Greece.
I am trying to run the MM-PBSA tutorial from
http://ambermd.org/tutorials/advanced/tutorial3/
On my Linux machine (P4) it works fine, so now we are trying to run it on our
supercomputer, which is an HP.
I run the pmemd that Mr Georgiadis just compiled, but I get this error when I
run the heat.in step:
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
STOP PMEMD Terminated Abnormally!
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
STOP PMEMD Terminated Abnormally!
STOP PMEMD Terminated Abnormally!
STOP PMEMD Terminated Abnormally!
| WARNING: Stack usage limited by a hard resource limit of 134217728 bytes!
| If segment violations occur, get your sysadmin to increase the limit.
MPI Application rank 0 exited before MPI_Init() with status 0
STOP PMEMD Terminated Abnormally!
Failed sending message: (Unable to connect to socket (Connection refused)).
The error appears 8 times, since I am trying to run on 8 CPUs.
My question is: by how much must I increase the stack limit?
When I run the tutorial on Linux, I see that the memory used by sander is
about 150 MB.
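Out of curiosity I sketched what I assume pmemd's unlimit_stack routine
roughly does - raising the soft stack limit with the standard setrlimit
call (this is my own sketch, not the actual pmemd code). Note that
134217728 bytes is 128 MB, and a non-root process can only raise the soft
limit up to the hard cap, which is why the warning points to the sysadmin:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current stack limits. */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    /* Raise the soft limit to the hard cap (here 134217728 = 128 MB).
     * Only a privileged process may raise rl.rlim_max itself. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}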
Thanks for your help,
Dr. George Lamprinidis
> It worked!
> Thank you very much for your response and your detailed explanation!
>
> Yiannis Georgiadis
>
> Robert Duke wrote:
>> This is a simple matter of name mangling conventions between C and
>> Fortran, and you have some combination of C and Fortran compilers that
>> are not agreeing with each other. To figure out what you need to do, try
>> the following in the pmemd directory after the compilation has been done
>> (but linking has not succeeded):
>>
>> nm pmemd.o | grep unlimit_stack
>> nm pmemd_clib.o | grep unlimit_stack
>>
>> Look at the outputs - they probably differ. You need to get the C
>> compiler to produce symbols from pmemd_clib.c that match what the
>> Fortran compiler expects (which is what you see in pmemd.o, coming from
>> the Fortran call to the C routine).
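>> For illustration only (hypothetical addresses and mangling - your
>> system will differ), a mismatch could look like:
>>
>> $ nm pmemd.o | grep unlimit_stack
>>          U unlimit_stack_
>> $ nm pmemd_clib.o | grep unlimit_stack
>> 00000000 T unlimit_stack
>>
>> Here the Fortran object references unlimit_stack_ (undefined, 'U',
>> trailing underscore) while the C object defines plain unlimit_stack
>> ('T'), so the linker cannot match them.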
>>
>> So at the top of the pmemd_clib.c source file there is a manifest
>> constants hack that handles this. Depending on which of the defined
>> constants is set by CFLAGS in config.h, the names of the pmemd C
>> library routines will be mangled differently. The current choices for
>> the defined constants are CLINK_CAPS, NO_C_UNDERSCORE, DBL_C_UNDERSCORE,
>> and nothing.
>> So pick one by adding the manifest constant to CFLAGS in the config.h
>> file - for example, you might use:
>> CFLAGS = -DNO_C_UNDERSCORE
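>>
>> For illustration, the pattern is along these lines (a sketch only - the
>> real pmemd_clib.c, and the signature of unlimit_stack, may differ):
>>
>> #if defined(CLINK_CAPS)
>> void UNLIMIT_STACK(void)      /* uppercase, no underscore */
>> #elif defined(NO_C_UNDERSCORE)
>> void unlimit_stack(void)      /* plain C name */
>> #elif defined(DBL_C_UNDERSCORE)
>> void unlimit_stack__(void)    /* two trailing underscores */
>> #else
>> void unlimit_stack_(void)     /* default: one trailing underscore */
>> #endif
>> {
>>     /* ... body that raises the process stack limit ... */
>> }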
>>
>> For some compilers it is possible to change the name mangling expected
>> by the Fortran compiler by including a Fortran compiler flag instead.
>> Determining this requires reading the compiler manuals. I don't have a
>> clue what compilers you have on this platform, but this covers all
>> possible avenues. Presuming that the MPI routines are linking properly,
>> you want to approach this problem by using the CFLAGS defined constant
>> (if MPI is not linking either, then you have to get Fortran to recognize
>> the MPI names, or recompile the MPI libraries in such a way that the
>> exported mpi_* names are recognized).
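>> (As a concrete example of such a flag on another platform - I am
>> assuming your compiler has some analogue, this is not HP-UX advice -
>> GNU gfortran's -fno-underscoring makes Fortran emit plain C-style
>> names, so unlimit_stack would be referenced without a trailing
>> underscore.)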
>>
>> Regards - Bob Duke
>>
>> ----- Original Message ----- From: "Yiannis Georgiadis"
>> <giannis_at_cc.uoa.gr>
>> To: "AMBER Mailing List" <amber_at_ambermd.org>
>> Sent: Friday, March 13, 2009 8:14 PM
>> Subject: [AMBER] Problem build pmemd
>>
>>
>>>
>>> Has anyone managed to build pmemd on the HP-UX 11i platform?
>>> I am using hpf90 and gcc, and at link time I am getting:
>>>
>>> /usr/ccs/bin/ld : Unsatisfied symbols:
>>>
>>> get_wall_time
>>> unlimit_stack
>>>
>>> Yiannis Georgiadis
>>> UoA Computer Center Sys Admin
>>>
>>>
>>
>>
>
>
_______________________________________________
AMBER mailing list
AMBER_at_ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber