AMBER Archive (2004)
Subject: Re: AMBER: Question on Amber7...
From: Pmartin (pmartin_at_erskine.edu)
Date: Tue Apr 06 2004 - 16:44:34 CDT
Thanks for your help. I ran the general tests for the suite on xleap and
sander; both passed, except for a rounding difference in sander, which
the user guide says is not a problem. I then tried running the
single-processor MD run, but it gave me this:
Rohan 7# mpirun -np 1 ./sander -O -i moldyin -o yayout -p yaytop -c redo
-r retry
MPI: Program ./sander, Rank 0, Process 2057 called
MPI_Abort(<communicator>, 1)
MPI: --------stack traceback-------
PC: 0x5ddb100 MPI_SGI_stacktraceback in /usr/lib32/libmpi.so
PC: 0x5e02a70 PMPI_Abort in /usr/lib32/libmpi.so
PC: 0x5e32968 pmpi_abort_ in /usr/lib32/libmpi.so
PC: 0x101064dc mexit in ./sander
PC: 0x1008b874 compute_nfft in ./sander
PC: 0x10091d8c read_ewald in ./sander
PC: 0x1008e178 load_ewald_info in ./sander
PC: 0x10048dc4 mdread1 in ./sander
PC: 0x10008c58 sander in ./sander
PC: 0xad39d74 main in /usr/lib32/libftn.so
MPI: dbx version 7.3.4 (86441_Nov11 MR) Nov 11 2002 11:31:55
MPI: Process 2057 (sander) stopped at [__waitsys:24 +0x8,0xfa60f48]
MPI: Source (of
/xlv41/6.5.21m/work/irix/lib/libc/libc_n32_M4/proc/waitsys.s) not
available for Process 2057
MPI: > 0 __waitsys(0x0, 0x80a, 0x7ffefdb0, 0x3, 0x2, 0x61, 0x69, 0x6e)
["/xlv41/6.5.21m/work/irix/lib/libc/libc_n32_M4/proc/waitsys.s":24,
0xfa60f48]
MPI: 1 _system(0x7ffefe80, 0x80a, 0x7ffefdb0, 0x3, 0x2, 0x61, 0x69,
0x6e) ["/xlv41/6.5.21m/work/irix/lib/libc/libc_n32_M4/stdio/system.c":116,
0xfa6d398]
MPI: 2 MPI_SGI_stacktraceback(0x0, 0x80a, 0x7ffefdb0, 0x3, 0x2, 0x61,
0x69, 0x6e)
["/xlv4/mpt/1.8/mpi/work/4.3/lib/libmpi/libmpi_n32_M4/adi/sig.c":242,
0x5ddb268]
MPI: 3 PMPI_Abort(0x0, 0x1, 0x7ffefdb0, 0x3, 0x2, 0x61, 0x69, 0x6e)
["/xlv4/mpt/1.8/mpi/work/4.3/lib/libmpi/libmpi_n32_M4/misc/abort.c":75,
0x5e02a70]
MPI: 4 pmpi_abort_(0x0, 0x80a, 0x7ffefdb0, 0x3, 0x2, 0x61, 0x69, 0x6e)
["/xlv4/mpt/1.8/mpi/work/4.3/lib/libmpi/libmpi_n32_M4/sgi77/abort77.c":25,
0x5e32968]
MPI: 5 mexit(0x1018fb8c, 0x1018fb88, 0x7ffefdb0, 0x3, 0x2, 0x61, 0x69,
0x6e) ["/usr/local/amber7/src/sander/_mexit_.f":65, 0x101064dc]
MPI: 6 compute_nfft(0x7fff1030, 0x1036b684, 0x7ffefdb0, 0x3, 0x2, 0x61,
0x69, 0x6e) ["/usr/local/amber7/src/sander/_ew_setup_.f":387, 0x1008b874]
MPI: 7 read_ewald(0x7fff12a0, 0x7fff12a8, 0x7fff12b0, 0x3, 0x20000000,
0x61, 0x69, 0x6e) ["/usr/local/amber7/src/sander/_ew_setup_.f":3660,
0x10091d8c]
MPI: 8 load_ewald_info(0x0, 0x1036d930, 0x1036bf58, 0x3, 0x2, 0x61,
0x69, 0x6e) ["/usr/local/amber7/src/sander/_ew_setup_.f":2276, 0x1008e178]
MPI: 9 mdread1(0x0, 0x80a, 0x7ffefdb0, 0x3, 0x2, 0x61, 0x69, 0x6e)
["/usr/local/amber7/src/sander/_mdread_.f":799, 0x10048dc4]
MPI: 10 sander(0x1, 0x28000000, 0x20, 0x28000000, 0x20, 0x2002000, 0x1,
0x28000000) ["/usr/local/amber7/src/sander/_sander_.f":1061, 0x10008c58]
MPI: 11 main(0x0, 0x80a, 0x7ffefdb0, 0x3, 0x2, 0x61,
0x69, 0x6e) ["/j10/mtibuild/v741m/workarea/v7.4.1m/libF77/main.c":101,
0xad39d74]
MPI: 12 __start()
["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177,
0x10008868]
MPI: -----stack traceback ends-----
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job
I am just learning the ropes of the program, so I apologize if this
question is stupid, but do you have any ideas as to what might be going
wrong? You said that an input file would help you figure it out, so here
is the input file I am using. It is the sample given in the user's
guide, with a few numbers rearranged to try to make it work (no
success). My professor and I have both tried to figure out what is
wrong, but we are out of ideas. Input file:
moldyin file, first dynamics run
&cntrl
imin=0, irest=0, ntx=2,
ntt=1, temp0=300.0, tautp=0.2,
ntp=0,
ntb=1, ntc=2, ntf=2,
nstlim=500,
ntwe=100, ntwx=100, ntpr=100,
&end
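(For anyone reading the trace above: it bottoms out in compute_nfft
during Ewald setup, which chooses the PME FFT grid from the box
dimensions. The following is a minimal, hypothetical Python sketch of
that kind of grid sizing; the exact smooth-factor set, default spacing,
and failure behavior are assumptions for illustration, not taken from
the AMBER 7 source.)

```python
import math

# Hypothetical sketch of a PME grid-sizing routine in the spirit of
# sander's compute_nfft: round each box dimension up to the nearest
# FFT-friendly size whose only prime factors are 2, 3, and 5.

def is_235_smooth(n):
    """Return True if n has no prime factor other than 2, 3, or 5."""
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def compute_nfft(box_length, grid_spacing=1.0):
    """Smallest 2,3,5-smooth grid count giving spacing <= grid_spacing."""
    if box_length <= 0.0:
        # An unset or zero box (e.g. restart coordinates without box
        # info while ntb=1) would make grid selection impossible --
        # one plausible way a routine like this could abort.
        raise ValueError("box length must be positive; check the box "
                         "info in the restart file and the ntb setting")
    n = max(1, math.ceil(box_length / grid_spacing))
    while not is_235_smooth(n):
        n += 1
    return n

print(compute_nfft(31.3))  # a ~31 A box edge -> 32-point grid
```

If a sketch like this is roughly what the real routine does, a crash in
it during input reading points at the box information fed to it rather
than at the MD parameters themselves.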
I appreciate any help.
Thanks,
pmartin
David A. Case wrote:
>On Thu, Apr 01, 2004, Pmartin wrote:
>
>
>
>>I am working on Amber 7 and as I try to run an md simulation, i get this:
>>
>>Rohan 8# mpirun -np 2 ./sander -O -i moldyin -o finout -p yaytop -c redo
>>-r retry
>>MPI: Program ./sander, Rank 0, Process 1484 called
>>MPI_Abort(<communicator>, 1)
>>
>>MPI: --------stack traceback-------
>>PC: 0x5ddb100 MPI_SGI_stacktraceback in /usr/lib32/libmpi.so
>>PC: 0x5e02a70 PMPI_Abort in /usr/lib32/libmpi.so
>>PC: 0x5e32968 pmpi_abort_ in /usr/lib32/libmpi.so
>>PC: 0x101064dc mexit in ./sander
>>PC: 0x1008b874 compute_nfft in ./sander
>>
>>
>
>This looks like the same error reported by Anshul Awasthi on March 27 (check
>the amber mail archives).
>
>The first thing we need is confirmation that you have run the test suite
>on this machine. That will help decide whether or not the failure is
>specific to your problem (in particular, try the dhfr test case, for example).
>
>If the standard test cases work, then try a non-parallel run with your inputs.
>That will at least narrow things down. Some user that has this problem is
>going to have to be willing to post their input files. This is the first time
>in two years that this problem has come up, which only means that we are
>unlikely to be able to help much with just a stack trace....
>
>...regards...dac
>
>
>
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu