AMBER Archive (2009)
Subject: Re: [AMBER] PMEMD 9 on MVAPICH / Infiniband problem
From: Robert Duke (rduke_at_email.unc.edu)
Date: Tue Mar 31 2009 - 07:21:22 CDT
Okay, this is unlikely to be a pmemd-specific problem, though it is possible
that the mix of MPI calls pmemd makes would cause more problems; certainly it
puts more demand on the system. Have you tried doing exactly the same thing
with sander.MPI? The biggest differences between the two are that 1) sander
will use more memory and 2) sander won't do as much async point-to-point MPI
(it uses more collectives). Sander is also slower, but the effect of that
and of total interconnect loading is a bit hard to predict. At any rate,
chances are you will see the problem either way.

What happens if you run one 32-cpu job with pmemd? What you are describing
could well be caused by an intermittent interconnect hardware problem that
you only hit when you happen to allocate the problem node. You have to find
the faulty interconnect in that case; those are sysadmin problems, not amber
problems. But jobs disappearing and nodes hanging generally indicate
something messy on the machine; in the absence of any stderr output
indicating that somebody aborted the run, the cause is likely in the system
somewhere. Are the cpus set up for exclusive use? There is a lot here we can
only begin to guess at.

Please be sure to be running one of the standard amber benchmarks for all of
this - that way you know the problem is not coming from your input, and you
also have some guidelines for what to expect in terms of performance.
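
As a sketch (paths, the SGE parallel environment name, and the benchmark
file names below are placeholders for whatever your site and AMBER install
actually use), a job script that runs the same standard benchmark under both
codes might look like:

    #!/bin/bash
    #$ -S /bin/bash
    #$ -cwd
    #$ -pe mpi 16                  # hypothetical PE name - use your site's

    # run one of the stock benchmarks so the input is known-good
    cd $AMBERHOME/benchmarks/jac   # adjust to wherever the benchmark lives

    $MPIHOME/bin/mpirun_rsh -np $NSLOTS -hostfile $TMPDIR/machines \
        $AMBERHOME/exe/pmemd -O -i mdin -o mdout.pmemd -p prmtop -c inpcrd

    $MPIHOME/bin/mpirun_rsh -np $NSLOTS -hostfile $TMPDIR/machines \
        $AMBERHOME/exe/sander.MPI -O -i mdin -o mdout.sander -p prmtop -c inpcrd

If sander.MPI misbehaves the same way when two such jobs share the cluster,
that points at the system rather than at pmemd.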
Regards - Bob Duke
----- Original Message -----
From: "fohm fohm" <fohmsub_at_gmail.com>
To: "AMBER Mailing List" <amber_at_ambermd.org>
Sent: Tuesday, March 31, 2009 7:04 AM
Subject: Re: [AMBER] PMEMD 9 on MVAPICH / Infiniband problem
> Hello Ross,
>
> I'm working with Nick on this problem and can try to fill in some details
> ..
>
> Firstly, by "single pmemd job" we mean a multiprocessor pmemd job running
> alone on the cluster, not a serial pmemd job. Our problems arise when we
> try
> to run two (multiprocessor) pmemd jobs on the cluster at the same time. To
> give an example: if we submit a single 16-cpu pmemd job via SGE we
> get reasonable pmemd performance. Only when we simultaneously submit a
> second 16-cpu pmemd job (the cluster has >32 cpus) do the problems start
> ...
>
> Secondly, we don't see any error messages: the pmemd output files are
> there
> and look normal, and the SGE logfiles don't report any problems either. As
> Nick said, what happens is that pmemd jobs disappear from the queueing
> system, but continue to run on the compute nodes.
>
> To add some specific information: we have used both "round-robin" and
> "fill-up" allocation/scheduling rules under SGE. With "fill-up" we
> sporadically (and, at the moment, non-reproducibly!) see the issue
> described
> above. With "round-robin" we additionally notice a drastic slow-down --
> jobs
> running side-by-side with another complete an order of magnitude fewer
> timesteps per unit walltime than a job running alone on the cluster. For
> both allocation rules, the SGE delete command "qdel" removes the job from
> the queue but it persists on the compute node.
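>
> For reference, the allocation rule is the one set in the SGE parallel
> environment definition; as a sketch of what we toggle (the PE name "mpi",
> the slot count, and the annotations are just illustrative, and other PE
> fields are omitted):
>
>     $ qconf -sp mpi
>     pe_name            mpi
>     slots              64
>     allocation_rule    $fill_up       # or $round_robin
>     control_slaves     TRUE
>     job_is_first_task  FALSE
>
> Whether "qdel" can actually reach the MPI processes started by mpirun_rsh
> on the remote nodes depends on how tightly the PE is integrated with SGE,
> which may or may not be related to what we see.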
>
> If anyone has seen anything like this and can direct us to the source of
> the
> problem, we'd be very grateful,
>
> Many thanks,
>
> Frank.
>
>
> On Sat, Mar 28, 2009 at 2:04 AM, Ross Walker <ross_at_rosswalker.co.uk>
> wrote:
>
>> Hi Nick,
>>
>> There really isn't enough information in here to be able to tell what is
>> going on. Do you get any type of error message? Do you see an output
>> file?
>> What about the log files produced by the queuing system - do they tell
>> you anything? Normally stderr will have been redirected somewhere and
>> you would need to find this to see what was said. There are a number of
>> problems that could be occurring, including file permission / path
>> problems if all nodes don't share the same filesystem, problems with
>> shared libraries due to environment variables not being exported
>> correctly, stack limitation issues causing segfaults, insufficient
>> memory, etc. Clues to which of these it is will be in the log file.
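>>
>> To make those clues easier to find, a few lines like these in the job
>> script can help (the library path is a placeholder for wherever the
>> ifort / MVAPICH2 runtime libraries live at your site):
>>
>>     #$ -j y                      # merge stderr into the SGE output file
>>     ulimit -s unlimited          # rule out stack-limit segfaults
>>     ulimit -a                    # record the limits the job actually has
>>     export LD_LIBRARY_PATH=/path/to/mpi/and/compiler/libs:$LD_LIBRARY_PATH
>>     ldd $AMBERHOME/pmemd         # confirm shared libraries resolve on the node
>>
>> None of that fixes anything by itself, but it usually narrows down which
>> of the above you are hitting.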
>>
>> Note, you say you can launch single pmemd jobs but don't explain this.
>> The parallel version of pmemd can only run on 2 cpus or more. Did you
>> compile a serial version as well? Is this what you mean by single pmemd
>> jobs?
>>
>> All the best
>> Ross
>>
>> > -----Original Message-----
>> > From: amber-bounces_at_ambermd.org [mailto:amber-bounces_at_ambermd.org] On
>> > Behalf Of Nick Holway
>> > Sent: Friday, March 27, 2009 8:56 AM
>> > To: amber_at_ambermd.org
>> > Subject: [AMBER] PMEMD 9 on MVAPICH / Infiniband problem
>> >
>> > Dear all.
>> >
>> > We've compiled PMEMD 9 using ifort 10, MVAPICH2 1.2 and OFED 1.4 on
>> > 64-bit Rocks 5.1 (i.e. CentOS 5.2 and SGE 6.1u5). I'm able to launch
>> > single pmemd jobs via qsub using mpirun_rsh and they run well. The
>> > problem we see when two jobs are launched at once is that some of
>> > the jobs disappear from qstat in SGE but continue to run
>> > indefinitely.
>> >
>> > I'm calling PMEMD with this line - $MPIHOME/bin/mpirun_rsh -np $NSLOTS
>> > -hostfile $TMPDIR/machines $AMBERHOME/pmemd -O -i xxxx.inp -c
>> > xxxx_min.rest -o xxxx.out -p xxxx.top -r xxxx_eqt.rest -x xxxx.trj
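>> >
>> > In outline, the submission script wrapping that command looks something
>> > like this (SGE directives abbreviated; the parallel environment name is
>> > whatever our queue defines):
>> >
>> >     #!/bin/bash
>> >     #$ -S /bin/bash
>> >     #$ -cwd
>> >     #$ -pe mpi 16     # SGE sets $NSLOTS and the PE writes $TMPDIR/machines
>> >
>> >     $MPIHOME/bin/mpirun_rsh -np $NSLOTS -hostfile $TMPDIR/machines \
>> >         $AMBERHOME/pmemd -O -i xxxx.inp -c xxxx_min.rest \
>> >         -o xxxx.out -p xxxx.top -r xxxx_eqt.rest -x xxxx.trj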
>> >
>> > Does anyone know what I've got to do to make the PMEMD jobs run
>> > properly?
>> >
>> > Thanks for any help.
>> >
>> > Nick
>> >
_______________________________________________
AMBER mailing list
AMBER_at_ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber