AMBER Archive (2009)
Subject: Re: [AMBER] multisander error
From: David Watson (dewatson_at_olemiss.edu)
Date: Mon Jan 12 2009 - 16:15:26 CST
On Jan 12, 2009, at 4:03 PM, Justine Condo wrote:
> Dear AMBER-list,
> I am trying to do a thermodynamic integration calculation following the
> online tutorial example for AMBER 9. I have created my .group files and
> all their respective dependencies and a script file. When I submit the
> job I receive the following error message:
>
> -catch_rsh /opt/gridengine/default/spool/compute-0-19/active_jobs/4596.1/pe_hostfile
> compute-0-19
> compute-0-19
> compute-0-19
> compute-0-19
> Warning: no access to tty (Bad file descriptor).
> Thus no job control in this shell.
>
From a cursory glance, it would appear that something is going wrong
when this implementation of MPI launches processes over rsh.
You may want to ensure that you can rsh into your account successfully,
and if so, that your environment variables are set appropriately in the
non-interactive shell that rsh starts.
You may also need to set up rsh for unattended (passwordless) login,
which is beyond the scope of what I can help you with.
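A quick way to test both might look something like the sketch below
(compute-0-19 is taken from your error output; AMBERHOME is just one
example of a variable your jobs may need):

    # connectivity: this should print OK with no password prompt
    rsh compute-0-19 echo OK

    # environment: check what a non-interactive rsh shell actually sees
    rsh compute-0-19 env | grep AMBERHOME

    # one common way to enable unattended rsh logins: list the launching
    # host in ~/.rhosts (the hostname and username here are placeholders)
    # and keep the file private
    echo "compute-0-19 your_username" >> ~/.rhosts
    chmod 600 ~/.rhosts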
> Running multisander version of sander amber9
> Total processors = 4
> Number of groups = 2
>
> Looping over processors:
> WorldRank is the global PE rank
> NodeID is the local PE rank in current group
>
> Group = 0
> WorldRank = 0
> NodeID = 0
>
> WorldRank = 1
> NodeID = 1
>
> Group = 1
> WorldRank = 2
> NodeID = 0
>
> WorldRank = 3
> NodeID = 1
>
> rank 2 in job 1 compute-0-19_48554 caused collective abort of all ranks
> exit status of rank 2: killed by signal 9
> ...
> ...
> ...
>
> I have no idea what is wrong and am hoping someone can assist me in
> this detective work!
>
> Thanks in advance,
> Justine
>
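For reference, the multisander output above (4 processors, 2 groups)
corresponds to an invocation along these lines, with one line of sander
arguments per group in the .group file you mention. The file names below
are placeholders for illustration, not your actual files:

    mpirun -np 4 $AMBERHOME/exe/sander.MPI -ng 2 -groupfile ti.group

where ti.group might contain, for a two-state TI setup:

    -O -i mdin.0 -o mdout.0 -p prmtop.0 -c inpcrd.0 -r restrt.0
    -O -i mdin.1 -o mdout.1 -p prmtop.1 -c inpcrd.1 -r restrt.1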
_______________________________________________
AMBER mailing list
AMBER_at_ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber