AMBER Archive (2008)

Subject: RE: AMBER: problems with restart of MD

From: Yong Duan (duan_at_ucdavis.edu)
Date: Mon Jan 14 2008 - 12:16:24 CST


 
Vijay,
 
At 800K, the ions, or any small ligands, will fly away. Having them in the
simulation is not meaningful, other than causing you headaches (and giving you
an opportunity to appreciate entropy).
 
If you need them to stay around, restrain them with a weak force. If you want
to use PBC, you would need to implement PBC in the GB code. Removal of the COM
motion is not going to help much.
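
As just a sketch of what a weak restraint might look like (the mask, weight,
and values here are assumed, not taken from your setup; this uses sander's ntr
option with restraintmask, if your sander version supports it - otherwise the
older GROUP-style input does the same thing):

&cntrl
 imin = 0, ntb = 0, irest = 1, ntx = 5,
 igb = 1, ntt = 3, gamma_ln = 1.0, temp0 = 800.0,
 nstlim = 40000000, dt = 0.001, cut = 999,
 ntr = 1, restraintmask = ':Na+', restraint_wt = 2.0,
/

With ntr=1 you also need to pass reference coordinates on the command line
with -ref, e.g. the restart file you are starting from.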
 
 
yong
 
-----Original Message-----
From: owner-amber_at_scripps.edu [mailto:owner-amber_at_scripps.edu] On Behalf Of
Vijay Singh
Sent: Sunday, January 13, 2008 11:53 AM
To: amber_at_scripps.edu
Subject: Re: AMBER: problems with restart of MD

Thanks Yong,

     Although I may not necessarily need the presence of the 3 ions, I am
persisting with them anyway for maximum consistency. All my other results
have these ions incorporated.

     I will probably try both - removing the center of mass motion as well
as invoking PBC - and compare the results.

Thanks again,
Vijay

     

On Jan 13, 2008 1:08 PM, Yong Duan <duan_at_ucdavis.edu> wrote:

 
Physically, this is caused by entropy, something that we do not want to run
against :).
 
You were running GB simulations with 3 ions in an infinitely diluted
environment where "water" is modeled by GB. If everything is modeled
accurately, this effectively mimics a water box of infinite size (infinite as
in the size of the universe :)). Imagine a full bucket (say, 1 gallon) of
water to which you add ONE protein molecule and 3 ions. Do you expect them to
stay together? At extremely low T, they might. At room T, they should not. At
800K, as is the case in your simulation, they should simply fly away from
each other. This is not necessarily due to repulsive energy (as in
electrostatics or van der Waals), but most likely due to entropy, despite the
fact that they might be attracted to each other by the electrostatic force.
Entropy is something that we all have to live with when T is not zero. Yet,
we all tend to forget about it.
 
How to solve the problem? If you really need those three ions, you need to
add periodic boundary conditions to mimic an adequate ion/protein
concentration. If, on the other hand, those three ions can be removed, or you
don't want to play with the code, remove them.
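
If you go the route of removing them, one way (just a sketch; the force field
and file names here are assumed) is to rebuild the topology in tleap from
coordinates that no longer contain the ions:

source leaprc.ff99
mol = loadpdb protein_noions.pdb
saveamberparm mol protein_noions.prmtop protein_noions.inpcrd
quit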

 
yong

-----Original Message-----
From: owner-amber_at_scripps.edu [mailto:owner-amber_at_scripps.edu] On Behalf Of
Vijay Singh

Sent: Sunday, January 13, 2008 7:45 AM
To: amber_at_scripps.edu
Subject: Re: AMBER: problems with restart of MD

This problem occurred after 40ns. But on an earlier occasion it occurred
even at 10ns. In amber9, nscm=1000 is set as the default, isn't it? Does that
mean that even when not specified, nscm=1000? I will go ahead and try this,
and maybe another run with a smaller value of nscm.

I don't have water in my simulation, but I do have ions - 3 Na+.

Thanks a lot for your response

Regards,
Vijay

On Jan 12, 2008 9:58 PM, Ross Walker <ross_at_rosswalker.co.uk> wrote:

Hi Vijay,
 
Here's the problem - line 488: 174.1520850
-93.25969211280.4501416************-208.2782643************
 
Your coordinates have increased so that they no longer fit in the space
allocated for them in the file, and it prints *'s. If you run this in serial
it will probably quit with an error about problems doing a formatted read, but
in parallel sometimes things can just hang. If you find weird errors in
parallel, it is always best to rerun in serial interactively so that any
errors are not "lost".
 
How long has your simulation been running at this point? Normally it takes
at least 50ns or so for things to have diffused far enough to cause the
above problem. If your simulation time is much less than this then you could
have problems that are causing your system to blow up.
 
I would recommend going back to the previous restart file (one that doesn't
have stars in it) and rerunning the simulation, but this time set nscm=1000,
which will remove center of mass motion and stop your system translating
through space. However, the fact that line 489 has:
 

1867.9560045-227.8549628 617.5614458

while everything else is around:
 
 337.8596051 341.2630291-537.0048903 337.8908765 342.2356117-536.7168717

suggests to me that some part of your system has taken off and translated a
long way away. You don't have ions or water here, do you?
 
I would check on your system - perhaps run from the restart with ntwx=1 for a
few hundred steps and visualize it to see what is happening. Also make sure
you have nscm set, or you will still have problems later when the entire
system ends up translating too far due to center of mass motion imparted by
the thermostat.
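
As a rough sketch (other settings copied from the input you posted, with ntpr
reduced so you actually see energies during such a short test run), the
diagnostic restart could look like:

&cntrl
 imin = 0, ntb = 0, irest = 1, ntx = 5,
 igb = 1, ntpr = 100, ntwx = 1,
 ntt = 3, gamma_ln = 1.0,
 temp0 = 800.0,
 nstlim = 500, dt = 0.001,
 cut = 999, nscm = 1000,
/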
 
All the best
Ross
 
/\
\/
|\oss Walker

| Assistant Research Professor |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross_at_rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.
 

  _____

From: owner-amber_at_scripps.edu [mailto:owner-amber_at_scripps.edu] On Behalf Of
Vijay Singh

Sent: Saturday, January 12, 2008 01:49
To: amber_at_scripps.edu
Subject: Re: AMBER: problems with restart of MD

Dear Dr.Ross,

    The result continues to be the same even with NTX = 5. The error log
file shows the following -

forrtl: severe (64): input conversion error, unit 9, file
/mnt/home/singhvij/mdout800_1.rst
Image PC Routine Line Source
sander.MPI 00000000007E2492 Unknown Unknown Unknown
sander.MPI 00000000007E1692 Unknown Unknown Unknown
sander.MPI 0000000000798246 Unknown Unknown Unknown
sander.MPI 000000000074D63E Unknown Unknown Unknown
sander.MPI 000000000074CC5A Unknown Unknown Unknown
sander.MPI 000000000076D3A9 Unknown Unknown Unknown
sander.MPI 00000000004F4556 Unknown Unknown Unknown
sander.MPI 00000000004B8854 Unknown Unknown Unknown
sander.MPI 00000000004B2BAD Unknown Unknown Unknown
sander.MPI 0000000000405E32 Unknown Unknown Unknown
libc.so.6 00002BA629E1A154 Unknown Unknown Unknown
sander.MPI 0000000000405D6A Unknown Unknown Unknown
forrtl: error (78): process killed (SIGTERM)
Image PC Routine Line Source
sander.MPI 000000000073763F Unknown Unknown Unknown
sander.MPI 0000000000736740 Unknown Unknown Unknown

I have no idea how to proceed from here. In case it is needed, I am attaching
the "mdout800_1.rst" file for your perusal.

Thanks a lot,
Vijay

On Jan 11, 2008 3:04 PM, Vijay Singh <vijayratan.singh_at_gmail.com
<mailto:vijayratan.singh_at_gmail.com> > wrote:

Hi,

   Thanks for the response. I actually tried NTX = 5 too. The result was the
same as with NTX = 7. But I will go ahead and try once again, and maybe wait a
little longer to see if the output file updates properly.

Thanks again,
Vijay

On Jan 11, 2008 2:56 PM, Ross Walker <ross_at_rosswalker.co.uk> wrote:

Hi Vijay,
 
The issue is that you are running a non-periodic simulation here (ntb=0) but
when you restart you are setting ntx=7 which tells sander to expect box
information from the input coordinate file. Since your input coordinate file
does not have any box info the code is hanging there waiting for that
information to be appended to the file. I realize we should probably find a
better way to do this in the code so it fails gracefully rather than just
hanging but this isn't always easy in parallel.
 
Anyway, to answer your problem: set ntx=5 and everything should be good. Also
note that with Amber 9 you can always set NTX=5 and it will auto-load the box
info if you are running a periodic simulation. Thus ntx=7 is effectively
deprecated as an option, which is why it is no longer in the manual.
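
In other words, your restart input would just swap ntx=7 for ntx=5; as a
sketch based on the input you posted below:

&cntrl
 imin = 0, ntb = 0, irest = 1, ntx = 5,
 igb = 1, ntpr = 10000, ntwx = 1000,
 ntt = 3, gamma_ln = 1.0,
 temp0 = 800.0,
 nstlim = 40000000, dt = 0.001,
 cut = 999
/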
 
All the best
Ross

/\
\/
|\oss Walker

| Assistant Research Professor |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross_at_rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.

 

  _____

From: owner-amber_at_scripps.edu [mailto:owner-amber_at_scripps.edu] On Behalf Of
Vijay Singh

Sent: Friday, January 11, 2008 09:30

To: amber_at_scripps.edu
Subject: AMBER: problems with restart of MD

Hi,

Not sure if my messages are reaching the right destination. I did not get
any response on 2 different occasions earlier. Nevertheless, another try.

I am using amber9 and doing some very basic MD. I am having some trouble with
restarting the MD production run. Not sure where I am going wrong. After the
initial minimization, the first part of the run is fine.

The input file looks like this -

&cntrl
 imin = 0, ntb = 0, irest = 0,
 igb = 1, ntpr = 10000, ntwx = 1000,
 ntt = 3, gamma_ln = 1.0,
 temp0 = 800.0,tempi = 800.0,
 nstlim = 40000000, dt = 0.001,
 cut = 999
/

mpiexec $AMBERHOME/exe/sander.MPI -O -i md_800k_1.in -o md800_1.out -c
t57c_min.rst -p t57c.prmtop -r mdout800_1.rst -x mdout800_1.mdcrd

Up to this point I get all the output as needed. But the 2nd step below, the
restart, is where I get stuck; the input is as follows -

&cntrl
 imin = 0, ntb = 0, irest = 1, ntx = 7,
 igb = 1, ntpr = 10000, ntwx = 1000,
 ntt = 3, gamma_ln = 1.0,
 temp0 = 800.0,
 nstlim =40000000, dt = 0.001,
 cut = 999
/

#mpiexec $AMBERHOME/exe/sander.MPI -O -i md_800k_2.in -o md800_2.out -c
mdout800_1.rst -p t57c.prmtop -r mdout800_2.rst -x mdout800_2.mdcrd

From here I don't get any output. The mdout file stops with -

Langevin dynamics temperature regulation:
   ig = 71277
   temp0 = 800.00000, tempi = 0.00000, gamma_ln= 1.00000
| INFO: Old style inpcrd file read

--------------------------------------------------------------------------------
3. ATOMIC COORDINATES AND VELOCITIES
--------------------------------------------------------------------------------

Could someone please help me with this?

Regards
Vijay

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu