AMBER Archive (2009)

Subject: Re: [AMBER] Number of Cycles

From: Jason Swails (jason.swails_at_gmail.com)
Date: Tue Dec 15 2009 - 11:10:31 CST


I think he just scales ntwr, not ntwx. I can see the problem with scaling
ntwx: if you assume you're pulling snapshots every ps, that assumption breaks
once ntwx depends on processor count. However, ntwr should have no bearing at
all on the frequency with which frames are written to the trajectory.

From the pmemd source, it appears that only ntwr is scaled:

#ifdef MPI
  if (numtasks .gt. 10) ntwr = 50 * numtasks
#endif
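
(With that scaling, the effective default on 64 MPI tasks works out to
ntwr = 3200, and on 256 tasks to 12800.)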

Also, setting ntwr in the mdin fixes it, overriding the scaling ("fixes" as
in makes it constant, not makes it unbroken :) ). ntwx is set to 0 by
default (in both sander and pmemd), at least in mdread.f for sander and
mdin_ctrl_dat.fpp for pmemd. I'm not exactly sure what a value of 0 does,
or whether it's overridden somewhere else, but I'm pretty sure we're arguing
about different variables here. I always set ntwx in my simulations anyway,
whether using sander or pmemd, so with that practice the whole thing is a
non-issue.
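
For reference, a minimal &cntrl fragment along these lines (the numbers are
purely illustrative) sets all three intervals explicitly, so neither sander's
plain default nor pmemd's processor-scaled ntwr ever comes into play:

 &cntrl
   imin = 0, nstlim = 100000, dt = 0.002,
   ntpr = 5000, ntwx = 5000, ntwr = 5000,
 /

Here ntpr controls how often energies go to mdout, ntwx how often coordinates
go to the trajectory, and ntwr how often the restart file is rewritten.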

On Tue, Dec 15, 2009 at 11:53 AM, Carlos Simmerling
<carlos.simmerling_at_gmail.com> wrote:
> I mean that if changing processor count changes ntwx, then you need to be
> very careful in a project with multiple trajectory files, since the time
> increment per frame is not constant. Correct?
> I don't mean the data will be wrong, just that one must be careful in
> processing the data, such as when combining multiple trajectory files.
> That's why I said it can be confusing, not incorrect.
>
>
> On Tue, Dec 15, 2009 at 9:55 AM, Robert Duke <rduke_at_email.unc.edu> wrote:
>
>> Hi Carlos,
>> Sorry, but I don't understand the problem here...  Changing the processor
>> count should have close to zero impact on what is happening in the runs (not
>> completely true, as increasing the processor count probably does ever so
>> slightly increase rounding error in the force summation).  In the case of
>> the restart files, unless there is a disaster, the only one you should ever
>> be interested in is the last one written, which always corresponds to the
>> last step executed.
>> So perhaps there is additional grief with things like REMD or LES that you
>> are more involved with?  I need to understand why you would say this, so I
>> can understand and attempt to fix any real limitations.
>> Regards - Bob
>> (I change processor count all the time, really...)
>>
>> ----- Original Message ----- From: "Carlos Simmerling" <carlos.simmerling_at_gmail.com>
>> To: "AMBER Mailing List" <amber_at_ambermd.org>
>> Sent: Tuesday, December 15, 2009 8:31 AM
>> Subject: Re: [AMBER] Number of Cycles
>>
>>
>>> The automatic scaling is great, but people should be careful not to change
>>> processor count during a project - which I often do based on partition
>>> availability at run time. It can make things very confusing later.
>>>
>>>
>>> On Tue, Dec 15, 2009 at 8:23 AM, Robert Duke <rduke_at_email.unc.edu> wrote:
>>>
>>>> And I forgot to mention, but of course if you actually specify a value for
>>>> ntwr in &cntrl, that value overrides the "default" scaled-by-processor-count
>>>> value in pmemd.  Actually, a great way to screw performance in pmemd is to
>>>> specify some small number for ntwr and then throw your job at a couple
>>>> hundred processors...  So I don't recommend putting a value for ntwr into
>>>> &cntrl unless you have a reason.
>>>> Regards - Bob Duke
>>>> ----- Original Message ----- From: "Robert Duke" <rduke_at_email.unc.edu>
>>>> To: "AMBER Mailing List" <amber_at_ambermd.org>
>>>> Sent: Tuesday, December 15, 2009 8:16 AM
>>>> Subject: Re: [AMBER] Number of Cycles
>>>>
>>>>
>>>>> I presume that for sander there is indeed a simple "default" ntwr value,
>>>>> giving a rewrite every 500 steps.  In pmemd, the value is actually scaled
>>>>> as a function of processor count once you have more than 10 processors, in
>>>>> order to not increase the frequency in time of rewriting restart files as
>>>>> you run on more and more processors.  This keeps you from getting into a
>>>>> situation where writing this file has a significant performance impact.
>>>>> Regards - Bob Duke
>>>>> ----- Original Message ----- From: "Jason Swails" <jason.swails_at_gmail.com>
>>>>> To: "AMBER Mailing List" <amber_at_ambermd.org>
>>>>> Sent: Tuesday, December 15, 2009 7:10 AM
>>>>> Subject: Re: [AMBER] Number of Cycles
>>>>>
>>>>>
>>>>> Hello,
>>>>>
>>>>> The exact step that the MD died on is fairly irrelevant.  The only
>>>>> thing that really matters is the last step of the MD in which a
>>>>> restart file was written (this is ntwr, which has a default of 500 I
>>>>> believe).  If ntpr is greater than ntwr (especially by multiples of
>>>>> ntwr), then there is no way of isolating exactly which step your
>>>>> calculation ended on.  This is why I typically use the same values for
>>>>> ntwr and ntpr (and ntwx for MD simulations).  If you do use the same
>>>>> value for ntwr and ntpr, and the last step printed for ntpr is 66000,
>>>>> then your restart corresponds to MD step 66000, and you'll need to run
>>>>> an additional 34000 steps to reach 100000 (66000 + 34000 = 100000).
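>>>>>
>>>>> As a rough sketch (the restart flags below are the standard ones; the step
>>>>> count just follows the arithmetic above, so adapt it to your own mdin), the
>>>>> continuation run would use something like:
>>>>>
>>>>>  &cntrl
>>>>>    irest = 1, ntx = 5,
>>>>>    nstlim = 34000,
>>>>>    ntpr = 5000, ntwx = 5000, ntwr = 5000,
>>>>>  /
>>>>>
>>>>> Here irest=1 continues the dynamics and ntx=5 reads both coordinates and
>>>>> velocities from the restart file written at step 66000.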
>>>>>
>>>>> Hope this helps!
>>>>> Jason
>>>>>
>>>>> On Tue, Dec 15, 2009 at 3:31 AM, s. Bill <s_bill36_at_yahoo.co.uk> wrote:
>>>>>
>>>>>> Dear AMBER,
>>>>>> How can I complete my total number of cycles?
>>>>>> Say I had submitted my job for 100000 cycles (nstlim=100000), and due to
>>>>>> the wall clock time my job stopped at cycle number 66000. I had asked it
>>>>>> to write out the output every 5000 cycles (NTPR=5000, NTWX=5000). The
>>>>>> problem here is how to complete the remaining steps. I am not sure if the
>>>>>> remaining steps are 44000; it may be in between 44000 and 43500, since my
>>>>>> output is written every 5000.
>>>>>> So, how can I complete my number of cycles? Is there any keyword to
>>>>>> manage this problem?
>>>>>> Thanks in advance
>>>>>> S. Bill
>>>>>>

-- 
---------------------------------------
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032

_______________________________________________
AMBER mailing list
AMBER_at_ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber