AMBER Archive (2009)
Subject: [AMBER] Polarizable simulation of the slab
From: Jan Heyda (Jan.Heyda_at_seznam.cz)
Dear all,
I'm dealing with a slab calculation, i.e. an NVT calculation in which a polarizable force field has to be used. The system consists of about 1000 water molecules and a few ions, so the system size should be something like 32 A x 32 A x 150 A.
Because the simulation is polarizable, I'm using SANDER.MPI.
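For reference, the 32 A x 32 A cross-section comes from a quick density estimate. A minimal sketch of that arithmetic in Python, assuming the experimental water density of about 0.997 g/cm^3 (the polarizable model will only match this approximately):

    # Rough size of ~1000 waters at experimental density.
    N_WATERS = 1000
    M_WATER = 18.015         # g/mol
    RHO = 0.997              # g/cm^3, experimental density near 298 K (assumed)
    N_AVOGADRO = 6.02214e23  # 1/mol

    # Liquid volume in A^3 (1 cm^3 = 1e24 A^3)
    volume_A3 = N_WATERS * M_WATER / (RHO * N_AVOGADRO) * 1e24

    cube_edge = volume_A3 ** (1.0 / 3.0)        # edge of an equivalent cube of liquid
    slab_thickness = volume_A3 / (32.0 * 32.0)  # liquid thickness at a 32 A x 32 A cross-section

    print(f"liquid volume  ~ {volume_A3:.0f} A^3")     # ~30000 A^3
    print(f"cube edge      ~ {cube_edge:.1f} A")       # ~31 A, hence the 32 A x 32 A cross-section
    print(f"slab thickness ~ {slab_thickness:.1f} A")  # ~29 A of liquid inside the 150 A box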
The problem I'm now dealing with is that if I use more than 1 CPU, the simulation immediately crashes with the error message
* NB pairs 254 342299 exceeds capacity ( 342510) 3
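To put those numbers in perspective, a rough pair-count estimate for the whole system already lands close to the reported capacity. This is only a back-of-the-envelope sketch, not sander's actual bookkeeping, and the 8 A cutoff is just an assumed typical value:

    import math

    # Back-of-the-envelope count of nonbonded pairs within the cutoff,
    # counting only the water atoms and assuming uniform liquid density.
    CUTOFF = 8.0               # A, assumed typical value
    ATOMS = 3 * 1000           # ~1000 waters, 3 atoms each (ions neglected)
    RHO_MOL = 0.0334           # water molecules per A^3 at ~1 g/cm^3
    RHO_ATOMS = 3.0 * RHO_MOL  # ~0.10 atoms per A^3 in the liquid region

    neighbors_per_atom = (4.0 / 3.0) * math.pi * CUTOFF**3 * RHO_ATOMS
    pairs_total = 0.5 * ATOMS * neighbors_per_atom  # halve to avoid double counting

    print(f"~{neighbors_per_atom:.0f} neighbors per atom")  # ~215
    print(f"~{pairs_total:,.0f} pairs in total")            # ~320,000, near the ~342,000 figures above

So there seems to be very little headroom if the pair list is sized from a box-averaged density while the atoms are actually concentrated in the liquid part of the slab, but that is just my guess.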
I previously did both a bulk polarizable NPT simulation in a box and an NVT one (with a box size close to that obtained from NPT), just to reach the experimental density. In those cases I used SANDER.MPI (on 1 node = 4 CPUs) and it worked fine.
So why can I run NVT when the system behaves like bulk, but not for the slab system?
That motivated me to do additional checks, varying the z-dimension of the box. Up to a z-size of about 64 A the slab simulation runs, printing only warnings such as

***** Processor 2

which I believe have no effect on the simulation (even visual inspection of the trajectory looks fine). But above this z-size (roughly z = 64 A) the slab simulation crashes whenever I use SANDER.MPI with more than 1 CPU.
With nonpolarizable slab simulations in PMEMD I didn't run into any problems and everything ran smoothly, so I think PMEMD handles slab simulations fine.
Does anyone know whether this is a bug in the SANDER.MPI code (especially when run on more than 1 CPU), or what the actual reason is for this difference between the polarizable and nonpolarizable slab simulations? What bothers me is that PMEMD and SANDER.MPI should behave exactly the same in such a simple setup, so where does this strange difference come from?
Many thanks for any help or explanation.
Best regards,
Jan Heyda