AMBER Archive (2006)

Subject: Re: RE: AMBER: Problem of QM/MM calculation with amber 9 parallel version

From: Lee Kyung-koo (daum203_at_hanmail.net)
Date: Thu Nov 02 2006 - 02:53:53 CST


Dear Ross,

Here is my QM/MM input file:

NVT ensemble simulation with temperature scaling
 &cntrl
  imin = 0, irest = 1, ntx = 5,
  nstlim = 200000, nsnb = 10, dt = 0.001,
  temp0 = 300.0,
  ntt = 1, ntb = 1, ntp = 0,
  ntf = 2, ntc = 2, cut = 9.0, scee = 1.2,
  ntwx = 20, iwrap = 1,
  ifqnt = 1,
 /
 &qmmm
  qmmask = ':1-3', qmcharge = 0, spin = 1, printcharges = 1,
  qmtheory = 2,
  peptide_corr = 1, qmshake = 0,
 /
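For reference, a minimal sketch of how such a run is launched in parallel, assuming the standard MPI build of sander; the file names here are generic placeholders:

    # 2-process parallel QM/MM MD with the MPI build of sander
    mpirun -np 2 $AMBERHOME/exe/sander.MPI -O -i mdin -o mdout \
           -p prmtop -c inpcrd -r restrt -x mdcrd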






Output file:

--------------------------------------------------------------------------------
3. ATOMIC COORDINATES AND VELOCITIES
--------------------------------------------------------------------------------


begin time read from input coords = 400.000 ps

Number of triangulated 3-point waters found: 1461
| Atom division among processors:
| 0 2203 4405

|QMMM: Running QMMM calculation in parallel mode on 2 threads.
|QMMM: All atom division among threads:
|QMMM: Start End Count
|QMMM: Thread( 0): 1-> 2203 ( 2203)
|QMMM: Thread( 1): 2204-> 4405 ( 2202)

|QMMM: Quantum atom + link atom division among threads:
|QMMM: Start End Count
|QMMM: Thread( 0): 1-> 11 ( 11)
|QMMM: Thread( 1): 12-> 22 ( 11)

Sum of charges from parm topology file = 0.00000000
Forcing neutrality...
| Running AMBER/MPI version on 2 nodes

QMMM: ADJUSTING CHARGES
QMMM: ----------------------------------------------------------------------
QMMM: adjust_q = 2
QMMM: Uniformally adjusting the charge of MM atoms to conserve total charge.
QMMM: qm_charge = 0
QMMM: QM atom RESP charge sum (inc MM link) = 0.000
QMMM: Adjusting each MM atom resp charge by = 0.000
QMMM: Sum of MM + QM region is now = 0.000
QMMM: ----------------------------------------------------------------------
---------------------------------------------------
APPROXIMATING switch and d/dx switch using CUBIC SPLINE INTERPOLATION
using 5000.0 points per unit in tabled values
TESTING RELATIVE ERROR over r ranging from 0.0 to cutoff
| CHECK switch(x): max rel err = 0.3338E-14 at 2.509280
| CHECK d/dx switch(x): max rel err = 0.8261E-11 at 2.768360
---------------------------------------------------
| Local SIZE OF NONBOND LIST = 598397
| TOTAL SIZE OF NONBOND LIST = 1206201

|QMMM: KVector division among threads:
|QMMM: Start End Count
|QMMM: Thread( 0): 1-> 155 ( 155)
|QMMM: Thread( 1): 156-> 309 ( 154)







Output file of the crambin_qmmmnmr test case:


-------------------------------------------------------
Amber 9 SANDER 2006
-------------------------------------------------------

| Run on 11/02/2006 at 16:50:19
[-O]verwriting output

File Assignments:
| MDIN: mdin
| MDOUT: mdout.cnmr
|INPCRD: cram_am1.crd
| PARM: cram_am1.top
|RESTRT: restrt
| REFC: refc
| MDVEL: mdvel
| MDEN: mden
| MDCRD: mdcrd
|MDINFO: mdinfo
|INPDIP: inpdip
|RSTDIP: rstdip


Here is the input file:

Crambin: single point NMR calculation
&cntrl
imin =1, maxcyc = 1, drms=0.008,
scee=1.2, ntpr=5, ntb=0, cut=9.0,
ifqnt=1
/
&qmmm
qmtheory = 3,
iqmatoms = 109, 110, 111, 112, 113, 114,
115, 116, 117, 118, 119, 120,
121, 122,
qmcharge= 0,
idc = 1,
/


--------------------------------------------------------------------------------
1. RESOURCE USE:
--------------------------------------------------------------------------------

| Flags: MPI
| NONPERIODIC ntb=0 and igb=0: Setting up nonperiodic simulation
|Largest sphere to fit in unit cell has radius = 41.764
| New format PARM file being parsed.
| Version = 1.000 Date = 10/11/04 Time = 16:39:28
NATOM = 642 NTYPES = 12 NBONH = 315 MBONA = 334
NTHETH = 717 MTHETA = 460 NPHIH = 1277 MPHIA = 844
NHPARM = 0 NPARM = 0 NNB = 3545 NRES = 46
NBONA = 334 NTHETA = 460 NPHIA = 844 NUMBND = 23
NUMANG = 50 NPTRA = 21 NATYP = 16 NPHB = 0
IFBOX = 0 NMXRS = 24 IFCAP = 0 NEXTRA = 0
NCOPY = 0


| Memory Use Allocated
| Real 42189
| Hollerith 3900
| Integer 77930
| Max Pairs 156006
| nblistReal 7704
| nblist Int 2185931
| Total 9857 kbytes
| Duplicated 0 dihedrals
| Duplicated 0 dihedrals
Divcon capability (idc>0) can only run in serial mode for now
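Given the "Divcon capability (idc>0) can only run in serial mode for now" message above, this test presumably has to be launched with the serial sander binary, along these lines (file names taken from the run header above):

    # idc > 0 (DivCon) is serial-only, so use plain sander, not sander.MPI
    $AMBERHOME/exe/sander -O -i mdin -o mdout.cnmr \
           -p cram_am1.top -c cram_am1.crd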



Sincerely yours,





---------[ Original message ]----------
Subject : RE: AMBER: Problem of QM/MM calculation with amber 9 parallel version
Date : Wed, 1 Nov 2006 16:04:28 -0800
From : "Ross Walker"
To :

Dear Lee,

>I installed the parallel version of amber9 on an Intel Itanium server
>(usually called a white box) with the Intel compiler 9.
>Installing amber9 went well, without any error message.
>When I ran a QM/MM calculation with 2 processes (mpich option -np 2),
>it did not work. For a long time, sander neither stopped itself nor
>produced any data.

>When running with 1 cpu (mpich option -np 1), sander works normally.


This is definitely not right; it shouldn't hang. I suspect this may be
related to a bug I have been seeing with Intel's 9.1 compilers on my x86_64
box with our amber10 development code. There was an issue with the Intel
compiler generating incorrect machine code for a loop over MPI_Send calls
in the ewald setup routines of the QM/MM code.

A couple of questions. Can you post the input file that you are using for
QM/MM, and are you using ewald or PME?
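For reference, in Amber 9 this choice is made in the &qmmm namelist: qm_ewald
turns on an Ewald treatment of the long-range QM electrostatics, and qm_pme
switches that to a PME implementation. The sketch below is from memory, so
treat the exact flag spellings and defaults as assumptions and check the
manual:

    &qmmm
     qmmask = ':1-3', qmcharge = 0, qmtheory = 2,
     qm_ewald = 1,
     qm_pme = 1,
    /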

Can you post the output file up to the point where it hangs?

Did you run the QM/MM test cases in parallel? Do the first test cases
(crambin_2) run correctly while the later ones (1NLN_periodic_lnk_atoms)
hang? Or do they all hang?
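To run the QM/MM tests in parallel, something along these lines should work;
DO_PARALLEL is the standard hook in the Amber test Makefiles, but the exact
QM/MM target name below is an assumption, so check $AMBERHOME/test/Makefile:

    cd $AMBERHOME/test
    export DO_PARALLEL='mpirun -np 2'
    make test.sander.QMMM    # target name may differ; see the Makefile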

If it is the former, then I have a potential workaround that you can try;
get back to me and let me know. If it is the latter (i.e. they all hang),
then I will have to investigate it further. Having the output file up to
the point where the calculation hung would be very useful here.

All the best
Ross

/\
\/
|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross@rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.


-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber@scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo@scripps.edu

"¿ì¸® ÀÎÅͳÝ, Daum" http://www.daum.net ¡ºÆò»ý¾²´Â ¹«·á ÇѸÞÀϳݡ»