AMBER Archive (2006)

Subject: AMBER: the parallel efficiency of my sander and pmemd descended rapidly with increasing CPUs

From: Zhihong Yu (computation_at_mail.nankai.edu.cn)
Date: Sun May 14 2006 - 22:55:51 CDT


I have successfully compiled amber8 (including pmemd) on my Red Hat 9.0 Linux cluster with ifort 9.0. The compute nodes are dual-CPU Xeon (IA-32, 3.06 GHz) machines connected by a Myrinet network. After running benchmarks on the "jac" and "factor_ix" systems, I found that the performance of my sander and pmemd does not look very good, especially the parallel efficiency (defined as speedup/CPUs). Here is my benchmark data (a small script that reproduces the speedup and efficiency columns is sketched after the tables):

for jac:

sander:
CPUs   ps/day   Speedup   Parallel efficiency
   1      113      1.00      100.00%
   2      192      1.70       85.00%
   4      332      2.94       73.45%
   8      588      5.20       65.04%
  16      719      6.36       39.76%

pmemd:
CPUs   ps/day    Speedup   Parallel efficiency
   1    158.57      1.00      100.00%
   2    259.55      1.64       81.84%
   4    480.03      3.03       75.68%
   8    853.59      5.38       67.29%
  16   1077.31      6.79       42.46%

for factor_ix:

sander:
CPUs   ps/day   Speedup   Parallel efficiency
   1    50.28      1.00      100.00%
   2    82.44      1.64       81.98%
   4   139.03      2.77       69.13%
   8   194.70      3.87       48.40%

pmemd:
CPUs   ps/day   Speedup   Parallel efficiency
   1    77.14      1.00      100.00%
   2   144.57      1.87       93.70%
   4   226.14      2.93       73.29%
   8   284.58      3.69       46.12%
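
For reference, the speedup and efficiency columns above are computed directly from the ps/day rates; a minimal sketch of the calculation (using the 16-CPU pmemd jac numbers as an example) is:

    # speedup = rate(N CPUs) / rate(1 CPU); parallel efficiency = speedup / N
    echo "1077.31 158.57 16" | awk '{s = $1/$2; printf "speedup = %.2f, efficiency = %.2f%%\n", s, 100*s/$3}'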

Compared with the data on the Amber website (pmemd, Intel Xeon x86_64, 3.4 GHz, 1 CPU, jac, 179 ps/day), I think my 159 ps/day is normal and acceptable. But compared with the data at "http://www.rosswalker.co.uk/amber_sdsc/", my parallel efficiency drops rapidly with increasing CPUs, so I think I may need to do some optimization of parallel sander or pmemd. Has anybody seen similar cases, and could you give me some advice? Thanks a lot!

BTW, for sander, my configure command is: ./configure -mpi -p4 ifort
     for pmemd, the command is: ./new_configure linux_p4 ifort mpich_gm
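
For completeness, here is a sketch of the kind of command I use to launch the parallel runs over GM (assuming MPICH-GM's mpirun.ch_gm is on the PATH; the hostfile name "machines" and the input/output file names are just placeholders):

    # example: 8-process pmemd run on the jac benchmark inputs
    mpirun.ch_gm -np 8 -machinefile machines \
        $AMBERHOME/exe/pmemd -O -i mdin -o mdout.8cpu -p prmtop -c inpcrd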

sincerely yours, zhihong

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu