AMBER Archive (2003)

Subject: RE: AMBER: tru64 alpha

From: Yong Duan (yduan_at_udel.edu)
Date: Fri Oct 17 2003 - 06:53:39 CDT


The interconnect is Myrinet under the brand name "SC".
Theoretically, you should be able to scale to at least the 16-CPU level
with pmemd.
You may use "netperf" to check the network.

yong

> -----Original Message-----
> From: owner-amber_at_scripps.edu
> [mailto:owner-amber_at_scripps.edu] On Behalf Of Mu Yuguang (Dr)
> Sent: Thursday, October 16, 2003 10:08 PM
> To: amber_at_scripps.edu
> Subject: RE: AMBER: tru64 alpha
>
>
> I am also new to it.
> It is from the HP home page, "AlphaServer SC45 supercomputer: facts
> and figures". Here I attach more:
>
>
> Compute building block
> Consists of up to 5 customized AlphaServer ES45 servers, each of which
> includes:
> - 1 to 4 Alpha EV68 1.25 GHz processors with 16 MB cache per
>   processor, OR 1 to 4 Alpha EV68 1.0 GHz processors with 8 MB cache
>   per processor
> - 2 to 32 GB of ECC 133 MHz, industry-standard DIMM memory
> - 2-port Ultra SCSI storage adapter and disk cage with room for up to
>   six 1" hot-swap drives
> - 10 PCI I/O slots on 4 64-bit PCI buses, delivering a peak bandwidth
>   of 1.8 GB/s
> - 1 x 1.44 MB diskette drive
> - 1 x 600 MB 40X IDE CD-ROM drive
> - 1 or 2 AlphaServer SC Interconnect PCI adapters, capable of over 280
>   MB/s sustained bandwidth per adapter
> - Tru64 UNIX V5.1a operating system
>
>
> -----Original Message-----
> From: Yong Duan [mailto:yduan_at_udel.edu]
> Sent: Friday, October 17, 2003 10:00 AM
> To: amber_at_scripps.edu
> Subject: RE: AMBER: tru64 alpha
>
>
> Dear Yuguang,
>
> I am a bit curious. What was the interconnect?
>
> yong
>
> > -----Original Message-----
> > From: owner-amber_at_scripps.edu
> > [mailto:owner-amber_at_scripps.edu] On Behalf Of Mu Yuguang (Dr)
> > Sent: Thursday, October 16, 2003 8:41 PM
> > To: amber_at_scripps.edu
> > Subject: RE: AMBER: tru64 alpha
> >
> >
> > Thanks David, Bill and Rob for your helpful replies.
> > I have now compiled PMEMD with a slightly modified machine file,
> > using mpif90 and mpicc, and I submit jobs with the corresponding
> > mpirun.
> > It works well on one node with 4 CPUs, scaling up to 92%, but the
> > scaling drops to 25% using 2 nodes with 8 CPUs.
> > My system is an 18-mer duplex DNA, 56,999 atoms in total, using PME.
> >
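> > For reference, a run is launched with a command of roughly this form
> > (a sketch; MPICH 1.2.5 mpirun syntax, and all file names are
> > placeholders):
> >
> >    % mpirun -np 8 -machinefile hosts \
> >        pmemd -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt
> >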
> > The inter-node interconnect should be a little better than Myrinet,
> > and the MPI here is MPICH 1.2.5.
> > I am not sure whether the poor scaling is due to MPICH or something
> > else.
> >
> >
> > -----Original Message-----
> > From: Bill Ross [mailto:ross_at_cgl.ucsf.edu]
> > Sent: Wednesday, October 15, 2003 10:39 PM
> > To: amber_at_scripps.edu
> > Subject: RE: AMBER: tru64 alpha
> >
> > > FATAL dynamic memory allocation error in subroutine alloc_ew_dat_mem
> > > Could not allocate ipairs array!
> >
> > In unix,
> >
> > % man ulimit
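> >
> > i.e., the per-process memory limits may be too small for the ipairs
> > allocation. A minimal sketch of checking and raising them (csh/tcsh
> > syntax; under sh, use ulimit instead):
> >
> >   % limit                       # show current per-process limits
> >   % limit datasize unlimited    # raise the data-segment limit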
> >
> > Bill Ross
> >

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber_at_scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo_at_scripps.edu