AMBER Archive (2009)

Subject: RE: [AMBER] XServe cluster

From: Richard Owczarzy
Date: Thu Oct 22 2009 - 15:02:24 CDT

What is the delay time for a request? It is not just the speed: it also
matters how quickly the network can start a data transfer once the
initial request is made (the latency). If Fibre Channel is comparable
with InfiniBand, then it could work. You may want to review some of the
presentations that compare InfiniBand with other networks.


InfiniBand is marketed by its "signal rate", not by its "data rate" the
way Ethernet is. (Watch also for Gbits/second versus GBytes/second.)



Richard Owczarzy


From the presentations:

Gigabit Ethernet Bandwidth/Latency

Theoretical Bandwidth: 1 Gbps = 1,000 Mbps; 1,000 Mbps / 8 bits/byte = 125 MB/s

Demonstrated Bandwidth: 112 MB/s (~90% of theoretical)

Discrepancy: overhead in the TCP stack

Latency (Pallas): 30 microseconds
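The Gigabit Ethernet arithmetic checks out; a quick Python sketch
(variable names are mine):

```python
# Sanity check of the Gigabit Ethernet figures quoted above.
theoretical_mbps = 1_000                 # 1 Gbps line rate
theoretical_mb_s = theoretical_mbps / 8  # 8 bits per byte -> 125 MB/s
demonstrated_mb_s = 112                  # measured ping-pong bandwidth
efficiency = demonstrated_mb_s / theoretical_mb_s
print(f"{theoretical_mb_s:g} MB/s theoretical, {efficiency:.0%} achieved")
# prints "125 MB/s theoretical, 90% achieved"
```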


InfiniBand - DDR

Signal Rate: 20 Gb/s

Data Rate: 16 Gb/s

Theoretical Bandwidth (divide by 8): 2 GB/s

Observed Bandwidth (ping-pong): 1.5 GB/s

Observed Latency: 1.2 microsec - 1.6 microsec
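For MD codes such as Amber, which exchange many small MPI messages per
timestep, latency can dominate the transfer time. A rough model,
t = latency + size/bandwidth, plugged with the observed numbers quoted
above (a Python sketch; function and variable names are mine):

```python
# Model per-message transfer time as t = latency + size / bandwidth,
# using the observed numbers quoted above:
#   Gigabit Ethernet: 30 us latency, 112 MB/s
#   InfiniBand DDR:   ~1.6 us latency, 1500 MB/s

def transfer_time_us(size_bytes: float, latency_us: float, bw_mb_s: float) -> float:
    """Time in microseconds to move size_bytes over the link."""
    # bytes / (10^6 bytes/s) = 10^-6 s = microseconds
    return latency_us + size_bytes / bw_mb_s

for size in (1_000, 100_000):  # a 1 KB and a 100 KB message
    gige = transfer_time_us(size, 30.0, 112.0)
    ib = transfer_time_us(size, 1.6, 1500.0)
    print(f"{size:>7} B: GigE {gige:7.1f} us, IB DDR {ib:6.1f} us")
```

At 1 KB most of the ~39 us on Gigabit Ethernet is latency; the same
message on DDR InfiniBand takes about 2 us.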



-----Original Message-----
From: [] On Behalf Of Abdul Rehman Gani
Sent: Thursday, October 22, 2009 2:15 PM
Subject: [AMBER] XServe cluster




I have been tasked with building a 4 or 8 node Xserve cluster for use
with Amber and Gaussian. The Xserves are quad-core Intel servers. My
concern is the interconnect between the servers.

Would a QLogic Fibre Channel switch running at 4.25GB/s be suitable for
a cluster this size?


Infiniband is probably going to be too expensive. Are there any other

AMBER mailing list