Piranha's homepage

Piranha is our Intel-based Linux cluster. It was installed in Summer/Fall 2009 and currently resides in the VUIT Data Center in the Hill building.

Piranha System Stats:

  • KVM-virtualized gateway node on a Dell PE R415 running 64-bit CentOS 6.
  • 32 Dell PE R610 compute nodes (pir*) running 64-bit CentOS 6.
    • 568 logical processors (32 nodes with hyperthreading (HT) on): 2.93GHz Intel Xeon X5570 (pir1 - pir25, 25 nodes) and X5670 (pir26 - pir32, 7 nodes)
    • 12GB RAM on 16 nodes and 48GB on the other 16 nodes; about 896GB of usable memory.
  • 4 Supermicro GPU compute nodes, each with four Nvidia GTX 680 GPUs
  • 5 Dell PE R710 storage nodes (gluster*) running 64-bit CentOS 6.
    • 80 logical processors (HT on): 2.26GHz Intel Xeon E5520 "Nehalem"
    • 12GB RAM per server
    • Six 1.5TB SATA II 7.2K RPM drives in a RAID5 per server; 34TB of clustered storage using Gluster over InfiniBand
  • QDR Mellanox ConnectX InfiniBand interconnects and a 36-port QDR Mellanox switch
  • GigE on all systems via stacked 48- and 24-port VUIT-managed Cisco switches with 10Gb uplinks
  • Scheduling done using Torque (PBS) + Moab

Getting an account

To get an account on Piranha, you'll first need to obtain a structural biology UNIX account. Then log in to the machine "piranha" from any CSB-maintained system, or to "piranha.structbio.vanderbilt.edu" from other systems on campus.
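For example, once your UNIX account is set up, logging in looks like this (the username "jdoe" is a placeholder for your own account name):

```shell
# From a CSB-maintained system (short hostname):
ssh jdoe@piranha

# From other on-campus systems (fully qualified name):
ssh jdoe@piranha.structbio.vanderbilt.edu
```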

Filesystem layout on Piranha

For usability and convenience, your structural biology home directory is network-mounted on piranha, pir nodes, and gluster servers.

Piranha also features a high-performance clustered filesystem which uses Gluster over InfiniBand. Two directories, /pirstripe and /pirdist, are mounted on piranha, the pir nodes, and the gluster machines. For the best performance with large sequential reads and writes, we recommend /pirstripe, which reads and writes across all gluster machines simultaneously. For the best performance with many small files, we recommend /pirdist, which reads and writes whole files on individual gluster machines. /pirstripe and /pirdist are two separate and unique Gluster filesystems; they are not linked, so you will not see the same files under /pirstripe that you see under /pirdist. Files in /pirstripe and /pirdist are not backed up.
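As a rule of thumb, pick the directory that matches your I/O pattern (the filenames and per-user subdirectories below are illustrative, not a site convention):

```shell
# Large sequential I/O (e.g. one big trajectory or data file):
# use /pirstripe, which stripes reads/writes across all gluster servers.
cp bigfile.dat /pirstripe/$USER/

# Many small files (e.g. thousands of per-frame outputs):
# use /pirdist, which places whole files on individual gluster servers.
cp -r many_small_files/ /pirdist/$USER/

# Reminder: /pirstripe and /pirdist are separate filesystems,
# and neither is backed up.
```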

When copying files between /pirstripe or /pirdist and your home directories on the BlueArc (BA), you should use one of the five gluster machines (gluster1 .. gluster5). It is also possible to install the Gluster stack on your local CSB workstation, but unless you have a gigabit connection and are on a 129.59. IP address on the Arts and Sciences side of campus, transfer speeds will be very slow due to the VUMC firewall.
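A typical staging workflow from a gluster machine might look like the following (the project paths are hypothetical examples):

```shell
# Log in to one of the gluster machines, then stage input data from
# your BlueArc home directory into gluster scratch space:
ssh gluster1
rsync -av ~/project/input/ /pirstripe/$USER/project/

# ...and copy results back to your home directory when the job is done:
rsync -av /pirstripe/$USER/project/output/ ~/project/output/
```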

For access to the Gluster filesystem on the cluster, or for direct access from your local workstation, please send an email to support@structbio.vanderbilt.edu or submit a ticket to Request Tracker.

Submitting compute jobs to Piranha

Please see this howto at the CSB Twiki.

Compiling MVAPICH (MPICH over infiniband) programs for use on Piranha

You should first test and debug your MPICH code on your CSB workstation(s) to make sure it works. Run "sbset mpich2-gcc" or "sbset mpich2-icc11", then use mpicc, mpicxx, or mpif90 to compile your code against the GCC-compiled or Intel v11-compiled MPICH libraries, respectively.
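A typical workstation test cycle might look like this ("hello.c" stands in for your own MPI source file, and the process count is arbitrary):

```shell
# On a CSB workstation: select an MPICH2 environment, compile, and test.
sbset mpich2-gcc        # or: sbset mpich2-icc11
mpicc -o hello hello.c  # use mpicxx for C++, mpif90 for Fortran
mpirun -np 4 ./hello    # quick local smoke test before moving to the cluster
```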

Once you are confident that your code works, use the gluster* machines to compile the code you intend to run with MVAPICH on the pir* nodes. Run either "sbset mvapich2-gcc" or "sbset mvapich2-icc11", again depending on whether you want to link against the GCC-compiled or Intel v11-compiled MVAPICH. Do not run, test, or debug your applications on the gluster machines.
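The MVAPICH build step mirrors the workstation one (again, "hello.c" is a placeholder source file):

```shell
# On a gluster machine: compile (but do not run) against MVAPICH2.
ssh gluster1
sbset mvapich2-gcc          # or: sbset mvapich2-icc11
mpicc -o hello_ib hello.c   # binary intended for the pir* nodes
# Submit the resulting binary through Torque/Moab; do not execute it
# on the gluster machines themselves.
```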


You may run any programs available under /sb/apps, in SBGrid, or your own programs, as long as they can be run non-interactively in text mode.
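To make the non-interactive requirement concrete, here is a minimal sketch of a Torque/PBS batch script; the job name, resource requests, and program name are illustrative only, and the CSB Twiki howto linked above is the authoritative reference for site-specific queues and policies:

```shell
#!/bin/bash
# minimal-job.pbs -- illustrative Torque/PBS batch script.
#PBS -N myjob                 # job name (hypothetical)
#PBS -l nodes=1:ppn=8         # resource request (adjust to your job)
#PBS -l walltime=01:00:00     # maximum run time
#PBS -j oe                    # merge stdout and stderr

cd $PBS_O_WORKDIR             # run from the directory qsub was called in
./my_program > my_program.log 2>&1   # must run non-interactively, text-mode
```

Submit it with "qsub minimal-job.pbs" and check its status with "qstat".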