UMBC High Performance Computing Facility
System Description for tara
The tara cluster was deployed in November 2009, with the release to HPCF users following in January 2010. Read on for a hardware-level description of this system. For more information about using your account on the system, see this page.
A few photos of the cluster

Nodes

tara has a total of 86 nodes, which fall into four categories: one front-end node, one management node (for admins only), two development nodes, and 82 compute nodes. Each node features two quad-core Intel Xeon X5550 (Nehalem) processors (2.66 GHz, 8192 kB cache), 24 GB of memory, and a 120 GB local hard drive. All nodes run the standard UMBC distribution of the Linux operating system, Red Hat Enterprise Linux 5. Attached to the cluster is 160 TB of central storage.
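For users who would like to confirm these figures from a session on a node, a small POSIX C program along the following lines can report the processor and memory resources the operating system exposes. This is only an illustrative sketch (it is not part of the HPCF documentation), and the core count it prints is the number of logical processors, which will exceed the 8 physical cores per node if hyperthreading is enabled.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Logical processors currently online (8 physical cores per tara node). */
        long cores = sysconf(_SC_NPROCESSORS_ONLN);

        /* Physical memory = number of pages * page size (about 24 GB per node). */
        long pages = sysconf(_SC_PHYS_PAGES);
        long page_size = sysconf(_SC_PAGE_SIZE);
        double mem_gb = (double)pages * (double)page_size
                        / (1024.0 * 1024.0 * 1024.0);

        printf("logical cores: %ld, memory: %.1f GB\n", cores, mem_gb);
        return 0;
    }

Compiled with gcc and run on a development or compute node, its output should roughly match the specifications listed above.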

The Intel Nehalem processor series used in tara features several innovations that should be very beneficial to scientific computations. Specifically, each processor on a node has its own memory channels to its dedicated local memory, which should improve the speed of memory access. Other improvements by Intel aim at optimizing cache and loop performance and at improving heat efficiency; for more information about Nehalem, see Intel's website. Wikipedia also provides a useful comparison of the various models of the chip that are available. Our own initial tests bear out the performance improvements of these processors: they demonstrate that all cores on a node can be used simultaneously with profitable performance; see Technical Report HPCF-2010-2.
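As a rough illustration of what using all cores of a node simultaneously can look like, here is a minimal OpenMP sketch in C. It is an assumed example, not the benchmark code from HPCF-2010-2; the thread count of 8 simply matches the two quad-core processors in each node.

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        /* One thread per core on a tara node (2 sockets x 4 cores = 8). */
        omp_set_num_threads(8);

        const long n = 100000000L;
        double sum = 0.0;

        /* The loop iterations are divided among the threads, so all cores
           of the node (and both processors' memory channels) are exercised
           at the same time. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            sum += 1.0 / (double)(i + 1);
        }

        printf("partial harmonic sum = %f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-capable compiler (for example, gcc -fopenmp) and run on a single node, the work is shared by eight threads executing simultaneously on the node's cores; the actual performance study is documented in HPCF-2010-2.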

More details on the different types of nodes in tara:

Network Hardware

Two networks connect all components of the system:

Storage

There are a few special storage systems attached to the cluster, in addition to the standard Unix filesystem. Here we describe the areas that are relevant to users of the cluster. See Using Your Account for more information about how to access this space.