Our compute cluster contains a variety of machine configurations, from basic systems to large-memory multiprocessors. All systems are accessible to all members of the CS community for research or educational purposes.
All machines mount the department's filesystems and are on the department's network. Access to the machines is restricted to jobs submitted through the Slurm workload manager; there is no direct login.
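As a sketch of how work reaches the cluster, a minimal Slurm batch script might look like the following. The partition name and resource figures here are illustrative assumptions, not necessarily what this cluster uses; `sinfo` lists the real partitions.

```shell
#!/bin/bash
# Minimal Slurm job script -- a sketch, not a site-specific template.
#SBATCH --job-name=example
#SBATCH --partition=batch        # assumed partition name; check sinfo
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

# The actual workload goes here:
srun hostname
```

The script is submitted with `sbatch example.sh`, and `squeue -u $USER` shows its status in the queue.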
Dedicated Machines
The current grid sports 49 nodes with a total of 2288 cores, as detailed in the following table.
Name | Machines | Cores/Node | Total Cores | Memory | Disk | CPU | Clock (GHz) |
---|---|---|---|---|---|---|---|
mblade12 | 20 | 32 | 640 | 64G | 160G | Opteron 6282 SE | 2.6 |
mblade13 | 10 | 64 | 640 | 256G (1-3,8-10), 128G (4-7) | 1T | Opteron 6380 | 2.5 |
smblade16a | 7 | 48 | 336 | 256G | 1T | Xeon E5-2650 | 2.2 |
smblade16b | 12 | 56 | 672 | 256G | 960G | Xeon E5-2680 | 2.4 |
In the table above, where a row covers multiple machines of a type, individual hostnames end with a number (e.g. mblade1301).
All machines are running 64-bit Debian Linux.
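To target a particular class of machine from the table above, a job can be pinned to named hosts or can simply request enough resources that only certain nodes qualify. A sketch (the hostname comes from the table; the memory figure and program name are placeholders):

```shell
#!/bin/bash
# Sketch of a Slurm job script pinned to a specific node.
#SBATCH --nodelist=mblade1301    # run on this named host
#SBATCH --mem=200G               # alternatively, omit --nodelist and let a
                                 # large memory request select the 256G nodes
srun ./my_program                # my_program is a placeholder
```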
GPU Servers
The grid also offers 27 GPU servers, providing a total of 156 GPUs. These machines are available only to jobs that specifically request GPUs.
Name | Machines | Cores/Node | GPUs/Node | Memory | VRAM/GPU | Disk | CPU | GPU |
---|---|---|---|---|---|---|---|---|
gpu1601-05 | 5 | 12 | 4 | 256G | 8G | 256G | Xeon E5-2603 | Nvidia GTX 1080 |
gpu1701-08 | 8 | 12 | 4 | 256G | 11G | 256G | Xeon E5-2603 | Nvidia GTX 1080ti |
gpu1801-02 | 2 | 16 | 8 | 128G | 11G | 256G | Xeon E5-2609 | Nvidia GTX 1080ti |
gpu1901-04 | 4 | 16 | 8 | 128G | 11G | 1T | Xeon Bronze 3106 | Nvidia RTX 2080ti |
gpu1905-06 | 2 | 16 | 8 | 128G | 11G | 1T | Xeon Bronze 3106 | Nvidia GTX 1080ti |
gpu1907 | 1 | 16 | 4 | 128G | 24G | 1T | Xeon Bronze 3106 | Nvidia Titan RTX |
gpu2001 | 1 | 16 | 4 | 128G | 24G | 1T | Xeon Bronze 3106 | Nvidia Titan RTX |
gpu2002-03 | 2 | 16 | 8 | 256G | 11G | 1T | Xeon Bronze 3106 | Nvidia RTX 2080ti |
gpu2201 | 1 | 96 | 8 | 512G | 48G | 500G | AMD EPYC 7443 | Nvidia RTX A6000 |
gpu2301 | 1 | 128 | 8 | 1T | 48G | 500G | AMD EPYC 7662 | Nvidia L40 |
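Because the GPU nodes accept only jobs that explicitly request GPUs, such requests go through Slurm's generic-resource (GRES) mechanism. A sketch of a GPU job script follows; `--gres=gpu:N` is standard Slurm, but any typed GRES names (e.g. a specific GPU model) would be site-specific assumptions:

```shell
#!/bin/bash
# Sketch of a Slurm GPU job script: requests 2 GPUs on one node.
#SBATCH --gres=gpu:2             # generic-resource request for 2 GPUs
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G

# nvidia-smi lists the GPUs Slurm has made visible to this job
srun nvidia-smi
```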
Networking
Each machine in the cluster has a 1Gb switched network connection to the department's core switches.
History / Provenance
mblade12
MaxBuilt blades purchased for the grid in 2012.
mblade13
MaxBuilt blades purchased for the grid in 2013.
smblade16a
MaxBuilt blades purchased for the grid in 2016.
gpu1601-05
GPU servers purchased by the department in 2016.
gpu1701-07
GPU servers purchased by the department in 2017.
gpu1708
GPU servers purchased in 2017 for the Data Science Initiative course DSI 1030.
gpu1801-02
GPU servers purchased by the department in 2018.
gpu1901-02,05-07
GPU servers purchased by the department in 2019.
gpu1903-04
GPU servers purchased in 2019 for Stephen Bach's research group.
gpu2001-03
GPU servers purchased by the department in 2020.
gpu2201
GPU server purchased by the department in 2022.
gpu2301
GPU server purchased by the department in 2023.