Resources

The [[wikipedia:Cyberinfrastructure|Cyberinfrastructure]] supporting UAB investigators includes high performance computing clusters, high-speed storage systems, campus, state-wide and regionally connected high-bandwidth networks, and conditioned space for hosting and operating HPC systems, research applications and network equipment.  


[[Cheaha]] is a campus HPC resource dedicated to enhancing research computing productivity at UAB. Cheaha is managed by UAB Information Technology's Research Computing Services group (UAB ITRCS) and is available to members of the UAB community in need of increased computational capacity. Cheaha supports both high-performance computing (HPC) and high-throughput computing (HTC) paradigms. Cheaha is composed of resources that span the UAB IT data centers in the 936 Building and the RUST Computer Center. UAB ITRCS, in open collaboration with community members, leads the design and development of these resources. UAB IT's Infrastructure Services group provides operational support and maintenance of these resources.


The facilities available to UAB researchers are described below. If you would like an account on the HPC system, please send an email to support@listserv.uab.edu to request an account, and provide a short statement on your intended use of the resources and your affiliation with the university.


== UAB High Performance Computing (HPC) Clusters ==


=== Compute Resources ===
The current compute fabric for this system is anchored by the [[Cheaha]] cluster, a commodity cluster with 2800 cores connected by low-latency Fourteen Data Rate (FDR) and Enhanced Data Rate (EDR) InfiniBand networks.

A historical description of the different hardware generations is summarized in the list and tables below:
* Gen7: 18 2x14 core (504 cores total) 2.4GHz Intel Xeon E5-2680 v4 compute nodes with 256GB RAM per node, four NVIDIA Tesla P100 16GB GPUs per node, and EDR InfiniBand interconnect (supported by UAB, 2017).
* Gen6: 96 2x12 core (2304 cores total) 2.5 GHz Intel Xeon E5-2680 v3 compute nodes with FDR InfiniBand interconnect. Out of the 96 compute nodes, 36 nodes have 128 GB RAM, 38 nodes have 256 GB RAM, and 14 nodes have 384 GB RAM. There are also four compute nodes with the Intel Xeon Phi 7210 accelerator cards and four compute nodes with the NVIDIA K80 GPUs (supported by UAB, 2015/2016).
* Gen5: 12 2x8 core (192 cores total) 2.0 GHz Intel Xeon E5-2650 compute nodes with 96GB RAM per node and 10 Gbps interconnect dedicated to OpenStack and Ceph (supported by UAB IT, 2012).
* Gen4: 3 2x8 core (48 cores total) 2.70 GHz Intel Xeon compute nodes with 384GB RAM per node (24GB per core), QDR InfiniBand interconnect (supported by Section on Statistical Genetics, School of Public Health, 2012).
* Gen3: 48 2x6 core (576 cores total) 2.66 GHz Intel Xeon compute nodes with 48GB RAM per node (4GB per core), QDR InfiniBand interconnect (supported by NIH grant S10RR026723-01, 2010).
* Gen2: 24 2x4 core (192 cores total) 3.0 GHz Intel Xeon compute nodes with 16GB RAM per node (2GB per core), DDR InfiniBand interconnect (supported by UAB IT, 2008).
* Gen1: 60 2-core (120 cores total) AMD 1.6GHz Opteron 64-bit compute nodes with 2GB RAM per node (1GB per core), and Gigabit Ethernet connectivity between the nodes (supported by Alabama EPSCoR Research Infrastructure Initiative, NSF EPS-0091853, 2005).


{| class="wikitable"
! Generation !! Type !! Nodes !! CPUs per Node !! Cores per CPU !! Total Cores !! Clock Speed (GHz) !! Instructions per Cycle !! Hardware Reference
|-
| Gen 6† || Intel Xeon E5-2680 v3 || 96 || 2 || 12 || 2304 || 2.50 || 16 || Intel Xeon E5-2680 v3
|-
| Gen 7†† || Intel Xeon E5-2680 v4 || 18 || 2 || 14 || 504 || 2.40 || 16 || Intel Xeon E5-2680 v4
|-
| Gen 8 || Intel Xeon E5-2680 v4 || 35 || 2 || 12 || 840 || 2.50 || 16 || Intel Xeon E5-2680 v3
|-
| Gen 9 || Intel Xeon Gold 6248R || 52 || 2 || 24 || 2496 || 3.0 || 16 || 3.0GHz Intel Xeon Gold 6248R
|}

Theoretical Peak FLOPS = (number of cores) × (clock speed) × (instructions per cycle)

{| class="wikitable"
! Generation !! Theoretical Peak Tera-FLOPS
|-
| Gen 6† || 110
|-
| Gen 7†† || 358
|-
| Gen 8 || TBD
|-
| Gen 9 || TBD
|}

† Includes four Intel Xeon Phi 7210 accelerators and four NVIDIA K80 GPUs.
†† Includes 72 NVIDIA Tesla P100 16GB GPUs.
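To make the peak-performance formula above concrete, here is a minimal Python sketch (illustrative only) that recomputes the CPU-only theoretical peak for each generation from the node counts, clock speeds, and instructions per cycle listed in the table. Note that the tabulated Tera-FLOPS figures for Gen 6 and Gen 7 also include their accelerators (Xeon Phi 7210, NVIDIA K80, and Tesla P100 cards), so the CPU-only values computed here come out lower.

<syntaxhighlight lang="python">
# CPU-only theoretical peak: cores * clock * instructions per cycle.
# Values are transcribed from the table above; accelerator FLOPS are not included.

generations = {
    # name: (nodes, cpus_per_node, cores_per_cpu, clock_ghz, instructions_per_cycle)
    "Gen 6": (96, 2, 12, 2.50, 16),
    "Gen 7": (18, 2, 14, 2.40, 16),
    "Gen 8": (35, 2, 12, 2.50, 16),
    "Gen 9": (52, 2, 24, 3.00, 16),
}

for name, (nodes, cpus, cores, ghz, ipc) in generations.items():
    total_cores = nodes * cpus * cores
    peak_tflops = total_cores * ghz * ipc / 1000.0  # cores * GHz * IPC = GFLOPS; /1000 -> TFLOPS
    print(f"{name}: {total_cores} cores, ~{peak_tflops:.1f} TFLOPS theoretical peak (CPU only)")
</syntaxhighlight>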


=== Storage Resources ===
 
In 2009, annual investment funds were directed toward establishing a fully connected dual data rate InfiniBand network between the compute nodes added in 2008 and laying the foundation for a research storage system with a 60TB DDN storage system accessed via the Lustre distributed file system. In 2010, UAB was awarded an NIH Small Instrumentation Grant (SIG) to further increase analytical and storage capacity with an additional 120TB of high performance Lustre storage on DDN hardware. In Fall 2013, UAB IT Research Computing acquired an OpenStack cloud and Ceph storage software fabric through a partnership between Dell and Inktank in order to extend cloud-computing solutions to researchers at UAB and enhance the interfacing capabilities for HPC. This storage system provides an aggregate of half a petabyte of raw storage distributed across 12 compute nodes, each with 16 cores, 96GB RAM, and 36TB of storage, connected by 10 Gigabit Ethernet networking. During 2016, as part of an Alabama Innovation Fund grant in partnership with numerous departments, 6.6PB of raw GPFS storage on DDN SFA12KX hardware was added to meet the growing data needs of UAB researchers.
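As a rough cross-check of the capacities cited above, the short sketch below (illustrative only, decimal units) tallies the raw storage figures from this paragraph; the 12-node Ceph system works out to roughly the half petabyte quoted.

<syntaxhighlight lang="python">
# Rough tally of the raw research-storage capacities cited above (decimal TB/PB).

TB = 1
PB = 1000 * TB

ceph_raw = 12 * 36 * TB          # 12 OpenStack/Ceph nodes x 36 TB each = 432 TB (~0.5 PB)
lustre_raw = 60 * TB + 120 * TB  # 2009 DDN/Lustre system plus the 2010 NIH SIG expansion
gpfs_raw = 6.6 * PB              # 2016 DDN SFA12KX GPFS addition

total = ceph_raw + lustre_raw + gpfs_raw
print(f"Ceph: {ceph_raw} TB, Lustre: {lustre_raw} TB, GPFS: {gpfs_raw:.0f} TB")
print(f"Approximate aggregate raw capacity: {total / PB:.2f} PB")
</syntaxhighlight>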


=== Network Resources ===
 
==== Research Network ====
'''UAB Research Network''' The UAB Research Network is currently a dedicated 40GE optical connection between the UAB Shared HPC Facility and the RUST Campus Data Center, creating a multi-site facility housing the Research Computing System, which leverages the network for connecting storage and compute hosting resources. The network supports direct connection to high-bandwidth regional networks and the capability to connect data-intensive research facilities directly with the high performance computing services of the Research Computing System. This network supports very high-speed, secure connectivity between the nodes connected to it for transferring very large data sets without interfering with other traffic on the campus backbone, ensuring predictable latencies.
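As a back-of-the-envelope illustration of what these link speeds mean for moving research data, the sketch below estimates best-case transfer times at 1 Gbps, 10 Gbps, and 40 Gbps, assuming a hypothetical 10 TB data set, idealized line rate, and no protocol or storage overhead.

<syntaxhighlight lang="python">
# Idealized best-case transfer times at the link speeds discussed in this section.
# Real throughput will be lower due to protocol overhead, disk speed, and contention.

def transfer_time_hours(size_tb: float, link_gbps: float) -> float:
    """Best-case hours to move size_tb (decimal terabytes) over a link_gbps link."""
    size_bits = size_tb * 1e12 * 8            # terabytes -> bits
    seconds = size_bits / (link_gbps * 1e9)   # bits / (bits per second)
    return seconds / 3600

dataset_tb = 10  # hypothetical 10 TB data set
for gbps in (1, 10, 40):
    print(f"{dataset_tb} TB over {gbps:>2} Gbps: ~{transfer_time_hours(dataset_tb, gbps):.1f} hours")
</syntaxhighlight>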
 
==== Campus Network ====
'''Campus High Speed Network Connectivity''' The campus network backbone is based on a 40 gigabit redundant Ethernet network with 480 gigabit/second backplanes on the core L2/L3 switch/routers. For efficient management, a collapsed backbone design is used. Each campus building is connected using gigabit Ethernet links over single-mode optical fiber. Within multi-floor buildings, a gigabit Ethernet building backbone over multimode optical fiber is used, and Category 5 or better unshielded twisted-pair wiring connects desktops to the network. Computer server clusters are connected to the building entrance using Gigabit Ethernet. Desktops are connected at 1 gigabit/second. The campus wireless network blankets classrooms, common areas and most academic office buildings.
 
==== Regional Networks ====
'''Off-campus Network Connections''' UAB connects to the Internet2 high-speed research network via the University of Alabama System Regional Optical Network (UASRON), a University of Alabama System owned and operated DWDM network offering 100Gbps Ethernet to the Southern Light Rail (SLR)/Southern Crossroads (SoX) in Atlanta, Ga. The UASRON also connects UAB to UA and UAH, the other two University of Alabama System institutions, and to the Alabama Supercomputer Center. UAB is also connected to other universities and schools through the Alabama Research and Education Network (AREN).
 
UAB was awarded the NSF CC*DNI Networking Infrastructure grant ([http://www.nsf.gov/awardsearch/showAward?AWD_ID=1541310 CC-NIE-1541310]) in Fall 2016 to establish a dedicated high-speed research network (UAB Science DMZ) with a 40Gbps networking core that provides researchers at UAB with 10Gbps connections from selected computers to the shared computational facility.
 
== Regional and National Resources  ==
 
=== Alabama Supercomputing Center (ASC) ===


Alabama Supercomputer Center (ASC) (http://www.asc.edu) is a state-wide resource located in Huntsville, Alabama. The ASC provides UAB investigators with access to a variety of high performance computing resources. These resources include:
* The SGI UV (ULTRAVIOLET) has 256 Xeon E5-4640 CPU cores operating at 2.4 GHz, 4 TB of shared memory, and 182 terabytes in the GPFS storage cluster.
* A Dense Memory Cluster (DMC) HPC system has 2216 CPU cores and 16 terabytes of distributed memory. Each compute node has a local disk (up to 1.9 terabytes of which are accessible as /tmp). Also attached to the DMC is a high performance GPFS storage cluster, which has 45 terabytes of high performance storage accessible as /scratch from each node. Home directories as well as third party applications use a separate GPFS volume and share 137 terabytes of storage. The machine is physically configured as a cluster of 8, 16, or 20 CPU core SMP boards. Ninety-six nodes have 2.26 GHz Intel quad-core Nehalem processors and 24 gigabytes of memory. Forty nodes have 2.3 GHz AMD 8-core Opteron Magny-Cours processors and 128 gigabytes of memory. Forty nodes have 2.5 GHz Intel 10-core Xeon Ivy Bridge processors and 128 gigabytes of memory.
* A large number of software packages are installed supporting a variety of analyses including programs for Computational Structural Analysis, Design Analysis, Quantum Chemistry, Molecular Mechanics/Dynamics, Crystallography, Fluid Dynamics, Statistics, Visualization, and Bioinformatics.


=== Open Science Grid ===


UAB is a member of the SURAgrid Virtual Organization (SGVO) on the Open Science Grid (OSG) (http://opensciencegrid.org). This national compute network consists of nearly 80,000 compute cores aggregated across national facilities and contributing member sites. The OSG provides operational support for the interconnection middleware and facilitates research and operational engagement between members.