{{Main_Banner}}
Welcome to the '''Research Computing System'''

'''HPC Web Portal now in Beta''': The new HPC web portal is now available. We encourage you to try it out as an alternative to traditional clients. It provides file, shell, and desktop access to the cluster within your web browser.


The Research Computing System (RCS) provides a framework for sharing data, accessing compute power, and collaborating with peers on campus and around the globe.  Our goal is to construct a dynamic "network of services" that you can use to organize your data, study it, and share outcomes.


''''docs'''' (the service you are looking at while reading this text) is one of a set of core services, or libraries, available for you to organize information you gather. Docs is a wiki, an online editor used to collaboratively write and share documentation. ([http://en.wikipedia.org/wiki/Wiki Wiki is a Hawaiian term] meaning fast.)  You can learn more about '''docs''' on the page [[UnderstandingDocs]]. The docs wiki is filled with pages that document the many different services and applications available on the Research Computing System.  If you see information that looks out of date, please don't hesitate to [mailto:support@vo.uabgrid.uab.edu ask about it] or fix it.


The Research Computing System is designed to provide services to researchers in three core areas:


* '''Data Analysis''' - using the High Performance Computing (HPC) fabric we call [[Cheaha]] to analyze data and run simulations. Many [[Cheaha_Software|applications are already available]], or you can install your own (see the job-submission sketch after this list)
* '''Data Sharing''' - supporting the trusted exchange of information using virtual data containers to spark new ideas
* '''Application Development''' - providing virtual machines and web-hosted development tools empowering you to serve others with your research
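Work on Cheaha is typically submitted to the scheduler as a batch script rather than run on the login node. The following is a minimal sketch assuming the SLURM scheduler; the partition and module names are hypothetical placeholders, so consult the [[Cheaha]] documentation for the actual values.

<pre>
#!/bin/bash
# Minimal batch-job sketch, assuming a SLURM scheduler.
# Partition and module names are hypothetical placeholders.
#SBATCH --job-name=my-analysis     # label shown in the job queue
#SBATCH --ntasks=1                 # a single task...
#SBATCH --cpus-per-task=4          # ...with four CPU cores
#SBATCH --mem-per-cpu=4G           # memory per core
#SBATCH --time=02:00:00            # wall-clock limit (hh:mm:ss)
#SBATCH --partition=short          # hypothetical partition name
#SBATCH --output=%x-%j.out         # stdout/stderr to jobname-jobid.out

module load Python                 # hypothetical module name
python my_analysis.py input.dat    # your analysis program
</pre>

Submit the script with <code>sbatch my-analysis.sh</code> and monitor it with <code>squeue -u $USER</code>.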


== Support and Development ==


The Research Computing System is developed and supported by UAB IT's Research Computing Group.  We are also developing a core set of applications to help you easily incorporate our services into your research processes, and this documentation collection to help you leverage the resources already available.  We follow the best practices of the Open Source community and develop the RCS openly.  You can follow our progress via [http://dev.uabgrid.uab.edu our development wiki].
 
The Research Computing System is an outgrowth of the UABgrid pilot, launched in September 2007, which focused on demonstrating the utility of on-demand analysis, storage, and application services for research.  RCS is built on the same technology foundations used by major cloud vendors and informed by decades of distributed-systems research: technology that has powered the last ten years of large-scale systems serving prominent national and international initiatives like the [http://opensciencegrid.org/ Open Science Grid], [http://xsede.org XSEDE], [http://www.teragrid.org/ TeraGrid], the [http://lcg.web.cern.ch/LCG/ LHC Computing Grid], and [https://cabig.nci.nih.gov caBIG].
 
== Outreach ==
 
The UAB IT Research Computing Group has collaborated with a number of prominent research projects at UAB to identify use cases and develop the requirements for the RCS.  Our collaborators include the Center for Clinical and Translational Science (CCTS), Heflin Genomics Center, the Comprehensive Cancer Center (CCC), the Department of Computer and Information Sciences (CIS), the Department of Mechanical Engineering (ME), Lister Hill Library, the School of Optometry's Center for the Development of Functional Imaging, and Health System Information Services (HSIS).  
 
As part of the process of building this research computing platform, the UAB IT Research Computing Group has hosted an annual campus symposium on research computing and cyberinfrastructure (CI) developments and accomplishments. Starting as CyberInfrastructure (CI) Days in 2007, the event was renamed [http://docs.uabgrid.uab.edu/wiki/UAB_Research_Computing_Day '''UAB Research Computing Day'''] in 2011 to reflect the broader mission to support research.  IT Research Computing also participates in other campus-wide symposia, including UAB Research Core Day.
 
== Featured Research Applications ==
 
The Research Computing Group also helps support the campus MATLAB license, providing self-service installation documentation and support for using MATLAB on the HPC platform. This gives you a pathway to expand your computational power and frees your laptop from serving as a compute platform (see the sketch below).
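As a minimal sketch of that pathway (assuming a SLURM scheduler and a MATLAB environment module; the module name and resource requests are hypothetical placeholders):

<pre>
#!/bin/bash
# Sketch: running a MATLAB script non-interactively on the cluster
# instead of on a laptop. Assumes SLURM; the module name is a
# hypothetical placeholder.
#SBATCH --job-name=matlab-batch
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=8G
#SBATCH --time=04:00:00

module load MATLAB                 # hypothetical module name
# Run my_script.m headlessly, then exit MATLAB.
matlab -nodisplay -nosplash -r "my_script; exit"
</pre>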


{{abox
| UAB MATLAB Information |
In January 2011, UAB acquired a site license from MathWorks for MATLAB, Simulink, and 42 toolboxes.
* Learn more about [[MATLAB|MATLAB and how you can use it at UAB]]
* Learn more about the [[UAB TAH license|UAB MathWorks site license]] and review [[Matlab site license FAQ|frequently asked questions about the license]]
}}


The UAB IT Research Computing group, the CCTS BMI, and the [http://www.uab.edu/hcgs/bioinformatics Heflin Center for Genomic Science] have teamed up to help improve genomic research at UAB.  Researchers can work with these scientists and research experts to build a research pipeline from sequencing to analysis to publication.


{{abox
|'''Galaxy'''|
A web front end to run analyses on the cluster fabric. Currently focused on NGS (Next Generation Sequencing; biology) analysis support.  
* [[Galaxy|Galaxy Project Home]]
* [http://projects.uabgrid.uab.edu/galaxy Galaxy Development Wiki]
}}
== Grant and Publication Resources ==
The following description may prove useful in summarizing the services available via Cheaha. Any publications that rely on computations performed on Cheaha should include a statement acknowledging the use of UAB Research Computing facilities in your research; see the suggested example below.  We also request that you send us a list of publications based on your use of Cheaha resources.
=== Description of Cheaha for Grants (Short) ===
UAB IT Research Computing maintains high-performance compute and storage resources for investigators. The Cheaha compute cluster provides over 2900 conventional Intel CPU cores and 80 accelerators (including 72 NVIDIA P100 GPUs) interconnected via an EDR InfiniBand network, delivering 468 TFLOP/s of aggregate theoretical peak performance. A high-performance GPFS file system with 6.6 PB of raw storage on DDN SFA12KX hardware is also connected to these compute nodes via the InfiniBand fabric. An additional 20 TB of traditional SAN storage is available for home directories. This general-access compute fabric is available to all UAB investigators.
=== Description of Cheaha for Grants (Detailed) ===
The cyberinfrastructure supporting University of Alabama at Birmingham (UAB) investigators includes high-performance computing clusters, storage, high-bandwidth campus, statewide, and regional networks, and conditioned space for hosting and operating HPC systems, research applications, and network equipment.
==== Cheaha HPC System ====
Cheaha is a campus HPC resource dedicated to enhancing research computing productivity at UAB. Cheaha is managed by UAB Information Technology's Research Computing group (RC) and is available to members of the UAB community in need of increased computational capacity. Cheaha supports the high-performance computing (HPC) and high-throughput computing (HTC) paradigms. Cheaha is composed of resources that span the UAB IT data centers in the 936 Building and the RUST Computer Center. Research Computing, in open collaboration with the campus research community, leads the design and development of these resources.
==== Compute Resources ====
Cheaha provides users with a traditional command-line interactive environment with access to many scientific tools that can leverage its dedicated pool of local compute resources. Alternatively, users of graphical applications can start a cluster desktop. The local compute pool provides access to two generations of compute hardware based on the x86 64-bit architecture. It includes 96 nodes of 2x12-core (2304 cores total) 2.5 GHz Intel Xeon E5-2680 v3 compute nodes with FDR InfiniBand interconnect. Of these 96 compute nodes, 36 have 128 GB RAM, 38 have 256 GB RAM, and 14 have 384 GB RAM. There are also four compute nodes with Intel Xeon Phi 7210 accelerator cards and four compute nodes with NVIDIA K80 GPUs. The newest generation is composed of 18 nodes of 2x14-core (504 cores total) 2.4 GHz Intel Xeon E5-2680 v4 compute nodes with 256 GB RAM, four NVIDIA Tesla P100 16 GB GPUs per node, and EDR InfiniBand interconnect. The compute nodes combine to provide over 468 TFLOP/s of dedicated computing power.
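As a rough sanity check on that figure (assuming 16 double-precision FLOPs per cycle for these Xeon generations and NVIDIA's quoted ~4.7 TFLOP/s per PCIe P100): 2304 cores × 2.5 GHz × 16 ≈ 92 TFLOP/s for the v3 nodes, 504 cores × 2.4 GHz × 16 ≈ 19 TFLOP/s for the v4 nodes, and 72 × 4.7 ≈ 338 TFLOP/s for the P100s; the Xeon Phi and K80 accelerators account for the remainder of the quoted 468 TFLOP/s aggregate.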
In addition, UAB researchers have access to regional and national HPC resources such as the Alabama Supercomputer Authority (ASA), XSEDE, and the Open Science Grid (OSG).
==== Storage Resources ====
The compute nodes on Cheaha are backed by a high-performance GPFS file system with 6.6 PB of raw storage on DDN SFA12KX hardware, connected via the InfiniBand fabric. An expansion of the GPFS fabric that will double this capacity is scheduled to come online in Fall 2018. An additional 20 TB of traditional SAN storage is available for home directories.
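For day-to-day use, the practical question is how much of these filesystems your data occupies. A minimal sketch (the GPFS mount point shown is a hypothetical placeholder; substitute the actual paths from the cluster documentation):

<pre>
# Overall capacity and usage of the (hypothetical) GPFS mount point.
df -h /data

# How much space your own files occupy in your SAN-backed home directory.
du -sh $HOME
</pre>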
==== Network Resources ====
The UAB Research Network is currently a dedicated 40 GbE optical connection between the UAB Shared HPC Facility and the RUST Campus Data Center, creating a multi-site facility that houses the Research Computing System and leverages the network to connect storage and compute hosting resources. The network supports direct connection to high-bandwidth regional networks and the capability to connect data-intensive research facilities directly with the high-performance computing services of the Research Computing System. This network supports very high-speed, secure connectivity between the nodes connected to it, enabling high-speed file transfer of very large data sets without interfering with other traffic on the campus backbone and ensuring predictable latencies. In addition, the network includes a secure Science DMZ with data transfer nodes (DTNs), perfSONAR measurement nodes, and a Bro security node connected directly to the border router, providing a "friction-free" pathway to external data repositories and computational resources.
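In practice, this means large external datasets are best staged through a DTN rather than pulled across the campus backbone. A minimal sketch (the DTN hostname and paths are hypothetical placeholders; obtain the real addresses from Research Computing):

<pre>
# Stage a large dataset from an external repository via a data transfer
# node (DTN) on the Science DMZ. Hostname and paths are hypothetical.
rsync -avP user@dtn.example.uab.edu:/repository/dataset/ /data/my_project/
</pre>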
The campus network backbone is based on a 40-gigabit redundant Ethernet network with 480 gigabit/second back-planes on the core L2/L3 switch/routers. For efficient management, a collapsed backbone design is used. Each campus building is connected using 10 Gigabit Ethernet links over single-mode optical fiber. Desktops are connected at 1 gigabit/second. The campus wireless network blankets classrooms, common areas, and most academic office buildings.
UAB connects to the Internet2 high-speed research network via the University of Alabama System Regional Optical Network (UASRON), a University of Alabama System owned and operated DWDM network offering 100 Gbps Ethernet to the Southern Light Rail (SLR)/Southern Crossroads (SoX) in Atlanta, Georgia. The UASRON also connects UAB to UA and UAH, the other two University of Alabama System institutions, and to the Alabama Supercomputer Center. UAB is also connected to other universities and schools through the Alabama Research and Education Network (AREN).
==== Personnel ====
UAB IT Research Computing currently maintains a support staff of 10, led by the Assistant Vice President for Research Computing and including an HPC architect/manager, four software developers, two scientists, a system administrator, and a project coordinator.
=== Acknowledgment in Publications ===
This work was supported in part by the National Science Foundation under Grant No. OAC-1541310, the University of Alabama at Birmingham, and the Alabama Innovation Fund. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the University of Alabama at Birmingham.
