Cheaha Quick Start

Revision as of 12:45, 22 September 2016

'''NOTE: This page is still under development. Please refer to the [[Cheaha2_GettingStarted]] page for detailed documentation.'''

Cheaha is a shared cluster computing environment for UAB researchers. Cheaha offers a total of 110 TFLOPS of compute power, 4.7 PB of high-performance storage, and 20 TB of memory. See [[Cheaha_Quick_Start_Hardware]] for more details on the compute platform, but first let's get started with an example and see how easy it is to use.

If you have any questions about Cheaha usage, please contact the Research Computing team at support@vo.uabgrid.uab.edu .

== Logging In ==

More [[Cheaha2_GettingStarted#Login|detailed login instructions]] are also available.

Most users will authenticate to Cheaha with their BlazerID and associated password using an SSH (Secure Shell) client. The basic syntax is as follows:

<pre>
ssh BLAZERID@cheaha.rc.uab.edu
</pre>

== Hello Cheaha! ==

A shared cluster environment like Cheaha uses a job scheduler to run tasks on the cluster and provide optimal resource sharing among users. Cheaha uses a job scheduling system called SLURM to schedule and manage jobs. A user needs to tell SLURM about resource requirements (e.g. CPU, memory) so that it can schedule jobs effectively. These resource requirements, along with the actual application code, can be specified in a single file commonly referred to as a 'Job Script/File'. Following is a simple job script that prints the hostname of the compute node:

<pre>
#!/bin/bash
#
#SBATCH --job-name=test          # name shown in the queue
#SBATCH --output=res.txt         # file for the job's stdout/stderr
#SBATCH --ntasks=1               # run a single task
#SBATCH --time=10:00             # maximum runtime (10 minutes)
#SBATCH --mem-per-cpu=100        # memory per CPU core, in MB
#SBATCH --mail-type=FAIL         # send email only if the job fails
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS

srun hostname
srun sleep 60
</pre>
Note that environment variables such as $USER are not expanded inside '#SBATCH' directives, so the email address must be written out literally.


Lines starting with '#SBATCH' have a special meaning in the SLURM world: SLURM-specific configuration options are specified after the '#SBATCH' characters. The configuration options above are useful for most job scripts; for additional configuration options, refer to the SLURM manual. A job script is submitted to the cluster using SLURM-specific commands. There are many commands available, but the following three are the most common:

* sbatch - to submit a job
* scancel - to delete a job
* squeue - to view job status

We can submit the above job script using the sbatch command:

<pre>
$ sbatch HelloCheaha.sh
Submitted batch job 52707
</pre>
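Because sbatch prints the job number in a fixed format, a wrapper script can capture it for later use with squeue or scancel. A minimal sketch (the sbatch output line is simulated with a literal string here, since no real cluster is assumed):

```shell
# In practice: output=$(sbatch HelloCheaha.sh)
output="Submitted batch job 52707"

# Keep the last whitespace-separated field, which is the job number.
job_id="${output##* }"
echo "job id: $job_id"    # -> job id: 52707
```

With the job number in hand, you could run, for example, squeue -j "$job_id" to check on the job or scancel "$job_id" to delete it.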

When the job script is submitted, SLURM queues it up and assigns it a job number (e.g. 52707 in the example above). The job number is available inside the job script through the environment variable $SLURM_JOB_ID. This variable can be used inside the job script to create job-related directory structures or file names.
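As a minimal sketch of that pattern: SLURM sets SLURM_JOB_ID inside a running job, so each run can write into its own directory. The fallback value "test" is only so the sketch also runs outside the cluster, where the variable is unset:

```shell
#!/bin/bash
# Create a per-job output directory named after the SLURM job number.
# SLURM_JOB_ID is set by SLURM inside a running job; "test" is a
# placeholder fallback for running this sketch outside a job.
job_id="${SLURM_JOB_ID:-test}"
outdir="results_${job_id}"
mkdir -p "$outdir"
echo "output directory: $outdir"
```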

== Software ==

Cheaha's software stack includes many popular scientific computing packages.

This software can be included in a job environment using environment modules. Environment modules make environment-variable modifications easy and repeatable. Please refer to the [[Cheaha_Quick_Start_Softwares]] page for more details.
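A typical environment-modules workflow, inside a job script or an interactive session, looks like the following sketch. "MATLAB" is a hypothetical module name used for illustration; run module avail to see what is actually installed on Cheaha:

```shell
# List the modules available on the cluster.
module avail

# Load one into the current environment (adjusts PATH etc.).
module load MATLAB

# Confirm which modules are currently loaded.
module list
```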

== Storage ==

''This section is under development; details of Cheaha's GPFS storage will be added here.''

== Graphical Interface ==

Some applications use a graphical interface to perform certain actions (e.g. submit buttons, file selection dialogs). Cheaha supports graphical applications through an interactive X-Windows session started with SLURM's sinteractive command. This allows you to run graphical applications like MATLAB or AFNI on Cheaha. Refer to [[Cheaha_Quick_Start_Interactive_Jobs]] for details on running graphical X-Windows applications.
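For a graphical session to work, X11 forwarding must be enabled when you log in; OpenSSH's -Y flag (trusted X11 forwarding) is one common way to do that. A sketch of the overall flow, assuming the sinteractive wrapper mentioned above:

```shell
# Log in with X11 forwarding enabled so graphical windows can be
# displayed on your local machine.
ssh -Y BLAZERID@cheaha.rc.uab.edu

# Start an interactive session on a compute node, then launch the
# graphical application from there.
sinteractive
```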

== Scheduling Policies ==

== Support ==

If you have any questions about our documentation or need any help with Cheaha, please contact us at support@vo.uabgrid.uab.edu . Cheaha is maintained by UAB IT's Research Computing team.