From UABgrid Documentation
SLURM (Simple Linux Utility for Resource Management) is a queue management system developed at the Lawrence Livermore National Laboratory that currently runs on some of the largest compute clusters in the world. SLURM is the primary job manager on Cheaha (BigGreen, the new hardware), while GridEngine continues to be the job manager on the old hardware.
SLURM is similar in many ways to GridEngine and most other queue systems: you write a batch script and submit it to the queue manager (scheduler), which then schedules your job to run on the queue (or "partition" in SLURM parlance) that you designate. Below we outline how to submit jobs to SLURM, how SLURM decides when to schedule your job, and how to monitor its progress.
General SLURM Documentation
The primary source for documentation on SLURM usage and commands can be found at the SLURM site. If you Google for SLURM questions, you'll often see the Lawrence Livermore pages as the top hits, but these tend to be outdated.
A great way to get details on SLURM commands is the man pages available on the Cheaha cluster. For example, if you type the following command:
man sbatch
you'll get the manual page for the sbatch command.
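The same approach works for the other SLURM commands, for example the commands for monitoring and cancelling jobs:
man squeue
man scancel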
Logging On and Running Jobs from the Command Line
Cheaha (the new hardware) runs the CentOS 7 version of the Linux operating system, and commands are run under the "bash" shell. There are a number of Linux and bash references, cheat sheets, and tutorials available on the web. A typical workflow for running a job on Cheaha looks like this (an example command sequence follows the list):
- Stage data to $USER_SCRATCH (your scratch directory)
- Research how to run your code in "batch" mode. Batch mode typically means the ability to run it from the command line without requiring any interaction from the user.
- Identify the appropriate resources needed to run the job. The following resource requests are mandatory for all jobs on Cheaha:
- Number of processor cores required by the job
- Maximum memory (RAM) required per core
- Maximum runtime
- Write a job script specifying queuing-system parameters, resource requests, and the commands to run the program
- Submit script to queuing system (sbatch script.job)
- Monitor job (squeue)
- Review the results and resubmit as necessary
- Clean up the scratch directory by moving or deleting the data off of the cluster
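The following is a minimal sketch of the workflow above; the file and directory names are only examples, and script.job stands for whatever job script you write:
cp ~/input_data.csv $USER_SCRATCH/     # stage data to your scratch directory
sbatch script.job                      # submit the job script to the scheduler
squeue -u $USER                        # monitor your pending and running jobs
mv $USER_SCRATCH/results ~/            # move results off scratch when the job is done
rm $USER_SCRATCH/input_data.csv        # clean up the scratch directory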
A batch job runs your script unattended once the requested resources become available, so the scheduler can start it at any time and you do not need to stay logged in while it runs. An interactive job (described below) is appropriate when your application needs input from you while it is running.
For additional information on the sbatch command execute man sbatch at the command line to view the manual.
Example Batch Job Script
A job consists of resource requests and tasks. The SLURM job scheduler interprets lines beginning with #SBATCH as SLURM arguments. In the example below, the job requests 1 task, a 10-minute runtime limit, 100 MB of memory per CPU, and the short partition, and asks SLURM to send email if the job fails; the task prints the hostname of the compute node it runs on and then sleeps for 60 seconds.
#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=res.txt
#SBATCH --ntasks=1
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=100
#SBATCH --partition=short
#SBATCH --mail-type=FAIL
#SBATCH --mail-user=$USER@uab.edu

srun hostname
srun sleep 60
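Assuming the script above is saved in a file named, say, test.job (the name is only an example), you would submit it and review the output like this:
sbatch test.job
squeue -u $USER     # watch the job while it is pending or running
cat res.txt         # view the output file once the job has finished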
The login node (the command-line interface you get after logging in to Cheaha) is intended for submitting jobs and for light preparatory work on your job scripts. Do not run heavy computations on the login node. If you have a heavier workload to prepare for a batch job (e.g. compiling code or other manipulation of data), or your application requires interactive control, you should request dedicated interactive resources for this work.
Interactive resources are requested by submitting an "interactive" job to the scheduler. Interactive jobs provide you a command line on a compute resource that you can use just like the command line on the login node. The difference is that the scheduler has dedicated the requested resources to your job, so you can run your interactive commands without worrying about impacting other users on the login node. Interactive jobs are requested with the srun command:
srun --ntasks=1 --time=01:00:00 --pty /bin/bash
This command requests 1 task (--ntasks=1) for 1 hour (--time=01:00:00) and starts an interactive bash shell (--pty /bin/bash) on the allocated compute node.
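If you need more than the defaults, the same resource flags used in batch scripts also apply to srun; the values below are only illustrative and stay within the interactive partition's 2-hour limit listed in the section below:
srun --ntasks=4 --mem-per-cpu=4096 --time=02:00:00 --partition=interactive --pty /bin/bash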
20160311: Partitions and graphical interactive jobs
Howdy, the new changes are in place. The primary focus of the changes was to:
- Change the scheduling algorithm to one that allows jobs to share compute nodes (i.e. Slurm will allocate CPU cores now instead of complete compute nodes).
- We added partitions (in SGE they were called queues) with the following characteristics (these may change over time as we tweak things):
- short (default partition): Priority 2 :: Max Runtime 2 hours
- medium: Priority 4 :: Max Runtime 50 hours
- long: Priority 6 :: Max Runtime 150 hours (6 days 6 hours)
- interactive: Priority 10 :: Max Runtime 2 hours
- In order to run a job in a partition other than "short", you'll need to specifically request it using the --partition argument (e.g. --time=48:00:00 --partition=medium); see the #SBATCH example after this list.
- Graphical interactive jobs now work. You can run an interactive job using the sinteractive command, for example:
sinteractive --time=00:05:00 --job-name=sinteractiveTest --ntasks=1 --mem=1024
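For example, to run a batch job in the medium partition, the relevant #SBATCH directives in your job script might look like the following (the 48-hour runtime is just an illustration that fits within the 50-hour limit):
#SBATCH --time=48:00:00
#SBATCH --partition=medium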
More to come and please let us know of any issues or concerns.