SGE-Slurm

Attention: Research Computing Documentation has Moved
https://docs.rc.uab.edu/


Please use the new documentation URL https://docs.rc.uab.edu/ for all Research Computing documentation needs.


As a result of this move, we have deprecated use of this wiki for documentation. We are providing read-only access to the content to facilitate migration of bookmarks and to serve as an historical record. All content updates should be made at the new documentation site. The original wiki will not receive further updates.

Thank you,

The Research Computing Team

SGE-Slurm user commands

Common SGE commands, flags, and environment variables alongside their Slurm equivalents:


User Commands                  SGE                                           Slurm
Interactive login              qrsh                                          srun --pty bash
Job submission                 qsub [script_file]                            sbatch [script_file]
Job deletion                   qdel [job_id]                                 scancel [job_id]
Job status by job              qstat -u \* [-j job_id]                       squeue -j [job_id]
Job status by user             qstat [-u user_name]                          squeue -u [user_name]
Job hold                       qhold [job_id]                                scontrol hold [job_id]
Job release                    qrls [job_id]                                 scontrol release [job_id]
Queue list                     qconf -sql                                    squeue
List nodes                     qhost                                         sinfo -N OR scontrol show nodes
Cluster status                 qhost -q                                      sinfo
Graphical queue view           qmon                                          sview

Environment Variables          SGE                                           Slurm
Job ID                         $JOB_ID                                       $SLURM_JOBID
Submit directory               $SGE_O_WORKDIR                                $SLURM_SUBMIT_DIR
Submit host                    $SGE_O_HOST                                   $SLURM_SUBMIT_HOST
Node list                      $PE_HOSTFILE                                  $SLURM_JOB_NODELIST
Job array index                $SGE_TASK_ID                                  $SLURM_ARRAY_TASK_ID

Job Specification              SGE                                           Slurm
Script directive               #$                                            #SBATCH
Queue                          -q [queue]                                    -p [queue]
Count of nodes                 N/A                                           -N [min[-max]]
CPU count                      -pe [PE] [count]                              -n [count]
Wall clock limit               -l h_rt=[seconds]                             -t [min] OR -t [days-hh:mm:ss]
Standard out file              -o [file_name]                                -o [file_name]
Standard error file            -e [file_name]                                -e [file_name]
Combine STDOUT & STDERR files  -j yes                                        (use -o without -e)
Copy environment               -V                                            --export=[ALL | NONE | variables]
Event notification             -m abe                                        --mail-type=[events]
Send notification email        -M [address]                                  --mail-user=[address]
Job name                       -N [name]                                     --job-name=[name]
Restart job                    -r [yes|no]                                   --requeue OR --no-requeue (NOTE: configurable default)
Set working directory          -wd [directory]                               --workdir=[dir_name]
Resource sharing               -l exclusive                                  --exclusive OR --shared
Memory size                    -l mem_free=[memory][K|M|G]                   --mem=[mem][M|G|T] OR --mem-per-cpu=[mem][M|G|T]
Charge to an account           -A [account]                                  --account=[account]
Tasks per node                 (Fixed allocation_rule in PE)                 --tasks-per-node=[count]
CPUs per task                  N/A                                           --cpus-per-task=[count]
Job dependency                 -hold_jid [job_id | job_name]                 --depend=[state:job_id]
Job project                    -P [name]                                     --wckey=[name]
Job host preference            -q [queue]@[node] OR -q [queue]@@[hostgroup]  --nodelist=[nodes] AND/OR --exclude=[nodes]
Quality of service             N/A                                           --qos=[name]
Job arrays                     -t [array_spec]                               --array=[array_spec] (Slurm version 2.6+)
Generic resources              -l [resource]=[value]                         --gres=[resource_spec]
Licenses                       -l [license]=[count]                          --licenses=[license_spec]
Begin time                     -a [YYMMDDhhmm]                               --begin=YYYY-MM-DD[THH:MM[:SS]]
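
The job-specification rows above translate directly into batch scripts. Below is a minimal sketch of one job written for each scheduler; the queue/partition name (general), the notification address, and the program being run are placeholders, not site defaults.

An SGE script (directives use the #$ prefix):

#!/bin/bash
#$ -N myjob                   # job name
#$ -q general                 # queue (placeholder name)
#$ -pe smp 4                  # 4 slots in the smp parallel environment
#$ -l h_rt=02:00:00,vf=2G     # wall clock limit and per-slot memory
#$ -m be                      # mail at job begin and end
#$ -M user@uab.edu            # notification address (placeholder)
#$ -o myjob.out               # standard output file
#$ -e myjob.err               # standard error file
cd $SGE_O_WORKDIR             # return to the submit directory
echo "Job $JOB_ID running on $(hostname)"
./myprogram

The same job as a Slurm script (directives become #SBATCH; submit with sbatch instead of qsub):

#!/bin/bash
#SBATCH --job-name=myjob           # job name
#SBATCH --partition=general        # partition (placeholder name)
#SBATCH --ntasks=4                 # 4 tasks
#SBATCH --time=02:00:00            # wall clock limit
#SBATCH --mem-per-cpu=2G           # per-CPU memory, analogous to vf
#SBATCH --mail-type=BEGIN,END      # mail at job begin and end
#SBATCH --mail-user=user@uab.edu   # notification address (placeholder)
#SBATCH --output=myjob.out         # standard output file
#SBATCH --error=myjob.err          # standard error file
cd $SLURM_SUBMIT_DIR               # Slurm starts here by default; shown for symmetry
echo "Job $SLURM_JOBID running on nodes $SLURM_JOB_NODELIST"
./myprogram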

SGE-Slurm conversion examples

SGE                               Slurm
qstat                             squeue
qstat -u username                 squeue -u username
qstat -f                          squeue -al
qsub                              sbatch
qsub -N jobname                   sbatch -J jobname
qsub -m beas                      sbatch --mail-type=ALL
qsub -M $USER@uab.edu             sbatch --mail-user=$USER@uab.edu
qsub -l h_rt=24:00:00             sbatch -t 24:00:00
qsub -pe smp 16                   sbatch -p partition -n 16
qsub -l vf=4G                     sbatch --mem-per-cpu=4096
qsub -P projectname               sbatch -A projectname
qsub -o filename                  sbatch -o filename
qsub -e filename                  sbatch -e filename
qlogin (interactive via VNC)      sinteractive (interactive via VNC)
qlogin -l h_rt=8:00:00,vf=1G      sinteractive --time=8:00:00 --mem=1024
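
sinteractive appears to be a site-provided wrapper for starting interactive sessions (here, within a VNC desktop). Where no such wrapper exists, the generic Slurm equivalent from the first table is srun --pty; a minimal sketch matching the qlogin example above:

# Request an interactive shell for 8 hours with 1024 MB of memory
srun --time=8:00:00 --mem=1024 --pty /bin/bash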