SGE-Slurm

From Cheaha

Revision as of 13:37, 22 October 2016

SGE-Slurm user commands

Common SGE commands and flags with their Slurm equivalents:


User Commands                  SGE                                  Slurm
Interactive login              qrsh                                 srun --pty bash
Job submission                 qsub [script_file]                   sbatch [script_file]
Job deletion                   qdel [job_id]                        scancel [job_id]
Job status by job              qstat -u \* [-j job_id]              squeue -j [job_id]
Job status by user             qstat [-u user_name]                 squeue -u [user_name]
Job hold                       qhold [job_id]                       scontrol hold [job_id]
Job release                    qrls [job_id]                        scontrol release [job_id]
Queue list                     qconf -sql                           squeue
List nodes                     qhost                                sinfo -N OR scontrol show nodes
Cluster status                 qhost -q                             sinfo
Graphical queue view           qmon                                 sview
Job ID                         $JOB_ID                              $SLURM_JOBID
Submit directory               $SGE_O_WORKDIR                       $SLURM_SUBMIT_DIR
Submit host                    $SGE_O_HOST                          $SLURM_SUBMIT_HOST
Node list                      $PE_HOSTFILE                         $SLURM_JOB_NODELIST
Job array index                $SGE_TASK_ID                         $SLURM_ARRAY_TASK_ID
Script directive               #$                                   #SBATCH
Queue                          -q [queue]                           -p [queue]
Count of nodes                 N/A                                  -N [min[-max]]
CPU count                      -pe [PE] [count]                     -n [count]
Wall clock limit               -l h_rt=[seconds]                    -t [min] OR -t [days-hh:mm:ss]
Standard out file              -o [file_name]                       -o [file_name]
Standard error file            -e [file_name]                       -e [file_name]
Combine STDOUT & STDERR        -j yes                               (use -o without -e)
Copy environment               -V                                   --export=[ALL|NONE|variables]
Event notification             -m abe                               --mail-type=[events]
Send notification email        -M [address]                         --mail-user=[address]
Job name                       -N [name]                            --job-name=[name]
Restart job                    -r [yes|no]                          --requeue OR --no-requeue (default is configurable)
Set working directory          -wd [directory]                      --workdir=[dir_name]
Resource sharing               -l exclusive                         --exclusive OR --shared
Memory size                    -l mem_free=[memory][K|M|G]          --mem=[mem][M|G|T] OR --mem-per-cpu=[mem][M|G|T]
Charge to an account           -A [account]                         --account=[account]
Tasks per node                 (fixed allocation_rule in PE)        --ntasks-per-node=[count], --cpus-per-task=[count]
Job dependency                 -hold_jid [job_id|job_name]          --depend=[state:job_id]
Job project                    -P [name]                            --wckey=[name]
Job host preference            -q [queue]@[node] OR -q [queue]@@[hostgroup]   --nodelist=[nodes] AND/OR --exclude=[nodes]
Quality of service             N/A                                  --qos=[name]
Job arrays                     -t [array_spec]                      --array=[array_spec] (Slurm 2.6+)
Generic resources              -l [resource]=[value]                --gres=[resource_spec]
Licenses                       -l [license]=[count]                 --licenses=[license_spec]
Begin time                     -a [YYMMDDhhmm]                      --begin=YYYY-MM-DD[THH:MM[:SS]]
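Taken together, the directive rows above mean an SGE submit script translates nearly line for line into a Slurm script. Below is a minimal sketch; the job name, wall time, task count, and memory values are placeholder examples, not site defaults.

```shell
#!/bin/bash
# Slurm version of an SGE script that used:
#   #$ -N myjob         ->  --job-name
#   #$ -l h_rt=1:00:00  ->  --time
#   #$ -pe smp 4        ->  --ntasks
#   #$ -l vf=2G         ->  --mem-per-cpu
#SBATCH --job-name=myjob
#SBATCH --time=1:00:00
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=2048
#SBATCH -o myjob.%j.out
#SBATCH -e myjob.%j.err

# $SLURM_JOB_NODELIST and $SLURM_SUBMIT_DIR replace $PE_HOSTFILE
# and $SGE_O_WORKDIR from the table above.
echo "Running on $SLURM_JOB_NODELIST from $SLURM_SUBMIT_DIR"
```

Submit it with sbatch instead of qsub, e.g. sbatch myjob.sh.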

SGE - Slurm conversion examples

SGE                             Slurm
qstat                           squeue
qstat -u username               squeue -u username
qstat -f                        squeue -al
qsub                            sbatch
qsub -N jobname                 sbatch -J jobname
qsub -m beas                    sbatch --mail-type=ALL
qsub -M $USER@uab.edu           sbatch --mail-user=$USER@uab.edu
qsub -l h_rt=24:00:00           sbatch -t 24:00:00
qsub -pe smp 16                 sbatch -p partition -n 16
qsub -l vf=4G                   sbatch --mem-per-cpu=4096
qsub -P projectname             sbatch -A projectname
qsub -o filename                sbatch -o filename
qsub -e filename                sbatch -e filename
qlogin (interactive via VNC)    sinteractive (interactive via VNC)
qlogin -l h_rt=8:00:00,vf=1G    sinteractive --time=8:00:00 --mem=1024
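For scripts that only use a few common directives, the conversions above can even be applied mechanically. The sed patterns below are an illustration covering just three flags from the table, with made-up file names; a real conversion should still be checked by hand.

```shell
# Write a small example SGE script (contents are illustrative only).
cat > sge_job.sh <<'EOF'
#$ -N myjob
#$ -l h_rt=24:00:00
#$ -o out.txt
EOF

# Translate three common SGE directives to their Slurm equivalents:
#   #$ -N       ->  #SBATCH --job-name=
#   #$ -l h_rt= ->  #SBATCH -t
#   #$ -o       ->  #SBATCH -o
sed -e 's/^#\$ -N /#SBATCH --job-name=/' \
    -e 's/^#\$ -l h_rt=/#SBATCH -t /' \
    -e 's/^#\$ -o /#SBATCH -o /' sge_job.sh > slurm_job.sh

cat slurm_job.sh
```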