Data Movement

NOTE: This page is under construction.

There are various native Linux commands that you can use to move your data within the HPC cluster, such as mv, cp, and scp. One of the most powerful tools for data movement on Linux is rsync, which we use in the examples below.
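As a minimal sketch, the commands below copy a directory from one location on the cluster filesystem to another (the paths are placeholders; substitute your own source and destination):

cp -r /path/to/source_dir /path/to/destination_dir
rsync -aP /path/to/source_dir/ /path/to/destination_dir/

With rsync, -a preserves permissions, timestamps, and symlinks, and -P shows progress and keeps partially transferred files so that a re-run can pick up where it left off.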

rsync and scp can also be used to move data from local storage to Cheaha.
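For example, a sketch run from your local machine (USER and HOSTNAME are placeholders for your cluster username and the cluster's login hostname, and the paths are hypothetical):

scp -r local_dir USER@HOSTNAME:/destination/path/
rsync -aP local_dir/ USER@HOSTNAME:/destination/path/

Note that with rsync a trailing slash on the source copies the contents of the directory rather than the directory itself.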



To find more information, such as flags and usage, about any of the above-mentioned tools, you can use man TOOL_NAME.

[build@c0051 ~]$ man rsync

       rsync - a fast, versatile, remote (and local) file-copying tool

       Local:  rsync [OPTION...] SRC... [DEST]

       Access via remote shell:
         Pull: rsync [OPTION...] [USER@]HOST:SRC... [DEST]
         Push: rsync [OPTION...] SRC... [USER@]HOST:DEST

       Access via rsync daemon:
         Pull: rsync [OPTION...] [USER@]HOST::SRC... [DEST]
               rsync [OPTION...] rsync://[USER@]HOST[:PORT]/SRC... [DEST]
         Push: rsync [OPTION...] SRC... [USER@]HOST::DEST
               rsync [OPTION...] SRC... rsync://[USER@]HOST[:PORT]/DEST

       Usages with just one SRC arg and no DEST arg will list the source files
       instead of copying.


If you are interested in the various methods and tools that can be used for moving data, this page provides a good guide: How to transfer large amounts of data via network.


Do not store sensitive information on this filesystem; it is not encrypted. Note that your data will be stored on the cluster filesystem and, while not accessible to ordinary users, it could be accessible to the cluster administrator(s).


If the data you are moving is large, you should always use either an interactive session or a job script for the transfer. This keeps the transfer process from occupying and slowing the login nodes for a long time and runs it on a compute node instead.

Interactive session

  • Start an interactive session using srun
srun --ntasks=1 --mem-per-cpu=1024 --time=08:00:00 --partition=medium --job-name=DATA_TRANSFER --pty /bin/bash

NOTE: Please change the time required and the corresponding partition according to your needs.

  • Once you have moved from login001 to a c00XX compute node, start an rsync process to begin the transfer:
[build@c0051 Salmon]$ rsync -aP SOURCE_PATH DESTINATION_PATH

Job Script

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --output=res.txt
#SBATCH --ntasks=1
#SBATCH --partition=express
# Time format = HH:MM:SS, DD-HH:MM:SS
#SBATCH --time=10:00
# Minimum memory required per allocated CPU in megabytes.
#SBATCH --mem-per-cpu=2048
#SBATCH --mail-type=FAIL

# Start the transfer; replace the placeholders with your own paths.
rsync -aP SOURCE_PATH DESTINATION_PATH



  • Please change the time required and the corresponding partition according to your needs.
  • After modifying the given job script, submit it using sbatch JOB_SCRIPT, for example as shown below.
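Assuming the script above was saved as transfer_job.sh (a hypothetical filename), it can be submitted and monitored as follows:

sbatch transfer_job.sh
squeue -u $USER    # check that the transfer job is queued or running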



This section describes various use cases in which you would need to move your data.

Moving data from local storage to HPC