# Data Movement

There are various native Linux commands you can use to move data within the HPC cluster, such as `mv`, `cp`, and `scp`. One of the most powerful tools for data movement on Linux is `rsync`, which we'll use in the examples below.

`rsync` and `scp` can also be used to move data from local storage to Cheaha.

## General Usage

To find out more information about any of the above-mentioned tools, such as flags and usage, use `man TOOL_NAME`.



### Job Script

```shell
#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=res.txt
#SBATCH --partition=express
#
# Time format = HH:MM:SS, DD-HH:MM:SS
#
#SBATCH --time=10:00
#
# Minimum memory required per allocated CPU, in megabytes.
#
#SBATCH --mem-per-cpu=2048
#SBATCH --mail-type=FAIL

rsync -aP SOURCE_PATH DESTINATION_PATH
```


NOTE:

- Please change the requested time and the corresponding partition according to your needs.
- After modifying the given job script, submit it using `sbatch JOB_SCRIPT`.

## Moving data from Lustre to GPFS Storage

SGE and Lustre will be taken offline on December 18, 2016, and decommissioned. All data remaining on Lustre after this date will be deleted.

Instructions for migrating data to the /data/scratch/$USER location:

- Log in to the new hardware (hostname: cheaha.rc.uab.edu). Instructions to log in can be found here.
- You will notice that /scratch/user/$USER is also mounted on the new hardware. It's a read-only mount, there to help you move your data.
- Start an rsync process using `rsync -aP /scratch/user/$USER/ /data/scratch/$USER`. If the data you are transferring is large, either start an interactive session for this task or create a job script.

Data in /home or /rstore is not affected and is the same on both the new and old hardware, so you don't need to move it.

## Examples

This section provides various use cases in which you would need to move your data.

\\TODO