Gromacs



Attention: Research Computing Documentation has Moved
https://docs.rc.uab.edu/


Please use the new documentation site at https://docs.rc.uab.edu/ for all Research Computing documentation needs. As a result of this move, this wiki has been deprecated for documentation. We are providing read-only access to the content to facilitate migration of bookmarks and to serve as a historical record. All content updates should be made at the new documentation site; the original wiki will not receive further updates.

Thank you,

The Research Computing Team

Gromacs is a molecular dynamics package primarily designed for biomolecular systems such as proteins and lipids. Originally developed at the University of Groningen, it is now maintained and extended at several institutions, including Uppsala University, Stockholm University, and the Max Planck Institute for Polymer Research. GROMACS is open-source software released under the GPL.

The program is written for Unix-like operating systems; it can run on Windows machines using the Cygwin compatibility layer. It can run in parallel across multiple CPU cores or a network of machines using the MPI library. The latest stable release on Cheaha is 4.0.7.


Using Gromacs

GROMACS is free software, licensed under the GNU General Public License. The details are available in the license text, but in short you can modify and redistribute the code as long as your version is licensed under the GPL too.

Gromacs on Your Desktop

Gromacs can be downloaded and installed on your desktop from http://www.gromacs.org/.

Linux: Download and installation instructions for Gromacs are available at: http://www.gromacs.org/Downloads/Installation_Instructions
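
For reference, GROMACS 4.0.x uses an autotools-based build. The following is a minimal sketch of a from-source install on Linux, assuming FFTW is already available; the install prefix is only an example:

$ tar xzf gromacs-4.0.7.tar.gz
$ cd gromacs-4.0.7
$ ./configure --prefix=$HOME/gromacs   # add --enable-mpi for a parallel (mdrun_mpi) build
$ make
$ make install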

Windows: Download and installation instructions for Gromacs on Windows are available at: http://www.gromacs.org/Downloads/Installation_Instructions/Windows

Gromacs on Cheaha

Gromacs is pre-installed on the Cheaha research computing system. This allows users to run Gromacs directly on the cluster without any need to install software.

Gromacs Versions

Use the 'module' command to view a list of available Gromacs versions. If the version that you require isn't listed, please open a help desk ticket to request the installation.

The following is an example output of the command and doesn't necessarily represent the currently installed versions:

$ module avail gromacs

---------------------------------------------- /etc/modulefiles -----------------------------------------------
gromacs/gromacs-4-gnu   gromacs/gromacs-4-intel
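
To make a particular version available in your session or job script, load the corresponding module. For example, using the module name from the output above:

$ module load gromacs/gromacs-4-intel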

Submitting Gromacs jobs to Cheaha

These instructions provide an example of how to create and submit a Gromacs job on Cheaha.

First, create a working directory for the job. Replace 'USERNAME' with your Cheaha account username. You can run your job from any directory, but it is recommended that the job directory be on the scratch filesystem (i.e., the Lustre filesystem) rather than in your home directory.

$ mkdir -p /lustre/scratch/USERNAME/jobs/gromacs 
$ cd /lustre/scratch/USERNAME/jobs/gromacs


Next, copy all the files required for Gromacs to the working directory.

Gromacs supports the following file types: run parameters (*.mdp, m2p), structure (*.gro, g96, pdb), topology (*.top, itp, rtp, ndx), run input (*.tpr, tpa, tpb), trajectory (*.trj, trr, xtc, gro, g96, pdb), energy (*.ene, edr), and other (*.dat, edi, edo, eps, log, map, mtx, out, tex, xpm, xvg) files.
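
For reference, the run input file (topol.tpr) used by mdrun later in this example is typically generated beforehand with the grompp preprocessor. A minimal sketch, with placeholder file names:

$ grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr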

For example, copy the directory "d.dppc" from the local host to the "gromacs" directory on the remote host:

$ scp -r d.dppc USERNAME@cheaha.uabgrid.uab.edu:/lustre/scratch/USERNAME/jobs/gromacs 


Next, create a job submission script like the one shown below, called 'gromacsSubmit'. Make sure to edit the following parameters:

* s_rt - an appropriate soft wall-time limit
* h_rt - the maximum wall time for your job
* -N - job name
* -M - user email address
* -pe openmpi* numberOfProcessors - the parallel environment and processor count (e.g., '-pe openmpi* 32' runs the code in parallel on 32 processors on Cheaha)
* -l vf - the maximum memory needed for each task
* cd - the working directory where the job data is stored
* the input file is topol.tpr; the output file is gromacs_numberOfProcessors.out

#!/bin/bash
#$ -S /bin/bash
#
# Execute from the current working directory
#$ -cwd
#
# Job runtime: 2-hour hard limit, 1h55m soft limit
#$ -l h_rt=2:00:00,s_rt=1:55:00
# Merge stderr into stdout
#$ -j y
#
# Job name and email
#$ -N gromacs_test
#$ -M tanthony@uab.edu
#
# Run in parallel on 8 processors
#$ -pe openmpi* 8
#
# Export the submission environment to the job
#$ -V
# Maximum memory needed for each task
#$ -l vf=1G

# Load the appropriate module(s)
. /etc/profile.d/modules.sh
module load gromacs/gromacs-4-intel

# Single precision
MDRUN=mdrun_mpi

# Change to the working directory where the job data is stored
# (replace USERNAME as above)
cd /lustre/scratch/USERNAME/jobs/gromacs/d.dppc

mpirun -np ${NSLOTS} $MDRUN -v -np ${NSLOTS} -s topol.tpr -g gromacs_${NSLOTS}.out

Submit the script to the scheduler with:

$ qsub gromacsSubmit

The output will be similar to:

Your job 8030124 ("gromacs_test") has been submitted

You can check the status of your jobs using the 'qstat' command:

$ qstat -u $USER

job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
8030124 0.52111 gromacs_te tanthony     r     06/14/2011 11:42:41 all.q@cheaha-compute-1-8.local     8        
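
If you need to cancel a job, delete it with 'qdel' and the job ID. For example, using the job ID from the output above:

$ qdel 8030124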

The job output can be found in the working directory specified in the script.
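
For example, with the 8-slot script above, the mdrun log is written to gromacs_8.out in the working directory, and the scheduler writes the job's merged stdout/stderr to a file named after the job name and job ID (the ID here is from the example above):

$ cat gromacs_8.out
$ cat gromacs_test.o8030124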

Gromacs Support

In order to facilitate interaction among Gromacs users, share experience, and provide peer support, UAB IT Research Computing will establish a Gromacs-users group.


Gromacs Tutorials


Benchmarks

Benchmark data for running Gromacs on Cheaha will be developed by leveraging the benchmark foundation of the NIH Biowulf cluster's Gromacs testing suite combined with local workflow characteristics. A comparative benchmark between the UAB Cheaha cluster, the NIH Biowulf cluster, and the ASC Dense Memory Cluster has been performed and the results are available here.