Gromacs
Gromacs is a molecular dynamics package primarily designed for biomolecular systems such as proteins and lipids. It was originally developed at the University of Groningen and is now maintained and extended at several institutions, including Uppsala University, Stockholm University, and the Max Planck Institute for Polymer Research. GROMACS is open source software released under the GPL.
The program is written for Unix-like operating systems; it can run on Windows machines if the Cygwin Unix layer is used. The program can be run in parallel on multiple CPU cores or a network of machines using the MPI library. The latest stable release on Cheaha is 4.0.7.
Using Gromacs
GROMACS is free software, licensed under the GNU General Public License. The details are available in the license text, but in short you can modify and redistribute the code as long as your version is licensed under the GPL too.
Gromacs on Your Desktop
Gromacs can be downloaded and installed on your desktop from http://www.gromacs.org/.
Linux: Download and installation instructions for Gromacs are available at http://www.gromacs.org/Downloads/Installation_Instructions
Windows: Download and installation instructions for Gromacs on Windows are available at http://www.gromacs.org/Downloads/Installation_Instructions/Windows
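For a typical Linux source build, Gromacs 4.x follows the usual configure/make pattern. The following is only a sketch (the version number and install prefix are examples); the linked installation instructions are the authoritative reference:

$ tar xzf gromacs-4.0.7.tar.gz
$ cd gromacs-4.0.7
$ ./configure --prefix=$HOME/gromacs
$ make
$ make install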
Gromacs on Cheaha
Gromacs is pre-installed on the Cheaha research computing system. This allows users to run Gromacs directly on the cluster without any need to install software.
Gromacs Versions
Use the 'module' command to view a list of available Gromacs versions. If the version that you require isn't listed, please open a help desk ticket to request the installation.
The following is an example output of the command and doesn't necessarily represent the currently installed versions:
$ module avail gromacs
---------------------------------------------- /etc/modulefiles -----------------------------------------------
gromacs/gromacs-4-gnu    gromacs/gromacs-4-intel
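After choosing a version, load its module so that the Gromacs binaries are added to your environment. For example, to load the GNU build shown above:

$ module load gromacs/gromacs-4-gnu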
Submitting Gromacs jobs to Cheaha
These instructions provide an example of how to create and submit a Gromacs job on Cheaha.
First, create the working directory for the job, replacing 'USERNAME' with your Cheaha account username. You can use any directory to run your job, but it is recommended that the job directory be on the scratch (i.e. Lustre) filesystem instead of your home directory.
$ mkdir -p /lustre/scratch/USERNAME/jobs/gromacs
$ cd /lustre/scratch/USERNAME/jobs/gromacs
Next, copy all the files required for the Gromacs run to the working directory.
Gromacs supports the following file types: run parameters (*.mdp, *.m2p), structure (*.gro, *.g96, *.pdb), topology (*.top, *.itp, *.rtp, *.ndx), run input (*.tpr, *.tpa, *.tpb), trajectory (*.trj, *.trr, *.xtc, *.gro, *.g96, *.pdb), energy (*.ene, *.edr), and other (*.dat, *.edi, *.edo, *.eps, *.log, *.map, *.mtx, *.out, *.tex, *.xpm, *.xvg) files.
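As a point of reference, the run input (*.tpr) file is generated by the Gromacs preprocessor grompp from the run parameter, structure, and topology files. A sketch using grompp's default file names (your files may be named differently):

$ grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr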
For example, copy the directory "d.dppc" from your local machine to the working directory on Cheaha:
$ scp -r d.dppc USERNAME@cheaha.uabgrid.uab.edu:/lustre/scratch/USERNAME/jobs/gromacs
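Finally, submit the job to the cluster's scheduler. The following is a minimal sketch of a job script, assuming a Grid Engine style scheduler and an MPI-enabled Gromacs build; the job name, parallel environment name ('openmpi'), core count, run time limit, and input file are placeholders to adapt to your own job:

#!/bin/bash
#$ -N gromacs-job        # job name (placeholder)
#$ -cwd                  # run from the directory the job was submitted from
#$ -pe openmpi 4         # request 4 MPI slots; the PE name is site-specific
#$ -l h_rt=02:00:00      # wall clock limit
module load gromacs/gromacs-4-gnu
# The MPI binary may be named mdrun_mpi instead of mdrun, depending on the build
mpirun -np 4 mdrun -s topol.tpr

Save the script (e.g. as gromacs.job) and submit it with qsub:

$ qsub gromacs.job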
Benchmarking
2011 Hardware
Benchmark data for running Gromacs on Cheaha will be developed using the Gromacs benchmark suite from the NIH's Biowulf cluster, combined with local workflow characteristics.
2007 Hardware and Gromacs 3.x
Note: The Gromacs 3.x code base was severely limited in its ability to span multiple compute nodes; the practical limit on 1GigE network fabrics was 4 nodes. The following performance data is provided for historical reference only and does not reflect the performance of the Gromacs 4.x code base currently installed on Cheaha.
Two identical 4-CPU Gromacs runs were submitted, and the jobs were spread out as follows based on the queue load at the time (the new nodes use InfiniBand for message passing, the old nodes use TCP):

Dell Blades: 4-CPU job running on 4 compute nodes
Job ID: 71566 Submitted: 14:11:40 Completed: 17:06:03 Wall Clock: 02:54:23
               NODE (s)   Real (s)      (%)
       Time:  10462.000  10462.000    100.0
                       2h54:22
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    238.044     16.164      4.129      5.812
Verari: 4-CPU job running on 2 compute nodes
Job ID: 71567 Submitted: 14:11:44 Completed: 23:13:01 Wall Clock: 09:01:17
               NODE (s)   Real (s)      (%)
       Time:  32473.000  32473.000    100.0
                       9h01:13
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:     76.705      5.208      1.330     18.040