[http://www.gromacs.org/ Gromacs] is a molecular dynamics package primarily designed for biomolecular systems such as proteins and lipids. It was originally developed at the '''[[wikipedia:University of Groningen|University of Groningen]]''' and is now maintained and extended at a number of institutions, including the '''[[wikipedia:University of Uppsala|University of Uppsala]]''', the '''[[wikipedia:University of Stockholm|University of Stockholm]]''' and the '''[[wikipedia:Max Planck Institute for Polymer Research|Max Planck Institute for Polymer Research]]'''.
GROMACS is '''[[wikipedia:open source|open source]]''' software released under the '''[[wikipedia:GNU General Public License|GPL]]'''.


The program is written for '''[[wikipedia:Unix-like|Unix-like]]''' operating systems; it can run on '''[[wikipedia:Microsoft Windows|Windows]]''' machines if the '''[[wikipedia: Cygwin|Cygwin]]''' Unix layer is used. The program can be run in parallel on multiple CPU cores or a network of machines using the '''[[wikipedia:Message Passing Interface|MPI]]''' library. The latest stable release on [[Cheaha]] is 4.0.7.

== Using Gromacs ==
GROMACS is free software, licensed under the GNU General Public License. The details are available in the '''[http://www.gnu.org/copyleft/gpl.html license text]''', but in short you can modify and redistribute the code as long as your version is licensed under the GPL too.


=== Gromacs on Your Desktop ===
Gromacs can be downloaded and installed on your desktop from http://www.gromacs.org/.


Linux: Download and installation instructions for Gromacs are available at: http://www.gromacs.org/Downloads/Installation_Instructions
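
For a typical Linux build of the 4.x series from source, the steps are roughly as follows. This is only a sketch: the version number, install prefix, and configure options shown are assumptions, and FFTW must already be installed, so follow the installation instructions linked above for the details.
<pre>
# Unpack the source (the version shown is only an example)
tar -xzf gromacs-4.0.7.tar.gz
cd gromacs-4.0.7

# Configure, build, and install into a directory under your home directory;
# --enable-mpi builds an MPI-enabled mdrun for parallel runs (optional on a desktop).
./configure --prefix=$HOME/gromacs --enable-mpi
make
make install
</pre>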


Windows: Download and installation instructions for Gromacs on Windows are available at: http://www.gromacs.org/Downloads/Installation_Instructions/Windows


=== Gromacs on Cheaha ===
Gromacs is pre-installed on the [[Cheaha]] research computing system.  This allows users to run Gromacs directly on the cluster without any need to install software.

==== Gromacs Versions ====
Use the 'module' command to view a list of available Gromacs versions. If the version that you require isn't listed, please open a help desk ticket to request the installation.
 
The following is an example output of the command and doesn't necessarily represent the currently installed versions:
<pre>
$ module avail gromacs
 
---------------------------------------------- /etc/modulefiles ----------------------------------------------
gromacs/gromacs-4-gnu 
gromacs/gromacs-4-intel
 
</pre>
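
To make a specific version available in your session, load its module before running any Gromacs commands. For example, using the Intel build shown in the listing above:
<pre>
$ module load gromacs/gromacs-4-intel
$ which mdrun_mpi
</pre>
'which mdrun_mpi' should then report the path of the MPI-enabled mdrun binary provided by the module.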
 
==== Submitting Gromacs jobs to Cheaha ====
 
These instructions provide an example of how to create and submit a Gromacs job on [[Cheaha]].
 
First, create a working directory for the job. Replace 'USERNAME' with your Cheaha account username wherever it appears in the examples below.
You can create any directory to run your job, but it is recommended that the job directory be on the scratch space (i.e. the Lustre filesystem) rather than in your home directory.
 
<pre>
$ mkdir -p $USER_SCRATCH/jobs/gromacs
$ cd $USER_SCRATCH/jobs/gromacs
</pre>
 
 
 
Next, copy all the files required for Gromacs to the working directory.
Gromacs uses the following file types: run parameters (*.mdp, *.m2p), structure (*.gro, *.g96, *.pdb), topology (*.top, *.itp, *.rtp, *.ndx), run input (*.tpr, *.tpa, *.tpb), trajectory (*.trj, *.trr, *.xtc, *.gro, *.g96, *.pdb), energy (*.ene, *.edr), and other (*.dat, *.edi, *.edo, *.eps, *.log, *.map, *.mtx, *.out, *.tex, *.xpm, *.xvg) files.
 
For example, to copy the directory "d.dppc" from your local machine to the "gromacs" job directory on Cheaha:
<pre>
$ scp -r d.dppc USERNAME@cheaha.uabgrid.uab.edu:/data/scratch/USERNAME/jobs/gromacs
</pre>
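
If you start from the raw parameter, structure, and topology files rather than a ready-made run input file, the run input topol.tpr used below is generated with the grompp preprocessor. The following is a minimal sketch assuming the standard Gromacs 4.x tools and the file names shipped with the d.dppc benchmark set (grompp.mdp, conf.gro, topol.top):
<pre>
$ module load gromacs/gromacs-4-intel

# Combine the run parameters, starting structure and topology into the
# binary run input file that mdrun reads.
$ grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
</pre>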
 
 
 
Next, create a job submission script called 'gromacsSubmit' as shown below. Make sure to edit the following parameters:
 
* s_rt to an appropriate soft wall time limit
* h_rt to the maximum wall time for your job
* -N to the job name
* -M to the user email address
* -pe openmpi* numberOfProcessors ('-pe openmpi* 32' runs the code in parallel on 32 processors on Cheaha)
* -l vf to the maximum memory needed for each task
* cd to the working directory where the job data is stored
* the input file is topol.tpr and the output log is gromacs_numberOfProcessors.out
 
<pre>
#!/bin/bash
#$ -S /bin/bash
#
# Execute in the current working directory
#$ -cwd
#
# Job runtime (2 hour hard limit, with a soft limit 5 minutes earlier)
#$ -l h_rt=2:00:00,s_rt=1:55:00
# Merge stderr into the stdout file
#$ -j y
#
# Job Name and email
#$ -N gromacs_test
#$ -M tanthony@uab.edu
#
# Parallel environment: request 8 processor slots
#$ -pe openmpi* 8
# Load the appropriate module(s)
. /etc/profile.d/modules.sh
module load gromacs/gromacs-4-intel
#
# Export the submission environment and request 1G of memory per slot
#$ -V
#$ -l vf=1G
# Single precision
MDRUN=mdrun_mpi

# Change to the job data directory created earlier
cd $USER_SCRATCH/jobs/gromacs/d.dppc

mpirun -np ${NSLOTS} $MDRUN -v -np ${NSLOTS} -s topol.tpr -g gromacs_${NSLOTS}.out
 
</pre>
 
Submit the script to the scheduler with
<pre>
$ qsub gromacsSubmit
</pre>
 
The output will be similar to:
<pre>
Your job 8030124 ("gromacs_test") has been submitted
</pre>
 
You can check the status of your jobs using the 'qstat' command:
<pre>
qstat -u $USER
 
job-ID prior  name      user        state submit/start at    queue                          slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
8030124 0.52111 gromacs_te tanthony    r    06/14/2011 11:42:41 all.q@cheaha-compute-1-8.local     8       
</pre>
 
The job output (the mdrun log named gromacs_${NSLOTS}.out and the scheduler output files) can be found in the job working directory specified earlier.
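
For example, for the 8-slot job above you could change to the job directory and look at the end of the mdrun log, which contains the performance summary. The file names below follow the submit script, and Gromacs may append a .log extension to the name given with -g, so a wildcard is used:
<pre>
$ cd $USER_SCRATCH/jobs/gromacs/d.dppc

# mdrun log named via the -g option; the scheduler's merged stdout/stderr
# is written as gromacs_test.o<jobid> in the directory qsub was run from.
$ ls gromacs_8.out*

# The performance summary is printed at the end of the log.
$ tail -n 20 gromacs_8.out*
</pre>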
 
== Gromacs Support ==
In order to facilitate interaction among Gromacs users, share experience, and provide peer support, UAB IT Research Computing will establish a Gromacs users group.
 
 
== Gromacs Tutorials ==
* Gromacs Tutorials can be found at: http://www.gromacs.org/Documentation/Tutorials
* Gromacs Tutorials: http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/index.html
* Gromacs Introduction: http://www-personal.umich.edu/~amadi/fwspidr_tutor.pdf
 
 
== Benchmarks ==
 
Benchmark data for running Gromacs on [[Cheaha]] will be developed leveraging the benchmark foundation of the [http://biowulf.nih.gov/apps/gromacs NIH Biowulf cluster Gromacs] testing suite combined with local workflow characteristics.
A comparative benchmark between the UAB Cheaha cluster, the NIH Biowulf cluster, and the ASC Dense Memory Cluster has been performed and the results are available [[Gromacs Benchmark|here]].
 
 
 
 
 
 
[[Category:Software]][[Category:Molecular Dynamics]]
