Gromacs

From UABgrid Documentation
Revision as of 13:51, 26 January 2011

Gromacs (http://www.gromacs.org/) is a molecular dynamics package primarily designed for biomolecular systems such as proteins and lipids.

Benchmarking

2011 Hardware

Benchmark data for running Gromacs on Cheaha will be developed by building on the Gromacs benchmark suite from the NIH's Biowulf cluster (http://biowulf.nih.gov/apps/gromacs), combined with local workflow characteristics.

2007 Hardware and Gromacs 3.x

Note: The Gromacs 3.x code base was severely limited in spanning multiple compute nodes; on a 1GigE network fabric the limit was 4 nodes. The following performance data is provided for historical reference only and does not reflect the performance of the Gromacs 4.x code base currently installed on Cheaha.

Two identical 4-CPU Gromacs runs were submitted, and the scheduler spread the jobs across nodes as follows based on the queue load at the time (the newer nodes use InfiniBand for message passing, the older nodes TCP):

Dell Blades: 4-CPU job running on 4 compute nodes

Job ID:    71566
Submitted: 14:11:40
Completed: 17:06:03
Wall Clock: 02:54:23
              NODE (s)   Real (s)      (%)
      Time:  10462.000  10462.000    100.0
                      2h54:22
              (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    238.044     16.164      4.129      5.812

Verari: 4-CPU job running on 2 compute nodes

Job ID:    71567
Submitted: 14:11:44
Completed: 23:13:01
Wall Clock: 09:01:17
              NODE (s)   Real (s)      (%)
      Time:  32473.000  32473.000    100.0
                      9h01:13
              (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:     76.705      5.208      1.330     18.040
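The summary figures above are internally consistent: hour/ns is just 24 divided by ns/day, and the wall-clock ratio gives the speedup of the InfiniBand run over the TCP run. A short Python sketch checking this (all figures copied from the two job summaries; only the derived quantities are computed):

```python
# Sanity-check the reported Gromacs benchmark numbers.
# Reported values are taken verbatim from the job summaries above;
# the derived values are simple consistency checks.

runs = {
    "Dell Blades (4 nodes, InfiniBand)": {"wall_s": 10462.0, "ns_day": 4.129, "hour_ns": 5.812},
    "Verari (2 nodes, TCP)":             {"wall_s": 32473.0, "ns_day": 1.330, "hour_ns": 18.040},
}

for name, r in runs.items():
    # hour/ns is the reciprocal of ns/day, scaled from days to hours
    derived = 24.0 / r["ns_day"]
    print(f"{name}: reported {r['hour_ns']} hour/ns, derived {derived:.3f}")

# Wall-clock speedup of the InfiniBand run over the TCP run
speedup = (runs["Verari (2 nodes, TCP)"]["wall_s"]
           / runs["Dell Blades (4 nodes, InfiniBand)"]["wall_s"])
print(f"InfiniBand run is {speedup:.2f}x faster than the TCP run")
```

The derived hour/ns values match the reported ones to rounding, and the InfiniBand run completes roughly 3.1 times faster in wall-clock time.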