MVAPICH

MVAPICH (MPI-1 over OpenFabrics/Gen2, OpenFabrics/Gen2-UD, uDAPL, InfiniPath, VAPI and TCP/IP)

MVAPICH is an MPI-1 implementation based on MPICH and MVICH, pronounced "em-vah-pich". The latest release is MVAPICH 1.2 (which includes MPICH 1.2.7); it is available under a BSD license.

MVAPICH 1.2 supports the following underlying transport interfaces:

  • High-performance support with scalability for the OpenFabrics/Gen2 interface, developed by OpenFabrics, to work with InfiniBand and other RDMA interconnects.
  • High-performance support with scalability for the OpenFabrics/Gen2-RDMAoE interface, developed by OpenFabrics.
  • High-performance support with scalability (for clusters with multi-thousand cores) for the OpenFabrics/Gen2-Hybrid interface, developed by OpenFabrics, to work with InfiniBand.
  • A shared-memory-only channel. This interface is useful for running MPI jobs on multi-processor systems without a high-performance network: for example, multi-core servers, desktops, laptops, and clusters with serial nodes.
  • The InfiniPath interface for InfiniPath adapters from QLogic.
  • The standard TCP/IP interface (provided by MPICH), which works with a range of networks and can also be used with InfiniBand's IPoIB support. However, it does not deliver the performance or scalability of the lower-level (OpenFabrics/Gen2 or OpenFabrics/Gen2-Hybrid) interfaces. (A minimal MPI program that runs unchanged over any of these transports is sketched below.)
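
Whichever transport is selected at build and launch time, the MPI application code itself does not change. As an illustration (this is a minimal sketch, not MVAPICH-specific; it uses only the standard MPI-1 API), a basic MPI program looks like the following:

 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char *argv[])
 {
     int rank, size;
 
     MPI_Init(&argc, &argv);                /* initialize the MPI runtime */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
     MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
 
     printf("Hello from rank %d of %d\n", rank, size);
 
     MPI_Finalize();                        /* shut the runtime down cleanly */
     return 0;
 }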

MVAPICH 1.2 supports many features for high performance, scalability, portability, and fault tolerance. It also supports a wide range of platforms (architectures, operating systems, compilers, and InfiniBand adapters).

Project Website: http://mvapich.cse.ohio-state.edu/overview/mvapich/


The following modulefile should be loaded for this package:

 module load mvapich2-gnu
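
With the module loaded, a program such as the sketch above can be compiled with the MPI compiler wrapper and launched. The wrapper name mpicc is standard for MVAPICH; the launcher and its options (here mpirun with -np, and the file name hello_mpi.c) are assumed examples, as the exact launch procedure depends on the cluster's scheduler and configuration:

 mpicc -O2 hello_mpi.c -o hello_mpi
 mpirun -np 4 ./hello_mpi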