IOR Benchmark

The IOR software is used for benchmarking parallel file systems using POSIX, MPIIO, or HDF5 interfaces.

These IOR tests were performed on the UAB 4.7 PB GPFS storage fabric to estimate its performance.

Downloaded from: https://github.com/LLNL/ior

The IOR user guide is available at: http://www.csm.ornl.gov/essc/io/IOR-2.10.1.ornl.13/USER_GUIDE

==Before build==
Download IOR from GitHub and point the build at the GPFS libraries:

<pre>
export LDFLAGS=-L/usr/lpp/mmfs/lib
export CPPFLAGS=-I/usr/lpp/mmfs/include
LIBS=-lgpfs ./configure
</pre>

This avoids the gpfs_fcntl issue: https://github.com/LLNL/ior/issues/15
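For reference, a complete build pass built around those flags might look like the sketch below; the bootstrap step and the location of the resulting ior binary can differ between IOR releases, so treat this as an outline rather than exact steps:

<pre>
# Sketch: build IOR from the GitHub sources against GPFS
git clone https://github.com/LLNL/ior
cd ior
./bootstrap                               # only needed for a git checkout, not a release tarball
export LDFLAGS=-L/usr/lpp/mmfs/lib        # GPFS libraries
export CPPFLAGS=-I/usr/lpp/mmfs/include   # GPFS headers
LIBS=-lgpfs ./configure
make
</pre>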

==Running IOR==

=== Direct Running without mpi ===

<pre>
./ior -vv -k -wWr -C -F -i4 -t 256k -b 10m -s574 -o /data/user/tanthony/ior-master/

./ior -vv -k -wWr -C -F -i4 -t 256k -b 10m -s574 -numTasks 3 -o /data/user/tanthony/ior-master/

srun -N2 --time=10:00:00 --mem=4096 --partition=medium --job-name=iortest  ./ior -f IOR.input

IOR -a POSIX -N 3 -b 500m -d 5 -t 128k -o  /data/user/tanthony/ior-master/ 1 -e -g -w -r -s 1 -i 4 -vv -F -C
</pre>
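The third command above reads its options from a script file via -f IOR.input; the exact contents of that file are not recorded here. As a sketch only, a script in the key=value syntax described in the IOR user guide, mirroring the POSIX options used on this page (api, blockSize, transferSize, segmentCount, numTasks, repetitions, and filePerProc correspond to the -a, -b, -t, -s, -N, -i, and -F flags), might look like this; the testFile path and the values are illustrative:

<pre>
IOR START
    api=POSIX
    testFile=/data/user/tanthony/ior-master/testfile
    blockSize=500m
    transferSize=128k
    segmentCount=1
    numTasks=3
    repetitions=4
    filePerProc=1
    writeFile=1
    readFile=1
    verbose=2
RUN
IOR STOP
</pre>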

=== Running with mpi ===

==== Direct run ====
  module load mpich/ge/gcc/64/3.2
  '''mpiexec -n 3 ./ior -a POSIX -N 3 -b 10m -d 5 -t 256k -o  /data/user/tanthony/ior-master/ 1 -e -g -w -r -s 1 -i 4 -vv -F -C'''

<pre>
Test 0 started: Tue Sep  6 12:01:17 2016
Path: /data/user/tanthony/ior-master
FS: 4766.2 TiB   Used FS: 1.7%   Inodes: 128.0 Mi   Used Inodes: 13.6%
Participating tasks: 3
Using reorderTasks '-C' (expecting block, not cyclic, task assignment)
task 0 on c0089
task 1 on c0089
task 2 on c0089
Summary:
    api                = POSIX
    test filename      = /data/user/tanthony/ior-master/
    access             = file-per-process
    pattern            = segmented (1 segment)
    ordering in a file = sequential offsets
    ordering inter file= constant task offsets = 1
    clients            = 3 (3 per node)
    repetitions        = 4
    xfersize           = 131072 bytes
    blocksize          = 500 MiB
    aggregate filesize = 1.46 GiB
Using Time Stamp 1473181277 (0x57cef65d) for Data Signature
Max Write: 4921.20 MiB/sec (5160.25 MB/sec)
Max Read:  11274.53 MiB/sec (11822.20 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write        4921.20    4086.66    4646.19     332.18    0.32463 0 3 3 4 1 1 1 0 0 1 524288000 131072 1572864000 POSIX 0
read        11274.53   11176.68   11229.53      38.65    0.13358 0 3 3 4 1 1 1 0 0 1 524288000 131072 1572864000 POSIX 0

Finished: Tue Sep  6 12:01:58 2016
</pre>
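As a sanity check on the summary above, the aggregate file size is numTasks × blockSize × segmentCount: 3 tasks × 500 MiB × 1 segment = 1500 MiB, i.e. the 1.46 GiB (1572864000-byte aggsize) reported in the output.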

==== Increasing number of threads ====

  '''mpiexec -n 3 ./ior -a POSIX -N 3 -b 1g -d 5 -t 256k -o  /data/user/tanthony/ior-master/ -vv -k -wWr -C -F -i4'''

<pre>
Using synchronized MPI timer
Start time skew across all tasks: 0.00 sec

Test 0 started: Tue Sep  6 12:17:53 2016
Path: /data/user/tanthony/ior-master
FS: 4766.2 TiB   Used FS: 1.7%   Inodes: 128.0 Mi   Used Inodes: 13.6%
Participating tasks: 3
Using reorderTasks '-C' (expecting block, not cyclic, task assignment)
task 0 on c0089
task 1 on c0089
task 2 on c0089
Summary:
    api                = POSIX
    test filename      = /data/user/tanthony/ior-master/
    access             = file-per-process
    pattern            = segmented (1 segment)
    ordering in a file = sequential offsets
    ordering inter file= constant task offsets = 1
    clients            = 3 (3 per node)
    repetitions        = 4
    xfersize           = 262144 bytes
    blocksize          = 1 GiB
    aggregate filesize = 3 GiB
Using Time Stamp 1473182273 (0x57cefa41) for Data Signature

Max Write: 6295.34 MiB/sec (6601.14 MB/sec)
Max Read:  10185.52 MiB/sec (10680.30 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write        6295.34    6139.65    6225.21      55.59    0.49352 0 3 3 4 1 1 1 0 0 1 1073741824 262144 3221225472 POSIX 0
read        10185.52    9304.62    9676.96     325.32    0.31781 0 3 3 4 1 1 1 0 0 1 1073741824 262144 3221225472 POSIX 0
</pre>


==== Running via srun ====

<pre>
srun -N3 -n3 --partition=beta --time=2:00:00 --mem=4096 mpiexec -n 3 ./ior -a POSIX -N 3 -b 1g -d 5 -t 256k -o  /data/user/tanthony/ior-master/ 1 -e -g -w -r -s 1 -i 4 -vv -F -C
</pre>
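The same run can be wrapped in a batch script instead of an interactive srun. The following is only a sketch that reuses the module, partition, job name, and IOR options already shown on this page:

<pre>
#!/bin/bash
#SBATCH --job-name=iortest
#SBATCH --partition=beta
#SBATCH --nodes=3
#SBATCH --ntasks=3
#SBATCH --mem=4096
#SBATCH --time=2:00:00

# same MPI stack as the interactive runs
module load mpich/ge/gcc/64/3.2

mpiexec -n 3 ./ior -a POSIX -N 3 -b 1g -d 5 -t 256k -o  /data/user/tanthony/ior-master/ 1 -e -g -w -r -s 1 -i 4 -vv -F -C
</pre>

Submit it with sbatch from the IOR build directory.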


==== Increasing the number of nodes ====

* Running with three nodes

<pre>
srun -N3 -n3 --partition=beta --time=2:00:00 --mem=4096 mpiexec -n 3 ./ior -a POSIX -N 3 -b 2g -d 5 -t 2m -o  /data/user/tanthony/ior-master/ 1 -e -g -w -r -s 1 -i 3 -vv -F -C
Max Write: 11845.79 MiB/sec (12421.21 MB/sec)
Max Read:  15105.21 MiB/sec (15838.96 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write       11845.79    5360.16    9535.91    2958.26    0.73459 0 3 1 3 1 1 1 0 0 1 2147483648 2097152 6442450944 POSIX 0
read        15105.21   12168.91   13998.89    1303.39    0.44299 0 3 1 3 1 1 1 0 0 1 2147483648 2097152 6442450944 POSIX 0
</pre>

* Running with 6 nodes

<pre>
srun -N6 -n6 --partition=beta --time=2:00:00 --mem=4096 mpiexec -n 6 ./ior -a POSIX -N 6 -b 3g -d 5 -t 4m -o  /data/user/tanthony/ior-master/ 1 -e -g -w -r -s 1 -i 3 -vv -F -C

Max Write: 20743.88 MiB/sec (21751.53 MB/sec)
Max Read:  19156.05 MiB/sec (20086.57 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write       20743.88    4791.11   15196.25    7362.94    1.88494 0 6 1 3 1 1 1 0 0 1 3221225472 4194304 19327352832 POSIX 0
read        19156.05   11148.20   15872.29    3424.16    1.22674 0 6 1 3 1 1 1 0 0 1 3221225472 4194304 19327352832 POSIX 0
</pre>

* Running with 16 nodes and 16 tasks

<pre>
srun -N16 -n16 --partition=beta --time=2:00:00 --mem=4096 mpiexec -n 16 ./ior -a POSIX -N 16 -b 4g -d 5 -t 4m -o  /data/user/tanthony/ior-master/ 1 -e -g -w -r -s 1 -i 3 -vv -F -C


Max Write: 21229.09 MiB/sec (22260.31 MB/sec)
Max Read:  19458.28 MiB/sec (20403.48 MB/sec)
</pre>
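If this node-count sweep needs to be repeated, the individual srun launches above can be scripted. A rough sketch that holds everything except the task count fixed; the block size, transfer size, and per-run output path here are only illustrative:

<pre>
# Sketch: repeat the benchmark across several node/task counts
for n in 3 6 16; do
    srun -N$n -n$n --partition=beta --time=2:00:00 --mem=4096 \
        mpiexec -n $n ./ior -a POSIX -N $n -b 2g -d 5 -t 2m \
        -o /data/user/tanthony/ior-master/run-$n -e -g -w -r -s 1 -i 3 -vv -F -C
done
</pre>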