NAMD

NAMD (Not (just) Another Molecular Dynamics program) is a free-of-charge molecular dynamics simulation package written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). It is developed through a joint collaboration between the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois at Urbana-Champaign.

It was introduced in 1995 by Nelson et al. as a parallel molecular dynamics code enabling interactive simulation by linking to the visualization code VMD. NAMD has since matured, adding many features and scaling to thousands of processors. The latest stable release on the Cheaha research computing system is v2.8b1.


Using NAMD

NAMD is available free of charge for non-commercial use by individuals, academic or research institutions, and corporations (for in-house business purposes only), upon completion and submission of the online registration form presented when downloading NAMD from http://www.ks.uiuc.edu/Research/namd/.

NAMD on Your Desktop

NAMD can be downloaded and installed on your desktop from http://www.ks.uiuc.edu/Research/namd/.

NAMD on Cheaha

NAMD is pre-installed on the Cheaha research computing system. This allows users to run NAMD directly on the cluster without any need to install software.

NAMD Versions

Use the 'module' command to view a list of available NAMD versions. If the version that you require isn't listed, please open a help desk ticket to request the installation.

The following is an example output of the command and doesn't necessarily represent the currently installed versions:

$ module avail namd
------------------------------- /etc/modulefiles -------------------------------
namd/namd-2.6 
namd/namd-2.7 
namd/namd-2.8
namd/namd-2.9
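
Once you know which version you need, load it before running NAMD. A minimal example, assuming the namd/namd-2.9 module from the listing above is the one you want:

$ module load namd/namd-2.9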

Submitting NAMD jobs to Cheaha

These instructions provide an example of how to create and submit a NAMD job on Cheaha.

First, create a working directory for the job. You can use any directory to run your job, but it is recommended that the job directory be on scratch space (i.e. the Lustre filesystem) rather than in your home directory.

The ${USER_SCRATCH} variable expands to /data/scratch/$USER, where $USER is your Cheaha account name.

$ mkdir -p ${USER_SCRATCH}/jobs/NAMD 
$ cd ${USER_SCRATCH}/jobs/NAMD
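
If you want to confirm where ${USER_SCRATCH} points for your account, you can simply print it; the result should be /data/scratch/ followed by your username:

$ echo ${USER_SCRATCH}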

Next, copy all the files required for NAMD to the working directory.

NAMD requires the coordinates (*.pdb), structure (*.psf), parameters (*.xplor), and the config file to be present in the working directory.

For example, copy the directory "apoa1" from your local machine to the "NAMD" directory on Cheaha:

$ scp -r apoa1 USERNAME@cheaha.uabgrid.uab.edu:/data/scratch/USERNAME/jobs/NAMD 


Edit the NAMD config file. Pay special attention to make sure that all the files listed in the config are present in the working directory. In the following case they are apoa1.pdb, apoa1.psf, par_all22_prot_lipid.xplor, and par_all22_popc.xplor. Also change the outputname (the last line in the config file) to the location where the output should be written.

cellBasisVector1     108.8612 0.0 0.0
cellBasisVector2     0.0 108.8612 0.0
cellBasisVector3     0.0 0.0 77.758
cellOrigin           0.0 0.0 0.0

coordinates          apoa1.pdb
temperature          300
seed                 74269

switching            on
switchdist           10
cutoff               12
pairlistdist         13.5
margin               0
stepspercycle        20

PME                  on
PMEGridSizeX         108
PMEGridSizeY         108
PMEGridSizeZ         80

structure            apoa1.psf
parameters           par_all22_prot_lipid.xplor
parameters           par_all22_popc.xplor
exclude              scaled1-4
1-4scaling           1.0

timestep             1.0
fullElectFrequency   4

numsteps             500
outputtiming         20

outputname           /data/scratch/USERNAME/jobs/NAMD/output
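
Before moving on, it is worth verifying that every file referenced in the config is present. A quick check for the apoa1 example, assuming the input files sit directly in the job directory (adjust the path if they were left in an apoa1 subdirectory):

$ cd ${USER_SCRATCH}/jobs/NAMD
$ ls apoa1.namd apoa1.pdb apoa1.psf par_all22_prot_lipid.xplor par_all22_popc.xplor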


Next, create a job submission script called 'namdSubmit' as shown below. Make sure to edit the following parameters:

* s_rt to an appropriate soft wall time limit
* h_rt to the maximum wall time for your job
* -N - job name
* -M - user email
* -pe namd* numberOfProcessors  (-pe namd* 32 - run the code in parallel on 32  processors on the Cheaha compute nodes. The * wildcard instructs the scheduler to use any of the namd parallel environments and should always be used for NAMD jobs unless you really know what you are doing)
* -l vf to the maximum memory needed for each task
* cd to the current working directory where the job data is stored
* export MYFILE=configFileName  (export MYFILE=apoa1 will use the config file apoa1.namd)
#!/bin/bash
#$ -S /bin/bash
#
# Example NAMD job submission script can be found here:
# https://docs.uabgrid.uab.edu/wiki/NAMD#Submitting_NAMD_jobs_to_Cheaha
#
# Execute from the current working directory
#$ -cwd
#
# Job runtime (1 hour, at 55 minutes NAMD will be notified to shut down)
#$ -l h_rt=1:00:00,s_rt=0:55:00
#$ -j y
#
# Job Name and email
#$ -N namdtest_1
#$ -M YOUR_EMAIL_ADDRESS
# Email options to determine when to send emails
#$ -m be
#
#$ -pe namd* 32
#
# Load the appropriate module(s)
. /etc/profile.d/modules.sh
module load namd/namd-2.8
#
#$ -V
#$ -l vf=1G
#
#
########## Do Not Edit This Section ##########
## NAMD requires these variables, otherwise it will try to use rsh
## instead of ssh, which won't work on cheaha
#
#$ -v P4_RSHCOMMAND=rsh
#$ -v MPICH_PROCESS_GROUP=no
#$ -v CONV_RSH=ssh
##############################################

# Prepare nodelist file for charmrun ...
#
# The scheduler provides several variables, two of which we are using here:
# $NSLOTS  -  This contains the number of cpu slots requested by the -pe namd*
# $TMPDIR  -  This is a temporary directory used by the scheduler for this job
#      we use it to build the charmlist machine file for charmrun
echo "Got ${NSLOTS} slots."
echo "group main" > ${TMPDIR}/charmlist
awk '{print "host " $1}' ${TMPDIR}/machines >> ${TMPDIR}/charmlist
cat ${TMPDIR}/charmlist
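# As an illustration, the charmlist file printed above contains the line
# "group main" followed by one "host <nodename>" entry for each line of the
# scheduler-provided ${TMPDIR}/machines file, e.g. (node names here are
# illustrative only):
#   group main
#   host compute-0-1
#   host compute-0-2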

########## Edit The Directory and MYFILE ##########
# Ready ...
# 
cd ${USER_SCRATCH}/jobs/NAMD
export MYFILE=apoa1

# The variable ${CHARMRUN_NAMD_DIR} is automatically set by the
# "module load namd/namd-2.8" command above and points to the installation
# directory for NAMD
${CHARMRUN_NAMD_DIR}/charmrun ${CHARMRUN_NAMD_DIR}/namd2 ++nodelist ${TMPDIR}/charmlist +p ${NSLOTS} ${MYFILE}.namd > ${MYFILE}.out

Submit the script to the scheduler with:

$ qsub namdSubmit

The output will be similar to:

Your job 8013121 ("namdtest_1") has been submitted

You can check the status of the job using the 'qstat' command:

qstat -u $USER
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
8013121 0.51232 namdtest_1 tanthony     r     05/25/2011 10:39:56 sipsey.q@sipsey-compute-1-11.l    32 

The job output can be found in the output directory specified earlier.
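
Once the job completes, a quick way to inspect the results of the apoa1 example above (file names follow the MYFILE setting in the submit script and the outputname prefix in the config file):

$ cd ${USER_SCRATCH}/jobs/NAMD
$ tail apoa1.out
$ ls output*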

NAMD Support

In order to facilitate interaction among NAMD users, share experience, and provide peer support, UAB IT Research Computing will establish a NAMD-users group.

NAMD Tutorials

* Introduction to MD simulations Hands-on: NAMD Tutorial: http://biophys.physics.missouri.edu/courses/phys4500-05/Lectures/lecture03b.pdf
* Getting Started with NAMD: http://www1.pacific.edu/~mmccallu/namd1/

NAMD Benchmarks

NAMD has been benchmarked on the UAB Cheaha cluster and can run with the Infiniband interconnect on the generation 2 and generation 3 hardware. Benchmarking has also been completed using the ethernet interconnect on Cheaha. A comparative benchmark between the UAB Cheaha cluster, the NIH Biowulf cluster, and the ASC's Dense Memory Cluster has been performed; the results are available on the NAMD Benchmarks page.