NAMD

NAMD (Not (just) Another Molecular Dynamics program) is a free-of-charge molecular dynamics simulation package written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). It is developed through a joint collaboration between the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois at Urbana-Champaign.

It was introduced in 1995 by Nelson et al. as a parallel molecular dynamics code enabling interactive simulation by linking to the visualization code VMD. NAMD has since matured, adding many features and scaling to thousands of processors. The latest stable release on the Cheaha research computing system is v2.8b1.

Using NAMD
NAMD is available free of charge for non-commercial use by individuals, academic or research institutions, and corporations for in-house business purposes only, upon completion and submission of the online registration form presented when downloading NAMD from http://www.ks.uiuc.edu/Research/namd/.

NAMD on Your Desktop
NAMD can be downloaded and installed on your desktop from http://www.ks.uiuc.edu/Research/namd/.

NAMD on Cheaha
NAMD is pre-installed on the Cheaha research computing system. This allows users to run NAMD directly on the cluster without any need to install software.

NAMD Versions
Use the 'module' command to view a list of available NAMD versions. If the version that you require isn't listed, please open a help desk ticket to request the installation.

The following is example output of the command and doesn't necessarily represent the currently installed versions:

$ module avail namd

--- /etc/modulefiles ---
namd/namd-2.6 namd/namd-2.7 namd/namd-2.8 namd/namd-2.9
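To use a specific version in your session, load the corresponding module. A typical session (assuming the namd/namd-2.8 module from the example listing above) looks like:

$ module load namd/namd-2.8
$ module list
Currently Loaded Modulefiles:
  1) namd/namd-2.8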

Submitting NAMD jobs to Cheaha
These instructions provide an example of how to create and submit a NAMD job on Cheaha.

First, create the working directory for the job. You can run your job from any directory, but it is recommended that the job directory be on scratch (i.e., the Lustre filesystem) rather than in your home directory.

The ${USER_SCRATCH} variable expands to /data/scratch/$USER, where $USER is your Cheaha account name.
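You can confirm what it expands to with echo; the account name shown here is just a placeholder:

$ echo ${USER_SCRATCH}
/data/scratch/jdoe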

$ mkdir -p ${USER_SCRATCH}/jobs/NAMD
$ cd ${USER_SCRATCH}/jobs/NAMD

Next, copy all the files required for NAMD to the working directory. NAMD requires the coordinates (*.pdb), structure (*.psf), parameters (*.xplor), and the config file to be present in the working directory.

For example, to copy the directory "apoa1" from your local machine to the directory "NAMD" on Cheaha:

$ scp -r apoa1 USERNAME@cheaha.uabgrid.uab.edu:/data/scratch/USERNAME/jobs/NAMD
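Once the copy completes, a quick listing on Cheaha confirms the files arrived (the file names shown assume the standard apoa1 example set used below):

$ ls ${USER_SCRATCH}/jobs/NAMD/apoa1
apoa1.namd  apoa1.pdb  apoa1.psf  par_all22_popc.xplor  par_all22_prot_lipid.xplor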

Edit the NAMD config file. Pay special attention to making sure that all the files listed in the config are present in the working directory; in the following case they are apoa1.pdb, apoa1.psf, par_all22_prot_lipid.xplor, and par_all22_popc.xplor. Also change the outputname (last line in the config file) to the location where the output should be written.

cellBasisVector1    108.8612 0.0 0.0
cellBasisVector2    0.0 108.8612 0.0
cellBasisVector3    0.0 0.0 77.758
cellOrigin          0.0 0.0 0.0

coordinates         apoa1.pdb
temperature         300
seed                74269

switching           on
switchdist          10
cutoff              12
pairlistdist        13.5
margin              0
stepspercycle       20

PME                 on
PMEGridSizeX        108
PMEGridSizeY        108
PMEGridSizeZ        80

structure           apoa1.psf
parameters          par_all22_prot_lipid.xplor
parameters          par_all22_popc.xplor
exclude             scaled1-4
1-4scaling          1.0

timestep            1.0
fullElectFrequency  4

numsteps            500
outputtiming        20

outputname          /data/scratch/USERNAME/jobs/NAMD/output
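A missing input file is the most common cause of an immediate NAMD failure, so a quick sanity check before submitting can save a wait in the queue. The following is a minimal sketch (the script name check_inputs.sh is our invention, and it assumes your config is named like apoa1.namd) that pulls the coordinates, structure, and parameters entries out of the config and verifies each file exists in the working directory:

#!/bin/bash
# check_inputs.sh - verify that input files named in a NAMD config exist
# Usage: ./check_inputs.sh apoa1.namd
CONFIG=${1:?usage: $0 config.namd}
# Print the second field of each coordinates/structure/parameters line,
# then test each named file for existence.
awk '$1 == "coordinates" || $1 == "structure" || $1 == "parameters" {print $2}' "$CONFIG" |
while read -r f; do
    if [ -e "$f" ]; then
        echo "found:   $f"
    else
        echo "MISSING: $f"
    fi
done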

Next, create a job submission script called 'namdSubmit' as shown below. Make sure to edit the following parameters:

 * s_rt - an appropriate soft wall time limit
 * h_rt - the maximum wall time for your job
 * -N - job name
 * -M - user email
 * -pe namd* numberOfProcessors (-pe namd* 32 runs the code in parallel on 32 processors on the Cheaha compute nodes; the * wildcard instructs the scheduler to use any of the namd parallel environments and should always be used for NAMD jobs unless you really know what you are doing)
 * -l vf - the maximum memory needed for each task
 * cd - the working directory where the job data is stored
 * export MYFILE=configFileName (export MYFILE=apoa1 will use the config file apoa1.namd)

#!/bin/bash
#$ -S /bin/bash
#
# Example NAMD job submission script can be found here:
# https://docs.uabgrid.uab.edu/wiki/NAMD#Submitting_NAMD_jobs_to_Cheaha
#
# Execute from the current working directory
#$ -cwd
#
# Job runtime (1 hour, at 55 minutes NAMD will be notified to shut down)
#$ -l h_rt=1:00:00,s_rt=0:55:00
#$ -j y
#
# Job Name and email
#$ -N namdtest_1
#$ -M YOUR_EMAIL_ADDRESS
#
# Email options to determine when to send emails
#$ -m be
#$ -pe namd* 32
#
# Load the appropriate module(s)
. /etc/profile.d/modules.sh
module load namd/namd-2.8
#$ -V
#$ -l vf=1G

########## Do Not Edit This Section ##########
# NAMD requires these variables, otherwise it will try to use rsh
# instead of ssh, which won't work on cheaha
#$ -v P4_RSHCOMMAND=rsh
#$ -v MPICH_PROCESS_GROUP=no
#$ -v CONV_RSH=ssh

# Prepare nodelist file for charmrun ...
# The scheduler provides several variables, two of which we are using here:
# $NSLOTS  - This contains the number of cpu slots requested by the -pe namd*
# $TMPDIR  - This is a temporary directory used by the scheduler for this job;
#            we use it to build the charmlist machine file for charmrun
echo "Got ${NSLOTS} slots."
echo "group main" > ${TMPDIR}/charmlist
awk '{print "host " $1}' ${TMPDIR}/machines >> ${TMPDIR}/charmlist
cat ${TMPDIR}/charmlist

########## Edit The Directory and MYFILE ##########
cd ${USER_SCRATCH}/jobs/NAMD
export MYFILE=apoa1

# Ready ...
# The variable ${CHARMRUN_NAMD_DIR} is automatically set by the
# "module load namd/namd-2.8" command above and points to the installation
# directory for NAMD
${CHARMRUN_NAMD_DIR}/charmrun ${CHARMRUN_NAMD_DIR}/namd2 ++nodelist ${TMPDIR}/charmlist +p ${NSLOTS} ${MYFILE}.namd > ${MYFILE}.out

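For reference, the charmlist file built by the awk line in the script is a standard charmrun nodelist: a "group main" header followed by one "host" line per entry in the scheduler's machines file. For a hypothetical job assigned slots on two nodes, it would look something like:

group main
host compute-0-1
host compute-0-1
host compute-0-2
host compute-0-2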
Submit the script to the scheduler:

$ qsub namdSubmit

The output will be:

Your job 8013121 ("namdtest_1") has been submitted

You can check the status of the job using the 'qstat' command:

$ qstat -u $USER
job-ID  prior   name       user     state submit/start at     queue                          slots ja-task-ID
-------------------------------------------------------------------------------------------------------------
8013121 0.51232 namdtest_1 tanthony r     05/25/2011 10:39:56 sipsey.q@sipsey-compute-1-11.l    32

The job output can be found in the output directory specified earlier.
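NAMD also writes a run log to the ${MYFILE}.out file named in the job script; the WallClock line near the end of that log is a quick way to confirm the run finished and see how long it took. For example (the timing numbers here are purely illustrative):

$ grep WallClock ${USER_SCRATCH}/jobs/NAMD/apoa1.out
WallClock: 125.345  CPUTime: 124.967  Memory: 240.5 MB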

NAMD Support
To facilitate interaction among NAMD users, the sharing of experience, and peer support, UAB IT Research Computing will establish a NAMD-users group.

NAMD Tutorials

 * NAMD tutorials from the developers can be found at: http://www.ks.uiuc.edu/Training/Tutorials/
 * Introduction to MD simulations Hands-on: NAMD Tutorial: http://biophys.physics.missouri.edu/courses/phys4500-05/Lectures/lecture03b.pdf
 * Getting Started with NAMD: http://www1.pacific.edu/~mmccallu/namd1/

NAMD Benchmarks
NAMD has been benchmarked on the UAB Cheaha cluster and runs over the InfiniBand interconnect on the generation 2 and generation 3 hardware. Benchmarking has also been completed using the Ethernet interconnect on Cheaha. A comparative benchmark between the UAB Cheaha cluster, the NIH Biowulf cluster, and the ASC Dense Memory Cluster has been performed and the results are available here.