NAMD
NAMD (Not (just) Another Molecular Dynamics program) is a free-of-charge molecular dynamics simulation package written using the Charm++ parallel programming model, noted for its parallel efficiency and often used to simulate large systems (millions of atoms). It is developed through a joint collaboration of the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois at Urbana-Champaign.
It was introduced in 1995 by Nelson et al. as a parallel molecular dynamics code enabling interactive simulation by linking to the visualization code VMD. NAMD has since matured, adding many features and scaling to thousands of processors. The latest stable release on the Cheaha research computing system is v2.8b1.
Using NAMD
NAMD is available free of charge for non-commercial use by individuals, academic or research institutions and corporations for in-house business purposes only, upon completion and submission of the online registration form presented when attempting to download NAMD at the web site http://www.ks.uiuc.edu/Research/namd/.
NAMD on Your Desktop
NAMD can be downloaded and installed on your desktop from http://www.ks.uiuc.edu/Research/namd/.
NAMD on Cheaha
NAMD is pre-installed on the Cheaha research computing system. This allows users to run NAMD directly on the cluster without any need to install software.
NAMD Versions
Use the 'module' command to view a list of available NAMD versions. If the version that you require isn't listed, please open a help desk ticket to request the installation.
The following is an example output of the command and doesn't necessarily represent the currently installed versions:
$ module avail namd

------------------------------- /etc/modulefiles -------------------------------
namd/namd-2.6    namd/namd-2.7    namd/namd-2.8
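The job submit script shown later on this page loads the module for you. If you want to experiment interactively at the shell prompt first, load the desired version the same way (version 2.8 is used here only as an example; substitute one listed by your own 'module avail' output):

$ module load namd/namd-2.8
$ module list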
Submitting NAMD jobs to Cheaha
These instructions provide an example of how to create and submit a NAMD job on Cheaha.
First, create the working directory for the job. Replace 'USERNAME' with your Cheaha account username. You can create any directory to run your job, but it is recommended that the job directory be on scratch space (i.e. the Lustre filesystem) rather than in your home directory.
$ mkdir -p /lustre/scratch/USERNAME/jobs/NAMD
$ cd /lustre/scratch/USERNAME/jobs/NAMD
Next, copy all the files required for NAMD to the working directory.
NAMD requires the coordinates (*.pdb), structure (*.psf), parameters (*.xplor), and the config file to be present in the working directory.
For example, copy the directory "apoa1" from the local host to the remote host's directory "NAMD":
$ scp -r apoa1 USERNAME@cheaha.uabgrid.uab.edu:/lustre/scratch/USERNAME/jobs/NAMD
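After the copy completes, you can verify that everything the config file references is present. The listing below is only a sketch based on the apoa1 example files named on this page; your own job will have different file names:

$ ls /lustre/scratch/USERNAME/jobs/NAMD/apoa1
apoa1.namd  apoa1.pdb  apoa1.psf  par_all22_popc.xplor  par_all22_prot_lipid.xplor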
Edit the NAMD config file. Pay special attention to make sure that all the files listed in the config are present in the working directory.
In the following case they are apoa1.pdb, apoa1.psf, par_all22_prot_lipid.xplor, and par_all22_popc.xplor. Also change the outputname (last line in the config file) to the location where the output should be written.
cellBasisVector1   108.8612  0.0       0.0
cellBasisVector2   0.0       108.8612  0.0
cellBasisVector3   0.0       0.0       77.758
cellOrigin         0.0       0.0       0.0
coordinates        apoa1.pdb
temperature        300
seed               74269
switching          on
switchdist         10
cutoff             12
pairlistdist       13.5
margin             0
stepspercycle      20
PME                on
PMEGridSizeX       108
PMEGridSizeY       108
PMEGridSizeZ       80
structure          apoa1.psf
parameters         par_all22_prot_lipid.xplor
parameters         par_all22_popc.xplor
exclude            scaled1-4
1-4scaling         1.0
timestep           1.0
fullElectFrequency 4
numsteps           500
outputtiming       20
outputname         /lustre/scratch/USERNAME/jobs/NAMD/output
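If you also want NAMD to write a trajectory and periodic restart files, the standard config keywords below can be appended. They are not part of the apoa1 example above, and the frequencies shown are only illustrative:

dcdfreq        100
restartfreq    100
outputEnergies 20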
Next, create a job submit script called 'namdSubmit' as shown below. Make sure to edit the following parameters:
* s_rt - an appropriate soft wall time limit
* h_rt - the maximum wall time for your job
* -N - job name
* -M - user email
* -pe namd2 numberOfProcessors (-pe namd2 32 runs the code in parallel on 32 processors on the Sipsey nodes of Cheaha)
* -l vf - the maximum memory needed for each task
* cd - the working directory where the job data is stored
* export MYFILE=configFileName (export MYFILE=apoa1 will use the config file apoa1.namd)
#!/bin/bash
#$ -S /bin/bash
# Execute from the current working directory
#$ -cwd
#
# Job runtime (1 hour, at 55 minutes NAMD will be notified to shut down)
#$ -l h_rt=1:00:00,s_rt=0:55:00
#$ -j y
#
# Job Name and email
#$ -N namdtest_1
#$ -M YOUR_EMAIL_ADDRESS
# Email options to determine when to send emails
#$ -m be
#
#$ -pe namd2 32
# Load the appropriate module(s)
. /etc/profile.d/modules.sh
module load namd/namd-2.8
#
#$ -V
#$ -l vf=1G
#
# Single precision
# The $NSLOTS variable is set automatically by SGE to match the number of
# slots requested
export CHARMRUN_NAMD_DIR=/share/apps/namd/NAMD_2.8b1_Linux-x86_64-ibverbs
#
########## Do Not Edit This Section ##########
## 20120302 - MJH - Are these variables really needed, or just an artifact
## of a previous job script? I don't think NAMD uses any of these variables
#
# The version of MPICH to use, transport protocol & a trick to delete cleanly
# running MPICH jobs ...
#
#$ -v MPIR_HOME=/opt/lam/intel/bin
#$ -v P4_RSHCOMMAND=rsh
#$ -v MPICH_PROCESS_GROUP=no
#$ -v CONV_RSH=ssh
##############################################
# Prepare nodelist file for charmrun ...
#
echo "Got ${NSLOTS} slots."
echo "group main" > ${TMPDIR}/charmlist
awk '{print "host " $1}' ${TMPDIR}/machines >> ${TMPDIR}/charmlist
cat ${TMPDIR}/charmlist
########## Edit The Directory and MYFILE ##########
#
# Ready ...
#
cd ${UABGRID_SCRATCH}/jobs/NAMD
export MYFILE=apoa1
${CHARMRUN_NAMD_DIR}/charmrun ${CHARMRUN_NAMD_DIR}/namd2 ++nodelist ${TMPDIR}/charmlist +p ${NSLOTS} ${MYFILE}.namd > ${MYFILE}.out
Submit the script to the scheduler with
$ qsub namdSubmit
The output will be similar to:
Your job 8013121 ("namdtest_1") has been submitted
You can check the status of the job using the 'qstat' command:
$ qstat -u $USER
job-ID   prior     name        user      state  submit/start at      queue                           slots  ja-task-ID
-----------------------------------------------------------------------------------------------------------------
8013121  0.51232   namdtest_1  tanthony  r      05/25/2011 10:39:56  sipsey.q@sipsey-compute-1-11.l  32
The job output can be found in the output directory specified earlier.
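Standard output from charmrun/namd2 is redirected to ${MYFILE}.out in the job directory (apoa1.out in this example). One quick check of how the run performed is to search that file for NAMD's benchmark timing lines; the exact wording of these lines can vary between NAMD versions:

$ grep -i "benchmark time" apoa1.out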
NAMD Support
In order to facilitate interaction among NAMD users, share experience, and provide peer support, UAB IT Research Computing will establish a NAMD users group.
NAMD Tutorials
- NAMD tutorials can be found at: http://www.ks.uiuc.edu/Training/Tutorials/
- Introduction to MD simulations Hands-on: NAMD Tutorial: http://biophys.physics.missouri.edu/courses/phys4500-05/Lectures/lecture03b.pdf
- Getting Started with NAMD: http://www1.pacific.edu/~mmccallu/namd1/
NAMD Benchmarks
NAMD has been benchmarked on the UAB Cheaha cluster and can run over the Infiniband interconnect on the generation 2 and generation 3 hardware. Benchmarking has also been completed using the Ethernet interconnect on Cheaha. A comparative benchmark between the UAB Cheaha cluster, the NIH Biowulf cluster, and the ASC's Dense Memory Cluster has been performed and the results are available here.