<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.uabgrid.uab.edu/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wwarr%40uab.edu</id>
	<title>Cheaha - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.uabgrid.uab.edu/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wwarr%40uab.edu"/>
	<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/wiki/Special:Contributions/Wwarr@uab.edu"/>
	<updated>2026-04-15T18:40:21Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.2</generator>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=SLURM_VNC_interactive_jobs&amp;diff=6197</id>
		<title>SLURM VNC interactive jobs</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=SLURM_VNC_interactive_jobs&amp;diff=6197"/>
		<updated>2021-06-24T22:55:33Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Notice:''' It is now possible to access a VNC-like desktop environment from our [[Open_OnDemand|Open OnDemand]] web portal. &lt;br /&gt;
&lt;br /&gt;
==Using VNC session ==&lt;br /&gt;
Please refer to our [[Setting_Up_VNC_Session|VNC setup guide]] to start a VNC session on the Cheaha login node. Once your VNC desktop is up, you can request a compute node resource using the '''sinteractive''' command. A '''sinteractive''' session launched through VNC is not terminated when your SSH or VNC window is closed; it is, however, terminated when the specified run time is reached.&lt;br /&gt;
&lt;br /&gt;
= Interactive Resources =&lt;br /&gt;
&lt;br /&gt;
After you log in to Cheaha, the command-line interface you see is running on the login node.  Most of the light interactive prep work needed to submit a compute job to the scheduler can be carried out on this login node.  If you have a heavier workload to prepare for a batch job (e.g. compiling code or other manipulations of data) or your compute application requires interactive control, you should request a dedicated interactive node for this work.&lt;br /&gt;
&lt;br /&gt;
Interactive resources are requested by submitting an &amp;quot;interactive&amp;quot; job to the scheduler.  Interactive jobs will provide you a command line on a compute resource that you can use just like you would the command line on the login node.  The difference is that the scheduler has dedicated the requested resources to your job and you can run your interactive commands without having to worry about impacting other users on the login node.&lt;br /&gt;
&lt;br /&gt;
Interactive jobs are requested with the sinteractive command (please use your correct email address in place of the ''$USER@uab.edu'' string if you do not have an @uab.edu email address):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sinteractive --nodes=1 --time=120 --mail-user=$USER@uab.edu --mail-type=begin,end,fail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command requests 1 node (24 cores) with the node's entire allocation of RAM for 120 minutes (2 hours).  The command will wait until the resource is reserved by the scheduler and send you an email when the resource is available.  The email alert can be useful during periods of heavy cluster demand, when interactive resource reservations may have significant wait times.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Setting_Up_VNC_Session&amp;diff=6196</id>
		<title>Setting Up VNC Session</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Setting_Up_VNC_Session&amp;diff=6196"/>
		<updated>2021-06-24T22:54:23Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Added note about Open OnDemand&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[wikipedia:Virtual_Network_Computing|Virtual Network Computing (VNC)]] is a cross-platform desktop sharing system for interacting with a remote system's desktop through a graphical interface. This page covers basic instructions for accessing a desktop on [[Cheaha]] using VNC. These instructions support a variety of use cases where access to graphical applications on the cluster is helpful or required. If you are interested in more options or detailed technical information, please take a look at the man pages of the commands mentioned.&lt;br /&gt;
&lt;br /&gt;
'''Notice:''' It is now possible to access a VNC-like desktop environment from our [[Open_OnDemand|Open OnDemand]] web portal.&lt;br /&gt;
&lt;br /&gt;
== One Time Setup ==&lt;br /&gt;
VNC use on Cheaha requires a one-time setup to configure how the virtual desktop starts. These instructions configure the VNC server to use the Gnome desktop environment, the default desktop environment on the cluster. (Alternatively, you can run the vncserver command without this configuration and get a very basic, but harder to use, desktop environment.) To get started, [[Cheaha_GettingStarted#Login|log in to Cheaha via SSH]].&lt;br /&gt;
&lt;br /&gt;
=== Set VNC Session Password ===&lt;br /&gt;
You must maintain a password for your VNC server sessions using the vncpasswd command. The password is validated each time a connection comes in, so it can be changed on the fly by running vncpasswd again at any time.  '''Remember this password, as you will be prompted for it when you access your cluster desktop.''' By default, the command stores an obfuscated version of the password in the file $HOME/.vnc/passwd.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vncpasswd &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Configure the Cluster Desktop ===&lt;br /&gt;
The vncserver command relies on a configuration script to start your virtual desktop environment. The [[wikipedia:GNOME|GNOME2]] desktop provides a familiar desktop experience and can be selected by creating the following vncserver startup script (~/.vnc/xstartup).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir $HOME/.vnc&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; $HOME/.vnc/xstartup &amp;lt;&amp;lt;\EOF&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
&lt;br /&gt;
# Start up the standard system desktop&lt;br /&gt;
unset SESSION_MANAGER&lt;br /&gt;
unset DBUS_SESSION_BUS_ADDRESS&lt;br /&gt;
&lt;br /&gt;
/usr/bin/mate-session&lt;br /&gt;
&lt;br /&gt;
[ -x /etc/vnc/xstartup ] &amp;amp;&amp;amp; exec /etc/vnc/xstartup&lt;br /&gt;
[ -r $HOME/.Xresources ] &amp;amp;&amp;amp; xrdb $HOME/.Xresources&lt;br /&gt;
x-window-manager &amp;amp;&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Setup correct permission to xstartup&lt;br /&gt;
chmod +x $HOME/.vnc/xstartup&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, a VNC server displays a graphical environment using a tab window manager. If the above xstartup file is absent, a file with the default tab-window-manager settings will be created by the vncserver command during startup.  If you want to switch to the GNOME desktop, simply replace this default file with the settings above.&lt;br /&gt;
&lt;br /&gt;
This completes the one-time setup on the cluster for creating a VNC server password and selecting the preferred desktop environment.&lt;br /&gt;
&lt;br /&gt;
=== Select a VNC Client ===&lt;br /&gt;
You will also need a VNC client on your personal desktop in order to remotely access your cluster desktop.  &lt;br /&gt;
&lt;br /&gt;
Mac OS comes with a native VNC client, so you don't need any third-party software.  Chicken of the VNC is a popular alternative to the native client, especially on older versions of Mac OS (pre-10.7).&lt;br /&gt;
&lt;br /&gt;
Most Linux systems have the VNC software installed so you can simply use the vncviewer command to access a VNC server. &lt;br /&gt;
&lt;br /&gt;
If you use MS Windows, you will need to install a VNC client. Here is a list of VNC clients; any one of them can be used to access the VNC server. &lt;br /&gt;
 * http://www.tightvnc.com/ (Mac, Linux and Windows)&lt;br /&gt;
 * http://www.realvnc.com/ (Mac, Linux and Windows)&lt;br /&gt;
 * http://sourceforge.net/projects/cotvnc/ (Mac)&lt;br /&gt;
&lt;br /&gt;
== Start your VNC Desktop == &lt;br /&gt;
Your VNC desktop must be started before you can connect to it.  To start the VNC desktop, log in to Cheaha using a [[Cheaha_GettingStarted#Login|standard SSH connection]]. The VNC server is started by executing the vncserver command after you log in. It runs in the background and continues running even after you log out of the SSH session that was used to start it.&lt;br /&gt;
&lt;br /&gt;
To start the VNC desktop run the vncserver command.  You will see a short message like the following from the vncserver before it goes into the background. You will need this information to connect to your desktop.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vncserver &lt;br /&gt;
New 'login001:24 (blazer)' desktop is login001:24&lt;br /&gt;
&lt;br /&gt;
Starting applications specified in /home/blazer/.vnc/xstartup&lt;br /&gt;
Log file is /home/blazer/.vnc/login001:24.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above command output indicates that a VNC server is started on VNC X-display number 24, which translates to system port 5924. The vncserver automatically selects this port from a list of available ports.&lt;br /&gt;
&lt;br /&gt;
The actual system port on which VNC server is listening for connections is obtained by adding a VNC base port (default: port 5900) and a VNC X-display number (24 in above case). Alternatively you can specify a high numbered system port directly (e.g. 5927) using '-rfbport &amp;lt;port-number&amp;gt;' option and the vncserver will try to use it if it's available. See vncserver's man page for details.&lt;br /&gt;
&lt;br /&gt;
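As a concrete check of the port arithmetic above, the listening port for any display number can be computed directly in the shell (the display number 24 below is taken from the example vncserver output; substitute your own):

```shell
# A VNC server listens on the base port (5900) plus its X-display number.
base_port=5900
display=24            # display number reported by vncserver, e.g. login001:24
echo "VNC port: $((base_port + display))"   # prints "VNC port: 5924"
```
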
Please note that the vncserver will continue to run in the background on the login node until it is explicitly stopped.  This allows you to reconnect to the same desktop session, with all your desktop applications still active, without having to start the vncserver again.  When you no longer need your desktop, simply log out using the desktop's log-out menu option or explicitly end the server with the 'vncserver -kill' command.&lt;br /&gt;
&lt;br /&gt;
=== Alternate Cluster Desktop Sizes ===&lt;br /&gt;
The default size of your cluster desktop is 1024x768 pixels.  If you want to start your desktop with an alternate geometry to match your application, personal desktop environment, or other preferences, simply add a &amp;quot;-geometry widthxheight&amp;quot; argument to your vncserver command.  For example, if you want a wide-screen geometry popular with laptops, you might start the VNC server with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncserver -geometry 1280x800&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Stop your VNC Desktop == &lt;br /&gt;
Stopping the VNC process is done using the ''vncserver -kill'' command. The command takes a single argument, the display port.&lt;br /&gt;
&lt;br /&gt;
The VNC server display port can be found using the following command (display port format is a ''':''' followed by 1 or more digits):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncserver -list&lt;br /&gt;
&lt;br /&gt;
X DISPLAY #     PROCESS ID&lt;br /&gt;
:4              52904&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the above example, the VNC display port is ''':4'''. Terminating the VNC desktop can now be done via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncserver -kill :4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
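When more than one desktop is running, the display argument for '-kill' can be extracted from the '-list' output with a short pipeline. This is a sketch that assumes the two-column output format shown above; the sample output is inlined with printf so the pipeline can be tried without a running vncserver:

```shell
# Print each VNC display number (lines beginning with ':') from
# 'vncserver -list'-style output. In practice, replace the printf
# with: vncserver -list | awk '/^:/ {print $1}'
printf '%s\n' 'X DISPLAY #     PROCESS ID' ':4              52904' |
  awk '/^:/ {print $1}'    # prints ":4"
```

The printed value (e.g. ':4') can then be passed directly to 'vncserver -kill'.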
== Establish a Network Connection to your VNC Server ==&lt;br /&gt;
&lt;br /&gt;
As indicated in the output from the vncserver command, the VNC desktop is listening for connections on a higher numbered port.  This port isn't directly accessible from the internet. Hence, we need to use SSH local port forwarding to connect to this server.&lt;br /&gt;
&lt;br /&gt;
This SSH session provides the connection to your VNC desktop and must remain active while you use the desktop.  You can disconnect and reconnect to your desktop by establishing this SSH session whenever you need to access your desktop.  In other words, your desktop remains active across your connections to it. This supports a mobile work environment.&lt;br /&gt;
&lt;br /&gt;
=== Port-forwarding from Linux or Mac Systems ===&lt;br /&gt;
Set up SSH port forwarding using the native SSH command. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ssh -L &amp;lt;local-port&amp;gt;:&amp;lt;remote-system-host&amp;gt;:&amp;lt;remote-system-port&amp;gt; USERID@&amp;lt;SSH-server-host&amp;gt;&lt;br /&gt;
$ ssh -L 5924:localhost:5924 USERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above command forwards connections on local port 5924 to port 5924 on the remote system (which is the same host as the SSH server, Cheaha, hence localhost).&lt;br /&gt;
&lt;br /&gt;
=== Port-forwarding from Windows Systems ===&lt;br /&gt;
Windows users need to establish the connection using whatever SSH software they commonly use. The following is an example configuration using the PuTTY client on Windows. Be sure to press the &amp;quot;Add&amp;quot; button to save the configuration with the session and to ensure the tunnel is opened when the connection is established.&lt;br /&gt;
&lt;br /&gt;
[[File:Putty-SSH-Tunnel.png]]&lt;br /&gt;
&lt;br /&gt;
== Access your Cluster Desktop ==&lt;br /&gt;
&lt;br /&gt;
With the network connection to the VNC server established, you can access your cluster desktop using your preferred VNC client. When you access your cluster desktop you will be prompted for the VNC password you created during the one time setup above.&lt;br /&gt;
&lt;br /&gt;
The VNC client will actually connect to your local machine, e.g. &amp;quot;localhost&amp;quot;, because it relies on the SSH port forwarding to reach the VNC server on the cluster.  The SSH tunnel &amp;quot;listens&amp;quot; on your local host and forwards all of your VNC traffic across the network to your VNC server on the cluster.&lt;br /&gt;
&lt;br /&gt;
You can access the VNC server using the following connection scenarios based on your personal desktop environment.&lt;br /&gt;
&lt;br /&gt;
==== From Mac ====&lt;br /&gt;
&lt;br /&gt;
'''For Mac OSX 10.8 and higher'''&lt;br /&gt;
Mac users can use the default VNC client and start it from Finder. Press '''cmd+k''' to bring up the &amp;quot;connect to server&amp;quot; window. Enter the following connection string in Finder: &lt;br /&gt;
&amp;lt;pre&amp;gt;vnc://localhost:5924 &amp;lt;/pre&amp;gt;&lt;br /&gt;
The connection string pattern is &amp;quot;vnc://&amp;lt;vnc-server&amp;gt;:&amp;lt;vnc-port&amp;gt;&amp;quot;.  Adjust your port setting for the specific value of your cluster desktop given when you run vncserver above.&lt;br /&gt;
&lt;br /&gt;
'''For Mac OSX 10.7 and lower'''&lt;br /&gt;
Download and install Chicken of the VNC from [http://sourceforge.net/projects/cotvnc/ SourceForge].&lt;br /&gt;
Start COTVNC and enter the following in the host window and provide the VNC password you created during set up when prompted:&lt;br /&gt;
&amp;lt;pre&amp;gt;localhost:5924&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== From Linux ====&lt;br /&gt;
Linux users can use the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer :24 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Shortcut for Linux Users =====&lt;br /&gt;
Linux users can optionally skip the explicit SSH tunnel setup described above by using the -via argument to the vncviewer command. The &amp;quot;-via &amp;lt;gateway&amp;gt;&amp;quot; will set up the SSH tunnel implicitly. For the above example, the following command would be used:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer -via cheaha.rc.uab.edu :24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This option is preferred since it will also establish VNC settings that are more efficient for slow networks. See the man page for vncviewer for details on other encodings.&lt;br /&gt;
&lt;br /&gt;
==== From Windows ====&lt;br /&gt;
Windows users should use whatever connection string is applicable to their VNC client. &lt;br /&gt;
&lt;br /&gt;
Remember to use &amp;quot;localhost&amp;quot; as the host address in your VNC client.  You do this because you have already created the real connection to Cheaha using the SSH tunnel.  The SSH tunnel &amp;quot;listens&amp;quot; on your local host and forwards all of your VNC traffic across the network to your VNC server on the cluster.&lt;br /&gt;
&lt;br /&gt;
== Using your Desktop ==&lt;br /&gt;
Once we have a VNC session established with the Gnome desktop environment, we can use it to launch any graphical application on Cheaha, or to open an X11-enabled SSH session with a remote system in the cluster. &lt;br /&gt;
&lt;br /&gt;
VNC can be particularly useful when trying to access an X Windows application from MS Windows, as a native X11 setup on Windows is typically more involved than the VNC setup above. For example, it is much easier to start an X11-based SSH session with a remote system on the cluster from the Gnome desktop above than to do a full X11 setup on Windows.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
$ ssh -X $USER@172.x.x.x&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Performance Considerations for Slow Networks ===&lt;br /&gt;
&lt;br /&gt;
If the network you are using to connect to your VNC session is slow (e.g. Wi-Fi or off campus), you may be able to improve the responsiveness of the VNC session by adjusting simple desktop settings in your VNC desktop.  The VNC screen needs to be repainted every time your desktop is modified, e.g. opening or moving a window.  Any bit of data you don't have to send improves the drawing speed.  Most modern desktops default to a pretty background picture; while nice to look at, these pictures contain a lot of data.  If you set your desktop background to a solid color (no gradients), the screen refresh will be much quicker (see System-&amp;gt;Preferences-&amp;gt;Desktop Background).  Changing to a basic windowing theme will also speed screen refreshes (see System-&amp;gt;Preferences-&amp;gt;Themes-&amp;gt;Mist).&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=SLURM_VNC_interactive_jobs&amp;diff=6195</id>
		<title>SLURM VNC interactive jobs</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=SLURM_VNC_interactive_jobs&amp;diff=6195"/>
		<updated>2021-06-24T22:51:17Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Added note about preferring Open OnDemand&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Notice==&lt;br /&gt;
'''This page contains outdated information.''' While the method outlined here should still work, the easiest way to access a VNC-like desktop on Cheaha is to use the [[Open_OnDemand|Open OnDemand]] web portal to create and access an Interactive HPC Desktop job through your web browser.&lt;br /&gt;
&lt;br /&gt;
==Using VNC session ==&lt;br /&gt;
Please refer to our [[Setting_Up_VNC_Session|VNC setup guide]] to start a VNC session on the Cheaha login node. Once your VNC desktop is up, you can request a compute node resource using the '''sinteractive''' command. A '''sinteractive''' session launched through VNC is not terminated when your SSH or VNC window is closed; it is, however, terminated when the specified run time is reached.&lt;br /&gt;
&lt;br /&gt;
= Interactive Resources =&lt;br /&gt;
&lt;br /&gt;
After you log in to Cheaha, the command-line interface you see is running on the login node.  Most of the light interactive prep work needed to submit a compute job to the scheduler can be carried out on this login node.  If you have a heavier workload to prepare for a batch job (e.g. compiling code or other manipulations of data) or your compute application requires interactive control, you should request a dedicated interactive node for this work.&lt;br /&gt;
&lt;br /&gt;
Interactive resources are requested by submitting an &amp;quot;interactive&amp;quot; job to the scheduler.  Interactive jobs will provide you a command line on a compute resource that you can use just like you would the command line on the login node.  The difference is that the scheduler has dedicated the requested resources to your job and you can run your interactive commands without having to worry about impacting other users on the login node.&lt;br /&gt;
&lt;br /&gt;
Interactive jobs are requested with the sinteractive command (please use your correct email address in place of the ''$USER@uab.edu'' string if you do not have an @uab.edu email address):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sinteractive --nodes=1 --time=120 --mail-user=$USER@uab.edu --mail-type=begin,end,fail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command requests 1 node (24 cores) with the node's entire allocation of RAM for 120 minutes (2 hours).  The command will wait until the resource is reserved by the scheduler and send you an email when the resource is available.  The email alert can be useful during periods of heavy cluster demand, when interactive resource reservations may have significant wait times.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Open_OnDemand&amp;diff=6194</id>
		<title>Open OnDemand</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Open_OnDemand&amp;diff=6194"/>
		<updated>2021-06-24T22:46:40Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: initial page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
[https://openondemand.org/ Open OnDemand] is a graphical, web-based cluster interface software produced by Ohio Supercomputer Center. The software is installed on the UAB cluster fabric and allows a number of interactions with Cheaha in a consistent and user-friendly environment, all within a web browser. Chrome is preferred, but it appears to work with Firefox as well. Our instance of Open OnDemand is available at [https://rc.uab.edu https://rc.uab.edu].&lt;br /&gt;
&lt;br /&gt;
Features of Open OnDemand include:&lt;br /&gt;
* Filesystem browser&lt;br /&gt;
* Text file editor&lt;br /&gt;
* Image viewer&lt;br /&gt;
* Job composer and saveable templates&lt;br /&gt;
* Active job list&lt;br /&gt;
* Terminal&lt;br /&gt;
* Interactive applications, with in-browser remote desktop experience&lt;br /&gt;
* Web server interactive applications&lt;br /&gt;
* Per-user application sandbox for deploying custom graphical applications&lt;br /&gt;
&lt;br /&gt;
Interactive applications include:&lt;br /&gt;
* Desktop environment&lt;br /&gt;
* ANSYS (only available to licensed users)&lt;br /&gt;
* IGV&lt;br /&gt;
* MATLAB&lt;br /&gt;
* SAS&lt;br /&gt;
&lt;br /&gt;
Web server interactive applications include:&lt;br /&gt;
* Jupyter&lt;br /&gt;
* RStudio&lt;br /&gt;
&lt;br /&gt;
=Applications=&lt;br /&gt;
&lt;br /&gt;
==Interactive==&lt;br /&gt;
&lt;br /&gt;
===IGV===&lt;br /&gt;
&lt;br /&gt;
For more information on IGV please see [[OOD_IGV|OOD IGV]].&lt;br /&gt;
&lt;br /&gt;
==Sandbox==&lt;br /&gt;
&lt;br /&gt;
For more information on using the application sandbox please see [[Open_OnDemand_Sandbox|Open OnDemand Sandbox]].&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=RStudio&amp;diff=6193</id>
		<title>RStudio</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=RStudio&amp;diff=6193"/>
		<updated>2021-06-24T22:00:29Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RStudio is an integrated development environment (IDE) for R. It includes a console, syntax-highlighting editor that supports direct code execution, as well as tools for plotting, history, debugging and workspace management. To learn more about RStudio, click [https://www.rstudio.com/ here].&lt;br /&gt;
&lt;br /&gt;
===Starting an RStudio server session===&lt;br /&gt;
An RStudio server session can be started on Cheaha using the '''rserver''' command.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[ravi89@login001 ~]$ rserver&lt;br /&gt;
Waiting for RStudio server to start&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
SSH port forwarding from laptop&lt;br /&gt;
ssh -L 8700:c0082:8700 ravi89@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
Connection string for local browser&lt;br /&gt;
http://localhost:8700&lt;br /&gt;
&lt;br /&gt;
Authorization info for Rstudio&lt;br /&gt;
Username: ravi89&lt;br /&gt;
Password: ................&lt;br /&gt;
&lt;br /&gt;
[ravi89@login001 ~]$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Accessing the created RStudio session===&lt;br /&gt;
Once the RStudio session has started after running the '''rserver''' command, it prints the information you need to connect to it.&lt;br /&gt;
&lt;br /&gt;
Here are the steps to connect, based on the information it prints:&lt;br /&gt;
&lt;br /&gt;
====Port forwarding====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SSH port forwarding from laptop&lt;br /&gt;
ssh -L 8700:c0082:8700 ravi89@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are on a Mac or Linux system, open a new terminal tab/window and run the ssh line shown under '''SSH port forwarding from laptop''', which in the example above is '''ssh -L 8700:c0082:8700 ravi89@cheaha.rc.uab.edu'''. On a Windows system, you can set up port forwarding using the method described [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here].&lt;br /&gt;
&lt;br /&gt;
====Local Browser Connection====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Connection string for local browser&lt;br /&gt;
http://localhost:8700&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now start up a web browser of your choice (Google Chrome, Firefox, Safari, etc.) and go to the link shown under '''Connection string for local browser''', which in the above example is &amp;lt;nowiki&amp;gt;http://localhost:8700&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Authorization Info====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Authorization info for Rstudio&lt;br /&gt;
Username: ravi89&lt;br /&gt;
Password: ................&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Each RStudio server session is secured with a random temporary password, which can be found under '''Authorization info for Rstudio'''. Use this information to log in to the RStudio server in your web browser.&lt;br /&gt;
=====Setting your own password=====&lt;br /&gt;
You can set your own password for accessing the RStudio session via the RSTUDIO_PASSWORD environment variable. Set it with the following command on Cheaha before starting '''rserver''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[ravi89@login001 ~]$ export RSTUDIO_PASSWORD=asdfghjkl&lt;br /&gt;
[ravi89@login001 ~]$ rserver &lt;br /&gt;
Waiting for RStudio server to start&lt;br /&gt;
.............&lt;br /&gt;
&lt;br /&gt;
SSH port forwarding from laptop&lt;br /&gt;
ssh -L 8742:c0076:8742 ravi89@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
Connection string for local browser&lt;br /&gt;
http://localhost:8742&lt;br /&gt;
&lt;br /&gt;
Authorization info for Rstudio&lt;br /&gt;
Username: ravi89&lt;br /&gt;
Password: asdfghjkl&lt;br /&gt;
&lt;br /&gt;
[ravi89@login001 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Default parameters===&lt;br /&gt;
If you use '''rserver''' without any additional parameters, it starts with the following default parameters:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Partition: Short&lt;br /&gt;
Time: 12:00:00&lt;br /&gt;
mem-per-cpu: 1024&lt;br /&gt;
cpus-per-task: 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Setting parameters===&lt;br /&gt;
You can pass your own parameters to '''rserver''', such as time and partition.&lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 rserver --time=05:00:00 --partition=short --mem-per-cpu=4096&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List of parameters that you can set up with rserver:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Parallel run options:&lt;br /&gt;
  -a, --array=indexes         job array index values&lt;br /&gt;
  -A, --account=name          charge job to specified account&lt;br /&gt;
      --bb=&amp;lt;spec&amp;gt;             burst buffer specifications&lt;br /&gt;
      --bbf=&amp;lt;file_name&amp;gt;       burst buffer specification file&lt;br /&gt;
      --begin=time            defer job until HH:MM MM/DD/YY&lt;br /&gt;
      --comment=name          arbitrary comment&lt;br /&gt;
      --cpu-freq=min[-max[:gov]] requested cpu frequency (and governor)&lt;br /&gt;
  -c, --cpus-per-task=ncpus   number of cpus required per task&lt;br /&gt;
  -d, --dependency=type:jobid defer job until condition on jobid is satisfied&lt;br /&gt;
      --deadline=time         remove the job if no ending possible before&lt;br /&gt;
                              this deadline (start &amp;gt; (deadline - time[-min]))&lt;br /&gt;
      --delay-boot=mins       delay boot for desired node features&lt;br /&gt;
  -D, --workdir=directory     set working directory for batch script&lt;br /&gt;
  -e, --error=err             file for batch script's standard error&lt;br /&gt;
      --export[=names]        specify environment variables to export&lt;br /&gt;
      --export-file=file|fd   specify environment variables file or file&lt;br /&gt;
                              descriptor to export&lt;br /&gt;
      --get-user-env          load environment from local cluster&lt;br /&gt;
      --gid=group_id          group ID to run job as (user root only)&lt;br /&gt;
      --gres=list             required generic resources&lt;br /&gt;
      --gres-flags=opts       flags related to GRES management&lt;br /&gt;
  -H, --hold                  submit job in held state&lt;br /&gt;
      --ignore-pbs            Ignore #PBS options in the batch script&lt;br /&gt;
  -i, --input=in              file for batch script's standard input&lt;br /&gt;
  -I, --immediate             exit if resources are not immediately available&lt;br /&gt;
      --jobid=id              run under already allocated job&lt;br /&gt;
  -J, --job-name=jobname      name of job&lt;br /&gt;
  -k, --no-kill               do not kill job on node failure&lt;br /&gt;
  -L, --licenses=names        required license, comma separated&lt;br /&gt;
  -M, --clusters=names        Comma separated list of clusters to issue&lt;br /&gt;
                              commands to.  Default is current cluster.&lt;br /&gt;
                              Name of 'all' will submit to run on all clusters.&lt;br /&gt;
                              NOTE: SlurmDBD must up.&lt;br /&gt;
  -m, --distribution=type     distribution method for processes to nodes&lt;br /&gt;
                              (type = block|cyclic|arbitrary)&lt;br /&gt;
      --mail-type=type        notify on state change: BEGIN, END, FAIL or ALL&lt;br /&gt;
      --mail-user=user        who to send email notification for job state&lt;br /&gt;
                              changes&lt;br /&gt;
      --mcs-label=mcs         mcs label if mcs plugin mcs/group is used&lt;br /&gt;
  -n, --ntasks=ntasks         number of tasks to run&lt;br /&gt;
      --nice[=value]          decrease scheduling priority by value&lt;br /&gt;
      --no-requeue            if set, do not permit the job to be requeued&lt;br /&gt;
      --ntasks-per-node=n     number of tasks to invoke on each node&lt;br /&gt;
  -N, --nodes=N               number of nodes on which to run (N = min[-max])&lt;br /&gt;
  -o, --output=out            file for batch script's standard output&lt;br /&gt;
  -O, --overcommit            overcommit resources&lt;br /&gt;
  -p, --partition=partition   partition requested&lt;br /&gt;
      --parsable              outputs only the jobid and cluster name (if present),&lt;br /&gt;
                              separated by semicolon, only on successful submission.&lt;br /&gt;
      --power=flags           power management options&lt;br /&gt;
      --priority=value        set the priority of the job to value&lt;br /&gt;
      --profile=value         enable acct_gather_profile for detailed data&lt;br /&gt;
                              value is all or none or any combination of&lt;br /&gt;
                              energy, lustre, network or task&lt;br /&gt;
      --propagate[=rlimits]   propagate all [or specific list of] rlimits&lt;br /&gt;
      --qos=qos               quality of service&lt;br /&gt;
  -Q, --quiet                 quiet mode (suppress informational messages)&lt;br /&gt;
      --reboot                reboot compute nodes before starting job&lt;br /&gt;
      --requeue               if set, permit the job to be requeued&lt;br /&gt;
  -s, --oversubscribe         over subscribe resources with other jobs&lt;br /&gt;
  -S, --core-spec=cores       count of reserved cores&lt;br /&gt;
      --signal=[B:]num[@time] send signal when time limit within time seconds&lt;br /&gt;
      --spread-job            spread job across as many nodes as possible&lt;br /&gt;
      --switches=max-switches{@max-time-to-wait}&lt;br /&gt;
                              Optimum switches and max time to wait for optimum&lt;br /&gt;
      --thread-spec=threads   count of reserved threads&lt;br /&gt;
  -t, --time=minutes          time limit&lt;br /&gt;
      --time-min=minutes      minimum time limit (if distinct)&lt;br /&gt;
      --uid=user_id           user ID to run job as (user root only)&lt;br /&gt;
      --use-min-nodes         if a range of node counts is given, prefer the&lt;br /&gt;
                              smaller count&lt;br /&gt;
  -v, --verbose               verbose mode (multiple -v's increase verbosity)&lt;br /&gt;
  -W, --wait                  wait for completion of submitted job&lt;br /&gt;
      --wckey=wckey           wckey to run job under&lt;br /&gt;
      --wrap[=command string] wrap command string in a sh script and submit&lt;br /&gt;
&lt;br /&gt;
Constraint options:&lt;br /&gt;
      --contiguous            demand a contiguous range of nodes&lt;br /&gt;
  -C, --constraint=list       specify a list of constraints&lt;br /&gt;
  -F, --nodefile=filename     request a specific list of hosts&lt;br /&gt;
      --mem=MB                minimum amount of real memory&lt;br /&gt;
      --mincpus=n             minimum number of logical processors (threads)&lt;br /&gt;
                              per node&lt;br /&gt;
      --reservation=name      allocate resources from named reservation&lt;br /&gt;
      --tmp=MB                minimum amount of temporary disk&lt;br /&gt;
  -w, --nodelist=hosts...     request a specific list of hosts&lt;br /&gt;
  -x, --exclude=hosts...      exclude a specific list of hosts&lt;br /&gt;
&lt;br /&gt;
Consumable resources related options:&lt;br /&gt;
      --exclusive[=user]      allocate nodes in exclusive mode when&lt;br /&gt;
                              cpu consumable resource is enabled&lt;br /&gt;
      --exclusive[=mcs]       allocate nodes in exclusive mode when&lt;br /&gt;
                              cpu consumable resource is enabled&lt;br /&gt;
                              and mcs plugin is enabled&lt;br /&gt;
      --mem-per-cpu=MB        maximum amount of real memory per allocated&lt;br /&gt;
                              cpu required by the job.&lt;br /&gt;
                              --mem &amp;gt;= --mem-per-cpu if --mem is specified.&lt;br /&gt;
&lt;br /&gt;
Affinity/Multi-core options: (when the task/affinity plugin is enabled)&lt;br /&gt;
  -B  --extra-node-info=S[:C[:T]]            Expands to:&lt;br /&gt;
       --sockets-per-node=S   number of sockets per node to allocate&lt;br /&gt;
       --cores-per-socket=C   number of cores per socket to allocate&lt;br /&gt;
       --threads-per-core=T   number of threads per core to allocate&lt;br /&gt;
                              each field can be 'min' or wildcard '*'&lt;br /&gt;
                              total cpus requested = (N x S x C x T)&lt;br /&gt;
&lt;br /&gt;
      --ntasks-per-core=n     number of tasks to invoke on each core&lt;br /&gt;
      --ntasks-per-socket=n   number of tasks to invoke on each socket&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
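Several of the options above are commonly combined in a single batch script. The sketch below writes one out; the partition name, time limit, and memory values are illustrative assumptions rather than recommendations.&lt;br /&gt;

```shell
# Write a sample batch script exercising common sbatch options.
# The partition name, time limit, and memory values here are assumptions.
cat > rtest_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=rtest        # -J: name of job
#SBATCH --ntasks=1              # -n: number of tasks to run
#SBATCH --mem-per-cpu=2048      # MB of real memory per allocated cpu
#SBATCH --time=02:00:00         # -t: time limit
#SBATCH --partition=express     # -p: partition requested (assumed name)
#SBATCH --output=%x-%j.out      # -o: file for standard output
#SBATCH --mail-type=FAIL        # notify only on job failure

echo "running on $(hostname)"
EOF
echo "wrote rtest_job.sh"
```

Submitting is then just `sbatch rtest_job.sh`.&lt;br /&gt;
&lt;br /&gt;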
===Setting per-project package libraries===&lt;br /&gt;
Please see the page on [[R#Per-Project_Package_Libraries|R Per-Project Package Libraries]] for more information&lt;br /&gt;
&lt;br /&gt;
===Moving rstudio directory===&lt;br /&gt;
As RStudio accumulates session data and installed packages, its hidden ~/.rstudio directory can take up a lot of space in your $HOME directory, leading to issues with interactive sessions failing to start. The issue can be resolved by moving the directory and creating a symlink to the new location in its place.&lt;br /&gt;
&lt;br /&gt;
How to: Move a pre-existing rstudio directory and create a symlink&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
mv ~/.rstudio $USER_DATA/&lt;br /&gt;
ln -s $USER_DATA/.rstudio .rstudio&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=R&amp;diff=6192</id>
		<title>R</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=R&amp;diff=6192"/>
		<updated>2021-06-24T21:55:17Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Added per-project libraries implementation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Generic_stub}}&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
&lt;br /&gt;
R is a free software environment for statistical computing and graphics. Versions available on Cheaha can be found and loaded using the following commands, where &amp;lt;version&amp;gt; must be replaced by one of the versions shown by the spider command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module spider R&lt;br /&gt;
module load R/&amp;lt;version&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Usage=&lt;br /&gt;
&lt;br /&gt;
==Per-Project Package Libraries==&lt;br /&gt;
&lt;br /&gt;
When working with multiple projects, or when using software like [[AFNI]] that makes use of R internally, it may be helpful to use separate folders to store libraries for separate projects and software. Keeping library paths separate on a per-project or per-software basis minimizes the risk of library conflicts and hard-to-trace bugs.&lt;br /&gt;
&lt;br /&gt;
Library paths may be managed within R using the [https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/libPaths libPaths] function. Simply pass a list of directories to the function to change the library paths available to R.&lt;br /&gt;
&lt;br /&gt;
The following reference assumes that an R module is already loaded. To keep libraries separate on a per-project basis, navigate to the desired project directory. Create a folder called &amp;quot;rlibs&amp;quot;, which will be used to store packages. Create an empty text file called &amp;quot;.Rprofile&amp;quot; in the root project directory, if it doesn't already exist. Add the following line to the &amp;quot;.Rprofile&amp;quot;. If a call to libPaths already exists there, exercise judgement before modifying it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
.libPaths(c('./rlibs'))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using...&lt;br /&gt;
&lt;br /&gt;
* [[RStudio]]: you will need to set your root project folder as the working directory, then source the &amp;quot;.Rprofile&amp;quot; file you created earlier to set the &amp;quot;rlibs&amp;quot; folder as the default library path. All newly installed packages in this session will be installed in that folder.&lt;br /&gt;
&lt;br /&gt;
* R REPL (read-eval-print loop): started by calling &amp;quot;R&amp;quot; at the command line. The &amp;quot;.Rprofile&amp;quot; file will be loaded and executed as R code by the REPL environment before giving control to you. This will set the &amp;quot;rlibs&amp;quot; folder as the default library path. All newly installed packages in this session will be installed in that folder.&lt;br /&gt;
&lt;br /&gt;
* Rscript: ensure that you start any scripts from the directory containing the &amp;quot;.Rprofile&amp;quot;. As with the R REPL environment, the file will be executed prior to running any other code, setting the &amp;quot;rlibs&amp;quot; folder as the default library path. All newly installed packages in this script will be installed in that folder.&lt;br /&gt;
&lt;br /&gt;
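The setup steps above can be scripted. A minimal sketch, where myproject and the package name are placeholders, and the install step only runs if Rscript is on the PATH:&lt;br /&gt;

```shell
# Create the per-project library folder and .Rprofile
# ("myproject" and the package name below are placeholders)
mkdir -p myproject/rlibs
printf ".libPaths(c('./rlibs'))\n" > myproject/.Rprofile

# Install into the project library; guarded so the sketch is a no-op without R
if command -v Rscript >/dev/null 2>&1; then
    (cd myproject && Rscript -e "install.packages('jsonlite', repos='https://cran.r-project.org')")
fi
cat myproject/.Rprofile
```

Any R session started from myproject will then install and load packages from the project-local rlibs folder.&lt;br /&gt;
&lt;br /&gt;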
Repeating this process for each project ensures that libraries are kept separate, allowing more flexibility and repeatability while reducing the risk of errors or cross-contamination of versions and dependencies. The main downsides are the discipline required to create the folder and file each time a new project is started, and the additional maintenance of each separate library.&lt;br /&gt;
&lt;br /&gt;
It is possible for collisions to still occur with packages installed in the default locations. If you wish to use the practice described above, you may need to remove packages installed in the default locations.&lt;br /&gt;
&lt;br /&gt;
If you have a single workflow using multiple versions of R that are causing package collisions, please contact us for [[Support]]. We will work with you to find an optimal solution.&lt;br /&gt;
&lt;br /&gt;
=SGE=&lt;br /&gt;
&lt;br /&gt;
==SGE module files==&lt;br /&gt;
&lt;br /&gt;
The following Modules files should be loaded for this package:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load R/R-2.7.2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For other versions, simply replace the version number&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load R/R-2.11.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following libraries are available&lt;br /&gt;
* /share/apps/R/R-&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;X.X.X&amp;lt;/font&amp;gt;/gnu/lib/R/library&lt;br /&gt;
** The default libraries that come with R&lt;br /&gt;
** Rmpi&lt;br /&gt;
** Snow&lt;br /&gt;
* /share/apps/R/R-&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;X.X.X&amp;lt;/font&amp;gt;/gnu/lib/R/bioc&lt;br /&gt;
** BioConductor libraries (default package set using getBioC)&lt;br /&gt;
&lt;br /&gt;
Additional libraries should be installed by the user under ~/R_exlibs, or by following these instructions:&lt;br /&gt;
&lt;br /&gt;
**Make a directory in your home directory to install packages/libraries into (DEST_DIR)&lt;br /&gt;
**Make a .Rprofile file in your home space (~/) with the following content, i.e. run the following command in your terminal&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; $HOME/.Rprofile &amp;lt;&amp;lt;\EOF&lt;br /&gt;
.libPaths(&amp;quot;~/DEST_DIR&amp;quot;)&lt;br /&gt;
cat(&amp;quot;.Rprofile: Setting UK repository\n&amp;quot;)&lt;br /&gt;
r = getOption(&amp;quot;repos&amp;quot;) # hard code the UK repo for CRAN&lt;br /&gt;
r[&amp;quot;CRAN&amp;quot;] = &amp;quot;http://cran.uk.r-project.org&amp;quot;&lt;br /&gt;
options(repos = r)&lt;br /&gt;
rm(r)&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': Change DEST_DIR to the name of the directory you created.&lt;br /&gt;
&lt;br /&gt;
**Load the R module and start R.&lt;br /&gt;
**Run install.packages(&amp;quot;Package_Name&amp;quot;) at the prompt.&lt;br /&gt;
**This installs Package_Name to DEST_DIR. To use it, call library(Package_Name).&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you install a package with one version of R, it might not be compatible with another version. So it would be advisable to pick one version of R and go with it, so that you don’t have to install multiple versions of the same package.&lt;br /&gt;
&lt;br /&gt;
==SGE Job script ==&lt;br /&gt;
'''Sample R Grid Engine Job Script'''&lt;br /&gt;
This is an example of a serial (i.e. non-parallel) R job with a 2-hour run-time limit, requesting 256M of RAM.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -S /bin/bash&lt;br /&gt;
#$ -cwd&lt;br /&gt;
#&lt;br /&gt;
#$ -j y&lt;br /&gt;
#$ -N rtestjob&lt;br /&gt;
# Use '#$ -m n' instead to disable all email for this job&lt;br /&gt;
#$ -m eas&lt;br /&gt;
#$ -M YOUR_EMAIL_ADDRESS&lt;br /&gt;
#$ -l h_rt=2:00:00,s_rt=1:55:00&lt;br /&gt;
#$ -l vf=256M&lt;br /&gt;
. /etc/profile.d/modules.sh&lt;br /&gt;
module load R/R-2.7.2&lt;br /&gt;
&lt;br /&gt;
#$ -v PATH,R_HOME,R_LIBS,LD_LIBRARY_PATH,CWD&lt;br /&gt;
&lt;br /&gt;
R CMD BATCH rscript.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
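Once saved (e.g. as rtestjob.sh), the script is submitted and monitored with the standard Grid Engine commands; the sketch below is guarded so it is a no-op off-cluster:&lt;br /&gt;

```shell
# Submit and monitor the job (only meaningful on a Grid Engine cluster)
submitted=no
if command -v qsub >/dev/null 2>&1; then
    qsub rtestjob.sh          # submit the batch script
    qstat -u "$USER"          # list your queued and running jobs
    submitted=yes
fi
echo "submitted: $submitted"
```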
[[Category:Software]][[Category:Bio-statistics]]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Checkpointing&amp;diff=6191</id>
		<title>Checkpointing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Checkpointing&amp;diff=6191"/>
		<updated>2021-06-17T19:21:54Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Redirected page to Dmtcp Checkpointing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Dmtcp_Checkpointing]]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Dmtcp_Checkpointing&amp;diff=6190</id>
		<title>Dmtcp Checkpointing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Dmtcp_Checkpointing&amp;diff=6190"/>
		<updated>2021-06-17T19:20:20Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Overhaul of page to include new information and more details&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
DMTCP stands for '''D'''istributed '''M'''ulti'''T'''hreaded '''C'''heck'''P'''ointing and is used to store in-memory data while a task is running, allowing the task to be restarted if something goes wrong. DMTCP works by creating a checkpoint file at user-defined times. Checkpointing can be set up to occur at regular intervals without user monitoring, or on demand at any time while the task is running. More information is available at the [http://dmtcp.sourceforge.net/ DMTCP website].&lt;br /&gt;
&lt;br /&gt;
== How it Works ==&lt;br /&gt;
DMTCP operates using a server-client model, which allows for dynamic and on-demand checkpointing. The client wraps your software with an application that monitors execution and memory, and waits for checkpoint instructions from the server. The server sends checkpointing instructions to the client to initiate a checkpoint.&lt;br /&gt;
&lt;br /&gt;
When a checkpoint is requested by the server, the client halts execution of the wrapped software. Any data in system memory associated with the wrapped software is dumped to disk as a binary data file. The client also creates a restart shell script for restarting from the most recent checkpoint.&lt;br /&gt;
&lt;br /&gt;
If execution of the job is interrupted for any reason before completion, the restart shell script can be executed to resume the job from the most recent checkpoint.&lt;br /&gt;
&lt;br /&gt;
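In practice, launching under DMTCP looks like the sketch below; ./my_program is a placeholder for your own executable, and the one-hour interval is an illustrative choice, not a recommendation:&lt;br /&gt;

```shell
# Launch a program under DMTCP, checkpointing every hour
# (./my_program is a placeholder; the 3600 s interval is illustrative)
interval=3600
if command -v dmtcp_launch >/dev/null 2>&1; then
    dmtcp_launch --interval "$interval" ./my_program
else
    echo "dmtcp_launch not on PATH; load a DMTCP module first"
fi
```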
= Advantages and Considerations =&lt;br /&gt;
There are multiple advantages to using checkpointing:&lt;br /&gt;
# restart after hardware failure&lt;br /&gt;
# restart after SLURM job timeout&lt;br /&gt;
# allow jobs to run longer than the maximum time limit&lt;br /&gt;
# allow debugging starting at a specific point in time&lt;br /&gt;
&lt;br /&gt;
Considerations for checkpointing:&lt;br /&gt;
# all data, application executables, and libraries must be stored in memory at the time of checkpointing&lt;br /&gt;
# any data your program stores in temporary files on disk must be handled carefully&lt;br /&gt;
# the following command must be run on Cheaha before using the restart script: `export DMTCP_COORD_HOST=localhost`&lt;br /&gt;
# '''IMPORTANT''': checkpoint frequency can negatively impact performance&lt;br /&gt;
&lt;br /&gt;
The first two considerations hold for most Cheaha workflows, so DMTCP should &amp;quot;just work&amp;quot;. Please isolate a small test case and test DMTCP checkpointing and recovery on your workflow before running it with your full data set. It is always best practice to familiarize yourself with new tools before using them in practice. Please contact us for [[Support]] if your test case or primary workflow isn't working as expected.&lt;br /&gt;
&lt;br /&gt;
Jobs are run on the first available node. DMTCP stores the hostname, i.e. the node name, as the default DMTCP server and client address. There is no guarantee the next job will be located on that node, which can result in an error. The third consideration accounts for this by replacing the static node name with the current localhost, which will be the new node name.&lt;br /&gt;
&lt;br /&gt;
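Concretely, restarting on a new node looks like the following sketch; dmtcp_restart_script.sh is the script DMTCP generates alongside the checkpoint images:&lt;br /&gt;

```shell
# Repoint the DMTCP client at the current node before restarting
export DMTCP_COORD_HOST=localhost
# The restart script is generated by DMTCP next to the checkpoint files
if [ -x ./dmtcp_restart_script.sh ]; then
    ./dmtcp_restart_script.sh
else
    echo "no restart script here yet; run a checkpointed job first"
fi
```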
The last consideration is very important, so how often you checkpoint should be chosen carefully. DMTCP copies memory to disk, which can take seconds or minutes depending on how much information is in memory; during this time, your software is not executing. Checkpointing too frequently, i.e. using too short an interval, can degrade performance. In contrast, checkpointing too infrequently can cause excessive loss of work in the event of a failure. It is important to find a balance.&lt;br /&gt;
&lt;br /&gt;
When deciding how often to checkpoint, consider how much memory usage is expected, how long the job is expected to take, how much time loss is acceptable, and the purpose of checkpointing. If the job will take at least one day to complete, a good rule of thumb is to set the checkpointing interval between 1 hour and 1 day. For jobs shorter than one day, checkpointing is unlikely to be necessary.&lt;br /&gt;
&lt;br /&gt;
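One way to sanity-check an interval choice is to estimate the pause cost of a single checkpoint and keep the total overhead near 1% of runtime. The working-set size and 1 GB/s write speed below are assumptions, not measured values:&lt;br /&gt;

```shell
# Rough checkpoint-overhead estimate (all figures are assumptions)
mem_gb=64        # expected in-memory working set, GB
write_gbps=1     # assumed sustained write speed, GB/s
ckpt_secs=$((mem_gb / write_gbps))   # time paused per checkpoint
min_interval=$((ckpt_secs * 100))    # keep overhead near 1% of runtime
echo "checkpoint no more often than every ${min_interval}s (~$((min_interval / 60)) min)"
```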
== Use with MPI ==&lt;br /&gt;
As of 06/17/2021 the DMTCP versions on Cheaha only work for single node (SMP) jobs. MPI jobs require a specialized version of DMTCP that is not yet officially released and has additional considerations. If this applies to you, please review the slides at the [http://dmtcp.sourceforge.net/papers/hpdc19-slides.pdf Official DMTCP page]. If you need checkpointing for your MPI job please contact us for [[Support]].&lt;br /&gt;
&lt;br /&gt;
= Additional Resources and Tutorials =&lt;br /&gt;
UAB Data Science Club has put together two tutorial videos on using DMTCP on a toy workflow in Python: [https://youtu.be/0gzR6sotqLk Part 1] and [https://youtu.be/LUXMQO-ZPYY Part 2].&lt;br /&gt;
&lt;br /&gt;
The code for a toy Python example, ready to test on Cheaha, is available on our [https://gitlab.rc.uab.edu/rc-data-science/dmtcp-checkpointing-tutorial GitLab instance].&lt;br /&gt;
&lt;br /&gt;
Additional examples are available from web sources:&lt;br /&gt;
* [http://dmtcp.sourceforge.net/demo.html the official DMTCP demo page]&lt;br /&gt;
* [https://cvw.cac.cornell.edu/Checkpoint/dmtcpcount the Cornell CAC Virtual Workshop page]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Igv&amp;diff=6189</id>
		<title>Igv</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Igv&amp;diff=6189"/>
		<updated>2021-06-17T18:36:50Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Redirected page to OOD IGV&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[OOD_IGV]]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6188</id>
		<title>OOD IGV</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6188"/>
		<updated>2021-06-17T18:34:21Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to use IGV ==&lt;br /&gt;
&lt;br /&gt;
=== Running IGV from OOD ===&lt;br /&gt;
&lt;br /&gt;
To start an interactive IGV job using Open OnDemand (OOD), please navigate to [https://rc.uab.edu rc.uab.edu] and log in. At the top bar, click the &amp;quot;Interactive Apps&amp;quot; drop down menu and select &amp;quot;IGV&amp;quot; from the list.&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_interactive_jobs.png|border]]&lt;br /&gt;
&lt;br /&gt;
You should arrive at a new page with a job resource request selection form. Please fill out the form with values appropriate for your use case, then click the &amp;quot;Launch&amp;quot; button.&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_job_setup.png|border|960px]]&lt;br /&gt;
&lt;br /&gt;
You should be taken to a new page where all of your currently running interactive jobs are available. The job just created in the previous step should be starting up. Please be patient until the &amp;quot;Launch Desktop in new tab&amp;quot; button appears. When it does, click it to open a new tab with an interactive IGV session. If anything goes wrong, please reach out to us for [[Support]].&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_launch.png|border]]&lt;br /&gt;
&lt;br /&gt;
=== Running IGV from OOD interactive desktop ===&lt;br /&gt;
&lt;br /&gt;
IGV is also available from an interactive desktop job, giving the full desktop experience. Because the IGV interface is written in Java, we must tell IGV how much memory is available in our job context. If we don't, the default value of 2 GB is used, which is likely insufficient. Please see [[Java#Xmx|Java Xmx]] for more information and a more robust method of calculation. The instructions below assume an interactive desktop job has been created, and that a terminal is open in that desktop. This may be done using the Open OnDemand web portal.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# replace &amp;lt;version&amp;gt; with one available in the list from `module avail IGV`&lt;br /&gt;
module load IGV/&amp;lt;version&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# compute memory, leave 512 for JVM, rest for heap&lt;br /&gt;
avail_mem=$(($SLURM_MEM_PER_CPU * $SLURM_JOB_CPUS_PER_NODE))&lt;br /&gt;
heap_mem=$((avail_mem - 512))&lt;br /&gt;
&lt;br /&gt;
# start with heap allocation hint&lt;br /&gt;
igv.sh -Xmx${heap_mem}m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Legacy Setup and Run ==&lt;br /&gt;
&lt;br /&gt;
IGV has been added to Open OnDemand (OOD) as a first-class application, so the steps below are no longer necessary to use IGV. The methods in the previous section are preferred due to their simplicity and are more readily supported.&lt;br /&gt;
&lt;br /&gt;
=== First time setup ===&lt;br /&gt;
&lt;br /&gt;
# get a cheaha account (see [[Cheaha_GettingStarted]])&lt;br /&gt;
  * then install EITHER&lt;br /&gt;
    * via [[#Install_IGV_via_Job|OOD job launcher]]&lt;br /&gt;
    * OR&lt;br /&gt;
    * via [[#Install_via_Terminal_in_OOD_Desktop|Terminal on the OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Install IGV via Job ===&lt;br /&gt;
&lt;br /&gt;
# launch Job Composer/Create New Job/From a Specified Path: https://rc.uab.edu/pun/sys/myjobs/new_from_path and setup the job&lt;br /&gt;
  * Source path: '''/share/apps/ngs-ccts/ood-igv/jobs'''&lt;br /&gt;
  * Name: '''setup IGV 2.5'''&lt;br /&gt;
  * Script Name: '''2.5.sh'''&lt;br /&gt;
  * Cluster: '''Cheaha'''&lt;br /&gt;
  * '''SAVE'''&lt;br /&gt;
  * [[File:A1.ood job composer.jpg|700px]]&lt;br /&gt;
# Run/Submit the job&lt;br /&gt;
  * Click on the green &amp;quot;play&amp;quot; arrow.  [[File:A2.ood job submit.png|700px]]&lt;br /&gt;
  * Status changes to &amp;quot;queued&amp;quot; [[File:A3.ood job queued.png|700px]]&lt;br /&gt;
  * wait until job completes. [[File:A4.ood job completed.png|700px]]&lt;br /&gt;
# now open OOD desktop to launch IGV from desktop icon&lt;br /&gt;
  * see [[#Running_IGV_from_OOD_Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Install via Terminal in OOD Desktop ===&lt;br /&gt;
&lt;br /&gt;
# launch an interactive desktop with OOD https://rc.uab.edu&lt;br /&gt;
  * Request an OOD Desktop [[File:A.ood start desktop.png|700px]]&lt;br /&gt;
  * Set the requested RAM and hours [[File:A.ood set mem.png|700px]]&lt;br /&gt;
  * Open the desktop, once running [[File:A.ood launch desktop.png|700px]]&lt;br /&gt;
  * desktop open [[File:A.ood desktop.png|700px]]&lt;br /&gt;
# start a &amp;quot;Terminal&amp;quot; &lt;br /&gt;
  * open terminal app [[File:B.ood with terminal highlight.jpg|700px]]&lt;br /&gt;
# in terminal, enter  &amp;lt;nowiki&amp;gt;/share/apps/ngs-ccts/ood-igv/2.5.sh&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  * enter path to installer [[File:C.ood terminal setup 25.png|700px]] &lt;br /&gt;
# that will install IGV locally, and launch it. &lt;br /&gt;
  * installer will scroll a lot of text, some in alarming colors. &lt;br /&gt;
  * a few seconds after the text stops, the desktop icon and loading bar will appear&lt;br /&gt;
  * installer finished, IGV loading [[File:D.ood setup ivg loading.png|700px]] &lt;br /&gt;
&lt;br /&gt;
=== Running IGV from OOD Desktop ===&lt;br /&gt;
&lt;br /&gt;
# Setup should create a desktop icon called &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:G.ood desktop with icon.png|700px]]&lt;br /&gt;
# In the future, you can just start OOD, then click on &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:E.ood igv loading.png|800px]]&lt;br /&gt;
&lt;br /&gt;
=== Script source code ===&lt;br /&gt;
Code can be found at https://gitlab.rc.uab.edu/CCTS-Informatics-Pipelines/ood-igv&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6187</id>
		<title>OOD IGV</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6187"/>
		<updated>2021-06-17T18:33:56Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Deprecated older methods&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to use IGV ==&lt;br /&gt;
&lt;br /&gt;
=== Running IGV from OOD ===&lt;br /&gt;
&lt;br /&gt;
To start an interactive IGV job using Open OnDemand (OOD), please navigate to [https://rc.uab.edu rc.uab.edu] and log in. At the top bar, click the &amp;quot;Interactive Apps&amp;quot; drop down menu and select &amp;quot;IGV&amp;quot; from the list.&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_interactive_jobs.png|border]]&lt;br /&gt;
&lt;br /&gt;
You should arrive at a new page with a job resource request selection form. Please fill out the form with values appropriate for your use case, then click the &amp;quot;Launch&amp;quot; button.&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_job_setup.png|border|960px]]&lt;br /&gt;
&lt;br /&gt;
You should be taken to a new page where all of your currently running interactive jobs are available. The job just created in the previous step should be starting up. Please be patient until the &amp;quot;Launch Desktop in new tab&amp;quot; button appears. When it does, click it to open a new tab with an interactive IGV session. If anything goes wrong, please reach out to us for [[Support]].&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_launch.png|border]]&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD interactive desktop ==&lt;br /&gt;
&lt;br /&gt;
IGV is also available from an interactive desktop job, giving the full desktop experience. Because the IGV interface is written in Java, we must tell IGV how much memory is available in our job context. If we don't, the default value of 2 GB is used, which is likely insufficient. Please see [[Java#Xmx|Java Xmx]] for more information and a more robust method of calculation. The instructions below assume an interactive desktop job has been created, and that a terminal is open in that desktop. This may be done using the Open OnDemand web portal.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# replace &amp;lt;version&amp;gt; with one available in the list from `module avail IGV`&lt;br /&gt;
module load IGV/&amp;lt;version&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# compute memory, leave 512 for JVM, rest for heap&lt;br /&gt;
avail_mem=$(($SLURM_MEM_PER_CPU * $SLURM_JOB_CPUS_PER_NODE))&lt;br /&gt;
heap_mem=$((avail_mem - 512))&lt;br /&gt;
&lt;br /&gt;
# start with heap allocation hint&lt;br /&gt;
igv.sh -Xmx${heap_mem}m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Legacy Setup and Run ==&lt;br /&gt;
&lt;br /&gt;
IGV has been added to Open OnDemand (OOD) as a first-class application, so the steps below are no longer necessary to use IGV. The methods in the previous section are preferred due to their simplicity and are more readily supported.&lt;br /&gt;
&lt;br /&gt;
=== First time setup ===&lt;br /&gt;
&lt;br /&gt;
# get a cheaha account (see [[Cheaha_GettingStarted]])&lt;br /&gt;
  * then install EITHER&lt;br /&gt;
    * via [[#Install_IGV_via_Job|OOD job launcher]]&lt;br /&gt;
    * OR&lt;br /&gt;
    * via [[#Install_via_Terminal_in_OOD_Desktop|Terminal on the OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Install IGV via Job ===&lt;br /&gt;
&lt;br /&gt;
# launch Job Composer/Create New Job/From a Specified Path: https://rc.uab.edu/pun/sys/myjobs/new_from_path and setup the job&lt;br /&gt;
  * Source path: '''/share/apps/ngs-ccts/ood-igv/jobs'''&lt;br /&gt;
  * Name: '''setup IGV 2.5'''&lt;br /&gt;
  * Script Name: '''2.5.sh'''&lt;br /&gt;
  * Cluster: '''Cheaha'''&lt;br /&gt;
  * '''SAVE'''&lt;br /&gt;
  * [[File:A1.ood job composer.jpg|700px]]&lt;br /&gt;
# Run/Submit the job&lt;br /&gt;
  * Click on the green &amp;quot;play&amp;quot; arrow.  [[File:A2.ood job submit.png|700px]]&lt;br /&gt;
  * Status changes to &amp;quot;queued&amp;quot; [[File:A3.ood job queued.png|700px]]&lt;br /&gt;
  * wait until job completes. [[File:A4.ood job completed.png|700px]]&lt;br /&gt;
# now open OOD desktop to launch IGV from desktop icon&lt;br /&gt;
  * see [[#Running_IGV_from_OOD_Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Install via Terminal in OOD Desktop ===&lt;br /&gt;
&lt;br /&gt;
# launch an interactive desktop with OOD https://rc.uab.edu&lt;br /&gt;
  * Request an OOD Desktop [[File:A.ood start desktop.png|700px]]&lt;br /&gt;
  * Set the requested RAM and hours [[File:A.ood set mem.png|700px]]&lt;br /&gt;
  * Open the desktop, once running [[File:A.ood launch desktop.png|700px]]&lt;br /&gt;
  * desktop open [[File:A.ood desktop.png|700px]]&lt;br /&gt;
# start a &amp;quot;Terminal&amp;quot; &lt;br /&gt;
  * open terminal app [[File:B.ood with terminal highlight.jpg|700px]]&lt;br /&gt;
# in terminal, enter  &amp;lt;nowiki&amp;gt;/share/apps/ngs-ccts/ood-igv/2.5.sh&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  * enter path to installer [[File:C.ood terminal setup 25.png|700px]] &lt;br /&gt;
# that will install IGV locally, and launch it. &lt;br /&gt;
  * installer will scroll a lot of text, some in alarming colors. &lt;br /&gt;
  * a few seconds after the text stops, the desktop icon and loading bar will appear&lt;br /&gt;
  * installer finished, IGV loading [[File:D.ood setup ivg loading.png|700px]] &lt;br /&gt;
&lt;br /&gt;
=== Running IGV from OOD Desktop ===&lt;br /&gt;
&lt;br /&gt;
# Setup should create a desktop icon called &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:G.ood desktop with icon.png|700px]]&lt;br /&gt;
# In the future, you can just start OOD, then click on &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:E.ood igv loading.png|800px]]&lt;br /&gt;
&lt;br /&gt;
=== Script source code ===&lt;br /&gt;
Code can be found at https://gitlab.rc.uab.edu/CCTS-Informatics-Pipelines/ood-igv&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6186</id>
		<title>OOD IGV</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6186"/>
		<updated>2021-06-17T18:29:50Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Added update section for interactive IGV job&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Running IGV from OOD ==&lt;br /&gt;
&lt;br /&gt;
To start an interactive IGV job using Open OnDemand (OOD), please navigate to [https://rc.uab.edu rc.uab.edu] and log in. In the top bar, click the &amp;quot;Interactive Apps&amp;quot; drop-down menu and select &amp;quot;IGV&amp;quot; from the list.&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_interactive_jobs.png|border]]&lt;br /&gt;
&lt;br /&gt;
You should arrive at a new page with a job resource request selection form. Please fill out the form with values appropriate for your use case, then click the &amp;quot;Launch&amp;quot; button.&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_job_setup.png|border|960px]]&lt;br /&gt;
&lt;br /&gt;
You should be taken to a new page where all of your currently running interactive jobs are available. The job just created in the previous step should be starting up. Please be patient until the &amp;quot;Launch Desktop in new tab&amp;quot; button appears. When it does, click it to open a new tab with an interactive IGV session. If anything goes wrong, please reach out to us for [[Support]].&lt;br /&gt;
&lt;br /&gt;
[[File:Igv_launch.png|border]]&lt;br /&gt;
&lt;br /&gt;
== First time setup ==&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
# get a cheaha account (see [[Cheaha_GettingStarted]])&lt;br /&gt;
  * then install EITHER&lt;br /&gt;
    * via [[#Install_IGV_via_Job|OOD job launcher]]&lt;br /&gt;
    * OR&lt;br /&gt;
    * via [[#Install_via_Terminal_in_OOD_Desktop|Terminal on the OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install IGV via Job ==&lt;br /&gt;
# launch Job Composer/Create New Job/From a Specified Path: https://rc.uab.edu/pun/sys/myjobs/new_from_path and set up the job&lt;br /&gt;
  * Source path: '''/share/apps/ngs-ccts/ood-igv/jobs'''&lt;br /&gt;
  * Name: '''setup IGV 2.5'''&lt;br /&gt;
  * Script Name: '''2.5.sh'''&lt;br /&gt;
  * Cluster: '''Cheaha'''&lt;br /&gt;
  * '''SAVE'''&lt;br /&gt;
  * [[File:A1.ood job composer.jpg|700px]]&lt;br /&gt;
# Run/Submit the job&lt;br /&gt;
  * Click on the green &amp;quot;play&amp;quot; arrow.  [[File:A2.ood job submit.png|700px]]&lt;br /&gt;
  * Status changes to &amp;quot;queued&amp;quot; [[File:A3.ood job queued.png|700px]]&lt;br /&gt;
  * wait until job completes. [[File:A4.ood job completed.png|700px]]&lt;br /&gt;
# now open OOD desktop to launch IGV from desktop icon&lt;br /&gt;
  * see [[#Running_IGV_from_OOD_Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install via Terminal in OOD Desktop ==&lt;br /&gt;
# launch an interactive desktop with OOD https://rc.uab.edu&lt;br /&gt;
  * Request an OOD Desktop [[File:A.ood start desktop.png|700px]]&lt;br /&gt;
  * Set the requested RAM and hours [[File:A.ood set mem.png|700px]]&lt;br /&gt;
  * Open the desktop, once running [[File:A.ood launch desktop.png|700px]]&lt;br /&gt;
  * desktop open [[File:A.ood desktop.png|700px]]&lt;br /&gt;
# start a &amp;quot;Terminal&amp;quot; &lt;br /&gt;
  * open terminal app [[File:B.ood with terminal highlight.jpg|700px]]&lt;br /&gt;
# in terminal, enter  &amp;lt;nowiki&amp;gt;/share/apps/ngs-ccts/ood-igv/2.5.sh&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  * enter path to installer [[File:C.ood terminal setup 25.png|700px]] &lt;br /&gt;
# that will install IGV locally, and launch it. &lt;br /&gt;
  * the installer will scroll a lot of text, some in alarming colors. &lt;br /&gt;
  * a few seconds after the text stops, the desktop icon and loading bar will appear&lt;br /&gt;
  * installer finished, IGV loading [[File:D.ood setup ivg loading.png|700px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD Desktop ==&lt;br /&gt;
&lt;br /&gt;
# Setup should create a desktop icon called &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:G.ood desktop with icon.png|700px]]&lt;br /&gt;
# In the future, you can just start OOD, then click on &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:E.ood igv loading.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD interactive desktop ==&lt;br /&gt;
&lt;br /&gt;
IGV is also available from an interactive desktop job, giving the full desktop experience. Because the IGV interface is written in Java, we must tell IGV how much memory is available in our job context. If we don't, the default value of 2 GB is used, which is likely insufficient. Please see [[Java#Xmx|Java Xmx]] for more information and a more robust method of calculation. The instructions below assume an interactive desktop job has been created, for example using the Open OnDemand web portal, and that a terminal is open in that desktop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# replace &amp;lt;version&amp;gt; with one available in the list from `module avail IGV`&lt;br /&gt;
module load IGV/&amp;lt;version&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# compute memory, leave 512 for JVM, rest for heap&lt;br /&gt;
avail_mem=$(($SLURM_MEM_PER_CPU * $SLURM_JOB_CPUS_PER_NODE))&lt;br /&gt;
heap_mem=$((avail_mem - 512))&lt;br /&gt;
&lt;br /&gt;
# start with heap allocation hint&lt;br /&gt;
igv.sh -Xmx${heap_mem}m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
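As a sanity check, you can echo the computed values before launching IGV. (Hypothetical example values: with SLURM_MEM_PER_CPU=4096 and 2 CPUs, avail_mem is 8192 and heap_mem is 7680.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# optional: verify the computed memory values before starting IGV&lt;br /&gt;
echo avail=${avail_mem}M heap=${heap_mem}M&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;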
&lt;br /&gt;
== Script source code ==&lt;br /&gt;
Code can be found at https://gitlab.rc.uab.edu/CCTS-Informatics-Pipelines/ood-igv&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:Igv_launch.png&amp;diff=6185</id>
		<title>File:Igv launch.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:Igv_launch.png&amp;diff=6185"/>
		<updated>2021-06-17T18:17:39Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: IGV interactive job launch panel with launch button highlighted at lower left.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;IGV interactive job launch panel with launch button highlighted at lower left.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:Igv_job_setup.png&amp;diff=6184</id>
		<title>File:Igv job setup.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:Igv_job_setup.png&amp;diff=6184"/>
		<updated>2021-06-17T18:16:55Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: IGV interactive job setup page.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;IGV interactive job setup page.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:Igv_interactive_jobs.png&amp;diff=6183</id>
		<title>File:Igv interactive jobs.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:Igv_interactive_jobs.png&amp;diff=6183"/>
		<updated>2021-06-17T18:16:21Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: List of interactive jobs in OOD, including IGV.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;List of interactive jobs in OOD, including IGV.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6182</id>
		<title>OOD IGV</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6182"/>
		<updated>2021-06-17T18:00:56Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Running IGV from OOD interactive desktop */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== First time setup ==&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
# get a cheaha account (see [[Cheaha_GettingStarted]])&lt;br /&gt;
  * then install EITHER&lt;br /&gt;
    * via [[#Install_IGV_via_Job|OOD job launcher]]&lt;br /&gt;
    * OR&lt;br /&gt;
    * via [[#Install_via_Terminal_in_OOD_Desktop|Terminal on the OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install IGV via Job ==&lt;br /&gt;
# launch Job Composer/Create New Job/From a Specified Path: https://rc.uab.edu/pun/sys/myjobs/new_from_path and set up the job&lt;br /&gt;
  * Source path: '''/share/apps/ngs-ccts/ood-igv/jobs'''&lt;br /&gt;
  * Name: '''setup IGV 2.5'''&lt;br /&gt;
  * Script Name: '''2.5.sh'''&lt;br /&gt;
  * Cluster: '''Cheaha'''&lt;br /&gt;
  * '''SAVE'''&lt;br /&gt;
  * [[File:A1.ood job composer.jpg|700px]]&lt;br /&gt;
# Run/Submit the job&lt;br /&gt;
  * Click on the green &amp;quot;play&amp;quot; arrow.  [[File:A2.ood job submit.png|700px]]&lt;br /&gt;
  * Status changes to &amp;quot;queued&amp;quot; [[File:A3.ood job queued.png|700px]]&lt;br /&gt;
  * wait until job completes. [[File:A4.ood job completed.png|700px]]&lt;br /&gt;
# now open OOD desktop to launch IGV from desktop icon&lt;br /&gt;
  * see [[#Running_IGV_from_OOD_Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install via Terminal in OOD Desktop ==&lt;br /&gt;
# launch an interactive desktop with OOD https://rc.uab.edu&lt;br /&gt;
  * Request an OOD Desktop [[File:A.ood start desktop.png|700px]]&lt;br /&gt;
  * Set the requested RAM and hours [[File:A.ood set mem.png|700px]]&lt;br /&gt;
  * Open the desktop, once running [[File:A.ood launch desktop.png|700px]]&lt;br /&gt;
  * desktop open [[File:A.ood desktop.png|700px]]&lt;br /&gt;
# start a &amp;quot;Terminal&amp;quot; &lt;br /&gt;
  * open terminal app [[File:B.ood with terminal highlight.jpg|700px]]&lt;br /&gt;
# in terminal, enter  &amp;lt;nowiki&amp;gt;/share/apps/ngs-ccts/ood-igv/2.5.sh&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  * enter path to installer [[File:C.ood terminal setup 25.png|700px]] &lt;br /&gt;
# that will install IGV locally, and launch it. &lt;br /&gt;
  * the installer will scroll a lot of text, some in alarming colors. &lt;br /&gt;
  * a few seconds after the text stops, the desktop icon and loading bar will appear&lt;br /&gt;
  * installer finished, IGV loading [[File:D.ood setup ivg loading.png|700px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD Desktop ==&lt;br /&gt;
&lt;br /&gt;
# Setup should create a desktop icon called &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:G.ood desktop with icon.png|700px]]&lt;br /&gt;
# In the future, you can just start OOD, then click on &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:E.ood igv loading.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD interactive desktop ==&lt;br /&gt;
&lt;br /&gt;
IGV is also available from an interactive desktop job, giving the full desktop experience. Because the IGV interface is written in Java, we must tell IGV how much memory is available in our job context. If we don't, the default value of 2 GB is used, which is likely insufficient. Please see [[Java#Xmx|Java Xmx]] for more information and a more robust method of calculation. The instructions below assume an interactive desktop job has been created, for example using the Open OnDemand web portal, and that a terminal is open in that desktop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# replace &amp;lt;version&amp;gt; with one available in the list from `module avail IGV`&lt;br /&gt;
module load IGV/&amp;lt;version&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# compute memory, leave 512 for JVM, rest for heap&lt;br /&gt;
avail_mem=$(($SLURM_MEM_PER_CPU * $SLURM_JOB_CPUS_PER_NODE))&lt;br /&gt;
heap_mem=$((avail_mem - 512))&lt;br /&gt;
&lt;br /&gt;
# start with heap allocation hint&lt;br /&gt;
igv.sh -Xmx${heap_mem}m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Script source code ==&lt;br /&gt;
Code can be found at https://gitlab.rc.uab.edu/CCTS-Informatics-Pipelines/ood-igv&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6181</id>
		<title>OOD IGV</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6181"/>
		<updated>2021-06-17T18:00:01Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Running IGV from OOD interactive desktop */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== First time setup ==&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
# get a cheaha account (see [[Cheaha_GettingStarted]])&lt;br /&gt;
  * then install EITHER&lt;br /&gt;
    * via [[#Install_IGV_via_Job|OOD job launcher]]&lt;br /&gt;
    * OR&lt;br /&gt;
    * via [[#Install_via_Terminal_in_OOD_Desktop|Terminal on the OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install IGV via Job ==&lt;br /&gt;
# launch Job Composer/Create New Job/From a Specified Path: https://rc.uab.edu/pun/sys/myjobs/new_from_path and set up the job&lt;br /&gt;
  * Source path: '''/share/apps/ngs-ccts/ood-igv/jobs'''&lt;br /&gt;
  * Name: '''setup IGV 2.5'''&lt;br /&gt;
  * Script Name: '''2.5.sh'''&lt;br /&gt;
  * Cluster: '''Cheaha'''&lt;br /&gt;
  * '''SAVE'''&lt;br /&gt;
  * [[File:A1.ood job composer.jpg|700px]]&lt;br /&gt;
# Run/Submit the job&lt;br /&gt;
  * Click on the green &amp;quot;play&amp;quot; arrow.  [[File:A2.ood job submit.png|700px]]&lt;br /&gt;
  * Status changes to &amp;quot;queued&amp;quot; [[File:A3.ood job queued.png|700px]]&lt;br /&gt;
  * wait until job completes. [[File:A4.ood job completed.png|700px]]&lt;br /&gt;
# now open OOD desktop to launch IGV from desktop icon&lt;br /&gt;
  * see [[#Running_IGV_from_OOD_Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install via Terminal in OOD Desktop ==&lt;br /&gt;
# launch an interactive desktop with OOD https://rc.uab.edu&lt;br /&gt;
  * Request an OOD Desktop [[File:A.ood start desktop.png|700px]]&lt;br /&gt;
  * Set the requested RAM and hours [[File:A.ood set mem.png|700px]]&lt;br /&gt;
  * Open the desktop, once running [[File:A.ood launch desktop.png|700px]]&lt;br /&gt;
  * desktop open [[File:A.ood desktop.png|700px]]&lt;br /&gt;
# start a &amp;quot;Terminal&amp;quot; &lt;br /&gt;
  * open terminal app [[File:B.ood with terminal highlight.jpg|700px]]&lt;br /&gt;
# in terminal, enter  &amp;lt;nowiki&amp;gt;/share/apps/ngs-ccts/ood-igv/2.5.sh&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  * enter path to installer [[File:C.ood terminal setup 25.png|700px]] &lt;br /&gt;
# that will install IGV locally, and launch it. &lt;br /&gt;
  * the installer will scroll a lot of text, some in alarming colors. &lt;br /&gt;
  * a few seconds after the text stops, the desktop icon and loading bar will appear&lt;br /&gt;
  * installer finished, IGV loading [[File:D.ood setup ivg loading.png|700px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD Desktop ==&lt;br /&gt;
&lt;br /&gt;
# Setup should create a desktop icon called &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:G.ood desktop with icon.png|700px]]&lt;br /&gt;
# In the future, you can just start OOD, then click on &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:E.ood igv loading.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD interactive desktop ==&lt;br /&gt;
&lt;br /&gt;
IGV is also available from an interactive desktop job, giving the full desktop experience. Because the IGV interface is written in Java, we must tell IGV how much memory is available in our job context. If we don't, the default value of 2 GB is used, which is likely insufficient. Please see [[Java#Xmx|Java Xmx]] for more information and a more robust method of calculation. The instructions below assume an interactive desktop job has been created, for example using the Open OnDemand web portal, and that a terminal is open in that desktop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load IGV/&amp;lt;version&amp;gt;  # replace &amp;lt;version&amp;gt; with one available in the list from `module avail IGV`&lt;br /&gt;
avail_mem=$(($SLURM_MEM_PER_CPU * $SLURM_JOB_CPUS_PER_NODE))&lt;br /&gt;
heap_mem=$((avail_mem - 512)) # leave 512 for the JVM itself, rest for heap&lt;br /&gt;
igv.sh -Xmx${heap_mem}m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Script source code ==&lt;br /&gt;
Code can be found at https://gitlab.rc.uab.edu/CCTS-Informatics-Pipelines/ood-igv&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6180</id>
		<title>OOD IGV</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=OOD_IGV&amp;diff=6180"/>
		<updated>2021-06-17T17:55:26Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Added new section for interactive desktop with jvm xmx flag&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== First time setup ==&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
# get a cheaha account (see [[Cheaha_GettingStarted]])&lt;br /&gt;
  * then install EITHER&lt;br /&gt;
    * via [[#Install_IGV_via_Job|OOD job launcher]]&lt;br /&gt;
    * OR&lt;br /&gt;
    * via [[#Install_via_Terminal_in_OOD_Desktop|Terminal on the OOD Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install IGV via Job ==&lt;br /&gt;
# launch Job Composer/Create New Job/From a Specified Path: https://rc.uab.edu/pun/sys/myjobs/new_from_path and set up the job&lt;br /&gt;
  * Source path: '''/share/apps/ngs-ccts/ood-igv/jobs'''&lt;br /&gt;
  * Name: '''setup IGV 2.5'''&lt;br /&gt;
  * Script Name: '''2.5.sh'''&lt;br /&gt;
  * Cluster: '''Cheaha'''&lt;br /&gt;
  * '''SAVE'''&lt;br /&gt;
  * [[File:A1.ood job composer.jpg|700px]]&lt;br /&gt;
# Run/Submit the job&lt;br /&gt;
  * Click on the green &amp;quot;play&amp;quot; arrow.  [[File:A2.ood job submit.png|700px]]&lt;br /&gt;
  * Status changes to &amp;quot;queued&amp;quot; [[File:A3.ood job queued.png|700px]]&lt;br /&gt;
  * wait until job completes. [[File:A4.ood job completed.png|700px]]&lt;br /&gt;
# now open OOD desktop to launch IGV from desktop icon&lt;br /&gt;
  * see [[#Running_IGV_from_OOD_Desktop]]&lt;br /&gt;
&lt;br /&gt;
== Install via Terminal in OOD Desktop ==&lt;br /&gt;
# launch an interactive desktop with OOD https://rc.uab.edu&lt;br /&gt;
  * Request an OOD Desktop [[File:A.ood start desktop.png|700px]]&lt;br /&gt;
  * Set the requested RAM and hours [[File:A.ood set mem.png|700px]]&lt;br /&gt;
  * Open the desktop, once running [[File:A.ood launch desktop.png|700px]]&lt;br /&gt;
  * desktop open [[File:A.ood desktop.png|700px]]&lt;br /&gt;
# start a &amp;quot;Terminal&amp;quot; &lt;br /&gt;
  * open terminal app [[File:B.ood with terminal highlight.jpg|700px]]&lt;br /&gt;
# in terminal, enter  &amp;lt;nowiki&amp;gt;/share/apps/ngs-ccts/ood-igv/2.5.sh&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
  * enter path to installer [[File:C.ood terminal setup 25.png|700px]] &lt;br /&gt;
# that will install IGV locally, and launch it. &lt;br /&gt;
  * the installer will scroll a lot of text, some in alarming colors. &lt;br /&gt;
  * a few seconds after the text stops, the desktop icon and loading bar will appear&lt;br /&gt;
  * installer finished, IGV loading [[File:D.ood setup ivg loading.png|700px]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD Desktop ==&lt;br /&gt;
&lt;br /&gt;
# Setup should create a desktop icon called &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:G.ood desktop with icon.png|700px]]&lt;br /&gt;
# In the future, you can just start OOD, then click on &amp;quot;IGV-2.5.sh&amp;quot;&lt;br /&gt;
  * [[File:E.ood igv loading.png|800px]]&lt;br /&gt;
&lt;br /&gt;
== Running IGV from OOD interactive desktop ==&lt;br /&gt;
&lt;br /&gt;
IGV is also available from an interactive desktop job, giving the full desktop experience. Because the IGV interface is written in Java, we must tell IGV how much memory is available in our job context. If we don't, the default value of 2 GB is used, which is likely insufficient. Please see [[Java#Xmx|Java Xmx]] for more information and a more robust method of calculation. The instructions below assume an interactive desktop job has been created, for example using the Open OnDemand web portal, and that a terminal is open in that desktop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load IGV/&amp;lt;version&amp;gt;  # replace &amp;lt;version&amp;gt; with one available in the list from `module avail IGV`&lt;br /&gt;
avail_mem=$(($SLURM_MEM_PER_CPU * $SLURM_JOB_CPUS_PER_NODE))&lt;br /&gt;
heap_mem=$((avail_mem - 512)) # leave 512 for the JVM itself, rest for heap&lt;br /&gt;
igv.sh -Xmx${heap_mem}m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Script source code ==&lt;br /&gt;
Code can be found at https://gitlab.rc.uab.edu/CCTS-Informatics-Pipelines/ood-igv&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Java&amp;diff=6179</id>
		<title>Java</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Java&amp;diff=6179"/>
		<updated>2021-06-17T17:36:55Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Xms */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Java is a programming language originally developed by Sun Microsystems in 1995. It is an object-oriented programming language created with portability in mind. To that end the language was designed to run in a virtual machine, meaning applications written in Java can be run on any operating system with a Java Virtual Machine (JVM) available.&lt;br /&gt;
&lt;br /&gt;
== JVM Flags ==&lt;br /&gt;
&lt;br /&gt;
=== Xmx ===&lt;br /&gt;
&lt;br /&gt;
An important flag for the Java Virtual Machine (JVM), &amp;quot;-Xmx&amp;quot;, controls the amount of system memory available for the heap. The heap is where dynamic variable contents are placed and is thus critical for proper program execution and for avoiding out-of-memory errors. By default the JVM is not aware of how much memory has been allocated to it. When using the JVM in a SLURM job context, it is important to tell the JVM how much memory is allocated to the job. &lt;br /&gt;
&lt;br /&gt;
It is also important to leave some memory available for the JVM itself. To see why, suppose the full memory allocation of the SLURM job is made available to the heap. The JVM is already consuming some memory. If the heap fills then more memory will be used than was allocated and SLURM will terminate the job.&lt;br /&gt;
&lt;br /&gt;
One way to achieve the above requirements is shown below. The contents of the following block may be copied into the .bashrc file to be available at the terminal and in job scripts. The function may also be copied directly into a job script, before other commands, and used there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function jvm_mem_calc() {&lt;br /&gt;
    # Computes JVM heap memory from total allocated memory for a single-node job.&lt;br /&gt;
    #&lt;br /&gt;
    # Two inputs are expected:&lt;br /&gt;
    # 1. $SLURM_MEM_PER_CPU - units of MB&lt;br /&gt;
    # 2. $SLURM_JOB_CPUS_PER_NODE&lt;br /&gt;
    # &lt;br /&gt;
    # The return value may be used with the JVM flag -Xmx{$return}M and has&lt;br /&gt;
    # units of MB.&lt;br /&gt;
    #&lt;br /&gt;
    # Available memory is computed from the product of the two inputs. If the&lt;br /&gt;
    # product is 5 GB or less, the return value will be 90% of the product.&lt;br /&gt;
    # Otherwise the return value will be the product minus 0.5 GB. &lt;br /&gt;
    &lt;br /&gt;
    default_jvm_other_mb=512 # default to 0.5 GB&lt;br /&gt;
    total_available_mb=$(($1 * $2))&lt;br /&gt;
    if [ $total_available_mb -le $((default_jvm_other_mb * 10)) ]; then&lt;br /&gt;
      heap_available_mb=$((9 * $total_available_mb / 10)) # total &amp;lt;= 5G --&amp;gt; heap = 90% of total&lt;br /&gt;
    else&lt;br /&gt;
      heap_available_mb=$((total_available_mb - default_jvm_other_mb)) # otherwise heap = total - 512&lt;br /&gt;
    fi&lt;br /&gt;
    echo $heap_available_mb&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To use the function given above in a job script, please first source it as described in the previous paragraphs, then follow the pattern below. Note that the method may only work for single-node jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
heap_mem=$(jvm_mem_calc $SLURM_MEM_PER_CPU $SLURM_JOB_CPUS_PER_NODE)&lt;br /&gt;
java -Xmx${heap_mem}m  # replace with your application's actual launch command&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The flag accepts the usual unit suffixes, e.g. m (megabytes), g (gigabytes), t (terabytes).&lt;br /&gt;
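&lt;br /&gt;
For example, the following two invocations request the same 2 GB heap limit (app.jar is a hypothetical application):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
java -Xmx2048m -jar app.jar&lt;br /&gt;
java -Xmx2g -jar app.jar&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;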
&lt;br /&gt;
=== Xms ===&lt;br /&gt;
&lt;br /&gt;
Another memory management flag, &amp;quot;-Xms&amp;quot;, instructs the JVM to set the starting heap size to the value provided. The syntax is the same as for &amp;quot;-Xmx&amp;quot;: a number followed by a unit suffix, e.g. m, g, or t. If the flag `-Xms4g` is provided, the heap will start with 4 gigabytes available. This memory is allocated directly by the OS and is immediately unavailable to other processes. SLURM will also count this usage when reporting in sacct and seff. For this reason we recommend not using this flag.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Java&amp;diff=6178</id>
		<title>Java</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Java&amp;diff=6178"/>
		<updated>2021-06-17T17:34:13Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Added base page, added xmx and xms flags&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Java is a programming language originally developed by Sun Microsystems in 1995. It is an object-oriented programming language created with portability in mind. To that end the language was designed to run in a virtual machine, meaning applications written in Java can be run on any operating system with a Java Virtual Machine (JVM) available.&lt;br /&gt;
&lt;br /&gt;
== JVM Flags ==&lt;br /&gt;
&lt;br /&gt;
=== Xmx ===&lt;br /&gt;
&lt;br /&gt;
An important flag for the Java Virtual Machine (JVM), &amp;quot;-Xmx&amp;quot;, controls the amount of system memory available for the heap. The heap is where dynamic variable contents are placed and is thus critical for proper program execution and for avoiding out-of-memory errors. By default the JVM is not aware of how much memory has been allocated to it. When using the JVM in a SLURM job context, it is important to tell the JVM how much memory is allocated to the job. &lt;br /&gt;
&lt;br /&gt;
It is also important to leave some memory available for the JVM itself. To see why, suppose the full memory allocation of the SLURM job is made available to the heap. The JVM is already consuming some memory. If the heap fills then more memory will be used than was allocated and SLURM will terminate the job.&lt;br /&gt;
&lt;br /&gt;
One way to achieve the above requirements is shown below. The contents of the following block may be copied into the .bashrc file to be available at the terminal and in job scripts. The function may also be copied directly into a job script, before other commands, and used there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function jvm_mem_calc() {&lt;br /&gt;
    # Computes JVM heap memory from total allocated memory for a single-node job.&lt;br /&gt;
    #&lt;br /&gt;
    # Two inputs are expected:&lt;br /&gt;
    # 1. $SLURM_MEM_PER_CPU - units of MB&lt;br /&gt;
    # 2. $SLURM_JOB_CPUS_PER_NODE&lt;br /&gt;
    # &lt;br /&gt;
    # The return value may be used with the JVM flag -Xmx{$return}M and has&lt;br /&gt;
    # units of MB.&lt;br /&gt;
    #&lt;br /&gt;
    # Available memory is computed from the product of the two inputs. If the&lt;br /&gt;
    # product is 5 GB or less, the return value will be 90% of the product.&lt;br /&gt;
    # Otherwise the return value will be the product minus 0.5 GB. &lt;br /&gt;
    &lt;br /&gt;
    default_jvm_other_mb=512 # default to 0.5 GB&lt;br /&gt;
    total_available_mb=$(($1 * $2))&lt;br /&gt;
    if [ $total_available_mb -le $((default_jvm_other_mb * 10)) ]; then&lt;br /&gt;
      heap_available_mb=$((9 * $total_available_mb / 10)) # total &amp;lt;= 5G --&amp;gt; heap = 90% of total&lt;br /&gt;
    else&lt;br /&gt;
      heap_available_mb=$((total_available_mb - default_jvm_other_mb)) # otherwise heap = total - 512&lt;br /&gt;
    fi&lt;br /&gt;
    echo $heap_available_mb&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To use the function given above in a job script, please first source it as described in the previous paragraphs, then follow the pattern below. Note that the method may only work for single-node jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
heap_mem=$(jvm_mem_calc $SLURM_MEM_PER_CPU $SLURM_JOB_CPUS_PER_NODE)&lt;br /&gt;
java -Xmx${heap_mem}m  # replace with your application's actual launch command&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The flag accepts the usual unit suffixes, e.g. m (megabytes), g (gigabytes), t (terabytes).&lt;br /&gt;
&lt;br /&gt;
=== Xms ===&lt;br /&gt;
&lt;br /&gt;
Another memory management flag, &amp;quot;-Xms&amp;quot;, instructs the JVM to set the starting heap size to the value provided. If the flag `-Xms4g` is provided, the heap will start with 4 gigabytes available. This memory is allocated directly by the OS and is immediately unavailable to other processes. SLURM will also count this usage when reporting in sacct and seff. For this reason we recommend not using this flag.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Cheaha_GettingStarted&amp;diff=6177</id>
		<title>Cheaha GettingStarted</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Cheaha_GettingStarted&amp;diff=6177"/>
		<updated>2021-06-10T19:21:38Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Sample Job Scripts */ Added SLURM_ARRAY_TASK_ID example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Cheaha is a cluster computing environment for UAB researchers. Information about the history and future plans for Cheaha is available on the [[Cheaha]] page.&lt;br /&gt;
&lt;br /&gt;
== Access (Cluster Account Request) ==&lt;br /&gt;
&lt;br /&gt;
To request an account on [[Cheaha]], please {{CheahaAccountRequest}}. Please include some background information about the work you plan to do on the cluster and the group you work with, i.e. your lab or affiliation.&lt;br /&gt;
&lt;br /&gt;
'''NOTE:'''&lt;br /&gt;
The email you send to support@listserv.uab.edu will trigger a '''confirmation email which must be acknowledged''' in order to submit the account request.&lt;br /&gt;
This additional step is meant to cut down on spam to the support list and is only needed for the initial account creation request sent to the support list. &lt;br /&gt;
&lt;br /&gt;
Usage of Cheaha is governed by [https://www.uab.edu/policies/content/Pages/UAB-IT-POL-0000004.aspx UAB's Acceptable Use Policy (AUP)] for computer resources. &lt;br /&gt;
&lt;br /&gt;
=== External Collaborator===&lt;br /&gt;
To request an account for an external collaborator, please follow the steps [https://docs.uabgrid.uab.edu/wiki/Collaborator_Account here.]&lt;br /&gt;
&lt;br /&gt;
== Login ==&lt;br /&gt;
===Overview===&lt;br /&gt;
Once your account has been created, you'll receive an email containing your user ID, generally your Blazer ID. You can [https://rc.uab.edu log into Cheaha via your web browser] using the new web-based HPC experience. &lt;br /&gt;
&lt;br /&gt;
You can also log into Cheaha via a traditional SSH client. Most UAB Windows workstations already have an SSH client installed, possibly named '''SSH Secure Shell Client''' or [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY]. Linux and Mac OS X systems should have an SSH client installed by default.&lt;br /&gt;
&lt;br /&gt;
Usage of Cheaha is governed by [https://www.uab.edu/policies/content/Pages/UAB-IT-POL-0000004.aspx UAB's Acceptable Use Policy (AUP)] for computer and network resources.&lt;br /&gt;
&lt;br /&gt;
===Client Configuration===&lt;br /&gt;
This section will cover steps to configure Windows, Linux and Mac OS X clients to connect to Cheaha.&lt;br /&gt;
&lt;br /&gt;
The official DNS name of Cheaha's frontend machine is ''cheaha.rc.uab.edu''. If you want to refer to the machine as ''cheaha'', you'll have to add &amp;quot;rc.uab.edu&amp;quot; to your computer's DNS search path.  On Unix-derived systems (Linux, Mac) you can edit your computer's /etc/resolv.conf as follows (you'll need administrator access to edit this file)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
search rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or you can customize your SSH configuration to use the short name &amp;quot;cheaha&amp;quot; as a connection name. On systems using OpenSSH you can add the following to your  ~/.ssh/config file&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
 Hostname cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Linux====&lt;br /&gt;
Linux systems, regardless of the flavor (RedHat, SuSE, Ubuntu, etc...), should already have an SSH client on the system as part of the default install.&lt;br /&gt;
# Start a terminal (on RedHat click Applications -&amp;gt; Accessories -&amp;gt; Terminal, on Ubuntu Ctrl+Alt+T)&lt;br /&gt;
# At the prompt, enter the following command to connect to Cheaha ('''Replace blazerid with your Cheaha userid''')&lt;br /&gt;
 ssh '''blazerid'''@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
====Mac OS X====&lt;br /&gt;
Mac OS X is a Unix operating system (BSD) and has a built-in SSH client.&lt;br /&gt;
# Start a terminal (click Finder, type Terminal and double click on Terminal under the Applications category)&lt;br /&gt;
# At the prompt, enter the following command to connect to Cheaha ('''Replace blazerid with your Cheaha userid''')&lt;br /&gt;
 ssh '''blazerid'''@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
====Windows====&lt;br /&gt;
There are many SSH clients available for Windows, some commercial and some that are free (GPL). This section will cover two clients that are commonly found on UAB Windows systems.&lt;br /&gt;
=====MobaXterm=====&lt;br /&gt;
[http://mobaxterm.mobatek.net/ MobaXterm] is a free (also available for a price in a Professional version) suite of SSH tools. Of the Windows clients we've used, MobaXterm is the easiest to use and most feature complete. [http://mobaxterm.mobatek.net/features.html Features] include (but are not limited to):&lt;br /&gt;
* SSH client (in a handy web browser like tabbed interface)&lt;br /&gt;
* Embedded Cygwin (which allows Windows users to run many Linux commands like grep, rsync, sed)&lt;br /&gt;
* Remote file system browser (graphical SFTP)&lt;br /&gt;
* X11 forwarding for remotely displaying graphical content from Cheaha&lt;br /&gt;
* Installs without requiring Windows Administrator rights&lt;br /&gt;
&lt;br /&gt;
Start MobaXterm and click the Session toolbar button (top left). Click SSH for the session type, enter the following information and click OK. Once finished, double click cheaha.rc.uab.edu in the list of Saved sessions under PuTTY sessions:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Cheaha Settings&lt;br /&gt;
|-&lt;br /&gt;
|'''Remote host'''&lt;br /&gt;
|cheaha.rc.uab.edu&lt;br /&gt;
|-&lt;br /&gt;
|'''Port'''&lt;br /&gt;
|22&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=====PuTTY=====&lt;br /&gt;
[http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is a free suite of SSH and telnet tools written and maintained by [http://www.pobox.com/~anakin/ Simon Tatham]. PuTTY supports SSH, secure FTP (SFTP), and X forwarding (XTERM) among other tools.&lt;br /&gt;
&lt;br /&gt;
* Start PuTTY (Click START -&amp;gt; All Programs -&amp;gt; PuTTY -&amp;gt; PuTTY). The 'PuTTY Configuration' window will open&lt;br /&gt;
* Use these settings for each of the clusters that you would like to configure&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Cheaha Settings&lt;br /&gt;
|-&lt;br /&gt;
|'''Host Name (or IP address)'''&lt;br /&gt;
|cheaha.rc.uab.edu&lt;br /&gt;
|-&lt;br /&gt;
|'''Port'''&lt;br /&gt;
|22&lt;br /&gt;
|-&lt;br /&gt;
|'''Protocol'''&lt;br /&gt;
|SSH&lt;br /&gt;
|-&lt;br /&gt;
|'''Saved Sessions'''&lt;br /&gt;
|cheaha.rc.uab.edu&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
* Click '''Save''' to save the configuration, repeat the previous steps for the other clusters&lt;br /&gt;
* The next time you start PuTTY, simply double click on the cluster name under the 'Saved Sessions' list&lt;br /&gt;
&lt;br /&gt;
=====SSH Secure Shell Client=====&lt;br /&gt;
SSH Secure Shell is a commercial application that is installed on many Windows workstations on campus and can be configured as follows:&lt;br /&gt;
* Start the program (Click START -&amp;gt; All Programs -&amp;gt; SSH Secure Shell -&amp;gt; Secure Shell Client). The 'default - SSH Secure Shell' window will open&lt;br /&gt;
* Click File -&amp;gt; Profiles -&amp;gt; Add Profile to open the 'Add Profile' window&lt;br /&gt;
* Type in the name of the cluster (for example: cheaha) in the field and click 'Add to Profiles'&lt;br /&gt;
* Click File -&amp;gt; Profiles -&amp;gt; Edit Profiles to open the 'Profiles' window&lt;br /&gt;
* Single click on your new profile name&lt;br /&gt;
* Use these settings for the clusters&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Cheaha Settings&lt;br /&gt;
|-&lt;br /&gt;
|'''Host name'''&lt;br /&gt;
|cheaha.rc.uab.edu&lt;br /&gt;
|-&lt;br /&gt;
|'''User name'''&lt;br /&gt;
|blazerid (insert your blazerid here)&lt;br /&gt;
|-&lt;br /&gt;
|'''Port'''&lt;br /&gt;
|22&lt;br /&gt;
|-&lt;br /&gt;
|'''Protocol'''&lt;br /&gt;
|SSH&lt;br /&gt;
|-&lt;br /&gt;
|'''Encryption algorithm'''&lt;br /&gt;
|&amp;lt;Default&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|'''MAC algorithm'''&lt;br /&gt;
|&amp;lt;Default&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|'''Compression'''&lt;br /&gt;
|&amp;lt;None&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|'''Terminal answerback'''&lt;br /&gt;
|vt100&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
* Leave 'Connect through firewall' and 'Request tunnels only' unchecked&lt;br /&gt;
* Click '''OK''' to save the configuration, repeat the previous steps for the other clusters&lt;br /&gt;
* The next time you start SSH Secure Shell, click 'Profiles' and click the cluster name&lt;br /&gt;
&lt;br /&gt;
=== Logging in to Cheaha ===&lt;br /&gt;
No matter which client you use to connect to Cheaha, the first time you connect, the SSH client should display a message asking if you would like to import the host's public key. Answer '''Yes''' to this question.&lt;br /&gt;
&lt;br /&gt;
* Connect to Cheaha using one of the methods listed above&lt;br /&gt;
* Answer '''Yes''' to import the cluster's public key&lt;br /&gt;
** Enter your BlazerID password&lt;br /&gt;
&lt;br /&gt;
* After successfully logging in for the first time, you may see the following message. '''Just press ENTER for the next three prompts; don't type any passphrases!'''&lt;br /&gt;
 &lt;br /&gt;
 It doesn't appear that you have set up your ssh key.&lt;br /&gt;
 This process will make the files:&lt;br /&gt;
      /home/joeuser/.ssh/id_rsa.pub&lt;br /&gt;
      /home/joeuser/.ssh/id_rsa&lt;br /&gt;
      /home/joeuser/.ssh/authorized_keys&lt;br /&gt;
 &lt;br /&gt;
 Generating public/private rsa key pair.&lt;br /&gt;
 Enter file in which to save the key (/home/joeuser/.ssh/id_rsa):&lt;br /&gt;
** Enter file in which to save the key (/home/joeuser/.ssh/id_rsa):'''Press Enter'''&lt;br /&gt;
** Enter passphrase (empty for no passphrase):'''Press Enter'''&lt;br /&gt;
** Enter same passphrase again:'''Press Enter'''&lt;br /&gt;
 Your identification has been saved in /home/joeuser/.ssh/id_rsa.&lt;br /&gt;
 Your public key has been saved in /home/joeuser/.ssh/id_rsa.pub.&lt;br /&gt;
 The key fingerprint is:&lt;br /&gt;
 f6:xx:xx:xx:xx:dd:9a:79:7b:83:xx:f9:d7:a7:d6:27 joeuser@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
==== Users without a blazerid (collaborators from other universities) ====&lt;br /&gt;
** If you were issued a temporary password, enter it (passwords are case sensitive!). You should see a message similar to this&lt;br /&gt;
 You are required to change your password immediately (password aged)&lt;br /&gt;
&lt;br /&gt;
 WARNING: Your password has expired.&lt;br /&gt;
 You must change your password now and login again!&lt;br /&gt;
 Changing password for user joeuser.&lt;br /&gt;
 Changing password for joeuser&lt;br /&gt;
 (current) UNIX password:&lt;br /&gt;
*** (current) UNIX password: '''Enter your temporary password at this prompt and press enter'''&lt;br /&gt;
*** New UNIX password: '''Enter your new strong password and press enter'''&lt;br /&gt;
*** Retype new UNIX password: '''Enter your new strong password again and press enter'''&lt;br /&gt;
*** After you enter your new password for the second time and press enter, the shell may exit automatically. If it doesn't, type exit and press enter&lt;br /&gt;
*** Log in again, this time use your new password&lt;br /&gt;
&lt;br /&gt;
Congratulations, you should now have a command prompt and be ready to start [[Cheaha_GettingStarted#Example_Batch_Job_Script | submitting jobs]]!&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
[[Image:Chehah2_2016.png|center|thumb|450px|Logical Diagram of Cheaha Configuration]]&lt;br /&gt;
&lt;br /&gt;
The Cheaha Compute Platform includes commodity compute hardware, totaling 2800 compute cores and over 4.7PB of usable storage (6.6PB raw capacity). The following descriptions highlight the current hardware profile that provides an aggregate theoretical peak performance of 468 teraflops.&lt;br /&gt;
&lt;br /&gt;
* Compute &lt;br /&gt;
** 36 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 128GB DDR4 RAM, FDR InfiniBand and 10GigE network cards&lt;br /&gt;
** 38 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 256GB DDR4 RAM, FDR InfiniBand and 10GigE network cards&lt;br /&gt;
** 14 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 384GB DDR4 RAM, FDR InfiniBand and 10GigE network cards&lt;br /&gt;
** 4 Compute Nodes with Nvidia Tesla K80 and two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 128GB DDR4 RAM, FDR InfiniBand and 10GigE network cards&lt;br /&gt;
** 4 Compute Nodes with Intel Phi coprocessor SE10/7120 and two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 128GB DDR4 RAM, FDR InfiniBand and 10GigE network cards&lt;br /&gt;
** 18 Compute Nodes with two 14 core processors (Intel Xeon E5-2680 v4 2.4GHz) with 256GB DDR4 RAM, four NVIDIA Tesla P100 16GB GPUs, EDR InfiniBand and 10GigE network cards&lt;br /&gt;
&lt;br /&gt;
* Networking&lt;br /&gt;
**FDR and EDR InfiniBand Switch&lt;br /&gt;
** 10Gigabit Ethernet Switch&lt;br /&gt;
&lt;br /&gt;
* Storage (DDN SFA12KX with GPFS)&lt;br /&gt;
** 2 x 12KX40D-56IB controllers&lt;br /&gt;
** 10 x SS8460 disk enclosures&lt;br /&gt;
** 825 x 4K SAS drives&lt;br /&gt;
&lt;br /&gt;
* Management &lt;br /&gt;
** Management node and gigabit switch for cluster management&lt;br /&gt;
** Bright Advanced Cluster Management software licenses&lt;br /&gt;
&lt;br /&gt;
== Cluster Software ==&lt;br /&gt;
* BrightCM 7.2&lt;br /&gt;
* CentOS 7.2 x86_64&lt;br /&gt;
* [[Slurm]] 15.08&lt;br /&gt;
&lt;br /&gt;
== Queuing System ==&lt;br /&gt;
All work on Cheaha must be submitted to '''our queuing system ([[Slurm]])'''. A common mistake made by new users is to run 'jobs' on the login node. This section gives a basic overview of what a queuing system is and why we use it.&lt;br /&gt;
=== What is a queuing system? ===&lt;br /&gt;
* Software that gives users fair allocation of the cluster's resources&lt;br /&gt;
* Schedules jobs using resource requests (the following are commonly requested resources; there are many more available)&lt;br /&gt;
** Number of processors (often referred to as &amp;quot;slots&amp;quot;)&lt;br /&gt;
** Maximum memory (RAM) required per slot&lt;br /&gt;
** Maximum run time&lt;br /&gt;
* Common queuing systems:&lt;br /&gt;
** '''[[Slurm]]'''&lt;br /&gt;
** Sun Grid Engine (also known as SGE, OGE, GE)&lt;br /&gt;
** OpenPBS&lt;br /&gt;
** Torque&lt;br /&gt;
** LSF (load sharing facility)&lt;br /&gt;
&lt;br /&gt;
[http://slurm.schedmd.com/ Slurm] is a queue management system and stands for Simple Linux Utility for Resource Management. Slurm was developed at the Lawrence Livermore National Lab and currently runs some of the largest compute clusters in the world. '''[[Slurm]]''' is now the primary job manager on Cheaha; it replaces Sun Grid Engine ([https://docs.uabgrid.uab.edu/wiki/Cheaha_GettingStarted_deprecated SGE]), the job manager used previously. Instructions for using Slurm and writing Slurm scripts for job submission on Cheaha can be found '''[[Slurm | here]]'''.&lt;br /&gt;
&lt;br /&gt;
=== Typical Workflow ===&lt;br /&gt;
* Stage data to $USER_SCRATCH (your scratch directory)&lt;br /&gt;
* Research how to run your code in &amp;quot;batch&amp;quot; mode. Batch mode typically means the ability to run it from the command line without requiring any interaction from the user.&lt;br /&gt;
* Identify the appropriate resources needed to run the job. The following are mandatory resource requests for all jobs on Cheaha&lt;br /&gt;
** Maximum memory (RAM) required per slot&lt;br /&gt;
** Maximum runtime&lt;br /&gt;
* Write a job script specifying queuing system parameters, resource requests and commands to run program&lt;br /&gt;
* Submit script to queuing system (sbatch script.job)&lt;br /&gt;
* Monitor job (squeue)&lt;br /&gt;
* Review the results and resubmit as necessary&lt;br /&gt;
* Clean up the scratch directory by moving or deleting the data off of the cluster&lt;br /&gt;
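&lt;br /&gt;
The workflow above can be sketched as shell commands. Note that the file and directory names here (myinput.dat, results) are placeholders for illustration, not actual Cheaha defaults:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Stage input data to your scratch directory&lt;br /&gt;
cp myinput.dat $USER_SCRATCH/&lt;br /&gt;
&lt;br /&gt;
# Submit the job script; note the job number that is reported&lt;br /&gt;
sbatch script.job&lt;br /&gt;
&lt;br /&gt;
# Monitor your jobs in the queue&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&lt;br /&gt;
# When finished, move results off scratch and clean up&lt;br /&gt;
mv $USER_SCRATCH/results $USER_DATA/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;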
&lt;br /&gt;
=== Resource Requests ===&lt;br /&gt;
Accurate resource requests are extremely important to the health of the overall cluster. In order for Cheaha to operate properly, the queuing system must know how much runtime and RAM each job will need.&lt;br /&gt;
&lt;br /&gt;
==== Mandatory Resource Requests ====&lt;br /&gt;
&lt;br /&gt;
* -t, --time=&amp;lt;time&amp;gt;&lt;br /&gt;
Set a limit on the total run time of the job allocation. If the requested time limit exceeds the partition's time limit, the job will be left in a PENDING state (possibly indefinitely).&lt;br /&gt;
** For array jobs, this represents the maximum run time for each task&lt;br /&gt;
** For serial or parallel jobs, this represents the maximum run time for the entire job&lt;br /&gt;
&lt;br /&gt;
* --mem-per-cpu=&amp;lt;MB&amp;gt;&lt;br /&gt;
Minimum memory required per allocated CPU in megabytes.&lt;br /&gt;
&lt;br /&gt;
==== Other Common Resource Requests ====&lt;br /&gt;
* -N, --nodes=&amp;lt;minnodes[-maxnodes]&amp;gt;&lt;br /&gt;
Request that a minimum of minnodes nodes be allocated to this job. A maximum node count may also be specified with maxnodes. If only one number is specified, this is used as both the minimum and maximum node count.&lt;br /&gt;
&lt;br /&gt;
* -n, --ntasks=&amp;lt;number&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the Slurm controller that job steps run within the allocation will launch a maximum of number tasks, so that it can provide sufficient resources. The default is one task per node.&lt;br /&gt;
&lt;br /&gt;
* --mem=&amp;lt;MB&amp;gt;&lt;br /&gt;
Specify the real memory required per node in MegaBytes.&lt;br /&gt;
&lt;br /&gt;
* -c, --cpus-per-task=&amp;lt;ncpus&amp;gt;&lt;br /&gt;
Advise the Slurm controller that ensuing job steps will require ncpus number of processors per task. Without this option, the controller will just try to allocate one processor per task.&lt;br /&gt;
&lt;br /&gt;
* -p, --partition=&amp;lt;partition_names&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. Available partitions are: express (max 2 hrs), short (max 12 hrs), medium (max 50 hrs), long (max 150 hrs), sinteractive (0-2 hrs)&lt;br /&gt;
&lt;br /&gt;
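Putting the requests above together, a job script header might begin as follows (the values shown are illustrative only, not recommendations for any particular workload):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Mandatory: run time limit and memory per CPU&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem-per-cpu=2048&lt;br /&gt;
# Common: one task with 4 CPUs on a single node, in a partition&lt;br /&gt;
# whose time limit covers the requested run time&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --partition=short&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;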
=== Submitting Jobs ===&lt;br /&gt;
Batch jobs are submitted on Cheaha using the &amp;quot;sbatch&amp;quot; command. The full manual for sbatch is available by running the following command&lt;br /&gt;
 man sbatch&lt;br /&gt;
&lt;br /&gt;
==== Job Script File Format ====&lt;br /&gt;
To submit a job to the queuing systems, you will first define your job in a script (a text file) and then submit that script to the queuing system.&lt;br /&gt;
&lt;br /&gt;
The script file needs to be '''formatted as a UNIX file''', not a Windows or Mac text file. In geek speak, this means that the end of line (EOL) character should be a line feed (LF) rather than a carriage return line feed (CRLF) for Windows or carriage return (CR) for Mac.&lt;br /&gt;
&lt;br /&gt;
If you submit a job script formatted as a Windows or Mac text file, your job will likely fail with misleading messages, for example that the path specified does not exist.&lt;br /&gt;
&lt;br /&gt;
Windows '''Notepad''' does not have the ability to save files using the UNIX file format. Do NOT use Notepad to create files intended for use on the clusters. Instead use one of the alternative text editors listed in the following section.&lt;br /&gt;
&lt;br /&gt;
===== Converting Files to UNIX Format =====&lt;br /&gt;
====== Dos2Unix Method ======&lt;br /&gt;
The lines below that begin with $ are commands, the $ represents the command prompt and should not be typed!&lt;br /&gt;
&lt;br /&gt;
The dos2unix program can be used to convert Windows text files to UNIX files with a simple command. After you have copied the file to your home directory on the cluster, you can identify that the file is a Windows file by executing the following (Windows uses CR LF as the line terminator, where UNIX uses only LF and Mac uses only CR):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ file testfile.txt&lt;br /&gt;
 &lt;br /&gt;
testfile.txt: ASCII text, with CRLF line terminators&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, convert the file to UNIX&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ dos2unix testfile.txt&lt;br /&gt;
 &lt;br /&gt;
dos2unix: converting file testfile.txt to UNIX format ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Verify the conversion using the file command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ file testfile.txt&lt;br /&gt;
 &lt;br /&gt;
testfile.txt: ASCII text&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
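&lt;br /&gt;
If the dos2unix program is not available, the standard tr command can perform the same conversion by deleting the carriage return characters (tr cannot edit a file in place, so it writes to a new file):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ tr -d '\r' &amp;lt; testfile.txt &amp;gt; testfile-unix.txt&lt;br /&gt;
$ file testfile-unix.txt&lt;br /&gt;
 &lt;br /&gt;
testfile-unix.txt: ASCII text&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;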
&lt;br /&gt;
====== Alternative Windows Text Editors ======&lt;br /&gt;
There are many good text editors available for Windows that have the capability to save files using the UNIX file format. Here are a few:&lt;br /&gt;
* [http://www.geany.org/ Geany] is an excellent free text editor for Windows and Linux that supports Windows, UNIX and Mac file formats, syntax highlighting and many programming features. To convert from Windows to UNIX click '''Document''', click '''Set Line Endings''' and then '''Convert and Set to LF (Unix)'''&lt;br /&gt;
* [http://notepad-plus.sourceforge.net/uk/site.htm Notepad++] is a great free Windows text editor that supports Windows, UNIX and Mac file formats, syntax highlighting and many programming features. To convert from Windows to UNIX click '''Format''' and then click '''Convert to UNIX Format'''&lt;br /&gt;
* [http://www.textpad.com/ TextPad] is another excellent Windows text editor. TextPad is not free, however.&lt;br /&gt;
&lt;br /&gt;
==== Example Batch Job Script ====&lt;br /&gt;
A shared cluster environment like Cheaha uses a job scheduler to run tasks on the cluster and provide optimal resource sharing among users. Cheaha uses a job scheduling system called Slurm to schedule and manage jobs. A user needs to tell Slurm about resource requirements (e.g. CPU, memory) so that it can schedule jobs effectively. These resource requirements, along with the actual application code, can be specified in a single file commonly referred to as a 'job script/file'. The following is a simple job script that prints the job number and hostname.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Jobs '''must request''' the appropriate partition (ex: ''--partition=short'') to satisfy the job's resource request (maximum runtime, number of compute nodes, etc...)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=test&lt;br /&gt;
#SBATCH --output=res.txt&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#SBATCH --time=10:00&lt;br /&gt;
#SBATCH --mem-per-cpu=100&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
srun hostname&lt;br /&gt;
srun sleep 60&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lines starting with '#SBATCH' have a special meaning in the Slurm world. Slurm-specific configuration options are specified after the '#SBATCH' characters. The configuration options above are useful for most job scripts; for additional configuration options, refer to the Slurm commands manual. A job script is submitted to the cluster using Slurm-specific commands. There are many commands available, but the following three are the most common:&lt;br /&gt;
* sbatch - to submit job&lt;br /&gt;
* scancel - to delete job&lt;br /&gt;
* squeue - to view job status&lt;br /&gt;
&lt;br /&gt;
We can submit above job script using sbatch command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch HelloCheaha.sh&lt;br /&gt;
Submitted batch job 52707&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the job script is submitted, Slurm queues it up and assigns it a job number (e.g. 52707 in the above example). The job number is available inside the job script via the environment variable $SLURM_JOB_ID. This variable can be used inside the job script to create job-related directory structures or file names.&lt;br /&gt;
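&lt;br /&gt;
For example, a job script can use this job-number variable to give each run its own working directory, so that output from different runs never collides (a sketch; myapp is a placeholder application name):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Create a per-job directory on scratch and run there&lt;br /&gt;
mkdir -p $USER_SCRATCH/run.$SLURM_JOB_ID&lt;br /&gt;
cd $USER_SCRATCH/run.$SLURM_JOB_ID&lt;br /&gt;
srun myapp &amp;gt; output.$SLURM_JOB_ID.log&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;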
&lt;br /&gt;
=== Interactive Resources ===&lt;br /&gt;
The login node (the host that you connected to when you set up the SSH connection to Cheaha) is intended for submitting jobs and the lighter prep work required for job scripts. '''Do not run heavy computations on the login node'''. If you have a heavier workload to prepare for a batch job (e.g. compiling code or other manipulations of data), or your compute application requires interactive control, you should request a dedicated interactive node for this work.&lt;br /&gt;
&lt;br /&gt;
Interactive resources are requested by submitting an &amp;quot;interactive&amp;quot; job to the scheduler. Interactive jobs will provide you a command line on a compute resource that you can use just like you would the command line on the login node. The difference is that the scheduler has dedicated the requested resources to your job and you can run your interactive commands without having to worry about impacting other users on the login node.&lt;br /&gt;
&lt;br /&gt;
Command-line interactive jobs are requested with the '''srun''' command. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command requests 4 cores (--cpus-per-task) for a single task (--ntasks), with 4GB of RAM per CPU (--mem-per-cpu), for 8 hours (--time).&lt;br /&gt;
&lt;br /&gt;
More advanced interactive scenarios to support graphical applications are available using [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session VNC] or X11 tunneling [http://www.uab.edu/it/software X-Win32 2014 for Windows]&lt;br /&gt;
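&lt;br /&gt;
For simple cases, an SSH client that supports X11 forwarding (such as the OpenSSH client on Linux or Mac) can display remote graphical output on your workstation with the -X flag:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -X blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;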
&lt;br /&gt;
Interactive jobs that require running a graphical application are requested with the '''sinteractive''' command, via a '''Terminal''' in your VNC window.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sinteractive --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please note, sinteractive starts your shell in a screen session.  Screen is a terminal emulator that is designed to make it possible to detach and reattach a session.  This feature can mostly be ignored.  If your application uses `ctrl-a` as a special command sequence (e.g. Emacs), however, you may find the application doesn't receive this special character.  When using screen, you need to type `ctrl-a a` (ctrl-a followed by a single &amp;quot;a&amp;quot; key press) to send a ctrl-a to your application.  Screen uses ctrl-a as its own command character, so this special sequence issues the command to screen to &amp;quot;send ctrl-a to my app&amp;quot;.  Learn more about [https://www.gnu.org/software/screen/manual/html_node/Overview.html#Overview screen from its documentation].&lt;br /&gt;
&lt;br /&gt;
== Storage ==&lt;br /&gt;
=== Privacy ===&lt;br /&gt;
{{SensitiveInformation}}&lt;br /&gt;
&lt;br /&gt;
=== File and Directory Permissions ===&lt;br /&gt;
&lt;br /&gt;
The default permissions for all user data storage locations described below are as follows. In these descriptions, the &amp;quot;$USER&amp;quot; variable should be replaced with the user's account name string:&lt;br /&gt;
&lt;br /&gt;
* /home/$USER - the owner ($USER) of the directory can read, write/delete, and list files.  No other users or groups have permissions to this directory.&lt;br /&gt;
* /data/user/$USER - the owner ($USER) of the directory can read, write/delete, and list files.  No other users or groups have permissions to this directory.&lt;br /&gt;
* /scratch/$USER - the owner ($USER) of the directory can read, write/delete, and list files.  No other users or groups have permissions to this directory.&lt;br /&gt;
* /data/projects/&amp;lt;projectname&amp;gt; - a PI can request project space for their lab or specific collaborations.  The project directory is created with the PI/requestor as the user-owner and a dedicated collaboration group as the group-owner.  The PI and all members of the dedicated collaboration group can read, write/delete, and list files. No privileges are granted to other users of the system.  Additional controls can be implemented via access control lists (ACLs).  The PI/requestor can modify the ACLs to allow additional access to specific users.&lt;br /&gt;
&lt;br /&gt;
These permissions are the default configuration.  While it is possible to modify these permissions or change the group owner of a file to any group to which a user belongs, users are encouraged to work within the default configuration and contact support@listserv.uab.edu if the default permissions are not adequate.  Setting up a collaboration group and associated project directory can address most collaboration needs while keeping data access restricted to the minimum necessary users for the collaboration.&lt;br /&gt;
&lt;br /&gt;
Additional background on Linux file system permissions can be found here:&lt;br /&gt;
* https://its.unc.edu/research-computing/techdocs/how-to-use-unix-and-linux-file-permissions/&lt;br /&gt;
* https://www.rc.fas.harvard.edu/resources/documentation/linux/unix-permissions/&lt;br /&gt;
* https://hpc.nih.gov/storage/permissions.html&lt;br /&gt;
&lt;br /&gt;
=== No Automatic Backups ===&lt;br /&gt;
&lt;br /&gt;
{{ClusterDataBackup}}&lt;br /&gt;
&lt;br /&gt;
=== Home directories ===&lt;br /&gt;
&lt;br /&gt;
Your home directory on Cheaha is NFS-mounted to the compute nodes as /home/$USER or $HOME. It is acceptable to use your home directory as a location to store job scripts and custom code. You are responsible for keeping your home directory under 10GB in size!&lt;br /&gt;
&lt;br /&gt;
'''The home directory must not be used to store large amounts of data.''' Please use $USER_SCRATCH &lt;br /&gt;
for actively used data sets and $USER_DATA for storage of non scratch data.&lt;br /&gt;
&lt;br /&gt;
=== Scratch ===&lt;br /&gt;
Research Computing policy requires that all bulky input and output must be located on the scratch space. The home directory is intended to store your job scripts, log files, libraries and other supporting files.&lt;br /&gt;
&lt;br /&gt;
'''Important Information:'''&lt;br /&gt;
* Scratch space (network and local) '''is not backed up'''.&lt;br /&gt;
* Research Computing expects each user to keep their scratch areas clean. The cluster scratch areas are not to be used for archiving data.&lt;br /&gt;
&lt;br /&gt;
Cheaha has two types of scratch space, network mounted and local.&lt;br /&gt;
* Network scratch ($USER_SCRATCH) is available on the login node and each compute node. This storage is a GPFS high performance file system providing roughly 4.7PB of usable storage. This should be your job's primary working directory, unless the job would benefit from local scratch (see below).&lt;br /&gt;
* Local scratch is physically located on each compute node and is not accessible to the other nodes (including the login node). This space is useful if the job performs a lot of file I/O. Most of the jobs that run on our clusters do not fall into this category. Because the local scratch is inaccessible outside the job, you must move any data between local scratch and your network-accessible scratch within the job itself. For example, step 1 in the job could be to copy the input from $USER_SCRATCH to $LOCAL_SCRATCH, step 2 code execution, step 3 move the data back to $USER_SCRATCH.&lt;br /&gt;
&lt;br /&gt;
==== Network Scratch ====&lt;br /&gt;
Network scratch is available using the environment variable $USER_SCRATCH or directly by /data/scratch/$USER&lt;br /&gt;
&lt;br /&gt;
It is advisable to use the environment variable whenever possible rather than the hard coded path.&lt;br /&gt;
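&lt;br /&gt;
For example, using the variable form in a job script keeps the script working even if the underlying path changes; the :? expansion below (a standard shell feature) makes the script fail immediately with an error, rather than running in the wrong directory, if the variable is ever unset:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Fails with an error message if USER_SCRATCH is not set&lt;br /&gt;
cd ${USER_SCRATCH:?USER_SCRATCH is not set}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;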
&lt;br /&gt;
==== Local Scratch ====&lt;br /&gt;
Each compute node has a local scratch directory that is accessible via the variable '''$LOCAL_SCRATCH'''. If your job performs a lot of file I/O, the job should use $LOCAL_SCRATCH rather than  $USER_SCRATCH to prevent bogging down the network scratch file system. The amount of scratch space available is approximately 800GB.&lt;br /&gt;
&lt;br /&gt;
The $LOCAL_SCRATCH is a special temporary directory and it's important to note that this directory is deleted when the job completes, so the job script must move the results to $USER_SCRATCH or another location before the job exits.&lt;br /&gt;
&lt;br /&gt;
Note that $LOCAL_SCRATCH is only useful for jobs in which all processes run on the same compute node, so MPI jobs are not candidates for this solution.&lt;br /&gt;
&lt;br /&gt;
The following is an array job example that uses $LOCAL_SCRATCH by transferring the inputs into $LOCAL_SCRATCH at the beginning of the script and the result out of $LOCAL_SCRATCH at the end of the script.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-10&lt;br /&gt;
#SBATCH --share&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#&lt;br /&gt;
# Name your job to make it easier for you to track&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=R_array_job&lt;br /&gt;
#&lt;br /&gt;
# Set your error and output files&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --error=R_array_job.err&lt;br /&gt;
#SBATCH --output=R_array_job.out&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#&lt;br /&gt;
# Tell the scheduler we only need 10 minutes and the appropriate partition&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --time=00:10:00&lt;br /&gt;
#SBATCH --mem-per-cpu=256&lt;br /&gt;
#&lt;br /&gt;
# Set your email address and request notification when your job is complete or if it fails&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
module load R/3.2.0-goolf-1.7.20&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;TMPDIR: $LOCAL_SCRATCH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
cd $LOCAL_SCRATCH&lt;br /&gt;
# Create a working directory under the special scheduler local scratch directory&lt;br /&gt;
# using the array job's taskID&lt;br /&gt;
mkdir $SLURM_ARRAY_TASK_ID&lt;br /&gt;
cd $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&lt;br /&gt;
# Next copy the input data to the local scratch&lt;br /&gt;
echo &amp;quot;Copying input data from network scratch to $LOCAL_SCRATCH/$SLURM_ARRAY_TASK_ID - $(date)&amp;quot;&lt;br /&gt;
# The input data in this case has a numerical file extension that&lt;br /&gt;
# matches $SLURM_ARRAY_TASK_ID&lt;br /&gt;
cp -a $USER_SCRATCH/GeneData/INP*.$SLURM_ARRAY_TASK_ID ./&lt;br /&gt;
echo &amp;quot;Copied input data from network scratch to $LOCAL_SCRATCH/$SLURM_ARRAY_TASK_ID - $(date)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
someapp -S 1 -D 10 -i INP*.$SLURM_ARRAY_TASK_ID -o geneapp.out.$SLURM_ARRAY_TASK_ID&lt;br /&gt;
&lt;br /&gt;
# Lastly copy the results back to network scratch&lt;br /&gt;
echo &amp;quot;Copying results from local $LOCAL_SCRATCH/$SLURM_ARRAY_TASK_ID to network - $(date)&amp;quot;&lt;br /&gt;
cp -a geneapp.out.$SLURM_ARRAY_TASK_ID $USER_SCRATCH/GeneData/&lt;br /&gt;
echo &amp;quot;Copied results from local $LOCAL_SCRATCH/$SLURM_ARRAY_TASK_ID to network - $(date)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Project Storage ===&lt;br /&gt;
Cheaha has a location where shared data can be stored called $SHARE_PROJECT. As with user scratch, this area '''is not backed up'''!&lt;br /&gt;
&lt;br /&gt;
This is helpful if a team of researchers must access the same data. Please open a help desk ticket to request a project directory under $SHARE_PROJECT.&lt;br /&gt;
&lt;br /&gt;
=== Uploading Data ===&lt;br /&gt;
{{SensitiveInformation}}&lt;br /&gt;
Data can be moved onto the cluster (pushed) from a remote client (i.e. your desktop) via SCP or SFTP.  Data can also be downloaded to the cluster (pulled) by issuing transfer commands once you are logged into the cluster. Common transfer methods are `wget &amp;lt;URL&amp;gt;`, FTP, or SCP, depending on how the data is made available by the data provider.&lt;br /&gt;
&lt;br /&gt;
Large data sets should be staged directly to your $USER_SCRATCH directory so as not to fill up $HOME.  If you are working on a data set shared with multiple users, it's preferable to request space in $SHARE_PROJECT rather than duplicating the data for each user.&lt;br /&gt;
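For example (the hostname, username, and filenames below are placeholders to adapt, not values to copy verbatim):&lt;br /&gt;

```shell
# Push a file from your desktop to your scratch directory (run on your desktop):
scp mydata.tar.gz BLAZERID@cheaha.rc.uab.edu:/data/scratch/BLAZERID/

# Or pull a dataset directly into scratch once logged in to the cluster:
cd $USER_SCRATCH
wget https://example.org/dataset.tar.gz
```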
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
[http://modules.sourceforge.net/ Environment Modules] is installed on Cheaha and should be used when constructing your job scripts if an applicable module file exists. Using the module command, you can easily configure your environment for specific software packages without having to know the specific environment variables and values to set. Modules let you dynamically configure your environment without having to log out and back in for the changes to take effect.&lt;br /&gt;
&lt;br /&gt;
If you find that specific software does not have a module, please submit a [http://etlab.eng.uab.edu/ helpdesk ticket] to request the module.&lt;br /&gt;
&lt;br /&gt;
* Cheaha supports bash completion for the module command. For example, type 'module' and press the TAB key twice to see a list of options:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module TAB TAB&lt;br /&gt;
&lt;br /&gt;
add          display      initlist     keyword      refresh      switch       use          &lt;br /&gt;
apropos      help         initprepend  list         rm           unload       whatis       &lt;br /&gt;
avail        initadd      initrm       load         show         unuse        &lt;br /&gt;
clear        initclear    initswitch   purge        swap         update&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* To see the list of available modulefiles on the cluster, run the '''module avail''' command (note the example list below may not be complete!) or '''module load ''' followed by two tab key presses:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
 &lt;br /&gt;
----------------------------------------------------------------------------------------- /cm/shared/modulefiles -----------------------------------------------------------------------------------------&lt;br /&gt;
acml/gcc/64/5.3.1                    acml/open64-int64/mp/fma4/5.3.1      fftw2/openmpi/gcc/64/float/2.1.5     intel-cluster-runtime/ia32/3.8       netcdf/gcc/64/4.3.3.1&lt;br /&gt;
acml/gcc/fma4/5.3.1                  blacs/openmpi/gcc/64/1.1patch03      fftw2/openmpi/open64/64/double/2.1.5 intel-cluster-runtime/intel64/3.8    netcdf/open64/64/4.3.3.1&lt;br /&gt;
acml/gcc/mp/64/5.3.1                 blacs/openmpi/open64/64/1.1patch03   fftw2/openmpi/open64/64/float/2.1.5  intel-cluster-runtime/mic/3.8        netperf/2.7.0&lt;br /&gt;
acml/gcc/mp/fma4/5.3.1               blas/gcc/64/3.6.0                    fftw3/openmpi/gcc/64/3.3.4           intel-tbb-oss/ia32/44_20160526oss    open64/4.5.2.1&lt;br /&gt;
acml/gcc-int64/64/5.3.1              blas/open64/64/3.6.0                 fftw3/openmpi/open64/64/3.3.4        intel-tbb-oss/intel64/44_20160526oss openblas/dynamic/0.2.15&lt;br /&gt;
acml/gcc-int64/fma4/5.3.1            bonnie++/1.97.1                      gdb/7.9                              iozone/3_434                         openmpi/gcc/64/1.10.1&lt;br /&gt;
acml/gcc-int64/mp/64/5.3.1           cmgui/7.2                            globalarrays/openmpi/gcc/64/5.4      lapack/gcc/64/3.6.0                  openmpi/open64/64/1.10.1&lt;br /&gt;
acml/gcc-int64/mp/fma4/5.3.1         cuda75/blas/7.5.18                   globalarrays/openmpi/open64/64/5.4   lapack/open64/64/3.6.0               pbspro/13.0.2.153173&lt;br /&gt;
acml/open64/64/5.3.1                 cuda75/fft/7.5.18                    hdf5/1.6.10                          mpich/ge/gcc/64/3.2                  puppet/3.8.4&lt;br /&gt;
acml/open64/fma4/5.3.1               cuda75/gdk/352.79                    hdf5_18/1.8.16                       mpich/ge/open64/64/3.2               rc-base&lt;br /&gt;
acml/open64/mp/64/5.3.1              cuda75/nsight/7.5.18                 hpl/2.1                              mpiexec/0.84_432                     scalapack/mvapich2/gcc/64/2.0.2&lt;br /&gt;
acml/open64/mp/fma4/5.3.1            cuda75/profiler/7.5.18               hwloc/1.10.1                         mvapich/gcc/64/1.2rc1                scalapack/openmpi/gcc/64/2.0.2&lt;br /&gt;
acml/open64-int64/64/5.3.1           cuda75/toolkit/7.5.18                intel/compiler/32/15.0/2015.5.223    mvapich/open64/64/1.2rc1             sge/2011.11p1&lt;br /&gt;
acml/open64-int64/fma4/5.3.1         default-environment                  intel/compiler/64/15.0/2015.5.223    mvapich2/gcc/64/2.2b                 slurm/15.08.6&lt;br /&gt;
acml/open64-int64/mp/64/5.3.1        fftw2/openmpi/gcc/64/double/2.1.5    intel-cluster-checker/2.2.2          mvapich2/open64/64/2.2b              torque/6.0.0.1&lt;br /&gt;
&lt;br /&gt;
---------------------------------------------------------------------------------------- /share/apps/modulefiles -----------------------------------------------------------------------------------------&lt;br /&gt;
rc/BrainSuite/15b                       rc/freesurfer/freesurfer-5.3.0          rc/intel/compiler/64/ps_2016/2016.0.047 rc/matlab/R2015a                        rc/SAS/v9.4&lt;br /&gt;
rc/cmg/2012.116.G                       rc/gromacs-intel/5.1.1                  rc/Mathematica/10.3                     rc/matlab/R2015b&lt;br /&gt;
rc/dsistudio/dsistudio-20151020         rc/gtool/0.7.5                          rc/matlab/R2012a                        rc/MRIConvert/2.0.8&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------------- /share/apps/rc/modules/all ---------------------------------------------------------------------------------------&lt;br /&gt;
AFNI/linux_openmp_64-goolf-1.7.20-20160616                gperf/3.0.4-intel-2016a                                   MVAPICH2/2.2b-GCC-4.9.3-2.25&lt;br /&gt;
Amber/14-intel-2016a-AmberTools-15-patchlevel-13-13       grep/2.15-goolf-1.4.10                                    NASM/2.11.06-goolf-1.7.20&lt;br /&gt;
annovar/2016Feb01-foss-2015b-Perl-5.22.1                  GROMACS/5.0.5-intel-2015b-hybrid                          NASM/2.11.08-foss-2015b&lt;br /&gt;
ant/1.9.6-Java-1.7.0_80                                   GSL/1.16-goolf-1.7.20                                     NASM/2.11.08-intel-2016a&lt;br /&gt;
APBS/1.4-linux-static-x86_64                              GSL/1.16-intel-2015b                                      NASM/2.12.02-foss-2016a&lt;br /&gt;
ASHS/rev103_20140612                                      GSL/2.1-foss-2015b                                        NASM/2.12.02-intel-2015b&lt;br /&gt;
Aspera-Connect/3.6.1                                      gtool/0.7.5_linux_x86_64                                  NASM/2.12.02-intel-2016a&lt;br /&gt;
ATLAS/3.10.1-gompi-1.5.12-LAPACK-3.4.2                    guile/1.8.8-GNU-4.9.3-2.25                                ncurses/5.9-foss-2015b&lt;br /&gt;
Autoconf/2.69-foss-2016a                                  HAPGEN2/2.2.0                                             ncurses/5.9-GCC-4.8.4&lt;br /&gt;
Autoconf/2.69-GCC-4.8.4                                   HarfBuzz/1.2.7-intel-2016a                                ncurses/5.9-GNU-4.9.3-2.25&lt;br /&gt;
Autoconf/2.69-GNU-4.9.3-2.25                              HDF5/1.8.15-patch1-intel-2015b                            ncurses/5.9-goolf-1.4.10&lt;br /&gt;
 . &lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some software packages have multiple module files, for example:&lt;br /&gt;
* GCC/4.7.2                            &lt;br /&gt;
* GCC/4.8.1                                                &lt;br /&gt;
* GCC/4.8.2                                                &lt;br /&gt;
* GCC/4.8.4                                                &lt;br /&gt;
* GCC/4.9.2                               &lt;br /&gt;
* GCC/4.9.3                                                &lt;br /&gt;
* GCC/4.9.3-2.25                &lt;br /&gt;
&lt;br /&gt;
In this case, loading the bare GCC module will load the latest version, so loading it is equivalent to loading GCC/4.9.3-2.25. If you always want to use the latest version, use this approach. If you want to use a specific version, load the module file containing the appropriate version number.&lt;br /&gt;
&lt;br /&gt;
Some modules, when loaded, will actually load other modules. For example, the ''GROMACS/5.0.5-intel-2015b-hybrid '' module will also load ''intel/2015b'' and other related tools.&lt;br /&gt;
&lt;br /&gt;
* To load a module, ex: for a GROMACS job, use the following '''module load''' command in your job script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load  GROMACS/5.0.5-intel-2015b-hybrid &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* To see a list of the modules that you currently have loaded use the '''module list''' command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module list&lt;br /&gt;
 &lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) slurm/15.08.6                                       9) impi/5.0.3.048-iccifort-2015.3.187-GNU-4.9.3-2.25  17) Tcl/8.6.3-intel-2015b&lt;br /&gt;
  2) rc-base                                            10) iimpi/7.3.5-GNU-4.9.3-2.25                         18) SQLite/3.8.8.1-intel-2015b&lt;br /&gt;
  3) GCC/4.9.3-binutils-2.25                            11) imkl/11.2.3.187-iimpi-7.3.5-GNU-4.9.3-2.25         19) Tk/8.6.3-intel-2015b-no-X11&lt;br /&gt;
  4) binutils/2.25-GCC-4.9.3-binutils-2.25              12) intel/2015b                                        20) Python/2.7.9-intel-2015b&lt;br /&gt;
  5) GNU/4.9.3-2.25                                     13) bzip2/1.0.6-intel-2015b                            21) Boost/1.58.0-intel-2015b-Python-2.7.9&lt;br /&gt;
  6) icc/2015.3.187-GNU-4.9.3-2.25                      14) zlib/1.2.8-intel-2015b                             22) GROMACS/5.0.5-intel-2015b-hybrid&lt;br /&gt;
  7) ifort/2015.3.187-GNU-4.9.3-2.25                    15) ncurses/5.9-intel-2015b&lt;br /&gt;
  8) iccifort/2015.3.187-GNU-4.9.3-2.25                 16) libreadline/6.3-intel-2015b&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* A module can be removed from your environment by using the '''module unload''' command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module unload GROMACS/5.0.5-intel-2015b-hybrid&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* The definition of a module can also be viewed using the '''module show''' command, revealing what a specific module will do to your environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module show GROMACS/5.0.5-intel-2015b-hybrid &lt;br /&gt;
-------------------------------------------------------------------&lt;br /&gt;
/share/apps/rc/modules/all/GROMACS/5.0.5-intel-2015b-hybrid:&lt;br /&gt;
&lt;br /&gt;
module-whatis  GROMACS is a versatile package to perform molecular dynamics,&lt;br /&gt;
 i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. - Homepage: http://www.gromacs.org &lt;br /&gt;
conflict   GROMACS &lt;br /&gt;
prepend-path   CPATH /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid/include &lt;br /&gt;
prepend-path   LD_LIBRARY_PATH /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid/lib64 &lt;br /&gt;
prepend-path   LIBRARY_PATH /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid/lib64 &lt;br /&gt;
prepend-path   MANPATH /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid/share/man &lt;br /&gt;
prepend-path   PATH /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid/bin &lt;br /&gt;
prepend-path   PKG_CONFIG_PATH /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid/lib64/pkgconfig &lt;br /&gt;
setenv     EBROOTGROMACS /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid &lt;br /&gt;
setenv     EBVERSIONGROMACS 5.0.5 &lt;br /&gt;
setenv     EBDEVELGROMACS /share/apps/rc/software/GROMACS/5.0.5-intel-2015b-hybrid/easybuild/GROMACS-5.0.5-intel-2015b-hybrid-easybuild-devel &lt;br /&gt;
-------------------------------------------------------------------&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Error Using Modules from a Job Script ===&lt;br /&gt;
&lt;br /&gt;
If you are using modules and the command your job executes runs fine from the command line but fails when you run it from the job, you may be having an issue with the script initialization.   If you see this error in your job error output file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
-bash: module: line 1: syntax error: unexpected end of file&lt;br /&gt;
-bash: error importing function definition for `BASH_FUNC_module'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Add the command `unset module` to your job script before any module load commands. Exporting your login environment into the job (e.g. via the -V/--export job argument) causes a conflict with the module function definition used in your script.&lt;br /&gt;
&lt;br /&gt;
== Sample Job Scripts ==&lt;br /&gt;
The following are sample job scripts. Please be careful to edit these for your environment (i.e. replace &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;YOUR_EMAIL_ADDRESS&amp;lt;/font&amp;gt; with your real email address), set --time to an appropriate runtime limit, and modify the job name and any other parameters.&lt;br /&gt;
&lt;br /&gt;
'''Hello World''' is the classic example used throughout programming. We don't want to buck the system, so we'll use it as well to demonstrate jobs submission with one minor variation: our hello world will send us a greeting using the name of whatever machine it runs on. For example, when run on the Cheaha login node, it would print &amp;quot;Hello from login001&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Hello World (serial) ===&lt;br /&gt;
&lt;br /&gt;
A serial job is one that can run independently of other commands, i.e. it doesn't depend on data from other jobs running simultaneously. You can run many serial jobs in any order. This is a common solution for processing lots of data when each command works on a single piece of data, for example, running the same conversion on hundreds of images.&lt;br /&gt;
&lt;br /&gt;
Here we show how to create a job script for one simple command. Running more than one command just requires submitting more jobs.&lt;br /&gt;
&lt;br /&gt;
* Create your hello world application. The following steps create the script and make it executable.&lt;br /&gt;
1. Create the file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vim helloworld.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Write the following into the &amp;quot;helloworld.sh&amp;quot; file (to enter insert mode in vim, press '''shift + I'''):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
echo Hello from `hostname`&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Save the file: press the '''esc''' key, then type the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
:wq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Make the &amp;quot;helloworld.sh&amp;quot; file executable:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod +x helloworld.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Create the Slurm job script that will request 256 MB RAM and a maximum runtime of 10 minutes.&lt;br /&gt;
1. Create the JOB file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vim helloworld.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Write the following into the &amp;quot;helloworld.job&amp;quot; file (to enter insert mode in vim, press '''shift + I'''):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --share&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#&lt;br /&gt;
# Name your job to make it easier for you to track&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=helloworld&lt;br /&gt;
#&lt;br /&gt;
# Set your error and output files&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --error=helloworld.err&lt;br /&gt;
#SBATCH --output=helloworld.out&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#&lt;br /&gt;
# Tell the scheduler we only need 10 minutes&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --time=00:10:00&lt;br /&gt;
#SBATCH --mem-per-cpu=256&lt;br /&gt;
#&lt;br /&gt;
# Set your email address and request notification when your job is complete or if it fails&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
./helloworld.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Save the file: press the '''esc''' key, then type the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
:wq&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Submit the job to the Slurm scheduler and check the status using squeue&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch helloworld.job&lt;br /&gt;
Submitted batch job 52888&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When the job completes, you should have output files named helloworld.out and helloworld.err &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat helloworld.out &lt;br /&gt;
Hello from c0003&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hello World (parallel with MPI) ===&lt;br /&gt;
&lt;br /&gt;
MPI is used to coordinate the activity of many computations occurring in parallel.  It is commonly used in simulation software for molecular dynamics, fluid dynamics, and similar domains where there is significant communication (data) exchanged between cooperating processes.&lt;br /&gt;
&lt;br /&gt;
Here is a simple parallel Slurm job script for running commands that rely on MPI. This example also covers compiling the code and submitting the job script to the Slurm scheduler.&lt;br /&gt;
&lt;br /&gt;
* First, create a directory for the Hello World jobs&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir -p ~/jobs/helloworld&lt;br /&gt;
$ cd ~/jobs/helloworld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Create the Hello World code written in C (this MPI-enabled Hello World includes a 3-minute sleep to ensure the job runs for several minutes; a normal hello world example would finish in a matter of seconds).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vi helloworld-mpi.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char **argv)&lt;br /&gt;
{&lt;br /&gt;
   int rank, size;&lt;br /&gt;
&lt;br /&gt;
   int i, j;&lt;br /&gt;
   float f;&lt;br /&gt;
&lt;br /&gt;
   MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
   MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);&lt;br /&gt;
   MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);&lt;br /&gt;
&lt;br /&gt;
   printf(&amp;quot;Hello World from process %d of %d.\n&amp;quot;, rank, size);&lt;br /&gt;
   sleep(180);  /* keep the job alive long enough to observe it in the queue */&lt;br /&gt;
   for (j = 0; j &amp;lt;= 100000; j++)&lt;br /&gt;
      for (i = 0; i &amp;lt;= 100000; i++)&lt;br /&gt;
          f = i*2.718281828*i + i + i*3.141592654;&lt;br /&gt;
&lt;br /&gt;
   MPI_Finalize();&lt;br /&gt;
   return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Compile the code, first purging any modules you may have loaded and then loading the GNU OpenMPI module. The mpicc command will compile the code and produce a binary named helloworld_gnu_openmpi.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module purge&lt;br /&gt;
$ module load DefaultModules&lt;br /&gt;
$ module load OpenMPI/4.0.1-GCC-8.3.0-2.32&lt;br /&gt;
&lt;br /&gt;
$ mpicc helloworld-mpi.c -o helloworld_gnu_openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Create the Slurm job script that will request 8 cpu slots and a maximum runtime of 10 minutes&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vi helloworld.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --share&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#&lt;br /&gt;
# Name your job to make it easier for you to track&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=helloworld_mpi&lt;br /&gt;
#&lt;br /&gt;
# Set your error and output files&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --error=helloworld_mpi.err&lt;br /&gt;
#SBATCH --output=helloworld_mpi.out&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#&lt;br /&gt;
# Tell the scheduler we only need 10 minutes&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --time=00:10:00&lt;br /&gt;
#SBATCH --mem-per-cpu=256&lt;br /&gt;
#&lt;br /&gt;
# Set your email address and request notification when your job is complete or if it fails&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
module load OpenMPI/4.0.1-GCC-8.3.0-2.32&lt;br /&gt;
mpirun -np $SLURM_NTASKS ./helloworld_gnu_openmpi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Submit the job to Slurm scheduler and check the status using squeue -u $USER&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch helloworld.job&lt;br /&gt;
&lt;br /&gt;
Submitted batch job 52893&lt;br /&gt;
&lt;br /&gt;
$ squeue -u BLAZERID&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
             52893   express hellowor   BLAZERID  R       2:07      2 c[0005-0006]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* When the job completes, you should have output files named helloworld_mpi.out and helloworld_mpi.err&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat helloworld_mpi.out&lt;br /&gt;
&lt;br /&gt;
Hello World from process 1 of 8.&lt;br /&gt;
Hello World from process 3 of 8.&lt;br /&gt;
Hello World from process 4 of 8.&lt;br /&gt;
Hello World from process 7 of 8.&lt;br /&gt;
Hello World from process 5 of 8.&lt;br /&gt;
Hello World from process 6 of 8.&lt;br /&gt;
Hello World from process 0 of 8.&lt;br /&gt;
Hello World from process 2 of 8.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hello World (serial) -- revisited ===&lt;br /&gt;
&lt;br /&gt;
The job submit scripts (sbatch scripts) are actually bash shell scripts in their own right.  The reason for using the funky #SBATCH prefix in the scripts is so that bash interprets any such line as a comment and won't execute it. Because the # character starts a comment in bash, we can weave the Slurm scheduler directives (the #SBATCH lines) into standard bash scripts.  This lets us build scripts that we can execute locally and then run the same script on a cluster node by calling it with sbatch. This can be used to our advantage to create a more fluid experience in moving between development and production job runs.&lt;br /&gt;
&lt;br /&gt;
The following example is a simple variation on the serial job above.  All we will do is convert our Slurm job script into a command called helloworld that calls the helloworld.sh command.&lt;br /&gt;
&lt;br /&gt;
If the first line of a file is #!/bin/bash and that file is executable, the shell will automatically run the command as if it were any other system command, e.g. ls.   That is, the &amp;quot;.sh&amp;quot; extension on our HelloWorld.sh script is completely optional and is only meaningful to the user.&lt;br /&gt;
&lt;br /&gt;
Copy the serial helloworld.job script to a new file, prepend the special #!/bin/bash line, and make it executable with the following command (note: those are single quotes in the echo command): &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo '#!/bin/bash' | cat - helloworld.job &amp;gt; helloworld ; chmod +x helloworld&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Our sbatch script has now become a regular command. We can now execute the command with the simple prefix &amp;quot;./helloworld&amp;quot;, which means &amp;quot;execute this file in the current directory&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./helloworld&lt;br /&gt;
Hello from login001&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or if we want to run the command on a compute node, replace the &amp;quot;./&amp;quot; prefix with &amp;quot;sbatch &amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch helloworld&lt;br /&gt;
Submitted batch job 53001&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And when the cluster run is complete you can look at the content of the output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat helloworld.out&lt;br /&gt;
Hello from c0003&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can use this approach of treating your sbatch files as command wrappers to build a collection of commands that can be executed locally or via sbatch.  The other examples can be restructured similarly.&lt;br /&gt;
&lt;br /&gt;
To avoid having to use the &amp;quot;./&amp;quot; prefix, just add the current directory to your PATH. Also, if you plan to do heavy development using this feature on the cluster, please be sure to run [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Session sinteractive] first so you don't load the login node with your development work.&lt;br /&gt;
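The PATH addition mentioned above can be sketched as follows:&lt;br /&gt;

```shell
# Append the current directory to PATH so "helloworld" runs without the "./" prefix.
# "." goes at the end of PATH so system commands still take precedence
# over files in the current directory.
export PATH="$PATH:."
```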
&lt;br /&gt;
=== Gromacs ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=short&lt;br /&gt;
#&lt;br /&gt;
# Name your job to make it easier for you to track&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=test_gromacs&lt;br /&gt;
#&lt;br /&gt;
# Set your error and output files&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --error=test_gromacs.err&lt;br /&gt;
#SBATCH --output=test_gromacs.out&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#&lt;br /&gt;
# Tell the scheduler we need up to 10 hours&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --time=10:00:00&lt;br /&gt;
#SBATCH --mem-per-cpu=2048&lt;br /&gt;
#&lt;br /&gt;
# Set your email address and request notification when your job is complete or if it fails&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
module load OpenMPI/1.8.8-GNU-4.9.3-2.25&lt;br /&gt;
&lt;br /&gt;
module load GROMACS/5.0.5-intel-2015b-hybrid &lt;br /&gt;
&lt;br /&gt;
# Change directory to the job working directory if not already there&lt;br /&gt;
cd ${USER_SCRATCH}/jobs/gromacs&lt;br /&gt;
&lt;br /&gt;
# Single precision&lt;br /&gt;
MDRUN=mdrun_mpi&lt;br /&gt;
&lt;br /&gt;
# Enter your tpr file over here&lt;br /&gt;
export MYFILE=example.tpr&lt;br /&gt;
&lt;br /&gt;
mpirun -np $SLURM_NTASKS $MDRUN -v -s $MYFILE -o $MYFILE -c $MYFILE -x $MYFILE -e $MYFILE -g ${MYFILE}.log&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== R (array job) ===&lt;br /&gt;
&lt;br /&gt;
The following is an example job script that uses an array of 10 tasks (--array=1-10); each task has a max runtime of 10 minutes and will use no more than 256 MB of RAM.  Arrays of tasks are useful when you have lots of simple jobs that work on their own separate files, or on a sub-set of the problem that can be selected by the array task index.  For [https://gitlab.rc.uab.edu/rc-training-sessions/Tutorial_parallelism a more comprehensive introduction please see this tutorial].&lt;br /&gt;
&lt;br /&gt;
Create a working directory and the job submission script&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir -p ~/jobs/ArrayExample&lt;br /&gt;
$ cd ~/jobs/ArrayExample&lt;br /&gt;
$ vi R-example-array.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-10&lt;br /&gt;
#SBATCH --share&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#&lt;br /&gt;
# Name your job to make it easier for you to track&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=R_array_job&lt;br /&gt;
#&lt;br /&gt;
# Set your error and output files&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --error=R_array_job.err&lt;br /&gt;
#SBATCH --output=R_array_job.out&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#&lt;br /&gt;
# Tell the scheduler we only need 10 minutes&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --time=00:10:00&lt;br /&gt;
#SBATCH --mem-per-cpu=256&lt;br /&gt;
#&lt;br /&gt;
# Set your email address and request notification when your job is complete or if it fails&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
module load R/3.2.0-goolf-1.7.20 &lt;br /&gt;
cd ~/jobs/ArrayExample/rep$SLURM_ARRAY_TASK_ID&lt;br /&gt;
srun R CMD BATCH rscript.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the job to the Slurm scheduler and check the status of the job using the squeue command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch R-example-array.job&lt;br /&gt;
$ squeue -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Array Job Parameterization ===&lt;br /&gt;
Suppose you need to submit thousands of jobs. While you could do this in a for loop, the global limit on jobs in the SLURM queue is 10,000. This limit is in place for performance reasons; jobs beyond it may be rejected with the following error message, leaving an incomplete set of tasks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sbatch: error: Slurm temporarily unable to accept job, sleeping and retrying&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The preferred way to handle this scenario is to allow SLURM to schedule the jobs for you using the array flag in an sbatch script. This allows many jobs to be submitted as a single entry in the queue, letting SLURM handle the for loop and queueing. It is possible to reference the current loop index, or task id, as $SLURM_ARRAY_TASK_ID.&lt;br /&gt;
&lt;br /&gt;
An example using $SLURM_ARRAY_TASK_ID to load input files and create output files is shown below. Suppose you have a short script called my_processing_script that needs to be run on 20,000 separate files. Suppose each instance only needs 1 CPU and 2 GB of RAM and finishes in 5 minutes. Submitting these jobs all at once won't work; at least half of them would be rejected by SLURM. Instead we can use the sbatch array flag. Note that some other useful flags have been omitted for brevity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/bash&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem-per-cpu=2G&lt;br /&gt;
&lt;br /&gt;
#SBATCH --array=1-20000%100&lt;br /&gt;
# This will run tasks 1 through 20000, with up to 100 at a time.&lt;br /&gt;
# It is possible to provide any comma-separated list of intervals.&lt;br /&gt;
# An example of a valid subset is --array=1,2,5-1000,3777,4995-5000%100&lt;br /&gt;
&lt;br /&gt;
INPUT_FILE=$USER_DATA/input/file_$SLURM_ARRAY_TASK_ID.txt&lt;br /&gt;
OUTPUT_FILE=$USER_DATA/output/file_$SLURM_ARRAY_TASK_ID.txt&lt;br /&gt;
&lt;br /&gt;
my_processing_script --input=&amp;quot;$INPUT_FILE&amp;quot; --output=&amp;quot;$OUTPUT_FILE&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
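&lt;br /&gt;
The way $SLURM_ARRAY_TASK_ID is expanded into per-task file names can be checked in a plain bash shell, outside of any job. This is a minimal sketch; the task ID value and the relative paths are hypothetical stand-ins for the $USER_DATA paths used above.&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
# Outside a real array job, set the task ID by hand to mimic the
# variable SLURM exports for each task (hypothetical value).
SLURM_ARRAY_TASK_ID=7

# The same expansion the job script relies on: each task gets its
# own input and output file keyed by its task ID.
INPUT_FILE="input/file_${SLURM_ARRAY_TASK_ID}.txt"
OUTPUT_FILE="output/file_${SLURM_ARRAY_TASK_ID}.txt"

echo "$INPUT_FILE"   # input/file_7.txt
echo "$OUTPUT_FILE"  # output/file_7.txt
```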
&lt;br /&gt;
=== GPU Job ===&lt;br /&gt;
A Graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. &lt;br /&gt;
Create a script file named math.sh:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vim math.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
e=5&lt;br /&gt;
echo $e&lt;br /&gt;
(( e = e + 3 ))&lt;br /&gt;
echo $e&lt;br /&gt;
(( e=e+4 ))  # -- spaces or no spaces inside (( )), it doesn't matter&lt;br /&gt;
echo $e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make the script executable:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod +x math.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a job submission script file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vi math.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --share&lt;br /&gt;
#SBATCH --partition=pascalnodes&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
# Name your job to make it easier for you to track&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=math&lt;br /&gt;
#&lt;br /&gt;
# Set your error and output files&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --error=math.err&lt;br /&gt;
#SBATCH --output=math.out&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#&lt;br /&gt;
# Tell the scheduler we only need 10 minutes&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --time=00:10:00&lt;br /&gt;
#SBATCH --mem-per-cpu=256&lt;br /&gt;
#&lt;br /&gt;
# Set your email address and request notification when your job is complete or if it fails&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
./math.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the batch script to the Slurm scheduler:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch math.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can also request GPUs on the cluster interactively, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sinteractive --ntasks=1 --time=00:10:00 --exclusive --partition=pascalnodes -N2 --gres=gpu:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Job (with MPI) ===&lt;br /&gt;
As mentioned above, MPI is used to coordinate the activity of many computations occurring in parallel. It is commonly used in simulation software for molecular dynamics, fluid dynamics, and similar domains where there is significant communication (data) exchanged between cooperating processes.&lt;br /&gt;
&lt;br /&gt;
An example of a GPU job with MPI can be found by visiting [https://gitlab.rc.uab.edu/wsmonroe/horovod-environment/blob/master/README.md this link].&lt;br /&gt;
&lt;br /&gt;
Be sure to request the appropriate amount of GPU resources for your job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sinteractive --ntasks=8 --time=08:00:00 --exclusive --partition=pascalnodes -N2 --gres=gpu:4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Singularity Container ===&lt;br /&gt;
Singularity is designed so that you can use it within SLURM jobs without violating the security constraints of the cluster. Singularity was built with HPC in mind, i.e. a shared environment. Using a Singularity container in a SLURM job script is straightforward: the container runs as a process on the host machine, just like any other command in a batch script. You just need to load Singularity in your job script and run your command via a singularity process. Here's an example job script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=test-singularity&lt;br /&gt;
#SBATCH --output=res.out&lt;br /&gt;
#SBATCH --error=res.err&lt;br /&gt;
#&lt;br /&gt;
# Number of tasks needed for this job. Generally, used with MPI jobs&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#&lt;br /&gt;
# Time format = HH:MM:SS, DD-HH:MM:SS&lt;br /&gt;
#SBATCH --time=10:00&lt;br /&gt;
#&lt;br /&gt;
# Number of CPUs allocated to each task. &lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#&lt;br /&gt;
# Minimum memory required per allocated CPU, in megabytes. &lt;br /&gt;
#SBATCH --mem-per-cpu=100&lt;br /&gt;
#&lt;br /&gt;
# Send mail to the email address when the job fails&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
#Set your environment here&lt;br /&gt;
module load Singularity/2.5.2-GCC-5.4.0-2.26&lt;br /&gt;
&lt;br /&gt;
#Run your singularity or any other commands here&lt;br /&gt;
singularity exec -B /data/user/$USER /data/user/$USER/rc-training-sessions/neurodebian-neurodebian-master-latest.simg dcm2nii PATH_TO_YOUR_DICOM_FILES&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For [https://gitlab.rc.uab.edu/rc-training-sessions/singularity_containers a more comprehensive introduction please see this tutorial].&lt;br /&gt;
&lt;br /&gt;
== Installed Software ==&lt;br /&gt;
&lt;br /&gt;
A partial list of installed software with additional instructions for their use is available on the [[Cheaha Software]] page.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Cheaha&amp;diff=6176</id>
		<title>Cheaha</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Cheaha&amp;diff=6176"/>
		<updated>2021-06-10T18:32:58Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Support */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
'''Cheaha''' is a campus resource dedicated to enhancing research computing productivity at UAB. [http://cheaha.uabgrid.uab.edu Cheaha] is managed by [http://www.uab.edu/it UAB Information Technology's Research Computing group (UAB ITRC)] and is available to members of the UAB community in need of increased computational capacity.  Cheaha supports [http://en.wikipedia.org/wiki/High-performance_computing high-performance computing (HPC)] and [http://en.wikipedia.org/wiki/High-throughput_computing high throughput computing (HTC)] paradigms.&lt;br /&gt;
&lt;br /&gt;
Cheaha provides users with a traditional command-line interactive environment with access to many scientific tools that can leverage its dedicated pool of local compute resources.  Alternately, users of graphical applications can start a [[Setting_Up_VNC_Session|cluster desktop]]. The local compute pool provides access to compute hardware based on the [http://en.wikipedia.org/wiki/X86_64 x86-64 64-bit architecture].  The compute resources are organized into a unified Research Computing System.  The compute fabric for this system is anchored by the Cheaha cluster, [[ Resources |a commodity cluster with approximately 2400 cores]] connected by low-latency Fourteen Data Rate (FDR) InfiniBand networks.  The compute nodes are backed by 6.6PB raw GPFS storage on DDN SFA12KX hardware, an additional 20TB available for home directories on a traditional Hitachi SAN, and other ancillary services. The compute nodes combine to provide over 110TFlops of dedicated computing power.   &lt;br /&gt;
&lt;br /&gt;
Cheaha is composed of resources that span data centers located in the UAB Shared Computing facility UAB 936 Building and the RUST Computer Center. Resource design and development is led by UAB IT Research Computing in open collaboration with community members. Operational [mailto:support@listserv.uab.edu support] is provided by UAB IT's Research Computing group.&lt;br /&gt;
&lt;br /&gt;
Cheaha is named in honor of [http://en.wikipedia.org/wiki/Cheaha_Mountain Cheaha Mountain], the highest peak in the state of Alabama.  Cheaha is a popular destination whose summit offers clear vistas of the surrounding landscape. (Cheaha Mountain photo-streams on [http://www.flickr.com/search/?q=cheaha Flickr] and [http://picasaweb.google.com/lh/view?q=cheaha&amp;amp;psc=G&amp;amp;filter=1# Picasa]).&lt;br /&gt;
&lt;br /&gt;
== Using ==&lt;br /&gt;
&lt;br /&gt;
=== Getting Started ===&lt;br /&gt;
&lt;br /&gt;
For information on getting an account, logging in, and running a job, please see [[Cheaha2_GettingStarted|Getting Started]].&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Research-computing-platform.png|right|thumb|450px|Logical Diagram of Cheaha Configuration]]&lt;br /&gt;
&lt;br /&gt;
=== 2005 ===&lt;br /&gt;
&lt;br /&gt;
In 2002 UAB was awarded an infrastructure development grant through the NSF EPSCoR program.  This led to the 2005 acquisition of a 64-node compute cluster with two AMD Opteron 242 1.6GHz CPUs per node (128 total cores).  This cluster was named Cheaha.  Cheaha expanded the compute capacity available at UAB and was the first general-access resource for the community. It led to expanded roles for UAB IT in research computing support through the development of the UAB Shared HPC Facility in BEC and provided further engagement in Globus-based grid computing resource development on campus via UABgrid and regionally via [http://www.suragrid.org SURAgrid].&lt;br /&gt;
&lt;br /&gt;
=== 2008 ===&lt;br /&gt;
&lt;br /&gt;
In 2008, UAB IT allocated money for hardware upgrades, which led to the acquisition in August 2008 of an additional 192 cores based on a Dell clustering solution with Intel quad-core E5450 3.0GHz CPUs. This upgrade migrated Cheaha's core infrastructure to the Dell blade clustering solution. It provided a three-fold increase in processor density over the original hardware and enabled more computing power to be located in the same physical space with room for expansion, an important consideration in light of the continued growth in processing demand.  This hardware represented a major technology upgrade that included space for additional expansion to address over-all capacity demand and enable resource reservation.  &lt;br /&gt;
&lt;br /&gt;
The 2008 upgrade began a continuous resource improvement plan that includes a phased development approach for Cheaha with on-going increases in capacity and feature enhancements being brought into production via an [http://projects.uabgrid.uab.edu/cheaha open community process].&lt;br /&gt;
&lt;br /&gt;
Software improvements rolled into the 2008 upgrade included grid computing services to access distributed compute resources and orchestrate jobs using the [http://www.gridway.org GridWay] meta-scheduler. An initial 10Gigabit Ethernet link establishing the UABgrid Research Network was designed to support high speed data transfers between clusters connected to this network.&lt;br /&gt;
&lt;br /&gt;
=== 2009 ===&lt;br /&gt;
&lt;br /&gt;
In 2009, annual investment funds were directed toward establishing a fully connected dual data rate Infiniband network between the compute nodes added in 2008 and laying the foundation for a research storage system with a 60TB DDN storage system accessed via the Lustre distributed file system.  The Infiniband and storage fabrics were designed to support significant increases in research data sets and their associated analytical demand.&lt;br /&gt;
&lt;br /&gt;
=== 2010 ===&lt;br /&gt;
&lt;br /&gt;
In 2010, UAB was awarded an NIH Small Instrumentation Grant (SIG) to further increase analytical and storage capacity.  The grant funds were combined with the annual investment funds, adding 576 cores (48 nodes) based on the Intel Westmere 2.66 GHz CPU, a quad data rate Infiniband fabric with 32 uplinks, an additional 120 TB of storage for the DDN fabric, and additional hardware to improve reliability. Further improvements to the research compute platform involved extending the UAB Research Network to link the BEC and RUST data centers and adding 20TB of user and ancillary services storage.&lt;br /&gt;
&lt;br /&gt;
=== 2012 ===&lt;br /&gt;
&lt;br /&gt;
In 2012, UAB IT Research Computing invested in the foundation hardware to expand long term storage and virtual machine capabilities with the acquisition of 12 Dell 720xd systems, each containing 16 cores, 96GB RAM, and 36TB of storage, creating a 192-core and 432TB virtual compute and storage fabric.&lt;br /&gt;
&lt;br /&gt;
Additional hardware investment by the School of Public Health's Section on Statistical Genetics added three 384GB large-memory nodes and an additional 48 cores to the QDR Infiniband fabric.&lt;br /&gt;
&lt;br /&gt;
=== 2013 ===&lt;br /&gt;
&lt;br /&gt;
In 2013, UAB IT Research Computing acquired an [http://blogs.uabgrid.uab.edu/jpr/2013/03/were-going-with-openstack/ OpenStack cloud and Ceph storage software fabric] through a partnership between Dell and Inktank in order to [http://dev.uabgrid.uab.edu extend cloud computing solutions] to the researchers at UAB and enhance the interfacing capabilities for HPC.&lt;br /&gt;
&lt;br /&gt;
=== 2015 === &lt;br /&gt;
&lt;br /&gt;
UAB IT received $500,000 from the university’s Mission Support Fund for a compute cluster seed expansion of 48 teraflops.  This added 936 cores across 40 nodes with 2x12 core 2.5 GHz Intel Xeon E5-2680 v3 compute nodes and FDR InfiniBand interconnect.&lt;br /&gt;
&lt;br /&gt;
UAB received a $500,000 grant from the Alabama Innovation Fund for a three petabyte research storage array. This funding with additional matching from UAB provided a multi-petabyte [https://en.wikipedia.org/wiki/IBM_General_Parallel_File_System GPFS] parallel file system to the cluster which went live in 2016.&lt;br /&gt;
&lt;br /&gt;
=== 2016 ===&lt;br /&gt;
&lt;br /&gt;
In 2016 UAB IT Research Computing received additional funding from the Deans of CAS, Engineering, and Public Health to grow the compute capacity provided by the prior year's seed funding.  This added additional compute nodes, providing researchers at UAB with 96 2x12-core (2304 cores total) 2.5 GHz Intel Xeon E5-2680 v3 compute nodes with FDR InfiniBand interconnect. Of the 96 compute nodes, 36 nodes have 128 GB RAM, 38 nodes have 256 GB RAM, and 14 nodes have 384 GB RAM. There are also four compute nodes with Intel Xeon Phi 7210 accelerator cards and four compute nodes with NVIDIA K80 GPUs. More information can be found at [[Resources]].&lt;br /&gt;
&lt;br /&gt;
In addition to the compute capacity, the six petabyte GPFS file system came online. This file system provided each user five terabytes of personal space, additional space for shared projects, and a greatly expanded scratch storage, all in a single file system.&lt;br /&gt;
&lt;br /&gt;
The 2015 and 2016 investments combined to provide a completely new core for the Cheaha cluster, allowing the retirement of earlier compute generations.&lt;br /&gt;
&lt;br /&gt;
== Grant and Publication Resources ==&lt;br /&gt;
&lt;br /&gt;
The following description may prove useful in summarizing the services available via Cheaha.  If you are using Cheaha for grant funded research please send information about your grant (funding source and grant number), a statement of intent for the research project, and a list of the applications you are using to UAB IT Research Computing.  If you are using Cheaha for exploratory research, please send a similar note on your research interest.  Finally, any publications that rely on computations performed on Cheaha should include a statement acknowledging the use of UAB Research Computing facilities in your research; see the suggested example below.  Please note, your acknowledgment may also need to include an additional statement acknowledging grant-funded hardware.  We also ask that you send us any references to publications based on your use of Cheaha compute resources.&lt;br /&gt;
&lt;br /&gt;
=== Description of Cheaha for Grants (short) ===&lt;br /&gt;
&lt;br /&gt;
UAB IT Research Computing maintains high performance compute and storage resources for investigators. The Cheaha compute cluster provides approximately 3744 CPU cores and 80 accelerators (including 72 NVIDIA P100 GPUs) interconnected via an InfiniBand network and provides over 572 TFLOP/s of aggregate theoretical peak performance. A high-performance 12PB (raw) GPFS storage system on DDN SFA12KX hardware is also connected to these compute nodes via the Infiniband fabric. An additional 20TB of traditional SAN storage is also available for home directories. This general access compute fabric is available to all UAB investigators.&lt;br /&gt;
&lt;br /&gt;
=== Description of Cheaha for Grants (Detailed) ===&lt;br /&gt;
&lt;br /&gt;
The Cyberinfrastructure supporting University of Alabama at Birmingham (UAB) investigators includes high performance computing clusters, storage, campus, statewide and regionally connected high-bandwidth networks, and conditioned space for hosting and operating HPC systems, research applications and network equipment. &lt;br /&gt;
&lt;br /&gt;
==== Cheaha HPC system ====&lt;br /&gt;
&lt;br /&gt;
Cheaha is a campus HPC resource dedicated to enhancing research computing productivity at UAB. Cheaha is managed by UAB Information Technology's Research Computing group (RC) and is available to members of the UAB community in need of increased computational capacity. Cheaha supports high-performance computing (HPC) and high throughput computing (HTC) paradigms. Cheaha is composed of resources that span data centers located in the UAB IT Data Centers in the 936 Building and the RUST Computer Center. Research Computing, in open collaboration with the campus research community, leads the design and development of these resources.&lt;br /&gt;
&lt;br /&gt;
==== Compute Resources ====&lt;br /&gt;
&lt;br /&gt;
The UAB Cheaha High Performance Computing environment includes a high performance cluster with approximately 3744 CPU cores, 18 GPU nodes, and large memory nodes. The compute nodes combine to provide over 572 TFLOP/s of dedicated computing power. The Ruffner OpenStack private cloud is available to develop and host scientific applications.&lt;br /&gt;
&lt;br /&gt;
==== Storage Resources ====&lt;br /&gt;
&lt;br /&gt;
The high performance compute nodes are backed by a replicated 6PB (12PB raw) high speed storage system with an Infiniband fabric. Additional storage tiers for project space and archive are also available.&lt;br /&gt;
&lt;br /&gt;
==== Network Resources ====&lt;br /&gt;
&lt;br /&gt;
The UAB Research Network is currently a dedicated 40Gbps optical link. The UAB LAN provides 1Gbps to the desktop and 10Gbps for instruments. &lt;br /&gt;
&lt;br /&gt;
The research network also includes a secure Science DMZ with data transfer nodes (DTNs) connected directly to the border router that provide a &amp;quot;friction-free&amp;quot; pathway to access external data repositories and other computational resources. &lt;br /&gt;
&lt;br /&gt;
UAB connects to the Internet2 high-speed research network at 100 Gbps via the University of Alabama System Regional Optical Network (UASRON). &lt;br /&gt;
&lt;br /&gt;
Globus technologies provide secure, reliable and fast data transfers.&lt;br /&gt;
&lt;br /&gt;
==== Personnel ====&lt;br /&gt;
&lt;br /&gt;
UAB IT Research Computing currently maintains a support staff of 10, led by the Assistant Vice President for Research Computing, that includes an HPC architect-manager, four software developers, two scientists, a system administrator, and a project coordinator.&lt;br /&gt;
&lt;br /&gt;
=== Acknowledgment in Publications ===&lt;br /&gt;
&lt;br /&gt;
To acknowledge the use of Cheaha for compute time in published work, please consider adding the following to the acknowledgements section of your publication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
The authors gratefully acknowledge the resources provided by the University of Alabama at Birmingham IT-Research Computing group for high performance computing (HPC) support and CPU time on the Cheaha compute cluster.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If Globus was used to transfer data to/from Cheaha, please consider adding the following to the acknowledgements section of your publication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
This work was supported in part by the National Science Foundation under Grant No. OAC-1541310, the University of Alabama at Birmingham, and the Alabama Innovation Fund. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the University of Alabama at Birmingham.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== System Profile ==&lt;br /&gt;
&lt;br /&gt;
=== Performance ===&lt;br /&gt;
{{CheahaTflops}}&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
The Cheaha Compute Platform includes three generations of commodity compute hardware, totaling 868 compute cores, 2.8TB of RAM, and over 200TB of storage.&lt;br /&gt;
&lt;br /&gt;
The hardware is grouped into generations designated gen1, gen2, and gen3 (oldest to newest). The following descriptions highlight the hardware profile for each generation. &lt;br /&gt;
&lt;br /&gt;
* Generation 1 (gen1) -- 64 2-CPU AMD 1.6 GHz compute nodes with Gigabit interconnect. This is the original hardware collection purchased with NSF EPSCoR funds in 2005, approx $150K. These nodes are sometimes called the &amp;quot;Verari&amp;quot; nodes. These nodes are tagged as &amp;quot;verari-compute-#-#&amp;quot; in the ROCKS naming convention.&lt;br /&gt;
* Generation 2 (gen2) -- 24 2x4 core (192 cores total) 3.0 GHz Intel compute nodes with dual data rate Infiniband interconnect and the initial high-perf storage implementation using 60TB DDN. This is the hardware collection purchased exclusively with the annual VPIT funds allocation, approx $150K/yr for the 2008 and 2009 fiscal years.  These nodes are sometimes confusingly called &amp;quot;cheaha2&amp;quot; or &amp;quot;cheaha&amp;quot; nodes. These nodes are tagged as &amp;quot;cheaha-compute-#-#&amp;quot; in the ROCKS naming convention. &lt;br /&gt;
* Generation 3 (gen3) -- 48 2x6 core (576 cores total) 2.66 GHz Intel compute nodes with quad data rate Infiniband, ScaleMP, and the high-perf storage build-out for capacity and redundancy with 120TB DDN. This is the hardware collection purchased with a combination of the NIH SIG funds and some of the 2010 annual VPIT investment. These nodes were given the code name &amp;quot;sipsey&amp;quot; and tagged as such in the node naming for the queue system. These nodes are tagged as &amp;quot;sipsey-compute-#-#&amp;quot; in the ROCKS naming convention. 16 of the gen3 nodes (sipsey-compute-0-1 thru sipsey-compute-0-16) were upgraded in 2014 from 48GB to 96GB of memory per node. &lt;br /&gt;
* Generation 4 (gen4) -- 3 16-core (48 cores total) compute nodes. This hardware collection was purchased by [http://www.soph.uab.edu/ssg/people/tiwari Dr. Hemant Tiwari of SSG]. These nodes were given the code name &amp;quot;ssg&amp;quot; and tagged as such in the node naming for the queue system. These nodes are tagged as &amp;quot;ssg-compute-0-#&amp;quot; in the ROCKS naming convention. &lt;br /&gt;
* Generation 6 (gen6) -- &lt;br /&gt;
** 44 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 128GB DDR4 RAM, FDR InfiniBand and 10GigE network cards (4 nodes with NVIDIA K80 GPUs and 4 nodes with Intel Xeon Phi 7120P accelerators)&lt;br /&gt;
** 38 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 256GB DDR4 RAM, FDR InfiniBand and 10GigE network cards&lt;br /&gt;
** 14 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 384GB DDR4 RAM, FDR InfiniBand and 10GigE network card&lt;br /&gt;
** FDR InfiniBand Switch&lt;br /&gt;
** 10Gigabit Ethernet Switch&lt;br /&gt;
** Management node and gigabit switch for cluster management&lt;br /&gt;
** Bright Advanced Cluster Management software licenses &lt;br /&gt;
&lt;br /&gt;
Summarized, Cheaha's compute pool includes:&lt;br /&gt;
* gen4 is 48 cores of [http://ark.intel.com/products/64583/Intel-Xeon-Processor-E5-2680-20M-Cache-2_70-GHz-8_00-GTs-Intel-QPI 2.70GHz eight-core Intel Xeon E5-2680 processors] with 24GB of RAM per core, or 384GB per node&lt;br /&gt;
* gen3 is 192 cores of [http://ark.intel.com/products/47922/Intel-Xeon-Processor-X5650-12M-Cache-2_66-GHz-6_40-GTs-Intel-QPI?q=x5650 2.67GHz six-core Intel Xeon X5650 processors] with 8GB RAM per core, or 96GB per node&lt;br /&gt;
* gen3 is 384 cores of [http://ark.intel.com/products/47922/Intel-Xeon-Processor-X5650-12M-Cache-2_66-GHz-6_40-GTs-Intel-QPI?q=x5650 2.67GHz six-core Intel Xeon X5650 processors] with 4GB RAM per core, or 48GB per node&lt;br /&gt;
* gen2 is 192 cores of [http://ark.intel.com/products/33083/Intel-Xeon-Processor-E5450-12M-Cache-3_00-GHz-1333-MHz-FSB 3.0GHz quad-core Intel Xeon E5450 processors] with 2GB RAM per core&lt;br /&gt;
* gen1 is 100 cores of 1.6GHz AMD Opteron 242 processors with 1GB RAM per core &lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ Physical Nodes&lt;br /&gt;
|- bgcolor=grey&lt;br /&gt;
!gen!!queue!!#nodes!!cores/node!!RAM/node&lt;br /&gt;
|-&lt;br /&gt;
|gen6|| default || 44 || 24 || 128G&lt;br /&gt;
|-&lt;br /&gt;
|gen6|| default || 38 || 24 || 256G&lt;br /&gt;
|-&lt;br /&gt;
|gen6|| default || 14 || 24 || 384G&lt;br /&gt;
|-&lt;br /&gt;
|gen5||Ceph/OpenStack|| 12 || 20 || 96G&lt;br /&gt;
|-&lt;br /&gt;
|gen4||ssg||3||16||384G&lt;br /&gt;
|-&lt;br /&gt;
|gen3||sipsey||16||12||96G&lt;br /&gt;
|-&lt;br /&gt;
|gen3||sipsey||32||12||48G&lt;br /&gt;
|-&lt;br /&gt;
|gen2||cheaha||24||8||16G&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Software ===&lt;br /&gt;
&lt;br /&gt;
Details of the software available on Cheaha can be found on the [https://docs.uabgrid.uab.edu/wiki/Cheaha_Software Installed software page], an overview follows.&lt;br /&gt;
&lt;br /&gt;
Cheaha uses [http://modules.sourceforge.net/ Environment Modules] to support account configuration. Please follow these [http://me.eng.uab.edu/wiki/index.php?title=Cheaha#Environment_Modules specific steps for using environment modules].&lt;br /&gt;
&lt;br /&gt;
Cheaha's software stack is built with the [http://www.brightcomputing.com Bright Cluster Manager]. Cheaha's operating system is CentOS with the following major cluster components:&lt;br /&gt;
* BrightCM 7.2&lt;br /&gt;
* CentOS 7.2 x86_64&lt;br /&gt;
* [[Slurm]] 15.08&lt;br /&gt;
&lt;br /&gt;
A brief summary of some of the available computational software and tools includes:&lt;br /&gt;
* Amber&lt;br /&gt;
* FFTW&lt;br /&gt;
* Gromacs&lt;br /&gt;
* GSL&lt;br /&gt;
* NAMD&lt;br /&gt;
* VMD&lt;br /&gt;
* Intel Compilers&lt;br /&gt;
* GNU Compilers&lt;br /&gt;
* Java&lt;br /&gt;
* R&lt;br /&gt;
* OpenMPI&lt;br /&gt;
* MATLAB&lt;br /&gt;
&lt;br /&gt;
=== Network ===&lt;br /&gt;
&lt;br /&gt;
Cheaha is connected to the UAB Research Network which provides a dedicated 10Gbps networking backplane between clusters located in the 936 data center and the campus network core.  Data transfer rates of almost 8Gbps between these hosts have been demonstrated using GridFTP, a multi-channel file transfer service that is used to move data between clusters as part of job management operations.  This performance promises very efficient job management and the seamless integration of other clusters as connectivity to the research network is expanded.&lt;br /&gt;
&lt;br /&gt;
=== Benchmarks ===&lt;br /&gt;
&lt;br /&gt;
The continuous resource improvement process involves collecting benchmarks of the system.  One of the measures of greatest interest to users of the system is benchmarks of specific application codes.  The following benchmarks have been performed on the system and will be further expanded as additional benchmarks are performed.&lt;br /&gt;
&lt;br /&gt;
* [[Cheaha-BGL_Comparison|Cheaha-BGL Comparison]]&lt;br /&gt;
&lt;br /&gt;
* [[Gromacs_Benchmark|Gromacs]]&lt;br /&gt;
&lt;br /&gt;
* [[NAMD_Benchmarks|NAMD]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Usage Statistics ===&lt;br /&gt;
&lt;br /&gt;
Cheaha uses Bright Cluster Manager to report cluster performance data. This information provides a helpful overview of the current and historical operating stats for Cheaha.  You can access the status monitoring page [https://cheaha-master01.rc.uab.edu/userportal/ here] (accessible only on the UAB network or through VPN).&lt;br /&gt;
&lt;br /&gt;
== Availability ==&lt;br /&gt;
&lt;br /&gt;
Cheaha is a general-purpose computer resource made available to the UAB community by UAB IT.  As such, it is available for legitimate research and educational needs and is governed by [http://www.uabgrid.uab.edu/aup UAB's Acceptable Use Policy (AUP)] for computer resources.  &lt;br /&gt;
&lt;br /&gt;
Many software packages commonly used across UAB are available via Cheaha.&lt;br /&gt;
&lt;br /&gt;
To request access to Cheaha, please [mailto:support@listserv.uab.edu send a request] to the cluster support group.&lt;br /&gt;
&lt;br /&gt;
Cheaha's intended use implies broad access to the community; however, no guarantees are made that specific computational resources will be available to all users.  Availability guarantees can only be made for reserved resources.&lt;br /&gt;
&lt;br /&gt;
=== Secure Shell Access ===&lt;br /&gt;
&lt;br /&gt;
Please configure your client secure shell software to use the official host name to access Cheaha:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Scheduling Framework ==&lt;br /&gt;
&lt;br /&gt;
[http://slurm.schedmd.com/ Slurm] is a queue management system; the name stands for Simple Linux Utility for Resource Management. Slurm was developed at the Lawrence Livermore National Lab and currently runs some of the largest compute clusters in the world. '''[[Slurm]]''' is now the primary job manager on Cheaha; it replaces Sun Grid Engine (SGE), the job manager used earlier.&lt;br /&gt;
&lt;br /&gt;
Slurm is similar in many ways to GridEngine or most other queue systems. You write a batch script then submit it to the queue manager (scheduler). The queue manager then schedules your job to run on the queue (or '''partition''' in Slurm parlance) that you designate. Below we will provide an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress.&lt;br /&gt;
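&lt;br /&gt;
By way of illustration, a minimal batch script has the following shape. This is a generic sketch, not a Cheaha-specific recommendation: the partition name and resource values are placeholders, and the #SBATCH lines are comments to bash that only the scheduler reads.&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
# Scheduler directives; bash treats these lines as comments.
#SBATCH --job-name=hello        # name shown by squeue
#SBATCH --partition=express     # target partition (placeholder)
#SBATCH --ntasks=1
#SBATCH --time=00:05:00         # HH:MM:SS wall-clock limit
#SBATCH --mem-per-cpu=256       # megabytes per CPU

# The actual work. Because the #SBATCH lines are comments, the
# script also runs unchanged as an ordinary shell script.
MSG="hello from $(hostname)"
echo "$MSG"
```
Submitted with sbatch, the job's standard output is written by default to a file named slurm-&amp;lt;jobid&amp;gt;.out in the submission directory.&lt;br /&gt;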
&lt;br /&gt;
== Support ==&lt;br /&gt;
&lt;br /&gt;
Operational support for Cheaha is provided by the Research Computing group in UAB IT.  For questions regarding the operational status of Cheaha please send your request to [mailto:support@listserv.uab.edu support@listserv.uab.edu]. For more details on optimizing your support experience, please see [[Support email]]. As a user of Cheaha you will automatically be subscribed to the hpc-announce email list.  This subscription is mandatory for all users of Cheaha.  It is our way of communicating important information regarding Cheaha to you.  The traffic on this list is restricted to official communication and has a very low volume.&lt;br /&gt;
&lt;br /&gt;
We have limited capacity, however, to support non-operational issues such as &amp;quot;How do I write a job script?&amp;quot; or &amp;quot;How do I compile a program?&amp;quot;. For such requests, you may find it more fruitful to send your questions to the hpc-users email list and ask for help from your peers in the HPC community at UAB. As with all mailing lists, please observe [http://lifehacker.com/5473859/basic-etiquette-for-email-lists-and-forums common mailing-list etiquette].&lt;br /&gt;
&lt;br /&gt;
Finally, please remember that as you learned about HPC from others, it is part of your responsibility to help others in turn. You can do this by updating this documentation or responding to requests on the mailing lists.&lt;br /&gt;
&lt;br /&gt;
You can subscribe to hpc-users by sending an email to:&lt;br /&gt;
&lt;br /&gt;
[mailto:sympa@vo.uabgrid.uab.edu?subject=subscribe%20hpc-users  sympa@vo.uabgrid.uab.edu with the subject ''subscribe hpc-users''].&lt;br /&gt;
&lt;br /&gt;
You can unsubscribe from hpc-users by sending an email to:&lt;br /&gt;
&lt;br /&gt;
[mailto:sympa@vo.uabgrid.uab.edu?subject=unsubscribe%20hpc-users  sympa@vo.uabgrid.uab.edu with the subject ''unsubscribe hpc-users''].&lt;br /&gt;
&lt;br /&gt;
You can review the list archives in the [http://vo.uabgrid.uab.edu/sympa/arc/hpc-users hpc-users web archives].&lt;br /&gt;
&lt;br /&gt;
If you need help using the list service please send an email to:&lt;br /&gt;
&lt;br /&gt;
[mailto:sympa@vo.uabgrid.uab.edu?subject=help sympa@vo.uabgrid.uab.edu with the subject ''help'']&lt;br /&gt;
&lt;br /&gt;
If you have questions about the operation of the list itself, please send an email to the owners of the list:&lt;br /&gt;
&lt;br /&gt;
[mailto:hpc-users-request@vo.uabgrid.uab.edu hpc-users-request@vo.uabgrid.uab.edu with a subject relevant to your issue with the list]&lt;br /&gt;
&lt;br /&gt;
If you are interested in contributing to the enhancement of HPC features at UAB or would like to talk to other cluster administrators, [mailto:sympa@vo.uabgrid.uab.edu?subject=subscribe%20hpc-dev please join the hpc developers community at UAB].&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Support_email&amp;diff=6175</id>
		<title>Support email</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Support_email&amp;diff=6175"/>
		<updated>2021-06-02T17:47:53Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Further clarifications&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Creating and Managing Support Request Tickets =&lt;br /&gt;
&lt;br /&gt;
Research Computing provides helpful email-based shortcuts for creating and managing support requests.&lt;br /&gt;
&lt;br /&gt;
== Creating a Support Request Ticket ==&lt;br /&gt;
&lt;br /&gt;
Send an email to [mailto:support@listserv.uab.edu support@listserv.uab.edu] to create the ticket. Every email sent to this address will create a new ticket, so please do not include it in email conversations. Please include all relevant information that can help us work as quickly and efficiently as possible to ensure speedy resolution. More information is available at [[#How_to_Get_Support_More_Quickly|How to Get Support More Quickly]].&lt;br /&gt;
&lt;br /&gt;
If you want other users to be able to see and add comments to your ticket, please list their email addresses in the body of your request and state that you want them added to the watch list. If they are unfamiliar with the process of creating tickets feel free to send them the URL to this page.&lt;br /&gt;
&lt;br /&gt;
Within a few minutes you should receive two emails. One email will come from AskIT@uab.edu with the subject line &amp;quot;Service Request Opened&amp;quot;. This email is sent for all tickets created with UAB IT and is for your records. It contains a link to the ticket in our ServiceNow application with the form &amp;quot;RITM#######&amp;quot;, or RITM followed by 7 digits.&lt;br /&gt;
&lt;br /&gt;
The second email will come from support-watch@listserv.uab.edu on behalf of AskIT@uab.edu. This email is only sent for tickets created by sending an email to support@listserv.uab.edu. The subject line of this email will include &amp;quot;Research Computing request opened --- RITM#######&amp;quot; and will also contain the link to the ticket. Replying to this email will add comments to the ticket and generate a new email sent to all emails on the watch list. Replying to those new emails will have the same effect.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''' Please do not send multiple emails to support@listserv.uab.edu or add it to reply-all conversations. Every email will create a new ticket, which can increase the time it takes to resolve your request.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''' When replying to support-watch@listserv.uab.edu emails, please delete the old contents of the email before composing your reply. Otherwise, the comment length will quickly grow out of control, which can increase the time it takes to resolve your request.&lt;br /&gt;
&lt;br /&gt;
== An Example Workflow ==&lt;br /&gt;
&lt;br /&gt;
# You encounter an issue.&lt;br /&gt;
# You send an email to support@listserv.uab.edu from abc@uab.edu, with &amp;quot;Please add xyz@uab.edu to the watch list.&amp;quot;&lt;br /&gt;
# Email sent to abc@uab.edu from AskIT@uab.edu&lt;br /&gt;
# Email with original request text sent to abc@uab.edu from support-watch@listserv.uab.edu.&lt;br /&gt;
# We add xyz@uab.edu to watch list.&lt;br /&gt;
# You reply to support-watch email to add comments, or click the RITM####### link to add comments via the ServiceNow portal.&lt;br /&gt;
# Email with new comments sent to abc@uab.edu and xyz@uab.edu from support-watch@listserv.uab.edu.&lt;br /&gt;
# XYZ replies to new support-watch email to add comments.&lt;br /&gt;
# Repeat...&lt;br /&gt;
&lt;br /&gt;
= How to Get Support More Quickly =&lt;br /&gt;
&lt;br /&gt;
Please remember that our mission is to facilitate your research. We want to help you as effectively and efficiently as possible. To best achieve this we may have to ask technical questions. Many of these questions are very common and will be asked for most support requests. To save you time, please answer the most common questions listed below, arranged by type of request.&lt;br /&gt;
&lt;br /&gt;
== General Issues ==&lt;br /&gt;
&lt;br /&gt;
To have your request resolved as soon as possible, please answer the following questions:&lt;br /&gt;
&lt;br /&gt;
# What do you want to achieve, at a high level?&lt;br /&gt;
# What steps were taken?&lt;br /&gt;
# What was expected?&lt;br /&gt;
# What actually happened?&lt;br /&gt;
# How were you accessing the cluster? Web Portal, SSH, VNC, etc.?&lt;br /&gt;
# What software were you using? Please be as specific as possible. The command `module list` can be helpful.&lt;br /&gt;
&lt;br /&gt;
We also encourage including relevant error message text, using copy and paste, and any screenshots of the issue.&lt;br /&gt;
&lt;br /&gt;
== Outages ==&lt;br /&gt;
&lt;br /&gt;
Outages can affect individual nodes or the entire cluster. Please do not hesitate to report outages and please don't assume other users have already reported an outage.&lt;br /&gt;
&lt;br /&gt;
# What part of the cluster is affected? Please list any relevant affected nodes or other hardware that is not accessible. If you are unable to access the cluster please state that instead.&lt;br /&gt;
# What were you working on when you noticed the outage?&lt;br /&gt;
# How were you accessing the cluster? Web Portal, SSH, VNC, etc.?&lt;br /&gt;
&lt;br /&gt;
== Software Requests ==&lt;br /&gt;
&lt;br /&gt;
We are happy to install new software or new versions of existing software on Cheaha. First please check if the software you need is available as a module on the system using the command below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;module spider &amp;lt;software name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you can't find the software as a module, or the version you need is not listed, please be aware that much free and open-source software (FOSS) is available through [[Anaconda]], especially genomics software. Anaconda environments are shareable and reproducible, so please consider using one if possible. You can search for packages at [https://anaconda.org/ https://anaconda.org/].&lt;br /&gt;
&lt;br /&gt;
If you are unable to find the software you need please make a request by answering the following questions:&lt;br /&gt;
&lt;br /&gt;
# What is the name of the software you need?&lt;br /&gt;
# What version of the software do you need?&lt;br /&gt;
# Where can we find the software? If you aren't sure, we may have additional questions if the name turns out to be ambiguous.&lt;br /&gt;
# Does your software require a license?&lt;br /&gt;
# What is your overall goal? Sometimes we can help you find alternatives that may better suit your needs.&lt;br /&gt;
&lt;br /&gt;
We are happy to install existing paid software on Cheaha. We are also happy to set up existing licenses to work with software on Cheaha.&lt;br /&gt;
&lt;br /&gt;
== Project Requests ==&lt;br /&gt;
&lt;br /&gt;
Projects are virtual spaces with controlled access for storing project-specific code and data. Projects are useful for collaborating with students, staff and faculty on a shared goal or within a lab. All projects must have an owner and should be related to a legitimate research need.&lt;br /&gt;
&lt;br /&gt;
The intended project owner should send requests relating to projects. If another user sends an email, we will require written approval from the project owner before making changes. Please be mindful that the overall storage space is limited.&lt;br /&gt;
&lt;br /&gt;
=== New Projects ===&lt;br /&gt;
&lt;br /&gt;
# What is the purpose of the project space?&lt;br /&gt;
# What should we name the project? Short, descriptive, memorable names work best. The name will be used as the project folder name in our file system, so alphanumeric, underscore and dash characters only please.&lt;br /&gt;
# Who should have access? Please provide a list of blazerids. The intended project owner will always have access.&lt;br /&gt;
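As a quick sanity check on a proposed name, the character rule above (alphanumeric, underscore, dash only) can be expressed as the regular expression ^[A-Za-z0-9_-]+$. The helper function and example names below are hypothetical:&lt;br /&gt;

```shell
# Hypothetical helper: returns success if the proposed project name
# contains only alphanumeric, underscore, and dash characters.
is_valid_project_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]+$'
}

is_valid_project_name "my_lab-2021" && echo "valid"     # allowed characters
is_valid_project_name "my lab 2021" || echo "invalid"   # spaces are not allowed
```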
&lt;br /&gt;
=== Access Management ===&lt;br /&gt;
&lt;br /&gt;
# What is the name of the project?&lt;br /&gt;
# Who should be given access?&lt;br /&gt;
# Who should no longer have access?&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
# What is the name of the project?&lt;br /&gt;
# How much additional or total space will be needed? Please be mindful of our limited shared storage when making this request.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Support_email&amp;diff=6174</id>
		<title>Support email</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Support_email&amp;diff=6174"/>
		<updated>2021-06-02T17:40:17Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Clarified how support email works, do not reply-all&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Creating and Managing Support Request Tickets =&lt;br /&gt;
&lt;br /&gt;
Research Computing provides helpful email-based shortcuts for creating and managing support requests.&lt;br /&gt;
&lt;br /&gt;
== Creating a Support Request Ticket ==&lt;br /&gt;
&lt;br /&gt;
Send an email to [mailto:support@listserv.uab.edu support@listserv.uab.edu] to create the ticket. Every email sent to this address will create a new ticket, so please do not include it in email conversations. Please include all relevant information that can help us work as quickly and efficiently as possible to ensure speedy resolution. More information is available at [[#How_to_Get_Support_More_Quickly|How to Get Support More Quickly]].&lt;br /&gt;
&lt;br /&gt;
If you want other users to be able to see and add comments to your ticket, please list their email addresses in the body of your request and state that you want them added to the watch list. If they are unfamiliar with the process of creating tickets feel free to send them the URL to this page.&lt;br /&gt;
&lt;br /&gt;
Within a few minutes you should receive two emails. One email will come from AskIT@uab.edu with the subject line &amp;quot;Service Request Opened&amp;quot;. This email is sent for all tickets created with UAB IT and is for your records. It contains a link to the ticket in our ServiceNow application with the form &amp;quot;RITM#######&amp;quot;, or RITM followed by 7 digits.&lt;br /&gt;
&lt;br /&gt;
The second email will come from support-watch@listserv.uab.edu on behalf of AskIT@uab.edu. This email is only sent for tickets created by sending an email to support@listserv.uab.edu. The subject line of this email will include &amp;quot;Research Computing request opened --- RITM#######&amp;quot; and will also contain the link to the ticket. Replying to this email will add comments to the ticket and generate a new email sent to all emails on the watch list. Replying to those new emails will have the same effect.&lt;br /&gt;
&lt;br /&gt;
== An Example Workflow ==&lt;br /&gt;
&lt;br /&gt;
# You encounter an issue.&lt;br /&gt;
# You send an email to support@listserv.uab.edu from abc@uab.edu, with &amp;quot;Please add xyz@uab.edu to the watch list.&amp;quot;&lt;br /&gt;
# Email sent to abc@uab.edu from AskIT@uab.edu&lt;br /&gt;
# Email with original request text sent to abc@uab.edu from support-watch@listserv.uab.edu.&lt;br /&gt;
# We add xyz@uab.edu to watch list.&lt;br /&gt;
# You reply to support-watch email to add comments, or click the RITM####### link to add comments via the ServiceNow portal.&lt;br /&gt;
# Email with new comments sent to abc@uab.edu and xyz@uab.edu from support-watch@listserv.uab.edu.&lt;br /&gt;
# XYZ replies to new support-watch email to add comments.&lt;br /&gt;
# Repeat...&lt;br /&gt;
&lt;br /&gt;
= How to Get Support More Quickly =&lt;br /&gt;
&lt;br /&gt;
Please remember that our mission is to facilitate your research. We want to help you as effectively and efficiently as possible. To best achieve this we may have to ask technical questions. Many of these questions are very common and will be asked for most support requests. To save you time, please answer the most common questions listed below, arranged by type of request.&lt;br /&gt;
&lt;br /&gt;
== General Issues ==&lt;br /&gt;
&lt;br /&gt;
To have your request resolved as soon as possible, please answer the following questions:&lt;br /&gt;
&lt;br /&gt;
# What do you want to achieve, at a high level?&lt;br /&gt;
# What steps were taken?&lt;br /&gt;
# What was expected?&lt;br /&gt;
# What actually happened?&lt;br /&gt;
# How were you accessing the cluster? Web Portal, SSH, VNC, etc.?&lt;br /&gt;
# What software were you using? Please be as specific as possible. The command `module list` can be helpful.&lt;br /&gt;
&lt;br /&gt;
We also encourage including relevant error message text, using copy and paste, and any screenshots of the issue.&lt;br /&gt;
&lt;br /&gt;
== Outages ==&lt;br /&gt;
&lt;br /&gt;
Outages can affect individual nodes or the entire cluster. Please do not hesitate to report outages and please don't assume other users have already reported an outage.&lt;br /&gt;
&lt;br /&gt;
# What part of the cluster is affected? Please list any relevant affected nodes or other hardware that is not accessible. If you are unable to access the cluster please state that instead.&lt;br /&gt;
# What were you working on when you noticed the outage?&lt;br /&gt;
# How were you accessing the cluster? Web Portal, SSH, VNC, etc.?&lt;br /&gt;
&lt;br /&gt;
== Software Requests ==&lt;br /&gt;
&lt;br /&gt;
We are happy to install new software or new versions of existing software on Cheaha. First please check if the software you need is available as a module on the system using the command below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;module spider &amp;lt;software name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you can't find the software as a module, or the version you need is not listed, please be aware that much free and open-source software (FOSS) is available through [[Anaconda]], especially genomics software. Anaconda environments are shareable and reproducible, so please consider using one if possible. You can search for packages at [https://anaconda.org/ https://anaconda.org/].&lt;br /&gt;
&lt;br /&gt;
If you are unable to find the software you need please make a request by answering the following questions:&lt;br /&gt;
&lt;br /&gt;
# What is the name of the software you need?&lt;br /&gt;
# What version of the software do you need?&lt;br /&gt;
# Where can we find the software? If you aren't sure, we may have additional questions if the name turns out to be ambiguous.&lt;br /&gt;
# Does your software require a license?&lt;br /&gt;
# What is your overall goal? Sometimes we can help you find alternatives that may better suit your needs.&lt;br /&gt;
&lt;br /&gt;
We are happy to install existing paid software on Cheaha. We are also happy to set up existing licenses to work with software on Cheaha.&lt;br /&gt;
&lt;br /&gt;
== Project Requests ==&lt;br /&gt;
&lt;br /&gt;
Projects are virtual spaces with controlled access for storing project-specific code and data. Projects are useful for collaborating with students, staff and faculty on a shared goal or within a lab. All projects must have an owner and should be related to a legitimate research need.&lt;br /&gt;
&lt;br /&gt;
The intended project owner should send requests relating to projects. If another user sends an email, we will require written approval from the project owner before making changes. Please be mindful that the overall storage space is limited.&lt;br /&gt;
&lt;br /&gt;
=== New Projects ===&lt;br /&gt;
&lt;br /&gt;
# What is the purpose of the project space?&lt;br /&gt;
# What should we name the project? Short, descriptive, memorable names work best. The name will be used as the project folder name in our file system, so alphanumeric, underscore and dash characters only please.&lt;br /&gt;
# Who should have access? Please provide a list of blazerids. The intended project owner will always have access.&lt;br /&gt;
&lt;br /&gt;
=== Access Management ===&lt;br /&gt;
&lt;br /&gt;
# What is the name of the project?&lt;br /&gt;
# Who should be given access?&lt;br /&gt;
# Who should no longer have access?&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
# What is the name of the project?&lt;br /&gt;
# How much additional or total space will be needed? Please be mindful of our limited shared storage when making this request.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Support&amp;diff=6173</id>
		<title>Support</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Support&amp;diff=6173"/>
		<updated>2021-06-02T17:35:33Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Redirect to support email&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Support_email]]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Support_email&amp;diff=6172</id>
		<title>Support email</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Support_email&amp;diff=6172"/>
		<updated>2021-06-02T17:34:27Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Page about using support emails effectively&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Creating and Managing Support Request Tickets =&lt;br /&gt;
&lt;br /&gt;
Research Computing provides helpful email-based shortcuts for creating and managing support requests.&lt;br /&gt;
&lt;br /&gt;
== Creating a Support Request Ticket ==&lt;br /&gt;
&lt;br /&gt;
Send an email to [mailto:support@listserv.uab.edu support@listserv.uab.edu] to create the ticket. Please include all relevant information that can help us work as quickly and efficiently as possible to ensure speedy resolution. More information is available at [[#How_to_Get_Support_More_Quickly|How to Get Support More Quickly]].&lt;br /&gt;
&lt;br /&gt;
If you want other users to be able to see and add comments to your ticket, please list their email addresses in the body of your request and state that you want them added to the watch list. If they are unfamiliar with the process of creating tickets feel free to send them the URL to this page.&lt;br /&gt;
&lt;br /&gt;
Within a few minutes you should receive two emails. One email will come from AskIT@uab.edu with the subject line &amp;quot;Service Request Opened&amp;quot;. This email is sent for all tickets created with UAB IT and is for your records. It contains a link to the ticket in our ServiceNow application with the form &amp;quot;RITM#######&amp;quot;, or RITM followed by 7 digits.&lt;br /&gt;
&lt;br /&gt;
The second email will come from support-watch@listserv.uab.edu on behalf of AskIT@uab.edu. This email is only sent for tickets created by sending an email to support@listserv.uab.edu. The subject line of this email will include &amp;quot;Research Computing request opened --- RITM#######&amp;quot; and will also contain the link to the ticket. Replying to this email will add comments to the ticket and generate a new email sent to all emails on the watch list. Replying to those new emails will have the same effect.&lt;br /&gt;
&lt;br /&gt;
== An Example Workflow ==&lt;br /&gt;
&lt;br /&gt;
# You encounter an issue.&lt;br /&gt;
# You send an email to support@listserv.uab.edu from abc@uab.edu, with &amp;quot;Please add xyz@uab.edu to the watch list.&amp;quot;&lt;br /&gt;
# Email sent to abc@uab.edu from AskIT@uab.edu&lt;br /&gt;
# Email with original request text sent to abc@uab.edu from support-watch@listserv.uab.edu.&lt;br /&gt;
# We add xyz@uab.edu to watch list.&lt;br /&gt;
# You reply to support-watch email to add comments, or click the RITM####### link to add comments via the ServiceNow portal.&lt;br /&gt;
# Email with new comments sent to abc@uab.edu and xyz@uab.edu from support-watch@listserv.uab.edu.&lt;br /&gt;
# XYZ replies to new support-watch email to add comments.&lt;br /&gt;
# Repeat...&lt;br /&gt;
&lt;br /&gt;
= How to Get Support More Quickly =&lt;br /&gt;
&lt;br /&gt;
Please remember that our mission is to facilitate your research. We want to help you as effectively and efficiently as possible. To best achieve this we may have to ask technical questions. Many of these questions are very common and will be asked for most support requests. To save you time, please answer the most common questions listed below, arranged by type of request.&lt;br /&gt;
&lt;br /&gt;
== General Issues ==&lt;br /&gt;
&lt;br /&gt;
To have your request resolved as soon as possible, please answer the following questions:&lt;br /&gt;
&lt;br /&gt;
# What do you want to achieve, at a high level?&lt;br /&gt;
# What steps were taken?&lt;br /&gt;
# What was expected?&lt;br /&gt;
# What actually happened?&lt;br /&gt;
# How were you accessing the cluster? Web Portal, SSH, VNC, etc.?&lt;br /&gt;
# What software were you using? Please be as specific as possible. The command `module list` can be helpful.&lt;br /&gt;
&lt;br /&gt;
We also encourage including relevant error message text, using copy and paste, and any screenshots of the issue.&lt;br /&gt;
&lt;br /&gt;
== Outages ==&lt;br /&gt;
&lt;br /&gt;
Outages can affect individual nodes or the entire cluster. Please do not hesitate to report outages and please don't assume other users have already reported an outage.&lt;br /&gt;
&lt;br /&gt;
# What part of the cluster is affected? Please list any relevant affected nodes or other hardware that is not accessible. If you are unable to access the cluster please state that instead.&lt;br /&gt;
# What were you working on when you noticed the outage?&lt;br /&gt;
# How were you accessing the cluster? Web Portal, SSH, VNC, etc.?&lt;br /&gt;
&lt;br /&gt;
== Software Requests ==&lt;br /&gt;
&lt;br /&gt;
We are happy to install new software or new versions of existing software on Cheaha. First please check if the software you need is available as a module on the system using the command below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;module spider &amp;lt;software name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you can't find the software as a module, or the version you need is not listed, please be aware that much free and open-source software (FOSS) is available through [[Anaconda]], especially genomics software. Anaconda environments are shareable and reproducible, so please consider using one if possible. You can search for packages at [https://anaconda.org/ https://anaconda.org/].&lt;br /&gt;
&lt;br /&gt;
If you are unable to find the software you need please make a request by answering the following questions:&lt;br /&gt;
&lt;br /&gt;
# What is the name of the software you need?&lt;br /&gt;
# What version of the software do you need?&lt;br /&gt;
# Where can we find the software? If you aren't sure, we may have additional questions if the name turns out to be ambiguous.&lt;br /&gt;
# Does your software require a license?&lt;br /&gt;
# What is your overall goal? Sometimes we can help you find alternatives that may better suit your needs.&lt;br /&gt;
&lt;br /&gt;
We are happy to install existing paid software on Cheaha. We are also happy to set up existing licenses to work with software on Cheaha.&lt;br /&gt;
&lt;br /&gt;
== Project Requests ==&lt;br /&gt;
&lt;br /&gt;
Projects are virtual spaces with controlled access for storing project-specific code and data. Projects are useful for collaborating with students, staff and faculty on a shared goal or within a lab. All projects must have an owner and should be related to a legitimate research need.&lt;br /&gt;
&lt;br /&gt;
The intended project owner should send requests relating to projects. If another user sends an email, we will require written approval from the project owner before making changes. Please be mindful that the overall storage space is limited.&lt;br /&gt;
&lt;br /&gt;
=== New Projects ===&lt;br /&gt;
&lt;br /&gt;
# What is the purpose of the project space?&lt;br /&gt;
# What should we name the project? Short, descriptive, memorable names work best. The name will be used as the project folder name in our file system, so alphanumeric, underscore and dash characters only please.&lt;br /&gt;
# Who should have access? Please provide a list of blazerids. The intended project owner will always have access.&lt;br /&gt;
&lt;br /&gt;
=== Access Management ===&lt;br /&gt;
&lt;br /&gt;
# What is the name of the project?&lt;br /&gt;
# Who should be given access?&lt;br /&gt;
# Who should no longer have access?&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
# What is the name of the project?&lt;br /&gt;
# How much additional or total space will be needed? Please be mindful of our limited shared storage when making this request.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Anaconda&amp;diff=6163</id>
		<title>Anaconda</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Anaconda&amp;diff=6163"/>
		<updated>2021-03-24T19:50:45Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Moving conda directory */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://conda.io/docs/user-guide/overview.html Conda] is a powerful package manager and environment manager. Conda allows you to maintain distinct environments for your different projects, with dependency packages defined and installed for each project.&lt;br /&gt;
&lt;br /&gt;
===Creating a Conda virtual environment===&lt;br /&gt;
First, direct conda to store files in $USER_DATA to avoid filling up $HOME. Create the '''$HOME/.condarc''' file by running the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;lt;&amp;lt; &amp;quot;EOF&amp;quot; &amp;gt; ~/.condarc&lt;br /&gt;
pkgs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/pkgs&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/envs&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Load one of the Anaconda modules available on Cheaha (note: starting with release 2018.12, Anaconda versions use a YYYY.MM format):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module -t avail Anaconda&lt;br /&gt;
...&lt;br /&gt;
Anaconda3/5.3.0&lt;br /&gt;
Anaconda3/5.3.1&lt;br /&gt;
Anaconda3/2019.10&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load Anaconda3/2019.10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have loaded Anaconda, you can create an environment using the following command (change '''test_env''' to whatever you want to name your environment):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda create --name test_env&lt;br /&gt;
&lt;br /&gt;
Solving environment: done&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: ~/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
  added / updated specs:&lt;br /&gt;
    - setuptools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be downloaded:&lt;br /&gt;
&lt;br /&gt;
    package                    |            build&lt;br /&gt;
    ---------------------------|-----------------&lt;br /&gt;
    python-3.7.0               |       h6e4f718_3        30.6 MB&lt;br /&gt;
    wheel-0.32.1               |           py37_0          35 KB&lt;br /&gt;
    setuptools-40.4.3          |           py37_0         556 KB&lt;br /&gt;
    ------------------------------------------------------------&lt;br /&gt;
                                           Total:        31.1 MB&lt;br /&gt;
&lt;br /&gt;
The following NEW packages will be INSTALLED:&lt;br /&gt;
&lt;br /&gt;
    ca-certificates: 2018.03.07-0&lt;br /&gt;
    certifi:         2018.8.24-py37_1&lt;br /&gt;
    libedit:         3.1.20170329-h6b74fdf_2&lt;br /&gt;
    libffi:          3.2.1-hd88cf55_4&lt;br /&gt;
    libgcc-ng:       8.2.0-hdf63c60_1&lt;br /&gt;
    libstdcxx-ng:    8.2.0-hdf63c60_1&lt;br /&gt;
    ncurses:         6.1-hf484d3e_0&lt;br /&gt;
    openssl:         1.0.2p-h14c3975_0&lt;br /&gt;
    pip:             10.0.1-py37_0&lt;br /&gt;
    python:          3.7.0-h6e4f718_3&lt;br /&gt;
    readline:        7.0-h7b6447c_5&lt;br /&gt;
    setuptools:      40.4.3-py37_0&lt;br /&gt;
    sqlite:          3.25.2-h7b6447c_0&lt;br /&gt;
    tk:              8.6.8-hbc83047_0&lt;br /&gt;
    wheel:           0.32.1-py37_0&lt;br /&gt;
    xz:              5.2.4-h14c3975_4&lt;br /&gt;
    zlib:            1.2.11-ha838bed_2&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&lt;br /&gt;
Downloading and Extracting Packages&lt;br /&gt;
python-3.7.0         | 30.6 MB   | ########################################################################### | 100%&lt;br /&gt;
wheel-0.32.1         | 35 KB     | ########################################################################### | 100%&lt;br /&gt;
setuptools-40.4.3    | 556 KB    | ########################################################################### | 100%&lt;br /&gt;
Preparing transaction: done&lt;br /&gt;
Verifying transaction: done&lt;br /&gt;
Executing transaction: done&lt;br /&gt;
#&lt;br /&gt;
# To activate this environment, use:&lt;br /&gt;
# &amp;gt; source activate test_env&lt;br /&gt;
#&lt;br /&gt;
# To deactivate an active environment, use:&lt;br /&gt;
# &amp;gt; source deactivate&lt;br /&gt;
#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also specify the packages that you want to install in the conda virtual environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda create --name test_env PACKAGE_NAME&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Listing all your conda virtual environments===&lt;br /&gt;
If you forget the names of your virtual environments, you can list them all by running '''conda env list''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda env list&lt;br /&gt;
# conda environments:&lt;br /&gt;
#&lt;br /&gt;
jupyter_test             ~/.conda/envs/jupyter_test&lt;br /&gt;
modeller                 ~/.conda/envs/modeller&lt;br /&gt;
psypy3                   ~/.conda/envs/psypy3&lt;br /&gt;
test                     ~/.conda/envs/test&lt;br /&gt;
test_env                 ~/.conda/envs/test_env&lt;br /&gt;
test_pytorch             ~/.conda/envs/test_pytorch&lt;br /&gt;
tomopy                   ~/.conda/envs/tomopy&lt;br /&gt;
base                  *  /share/apps/rc/software/Anaconda3/5.2.0&lt;br /&gt;
DeepNLP                  /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
ubrite-jupyter-base-1.0     /share/apps/rc/software/Anaconda3/5.2.0/envs/ubrite-jupyter-base-1.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: The virtual environment with an asterisk (*) next to it is the one that is currently active.&lt;br /&gt;
&lt;br /&gt;
===Activating a conda virtual environment===&lt;br /&gt;
You can activate a virtual environment by running '''source activate''' followed by '''conda activate ENV_NAME''':&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ source activate&lt;br /&gt;
$ conda activate test_env&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE: Your shell prompt will also be prefixed with the name of the virtual environment that you activated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT!'''&lt;br /&gt;
&lt;br /&gt;
The following only applies to versions prior to 2019.10. '''source activate &amp;lt;env&amp;gt;''' is not idempotent. Using it twice with the same environment in a given session can lead to unexpected behavior. The recommended workflow is to use '''source activate''' to source the '''conda activate''' script, followed by '''conda activate &amp;lt;env&amp;gt;'''.&lt;br /&gt;
&lt;br /&gt;
From version 2019.10 and on, simply use '''conda activate &amp;lt;env&amp;gt;'''.&lt;br /&gt;
&lt;br /&gt;
===Locate and install packages===&lt;br /&gt;
Conda allows you to search for packages that you want to install:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ conda search BeautifulSoup4&lt;br /&gt;
Loading channels: done&lt;br /&gt;
# Name                  Version           Build  Channel&lt;br /&gt;
beautifulsoup4            4.4.0          py27_0  pkgs/free&lt;br /&gt;
beautifulsoup4            4.4.0          py34_0  pkgs/free&lt;br /&gt;
beautifulsoup4            4.4.0          py35_0  pkgs/free&lt;br /&gt;
...&lt;br /&gt;
beautifulsoup4            4.6.3          py35_0  pkgs/main&lt;br /&gt;
beautifulsoup4            4.6.3          py36_0  pkgs/main&lt;br /&gt;
beautifulsoup4            4.6.3          py37_0  pkgs/main&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: Search is case-insensitive.&lt;br /&gt;
&lt;br /&gt;
You can install packages into the active conda environment using '''conda install''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ conda install beautifulsoup4&lt;br /&gt;
Solving environment: done&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: ~/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
  added / updated specs:&lt;br /&gt;
    - beautifulsoup4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be downloaded:&lt;br /&gt;
&lt;br /&gt;
    package                    |            build&lt;br /&gt;
    ---------------------------|-----------------&lt;br /&gt;
    beautifulsoup4-4.6.3       |           py37_0         138 KB&lt;br /&gt;
&lt;br /&gt;
The following NEW packages will be INSTALLED:&lt;br /&gt;
&lt;br /&gt;
    beautifulsoup4: 4.6.3-py37_0&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Downloading and Extracting Packages&lt;br /&gt;
beautifulsoup4-4.6.3 | 138 KB    | ########################################################################### | 100%&lt;br /&gt;
Preparing transaction: done&lt;br /&gt;
Verifying transaction: done&lt;br /&gt;
Executing transaction: done&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Deactivating your virtual environment===&lt;br /&gt;
You can deactivate your virtual environment using '''source deactivate'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ source deactivate&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sharing an environment===&lt;br /&gt;
You may want to share your environment with someone for testing or other purposes. Sharing the environment file for your virtual environment is the most straightforward method, as it allows the other person to quickly create an environment identical to yours.&lt;br /&gt;
====Export environment====&lt;br /&gt;
* Activate the virtual environment that you want to export.&lt;br /&gt;
* Export an environment.yml file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export -n test_env &amp;gt; environment.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Now you can send the recently created environment.yml file to the other person.&lt;br /&gt;
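Note that '''conda env export''' also records a machine-specific '''prefix:''' line pointing at your own environment path; it is harmless, but you may want to strip it before sharing so the file is portable. A minimal sketch (the output filename is just an example):

```shell
# Drop the machine-specific "prefix:" line from an exported
# environment file so the copy you share is portable.
grep -v '^prefix:' environment.yml > environment_shared.yml
```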
&lt;br /&gt;
====Create a virtual environment using environment.yml====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env create -f environment.yml -n test_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Delete a conda virtual environment===&lt;br /&gt;
You can use '''conda remove''' to delete a conda virtual environment that you no longer need:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda remove --name test_env --all&lt;br /&gt;
&lt;br /&gt;
Remove all packages in environment ~/.conda/envs/test_env:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: ~/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be REMOVED:&lt;br /&gt;
&lt;br /&gt;
    beautifulsoup4:  4.6.3-py37_0&lt;br /&gt;
    ca-certificates: 2018.03.07-0&lt;br /&gt;
    certifi:         2018.8.24-py37_1&lt;br /&gt;
    libedit:         3.1.20170329-h6b74fdf_2&lt;br /&gt;
    libffi:          3.2.1-hd88cf55_4&lt;br /&gt;
    libgcc-ng:       8.2.0-hdf63c60_1&lt;br /&gt;
    libstdcxx-ng:    8.2.0-hdf63c60_1&lt;br /&gt;
    ncurses:         6.1-hf484d3e_0&lt;br /&gt;
    openssl:         1.0.2p-h14c3975_0&lt;br /&gt;
    pip:             10.0.1-py37_0&lt;br /&gt;
    python:          3.7.0-h6e4f718_3&lt;br /&gt;
    readline:        7.0-h7b6447c_5&lt;br /&gt;
    setuptools:      40.4.3-py37_0&lt;br /&gt;
    sqlite:          3.25.2-h7b6447c_0&lt;br /&gt;
    tk:              8.6.8-hbc83047_0&lt;br /&gt;
    wheel:           0.32.1-py37_0&lt;br /&gt;
    xz:              5.2.4-h14c3975_4&lt;br /&gt;
    zlib:            1.2.11-ha838bed_2&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Moving conda directory===&lt;br /&gt;
As you build new conda environments, you may find that they take up a lot of space in your $HOME directory, leading to issues with interactive sessions failing to start. Below are two methods to resolve the issue.&lt;br /&gt;
&lt;br /&gt;
Method 1: Move a pre-existing conda directory and create a symlink&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
mv ~/.conda $USER_DATA/&lt;br /&gt;
ln -s $USER_DATA/.conda .conda&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
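If you want to rehearse the move-and-symlink pattern before touching your real '''.conda''' directory, you can try it on throwaway paths first (the temporary directories below are just stand-ins for $HOME and $USER_DATA):

```shell
# Rehearse the move-and-symlink pattern on throwaway directories.
home_dir=$(mktemp -d)            # stand-in for $HOME
data_dir=$(mktemp -d)            # stand-in for $USER_DATA
mkdir "$home_dir/.conda"         # pretend conda directory
mv "$home_dir/.conda" "$data_dir/"            # move it to larger storage
ln -s "$data_dir/.conda" "$home_dir/.conda"   # leave a symlink behind
ls -ld "$home_dir/.conda"        # confirms the symlink and its target
```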
&lt;br /&gt;
Method 2: Create a &amp;quot;$HOME/.condarc&amp;quot; file by running the following code&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;lt;&amp;lt; &amp;quot;EOF&amp;quot; &amp;gt; ~/.condarc&lt;br /&gt;
pkgs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/pkgs&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/envs&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=RStudio&amp;diff=6162</id>
		<title>RStudio</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=RStudio&amp;diff=6162"/>
		<updated>2021-03-24T19:49:35Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Added info on move/symlink rstudio dir to user data&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RStudio is an integrated development environment (IDE) for R. It includes a console, syntax-highlighting editor that supports direct code execution, as well as tools for plotting, history, debugging and workspace management. To learn more about RStudio, click [https://www.rstudio.com/ here].&lt;br /&gt;
&lt;br /&gt;
===Starting an RStudio server session===&lt;br /&gt;
An RStudio server session can be started on Cheaha using the '''rserver''' command.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[ravi89@login001 ~]$ rserver&lt;br /&gt;
Waiting for RStudio server to start&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
SSH port forwarding from laptop&lt;br /&gt;
ssh -L 8700:c0082:8700 ravi89@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
Connection string for local browser&lt;br /&gt;
http://localhost:8700&lt;br /&gt;
&lt;br /&gt;
Authorization info for Rstudio&lt;br /&gt;
Username: ravi89&lt;br /&gt;
Password: ................&lt;br /&gt;
&lt;br /&gt;
[ravi89@login001 ~]$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Accessing the created RStudio session===&lt;br /&gt;
Once the RStudio session has started, the '''rserver''' command will print the information you need to connect to it.&lt;br /&gt;
&lt;br /&gt;
Here are the steps to connect, based on that information:&lt;br /&gt;
&lt;br /&gt;
====Port forwarding====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SSH port forwarding from laptop&lt;br /&gt;
ssh -L 8700:c0082:8700 ravi89@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are on a Mac/Linux system, start a new terminal tab and run the ssh line shown under '''SSH port forwarding from laptop''', which in the example above is '''ssh -L 8700:c0082:8700 ravi89@cheaha.rc.uab.edu'''. On a Windows system, you can set up port forwarding using the methods described [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here].&lt;br /&gt;
&lt;br /&gt;
====Local Browser Connection====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Connection string for local browser&lt;br /&gt;
http://localhost:8700&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now start a web browser of your choice (Google Chrome, Firefox, Safari, etc.) and go to the link shown under '''Connection string for local browser''', which in the above example is &amp;lt;nowiki&amp;gt;http://localhost:8700&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Authorization Info====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Authorization info for Rstudio&lt;br /&gt;
Username: ravi89&lt;br /&gt;
Password: ................&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Each RStudio server session is secured with a random temporary password, which can be found under '''Authorization info for Rstudio'''. Use this information to log in to the RStudio server in your web browser.&lt;br /&gt;
=====Setting your own password=====&lt;br /&gt;
You can set your own password for accessing the RStudio session by setting the environment variable RSTUDIO_PASSWORD. You can set this environment variable using the following command on Cheaha, before starting '''rserver''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[ravi89@login001 ~]$ export RSTUDIO_PASSWORD=asdfghjkl&lt;br /&gt;
[ravi89@login001 ~]$ rserver &lt;br /&gt;
Waiting for RStudio server to start&lt;br /&gt;
.............&lt;br /&gt;
&lt;br /&gt;
SSH port forwarding from laptop&lt;br /&gt;
ssh -L 8742:c0076:8742 ravi89@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
Connection string for local browser&lt;br /&gt;
http://localhost:8742&lt;br /&gt;
&lt;br /&gt;
Authorization info for Rstudio&lt;br /&gt;
Username: ravi89&lt;br /&gt;
Password: asdfghjkl&lt;br /&gt;
&lt;br /&gt;
[ravi89@login001 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Default parameters===&lt;br /&gt;
If you use '''rserver''' without any additional parameters, it will start with the following default parameters:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Partition: Short&lt;br /&gt;
Time: 12:00:00&lt;br /&gt;
mem-per-cpu: 1024&lt;br /&gt;
cpus-per-task: 2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Setting parameters===&lt;br /&gt;
You can pass your own parameters to '''rserver''', such as time, partition, etc.&lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 rserver --time=05:00:00 --partition=short --mem-per-cpu=4096&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
List of parameters that you can set with '''rserver''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Parallel run options:&lt;br /&gt;
  -a, --array=indexes         job array index values&lt;br /&gt;
  -A, --account=name          charge job to specified account&lt;br /&gt;
      --bb=&amp;lt;spec&amp;gt;             burst buffer specifications&lt;br /&gt;
      --bbf=&amp;lt;file_name&amp;gt;       burst buffer specification file&lt;br /&gt;
      --begin=time            defer job until HH:MM MM/DD/YY&lt;br /&gt;
      --comment=name          arbitrary comment&lt;br /&gt;
      --cpu-freq=min[-max[:gov]] requested cpu frequency (and governor)&lt;br /&gt;
  -c, --cpus-per-task=ncpus   number of cpus required per task&lt;br /&gt;
  -d, --dependency=type:jobid defer job until condition on jobid is satisfied&lt;br /&gt;
      --deadline=time         remove the job if no ending possible before&lt;br /&gt;
                              this deadline (start &amp;gt; (deadline - time[-min]))&lt;br /&gt;
      --delay-boot=mins       delay boot for desired node features&lt;br /&gt;
  -D, --workdir=directory     set working directory for batch script&lt;br /&gt;
  -e, --error=err             file for batch script's standard error&lt;br /&gt;
      --export[=names]        specify environment variables to export&lt;br /&gt;
      --export-file=file|fd   specify environment variables file or file&lt;br /&gt;
                              descriptor to export&lt;br /&gt;
      --get-user-env          load environment from local cluster&lt;br /&gt;
      --gid=group_id          group ID to run job as (user root only)&lt;br /&gt;
      --gres=list             required generic resources&lt;br /&gt;
      --gres-flags=opts       flags related to GRES management&lt;br /&gt;
  -H, --hold                  submit job in held state&lt;br /&gt;
      --ignore-pbs            Ignore #PBS options in the batch script&lt;br /&gt;
  -i, --input=in              file for batch script's standard input&lt;br /&gt;
  -I, --immediate             exit if resources are not immediately available&lt;br /&gt;
      --jobid=id              run under already allocated job&lt;br /&gt;
  -J, --job-name=jobname      name of job&lt;br /&gt;
  -k, --no-kill               do not kill job on node failure&lt;br /&gt;
  -L, --licenses=names        required license, comma separated&lt;br /&gt;
  -M, --clusters=names        Comma separated list of clusters to issue&lt;br /&gt;
                              commands to.  Default is current cluster.&lt;br /&gt;
                              Name of 'all' will submit to run on all clusters.&lt;br /&gt;
                              NOTE: SlurmDBD must up.&lt;br /&gt;
  -m, --distribution=type     distribution method for processes to nodes&lt;br /&gt;
                              (type = block|cyclic|arbitrary)&lt;br /&gt;
      --mail-type=type        notify on state change: BEGIN, END, FAIL or ALL&lt;br /&gt;
      --mail-user=user        who to send email notification for job state&lt;br /&gt;
                              changes&lt;br /&gt;
      --mcs-label=mcs         mcs label if mcs plugin mcs/group is used&lt;br /&gt;
  -n, --ntasks=ntasks         number of tasks to run&lt;br /&gt;
      --nice[=value]          decrease scheduling priority by value&lt;br /&gt;
      --no-requeue            if set, do not permit the job to be requeued&lt;br /&gt;
      --ntasks-per-node=n     number of tasks to invoke on each node&lt;br /&gt;
  -N, --nodes=N               number of nodes on which to run (N = min[-max])&lt;br /&gt;
  -o, --output=out            file for batch script's standard output&lt;br /&gt;
  -O, --overcommit            overcommit resources&lt;br /&gt;
  -p, --partition=partition   partition requested&lt;br /&gt;
      --parsable              outputs only the jobid and cluster name (if present),&lt;br /&gt;
                              separated by semicolon, only on successful submission.&lt;br /&gt;
      --power=flags           power management options&lt;br /&gt;
      --priority=value        set the priority of the job to value&lt;br /&gt;
      --profile=value         enable acct_gather_profile for detailed data&lt;br /&gt;
                              value is all or none or any combination of&lt;br /&gt;
                              energy, lustre, network or task&lt;br /&gt;
      --propagate[=rlimits]   propagate all [or specific list of] rlimits&lt;br /&gt;
      --qos=qos               quality of service&lt;br /&gt;
  -Q, --quiet                 quiet mode (suppress informational messages)&lt;br /&gt;
      --reboot                reboot compute nodes before starting job&lt;br /&gt;
      --requeue               if set, permit the job to be requeued&lt;br /&gt;
  -s, --oversubscribe         over subscribe resources with other jobs&lt;br /&gt;
  -S, --core-spec=cores       count of reserved cores&lt;br /&gt;
      --signal=[B:]num[@time] send signal when time limit within time seconds&lt;br /&gt;
      --spread-job            spread job across as many nodes as possible&lt;br /&gt;
      --switches=max-switches{@max-time-to-wait}&lt;br /&gt;
                              Optimum switches and max time to wait for optimum&lt;br /&gt;
      --thread-spec=threads   count of reserved threads&lt;br /&gt;
  -t, --time=minutes          time limit&lt;br /&gt;
      --time-min=minutes      minimum time limit (if distinct)&lt;br /&gt;
      --uid=user_id           user ID to run job as (user root only)&lt;br /&gt;
      --use-min-nodes         if a range of node counts is given, prefer the&lt;br /&gt;
                              smaller count&lt;br /&gt;
  -v, --verbose               verbose mode (multiple -v's increase verbosity)&lt;br /&gt;
  -W, --wait                  wait for completion of submitted job&lt;br /&gt;
      --wckey=wckey           wckey to run job under&lt;br /&gt;
      --wrap[=command string] wrap command string in a sh script and submit&lt;br /&gt;
&lt;br /&gt;
Constraint options:&lt;br /&gt;
      --contiguous            demand a contiguous range of nodes&lt;br /&gt;
  -C, --constraint=list       specify a list of constraints&lt;br /&gt;
  -F, --nodefile=filename     request a specific list of hosts&lt;br /&gt;
      --mem=MB                minimum amount of real memory&lt;br /&gt;
      --mincpus=n             minimum number of logical processors (threads)&lt;br /&gt;
                              per node&lt;br /&gt;
      --reservation=name      allocate resources from named reservation&lt;br /&gt;
      --tmp=MB                minimum amount of temporary disk&lt;br /&gt;
  -w, --nodelist=hosts...     request a specific list of hosts&lt;br /&gt;
  -x, --exclude=hosts...      exclude a specific list of hosts&lt;br /&gt;
&lt;br /&gt;
Consumable resources related options:&lt;br /&gt;
      --exclusive[=user]      allocate nodes in exclusive mode when&lt;br /&gt;
                              cpu consumable resource is enabled&lt;br /&gt;
      --exclusive[=mcs]       allocate nodes in exclusive mode when&lt;br /&gt;
                              cpu consumable resource is enabled&lt;br /&gt;
                              and mcs plugin is enabled&lt;br /&gt;
      --mem-per-cpu=MB        maximum amount of real memory per allocated&lt;br /&gt;
                              cpu required by the job.&lt;br /&gt;
                              --mem &amp;gt;= --mem-per-cpu if --mem is specified.&lt;br /&gt;
&lt;br /&gt;
Affinity/Multi-core options: (when the task/affinity plugin is enabled)&lt;br /&gt;
  -B  --extra-node-info=S[:C[:T]]            Expands to:&lt;br /&gt;
       --sockets-per-node=S   number of sockets per node to allocate&lt;br /&gt;
       --cores-per-socket=C   number of cores per socket to allocate&lt;br /&gt;
       --threads-per-core=T   number of threads per core to allocate&lt;br /&gt;
                              each field can be 'min' or wildcard '*'&lt;br /&gt;
                              total cpus requested = (N x S x C x T)&lt;br /&gt;
&lt;br /&gt;
      --ntasks-per-core=n     number of tasks to invoke on each core&lt;br /&gt;
      --ntasks-per-socket=n   number of tasks to invoke on each socket&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Moving rstudio directory===&lt;br /&gt;
As you accumulate RStudio packages, you may find that they take up a lot of space in your $HOME directory, leading to issues with interactive sessions failing to start. The issue may be resolved by moving the directory and creating a symlink to the new location in its place.&lt;br /&gt;
&lt;br /&gt;
How to: Move a pre-existing rstudio directory and create a symlink&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
mv ~/.rstudio $USER_DATA/&lt;br /&gt;
ln -s $USER_DATA/.rstudio .rstudio&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Cheaha&amp;diff=6118</id>
		<title>Cheaha</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Cheaha&amp;diff=6118"/>
		<updated>2021-02-24T17:27:12Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Acknowledgment in Publications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
'''Cheaha''' is a campus resource dedicated to enhancing research computing productivity at UAB. [http://cheaha.uabgrid.uab.edu Cheaha] is managed by [http://www.uab.edu/it UAB Information Technology's Research Computing group (UAB ITRC)] and is available to members of the UAB community in need of increased computational capacity.  Cheaha supports [http://en.wikipedia.org/wiki/High-performance_computing high-performance computing (HPC)] and [http://en.wikipedia.org/wiki/High-throughput_computing high throughput computing (HTC)] paradigms.&lt;br /&gt;
&lt;br /&gt;
Cheaha provides users with a traditional command-line interactive environment with access to many scientific tools that can leverage its dedicated pool of local compute resources.  Alternately, users of graphical applications can start a [[Setting_Up_VNC_Session|cluster desktop]]. The local compute pool provides access to compute hardware based on the [http://en.wikipedia.org/wiki/X86_64 x86-64 64-bit architecture].  The compute resources are organized into a unified Research Computing System.  The compute fabric for this system is anchored by the Cheaha cluster, [[ Resources |a commodity cluster with approximately 2400 cores]] connected by low-latency Fourteen Data Rate (FDR) InfiniBand networks.  The compute nodes are backed by 6.6PB raw GPFS storage on DDN SFA12KX hardware, an additional 20TB available for home directories on a traditional Hitachi SAN, and other ancillary services. The compute nodes combine to provide over 110TFlops of dedicated computing power.   &lt;br /&gt;
&lt;br /&gt;
Cheaha is composed of resources that span data centers located in the UAB Shared Computing facility UAB 936 Building and the RUST Computer Center. Resource design and development is lead by UAB IT Research Computing in open collaboration with community members. Operational [mailto:support@listserv.uab.edu support] is provided by UAB IT's Research Computing group.&lt;br /&gt;
&lt;br /&gt;
Cheaha is named in honor of [http://en.wikipedia.org/wiki/Cheaha_Mountain Cheaha Mountain], the highest peak in the state of Alabama.  Cheaha is a popular destination whose summit offers clear vistas of the surrounding landscape. (Cheaha Mountain photo-streams on [http://www.flickr.com/search/?q=cheaha  Flickr] and [http://picasaweb.google.com/lh/view?q=cheaha&amp;amp;psc=G&amp;amp;filter=1# Picasa]).&lt;br /&gt;
&lt;br /&gt;
== Using ==&lt;br /&gt;
&lt;br /&gt;
=== Getting Started ===&lt;br /&gt;
&lt;br /&gt;
For information on getting an account, logging in, and running a job, please see [[Cheaha2_GettingStarted|Getting Started]].&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Research-computing-platform.png|right|thumb|450px|Logical Diagram of Cheaha Configuration]]&lt;br /&gt;
&lt;br /&gt;
=== 2005 ===&lt;br /&gt;
&lt;br /&gt;
In 2002 UAB was awarded an infrastructure development grant through the NSF EPSCoR program.  This led to the 2005 acquisition of a 64 node compute cluster with two AMD Opteron 242 1.6Ghz CPUs per node (128 total cores).  This cluster was named Cheaha.  Cheaha expanded the compute capacity available at UAB and was the first general-access resource for the community. It led to expanded roles for UAB IT in research computing support through the development of the UAB Shared HPC Facility in BEC and provided further engagement in Globus-based grid computing resource development on campus via UABgrid and regionally via [http://www.suragrid.org SURAgrid].&lt;br /&gt;
&lt;br /&gt;
=== 2008 ===&lt;br /&gt;
&lt;br /&gt;
In 2008, money was allocated by UAB IT for hardware upgrades, which led to the acquisition of an additional 192 cores based on a Dell clustering solution with Intel Quad-Core E5450 3.0Ghz CPUs in August of 2008. This upgrade migrated Cheaha's core infrastructure to the Dell blade clustering solution. It provided a threefold increase in processor density over the original hardware and enabled more computing power to be located in the same physical space with room for expansion, an important consideration in light of the continued growth in processing demand.  This hardware represented a major technology upgrade that included space for additional expansion to address overall capacity demand and enable resource reservation.&lt;br /&gt;
&lt;br /&gt;
The 2008 upgrade began a continuous resource improvement plan that includes a phased development approach for Cheaha with on-going increases in capacity and feature enhancements being brought into production via an [http://projects.uabgrid.uab.edu/cheaha open community process].&lt;br /&gt;
&lt;br /&gt;
Software improvements rolled into the 2008 upgrade included grid computing services to access distributed compute resources and orchestrate jobs using the [http://www.gridway.org GridWay] meta-scheduler. An initial 10Gigabit Ethernet link establishing the UABgrid Research Network was designed to support high speed data transfers between clusters connected to this network.&lt;br /&gt;
&lt;br /&gt;
=== 2009 ===&lt;br /&gt;
&lt;br /&gt;
In 2009, annual investment funds were directed toward establishing a fully connected dual data rate Infiniband network between the compute nodes added in 2008 and laying the foundation for a research storage system with a 60TB DDN storage system accessed via the Lustre distributed file system.  The Infiniband and storage fabrics were designed to support significant increases in research data sets and their associated analytical demand.&lt;br /&gt;
&lt;br /&gt;
=== 2010 ===&lt;br /&gt;
&lt;br /&gt;
In 2010, UAB was awarded an NIH Small Instrumentation Grant (SIG) to further increase analytical and storage capacity.  The grant funds were combined with the annual investment funds adding 576 cores (48 nodes) based on the Intel Westmere 2.66 GHz CPU, a quad data rate Infiniband fabric with 32 uplinks, an additional 120 TB of storage for the DDN fabric, and additional hardware to improve reliability. Additional improvements to the research compute platform involved extending the UAB Research Network to link the BEC and RUST data centers, adding 20TB of user and ancillary services storage.&lt;br /&gt;
&lt;br /&gt;
=== 2012 ===&lt;br /&gt;
&lt;br /&gt;
In 2012, UAB IT Research Computing invested in the foundation hardware to expand long-term storage and virtual machine capabilities with the acquisition of 12 Dell 720xd systems, each containing 16 cores, 96GB RAM, and 36TB of storage, creating a 192 core and 432TB virtual compute and storage fabric.&lt;br /&gt;
&lt;br /&gt;
Additional hardware investment by the School of Public Health's Section on Statistical Genetics added three 384GB large memory nodes and an additional 48 cores to the QDR Infiniband fabric.&lt;br /&gt;
&lt;br /&gt;
=== 2013 ===&lt;br /&gt;
&lt;br /&gt;
In 2013, UAB IT Research Computing acquired an [http://blogs.uabgrid.uab.edu/jpr/2013/03/were-going-with-openstack/ OpenStack cloud and Ceph storage software fabric] through a partnership between Dell and Inktank in order to [http://dev.uabgrid.uab.edu extend cloud computing solutions] to the researchers at UAB and enhance the interfacing capabilities for HPC.&lt;br /&gt;
&lt;br /&gt;
=== 2015 === &lt;br /&gt;
&lt;br /&gt;
UAB IT received $500,000 from the university’s Mission Support Fund for a compute cluster seed expansion of 48 teraflops.  This added 936 cores across 40 nodes with 2x12 core 2.5 GHz Intel Xeon E5-2680 v3 compute nodes and FDR InfiniBand interconnect.&lt;br /&gt;
&lt;br /&gt;
UAB received a $500,000 grant from the Alabama Innovation Fund for a three petabyte research storage array. This funding with additional matching from UAB provided a multi-petabyte [https://en.wikipedia.org/wiki/IBM_General_Parallel_File_System GPFS] parallel file system to the cluster which went live in 2016.&lt;br /&gt;
&lt;br /&gt;
=== 2016 ===&lt;br /&gt;
&lt;br /&gt;
In 2016 UAB IT Research Computing received additional funding from the Deans of CAS, Engineering, and Public Health to grow the compute capacity provided by the prior year's seed funding.  This added additional compute nodes, providing researchers at UAB with 96 2x12-core (2304 cores total) 2.5 GHz Intel Xeon E5-2680 v3 compute nodes with FDR InfiniBand interconnect. Out of the 96 compute nodes, 36 nodes have 128 GB RAM, 38 nodes have 256 GB RAM, and 14 nodes have 384 GB RAM. There are also four compute nodes with Intel Xeon Phi 7210 accelerator cards and four compute nodes with NVIDIA K80 GPUs. More information can be found at [[Resources]].&lt;br /&gt;
&lt;br /&gt;
In addition to the compute, the six petabyte GPFS file system came online. This file system provides each user with five terabytes of personal space, additional space for shared projects, and a greatly expanded scratch area, all in a single file system.&lt;br /&gt;
&lt;br /&gt;
The 2015 and 2016 investments combined to provide a completely new core for the Cheaha cluster, allowing the retirement of earlier compute generations.&lt;br /&gt;
&lt;br /&gt;
== Grant and Publication Resources ==&lt;br /&gt;
&lt;br /&gt;
The following description may prove useful in summarizing the services available via Cheaha.  If you are using Cheaha for grant funded research please send information about your grant (funding source and grant number), a statement of intent for the research project and a list of the applications you are using to UAB IT Research Computing.  If you are using Cheaha for exploratory research, please send a similar note on your research interest.  Finally, any publications that rely on computations performed on Cheaha should include a statement acknowledging the use of UAB Research Computing facilities in your research, see the suggested example below.  Please note, your acknowledgment may also need to include an additional statement acknowledging grant-funded hardware.  We also ask that you send any references to publications based on your use of Cheaha compute resources.&lt;br /&gt;
&lt;br /&gt;
=== Description of Cheaha for Grants (short) ===&lt;br /&gt;
&lt;br /&gt;
UAB IT Research Computing maintains high performance compute and storage resources for investigators. The Cheaha compute cluster provides approximately 3744 CPU cores and 80 accelerators (including 72 NVIDIA P100 GPUs) interconnected via an InfiniBand network and provides over 572 TFLOP/s of aggregate theoretical peak performance. A high-performance 12PB (raw) GPFS file system on DDN SFA12KX hardware is also connected to these compute nodes via the InfiniBand fabric. An additional 20TB of traditional SAN storage is available for home directories. This general access compute fabric is available to all UAB investigators.&lt;br /&gt;
&lt;br /&gt;
=== Description of Cheaha for Grants (Detailed) ===&lt;br /&gt;
&lt;br /&gt;
The Cyberinfrastructure supporting University of Alabama at Birmingham (UAB) investigators includes high performance computing clusters, storage, campus, statewide and regionally connected high-bandwidth networks, and conditioned space for hosting and operating HPC systems, research applications and network equipment. &lt;br /&gt;
&lt;br /&gt;
==== Cheaha HPC system ====&lt;br /&gt;
&lt;br /&gt;
Cheaha is a campus HPC resource dedicated to enhancing research computing productivity at UAB. Cheaha is managed by UAB Information Technology's Research Computing group (RC) and is available to members of the UAB community in need of increased computational capacity. Cheaha supports high-performance computing (HPC) and high throughput computing (HTC) paradigms. Cheaha is composed of resources that span data centers located in the UAB IT Data Centers in the 936 Building and the RUST Computer Center. Research Computing in open collaboration with the campus research community is leading the design and development of these resources.&lt;br /&gt;
&lt;br /&gt;
==== Compute Resources ====&lt;br /&gt;
&lt;br /&gt;
The UAB Cheaha High Performance Computing environment includes a high performance cluster with approximately 3744 CPU cores, 18 GPU nodes, and large memory nodes. The compute nodes combine to provide over 572 TFLOP/s of dedicated computing power. The Ruffner OpenStack private cloud is available to develop and host scientific applications.&lt;br /&gt;
&lt;br /&gt;
==== Storage Resources ====&lt;br /&gt;
&lt;br /&gt;
The high performance compute nodes are backed by a replicated 6PB (12PB raw) high speed storage system with an Infiniband fabric. Additional storage tiers for project space and archive are also available.&lt;br /&gt;
&lt;br /&gt;
==== Network Resources ====&lt;br /&gt;
&lt;br /&gt;
The UAB Research Network is currently a dedicated 40Gbps optical link. The UAB LAN provides 1Gbps to the desktop and 10Gbps for instruments. &lt;br /&gt;
&lt;br /&gt;
The research network also includes a secure Science DMZ with data transfer nodes (DTNs) connected directly to the border router that provide a &amp;quot;friction-free&amp;quot; pathway to access external data repositories and other computational resources. &lt;br /&gt;
&lt;br /&gt;
UAB connects to the Internet2 high-speed research network at 100 Gbps via the University of Alabama System Regional Optical Network (UASRON). &lt;br /&gt;
&lt;br /&gt;
Globus technologies provide secure, reliable and fast data transfers.&lt;br /&gt;
&lt;br /&gt;
==== Personnel ====&lt;br /&gt;
&lt;br /&gt;
UAB IT Research Computing currently maintains a support staff of 10, led by the Assistant Vice President for Research Computing, and includes an HPC Architect-Manager, four software developers, two scientists, a system administrator, and a project coordinator.&lt;br /&gt;
&lt;br /&gt;
=== Acknowledgment in Publications ===&lt;br /&gt;
&lt;br /&gt;
To acknowledge the use of Cheaha for compute time in published work, please consider adding the following to the acknowledgements section of your publication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
The authors gratefully acknowledge the resources provided by the University of Alabama at Birmingham IT-Research Computing group for high performance computing (HPC) support and CPU time on the Cheaha compute cluster.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If Globus was used to transfer data to/from Cheaha, please consider adding the following to the acknowledgements section of your publication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
This work was supported in part by the National Science Foundation under Grant No. OAC-1541310, the University of Alabama at Birmingham, and the Alabama Innovation Fund. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the University of Alabama at Birmingham.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== System Profile ==&lt;br /&gt;
&lt;br /&gt;
=== Performance ===&lt;br /&gt;
{{CheahaTflops}}&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
The Cheaha Compute Platform includes three generations of commodity compute hardware, totaling 868 compute cores, 2.8TB of RAM, and over 200TB of storage.&lt;br /&gt;
&lt;br /&gt;
The hardware is grouped into generations designated gen1, gen2, and gen3 (oldest to newest). The following descriptions highlight the hardware profile for each generation. &lt;br /&gt;
&lt;br /&gt;
* Generation 1 (gen1) -- 64 2-CPU AMD 1.6 GHz compute nodes with Gigabit interconnect. This is the original hardware collection purchased with NSF EPSCoR funds in 2005, approx $150K. These nodes are sometimes called the &amp;quot;Verari&amp;quot; nodes. These nodes are tagged as &amp;quot;verari-compute-#-#&amp;quot; in the ROCKS naming convention.&lt;br /&gt;
* Generation 2 (gen2) -- 24 2x4 core (192 cores total) 3.0 GHz Intel compute nodes with dual data rate Infiniband interconnect and the initial high-perf storage implementation using 60TB DDN. This is the hardware collection purchased exclusively with the annual VPIT funds allocation, approx $150K/yr for the 2008 and 2009 fiscal years.  These nodes are sometimes confusingly called &amp;quot;cheaha2&amp;quot; or &amp;quot;cheaha&amp;quot; nodes. These nodes are tagged as &amp;quot;cheaha-compute-#-#&amp;quot; in the ROCKS naming convention. &lt;br /&gt;
* Generation 3 (gen3) -- 48 2x6 core (576 cores total) 2.66 GHz Intel compute nodes with quad data rate Infiniband, ScaleMP, and the high-perf storage build-out for capacity and redundancy with 120TB DDN. This is the hardware collection purchased with a combination of the NIH SIG funds and some of the 2010 annual VPIT investment. These nodes were given the code name &amp;quot;sipsey&amp;quot; and tagged as such in the node naming for the queue system. These nodes are tagged as &amp;quot;sipsey-compute-#-#&amp;quot; in the ROCKS naming convention. 16 of the gen3 nodes (sipsey-compute-0-1 thru sipsey-compute-0-16) were upgraded in 2014 from 48GB to 96GB of memory per node. &lt;br /&gt;
* Generation 4 (gen4) -- 3 16 core (48 cores total) compute nodes. This hardware collection was purchased by [http://www.soph.uab.edu/ssg/people/tiwari Dr. Hemant Tiwari of SSG]. These nodes were given the code name &amp;quot;ssg&amp;quot; and tagged as such in the node naming for the queue system. These nodes are tagged as &amp;quot;ssg-compute-0-#&amp;quot; in the ROCKS naming convention. &lt;br /&gt;
* Generation 6 (gen6) -- &lt;br /&gt;
** 44 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 128GB DDR4 RAM, FDR InfiniBand and 10GigE network cards (4 nodes with NVIDIA K80 GPUs and 4 nodes with Intel Xeon Phi 7120P accelerators)&lt;br /&gt;
** 38 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 256GB DDR4 RAM, FDR InfiniBand and 10GigE network cards&lt;br /&gt;
** 14 Compute Nodes with two 12 core processors (Intel Xeon E5-2680 v3 2.5GHz) with 384GB DDR4 RAM, FDR InfiniBand and 10GigE network card&lt;br /&gt;
** FDR InfiniBand Switch&lt;br /&gt;
** 10Gigabit Ethernet Switch&lt;br /&gt;
** Management node and gigabit switch for cluster management&lt;br /&gt;
** Bright Advanced Cluster Management software licenses &lt;br /&gt;
&lt;br /&gt;
Summarized, Cheaha's compute pool includes:&lt;br /&gt;
* gen4 is 48 cores of [http://ark.intel.com/products/64583/Intel-Xeon-Processor-E5-2680-20M-Cache-2_70-GHz-8_00-GTs-Intel-QPI 2.70GHz eight-core Intel Xeon E5-2680 processors] with 24GB of RAM per core, or 384GB per node&lt;br /&gt;
* gen3 is 192 cores of [http://ark.intel.com/products/47922/Intel-Xeon-Processor-X5650-12M-Cache-2_66-GHz-6_40-GTs-Intel-QPI?q=x5650 2.67GHz six-core Intel Xeon X5650 processors] with 8GB RAM per core, or 96GB per node&lt;br /&gt;
* gen3 is 384 cores of [http://ark.intel.com/products/47922/Intel-Xeon-Processor-X5650-12M-Cache-2_66-GHz-6_40-GTs-Intel-QPI?q=x5650 2.67GHz six-core Intel Xeon X5650 processors] with 4GB RAM per core, or 48GB per node&lt;br /&gt;
* gen2 is 192 cores of [http://ark.intel.com/products/33083/Intel-Xeon-Processor-E5450-12M-Cache-3_00-GHz-1333-MHz-FSB 3.0GHz quad-core Intel Xeon E5450 processors] with 2GB RAM per core&lt;br /&gt;
* gen1 is 100 cores of 1.6GHz AMD Opteron 242 processors with 1GB RAM per core &lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ Physical Nodes&lt;br /&gt;
|- bgcolor=grey&lt;br /&gt;
!gen!!queue!!#nodes!!cores/node!!RAM/node&lt;br /&gt;
|-&lt;br /&gt;
|gen6|| default || 44 || 24 || 128G&lt;br /&gt;
|-&lt;br /&gt;
|gen6|| default || 38 || 24 || 256G&lt;br /&gt;
|-&lt;br /&gt;
|gen6|| default || 14 || 24 || 384G&lt;br /&gt;
|-&lt;br /&gt;
|gen5||Ceph/OpenStack|| 12 || 20 || 96G&lt;br /&gt;
|-&lt;br /&gt;
|gen4||ssg||3||16||385G&lt;br /&gt;
|-&lt;br /&gt;
|gen3||sipsey||16||12||96G&lt;br /&gt;
|-&lt;br /&gt;
|gen3||sipsey||32||12||48G&lt;br /&gt;
|-&lt;br /&gt;
|gen2||cheaha||24||8||16G&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Software ===&lt;br /&gt;
&lt;br /&gt;
Details of the software available on Cheaha can be found on the [https://docs.uabgrid.uab.edu/wiki/Cheaha_Software Installed software page], an overview follows.&lt;br /&gt;
&lt;br /&gt;
Cheaha uses [http://modules.sourceforge.net/ Environment Modules] to support account configuration. Please follow these [http://me.eng.uab.edu/wiki/index.php?title=Cheaha#Environment_Modules specific steps for using environment modules].&lt;br /&gt;
&lt;br /&gt;
Cheaha's software stack is built with the [http://www.brightcomputing.com Bright Cluster Manager]. Cheaha's operating system is CentOS with the following major cluster components:&lt;br /&gt;
* BrightCM 7.2&lt;br /&gt;
* CentOS 7.2 x86_64&lt;br /&gt;
* [[Slurm]] 15.08&lt;br /&gt;
&lt;br /&gt;
A brief summary of some of the available computational software and tools includes:&lt;br /&gt;
* Amber&lt;br /&gt;
* FFTW&lt;br /&gt;
* Gromacs&lt;br /&gt;
* GSL&lt;br /&gt;
* NAMD&lt;br /&gt;
* VMD&lt;br /&gt;
* Intel Compilers&lt;br /&gt;
* GNU Compilers&lt;br /&gt;
* Java&lt;br /&gt;
* R&lt;br /&gt;
* OpenMPI&lt;br /&gt;
* MATLAB&lt;br /&gt;
&lt;br /&gt;
=== Network ===&lt;br /&gt;
&lt;br /&gt;
Cheaha is connected to the UAB Research Network, which provides a dedicated 10Gbps networking backplane between clusters located in the 936 data center and the campus network core.  Data transfer rates of almost 8Gbps between these hosts have been demonstrated using GridFTP, a multi-channel file transfer service that is used to move data between clusters as part of the job management operations.  This performance promises very efficient job management and the seamless integration of other clusters as connectivity to the research network is expanded.&lt;br /&gt;
&lt;br /&gt;
=== Benchmarks ===&lt;br /&gt;
&lt;br /&gt;
The continuous resource improvement process involves collecting benchmarks of the system.  One of the measures of greatest interest to users of the system is benchmarks of specific application codes.  The following benchmarks have been performed on the system and will be expanded as additional benchmarks are performed.&lt;br /&gt;
&lt;br /&gt;
* [[Cheaha-BGL_Comparison|Cheaha-BGL Comparison]]&lt;br /&gt;
&lt;br /&gt;
* [[Gromacs_Benchmark|Gromacs]]&lt;br /&gt;
&lt;br /&gt;
* [[NAMD_Benchmarks|NAMD]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Usage Statistics ===&lt;br /&gt;
&lt;br /&gt;
Cheaha uses Bright Cluster Manager to report cluster performance data. This information provides a helpful overview of the current and historical operating stats for Cheaha.  You can access the status monitoring page [https://cheaha-master01.rc.uab.edu/userportal/ here] (accessible only on the UAB network or through VPN).&lt;br /&gt;
&lt;br /&gt;
== Availability ==&lt;br /&gt;
&lt;br /&gt;
Cheaha is a general-purpose computer resource made available to the UAB community by UAB IT.  As such, it is available for legitimate research and educational needs and is governed by [http://www.uabgrid.uab.edu/aup UAB's Acceptable Use Policy (AUP)] for computer resources.  &lt;br /&gt;
&lt;br /&gt;
Many software packages commonly used across UAB are available via Cheaha.&lt;br /&gt;
&lt;br /&gt;
To request access to Cheaha, please [mailto:support@listserv.uab.edu send a request] to the cluster support group.&lt;br /&gt;
&lt;br /&gt;
Cheaha's intended use implies broad access to the community, however, no guarantees are made that specific computational resources will be available to all users.  Availability guarantees can only be made for reserved resources.&lt;br /&gt;
&lt;br /&gt;
=== Secure Shell Access ===&lt;br /&gt;
&lt;br /&gt;
Please configure your secure shell client software to use the official host name to access Cheaha:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
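As an optional convenience, you can add a host alias to your local SSH configuration so a short name always connects to the official host. The sketch below is illustrative; the alias name is arbitrary, and &quot;blazerid&quot; is a placeholder for your own BlazerID.

```shell
# Append a Host alias to your local SSH config so that "ssh cheaha"
# connects to the official host name. "blazerid" is a placeholder
# for your own BlazerID; edit it before use.
mkdir -p ~/.ssh
cat >> ~/.ssh/config << 'EOF'
Host cheaha
    HostName cheaha.rc.uab.edu
    User blazerid
EOF
```

After this, running `ssh cheaha` from your terminal is equivalent to `ssh blazerid@cheaha.rc.uab.edu`.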
&lt;br /&gt;
== Scheduling Framework ==&lt;br /&gt;
&lt;br /&gt;
[http://slurm.schedmd.com/ Slurm] is a queue management system and stands for Simple Linux Utility for Resource Management. Slurm was developed at the Lawrence Livermore National Lab and currently runs some of the largest compute clusters in the world. '''[[Slurm]]''' is now the primary job manager on Cheaha; it replaces Sun Grid Engine (SGE), the job manager used previously.&lt;br /&gt;
&lt;br /&gt;
Slurm is similar in many ways to GridEngine or most other queue systems. You write a batch script then submit it to the queue manager (scheduler). The queue manager then schedules your job to run on the queue (or '''partition''' in Slurm parlance) that you designate. Below we will provide an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress.&lt;br /&gt;
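To make the workflow above concrete, here is a minimal sketch of writing a batch script and submitting it. The job name, partition, and resource values below are illustrative placeholders, not recommendations; consult the [[Slurm]] page for the actual partitions and limits on Cheaha.

```shell
# Write a minimal Slurm batch script to a file. The partition and
# resource values below are illustrative placeholders.
cat > hello_job.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=short
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=1G
#SBATCH --time=00:10:00
#SBATCH --output=%x_%j.out
srun echo "Hello from $(hostname)"
EOF
# Submit with:          sbatch hello_job.sh
# Monitor progress with: squeue -u $USER
```

The scheduler queues the job until the requested resources are free on the chosen partition, then runs the script body on an allocated node.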
&lt;br /&gt;
== Support ==&lt;br /&gt;
&lt;br /&gt;
Operational support for Cheaha is provided by the Research Computing group in UAB IT.  For questions regarding the operational status of Cheaha please send your request to [mailto:support@listserv.uab.edu support@listserv.uab.edu].  As a user of Cheaha you will automatically be subscribed to the hpc-announce email list.  This subscription is mandatory for all users of Cheaha.  It is our way of communicating important information regarding Cheaha to you.  The traffic on this list is restricted to official communication and has a very low volume.&lt;br /&gt;
&lt;br /&gt;
We have limited capacity, however, to support non-operational issues such as &amp;quot;How do I write a job script&amp;quot; or &amp;quot;How do I compile a program&amp;quot;.  For such requests, you may find it more fruitful to send your questions to the hpc-users email list and request help from our peers in the HPC community at UAB.   As with all mailing lists, please observe [http://lifehacker.com/5473859/basic-etiquette-for-email-lists-and-forums common mailing list etiquette].&lt;br /&gt;
&lt;br /&gt;
Finally, please remember that as you learned about HPC from others, it becomes part of your responsibility to help others on their quest.  You should update this documentation or respond to the mailing list requests of others. &lt;br /&gt;
&lt;br /&gt;
You can subscribe to hpc-users by sending an email to:&lt;br /&gt;
&lt;br /&gt;
[mailto:sympa@vo.uabgrid.uab.edu?subject=subscribe%20hpc-users  sympa@vo.uabgrid.uab.edu with the subject ''subscribe hpc-users''].&lt;br /&gt;
&lt;br /&gt;
You can unsubscribe from hpc-users by sending an email to:&lt;br /&gt;
&lt;br /&gt;
[mailto:sympa@vo.uabgrid.uab.edu?subject=unsubscribe%20hpc-users  sympa@vo.uabgrid.uab.edu with the subject ''unsubscribe hpc-users''].&lt;br /&gt;
&lt;br /&gt;
You can review archives of the list in the [http://vo.uabgrid.uab.edu/sympa/arc/hpc-users web hpc-archives].&lt;br /&gt;
&lt;br /&gt;
If you need help using the list service please send an email to:&lt;br /&gt;
&lt;br /&gt;
[mailto:sympa@vo.uabgrid.uab.edu?subject=help sympa@vo.uabgrid.uab.edu with the subject ''help'']&lt;br /&gt;
&lt;br /&gt;
If you have questions about the operation of the list itself, please send an email to the owners of the list:&lt;br /&gt;
&lt;br /&gt;
[mailto:hpc-users-request@vo.uabgrid.uab.edu hpc-users-request@vo.uabgrid.uab.edu with a subject relevant to your issue with the list]&lt;br /&gt;
&lt;br /&gt;
If you are interested in contributing to the enhancement of HPC features at UAB or would like to talk to other cluster administrators, [mailto:sympa@vo.uabgrid.uab.edu?subject=subscribe%20hpc-dev please join the hpc developers community at UAB].&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Anaconda&amp;diff=6112</id>
		<title>Anaconda</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Anaconda&amp;diff=6112"/>
		<updated>2020-11-23T21:37:08Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: update caveat on source/conda activate usage&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://conda.io/docs/user-guide/overview.html Conda] is a powerful package manager and environment manager. Conda allows you to maintain distinct environments for your different projects, with dependency packages defined and installed for each project.&lt;br /&gt;
&lt;br /&gt;
===Creating a Conda virtual environment===&lt;br /&gt;
First, direct conda to store its files in $USER_DATA to avoid filling up $HOME. Create the '''$HOME/.condarc''' file by running the following code:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;lt;&amp;lt; &amp;quot;EOF&amp;quot; &amp;gt; ~/.condarc&lt;br /&gt;
pkgs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/pkgs&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/envs&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Load one of the conda environments available on Cheaha (Note, starting with Anaconda 2018.12, Anaconda releases changed to using YYYY.MM format for version numbers):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module -t avail Anaconda&lt;br /&gt;
...&lt;br /&gt;
Anaconda3/5.3.0&lt;br /&gt;
Anaconda3/5.3.1&lt;br /&gt;
Anaconda3/2019.10&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load Anaconda3/2019.10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have loaded Anaconda, you can create an environment using the following command (change '''test_env''' to whatever you want to name your environment):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda create --name test_env&lt;br /&gt;
&lt;br /&gt;
Solving environment: done&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: /home/ravi89/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
  added / updated specs:&lt;br /&gt;
    - setuptools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be downloaded:&lt;br /&gt;
&lt;br /&gt;
    package                    |            build&lt;br /&gt;
    ---------------------------|-----------------&lt;br /&gt;
    python-3.7.0               |       h6e4f718_3        30.6 MB&lt;br /&gt;
    wheel-0.32.1               |           py37_0          35 KB&lt;br /&gt;
    setuptools-40.4.3          |           py37_0         556 KB&lt;br /&gt;
    ------------------------------------------------------------&lt;br /&gt;
                                           Total:        31.1 MB&lt;br /&gt;
&lt;br /&gt;
The following NEW packages will be INSTALLED:&lt;br /&gt;
&lt;br /&gt;
    ca-certificates: 2018.03.07-0&lt;br /&gt;
    certifi:         2018.8.24-py37_1&lt;br /&gt;
    libedit:         3.1.20170329-h6b74fdf_2&lt;br /&gt;
    libffi:          3.2.1-hd88cf55_4&lt;br /&gt;
    libgcc-ng:       8.2.0-hdf63c60_1&lt;br /&gt;
    libstdcxx-ng:    8.2.0-hdf63c60_1&lt;br /&gt;
    ncurses:         6.1-hf484d3e_0&lt;br /&gt;
    openssl:         1.0.2p-h14c3975_0&lt;br /&gt;
    pip:             10.0.1-py37_0&lt;br /&gt;
    python:          3.7.0-h6e4f718_3&lt;br /&gt;
    readline:        7.0-h7b6447c_5&lt;br /&gt;
    setuptools:      40.4.3-py37_0&lt;br /&gt;
    sqlite:          3.25.2-h7b6447c_0&lt;br /&gt;
    tk:              8.6.8-hbc83047_0&lt;br /&gt;
    wheel:           0.32.1-py37_0&lt;br /&gt;
    xz:              5.2.4-h14c3975_4&lt;br /&gt;
    zlib:            1.2.11-ha838bed_2&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&lt;br /&gt;
Downloading and Extracting Packages&lt;br /&gt;
python-3.7.0         | 30.6 MB   | ########################################################################### | 100%&lt;br /&gt;
wheel-0.32.1         | 35 KB     | ########################################################################### | 100%&lt;br /&gt;
setuptools-40.4.3    | 556 KB    | ########################################################################### | 100%&lt;br /&gt;
Preparing transaction: done&lt;br /&gt;
Verifying transaction: done&lt;br /&gt;
Executing transaction: done&lt;br /&gt;
#&lt;br /&gt;
# To activate this environment, use:&lt;br /&gt;
# &amp;gt; source activate test_env&lt;br /&gt;
#&lt;br /&gt;
# To deactivate an active environment, use:&lt;br /&gt;
# &amp;gt; source deactivate&lt;br /&gt;
#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also specify the packages that you want to install in the conda virtual environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda create --name test_env PACKAGE_NAME&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Listing all your conda virtual environments===&lt;br /&gt;
In case you forget the name of your virtual environments, you can list all your virtual environments by running '''conda env list'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda env list&lt;br /&gt;
# conda environments:&lt;br /&gt;
#&lt;br /&gt;
jupyter_test             /home/ravi89/.conda/envs/jupyter_test&lt;br /&gt;
modeller                 /home/ravi89/.conda/envs/modeller&lt;br /&gt;
psypy3                   /home/ravi89/.conda/envs/psypy3&lt;br /&gt;
test                     /home/ravi89/.conda/envs/test&lt;br /&gt;
test_env                 /home/ravi89/.conda/envs/test_env&lt;br /&gt;
test_pytorch             /home/ravi89/.conda/envs/test_pytorch&lt;br /&gt;
tomopy                   /home/ravi89/.conda/envs/tomopy&lt;br /&gt;
base                  *  /share/apps/rc/software/Anaconda3/5.2.0&lt;br /&gt;
DeepNLP                  /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
ubrite-jupyter-base-1.0     /share/apps/rc/software/Anaconda3/5.2.0/envs/ubrite-jupyter-base-1.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: The virtual environment with an asterisk (*) next to it is the one that is currently active.&lt;br /&gt;
&lt;br /&gt;
===Activating a conda virtual environment===&lt;br /&gt;
You can activate your virtual environment for use by running '''source activate''' followed by '''conda activate ENV_NAME'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ source activate&lt;br /&gt;
$ conda activate test_env&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE: Your shell prompt will also include the name of the virtual environment that you activated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT!'''&lt;br /&gt;
&lt;br /&gt;
The following only applies to versions prior to 2019.10. '''source activate &amp;lt;env&amp;gt;''' is not idempotent. Using it twice with the same environment in a given session can lead to unexpected behavior. The recommended workflow is to use '''source activate''' to source the '''conda activate''' script, followed by '''conda activate &amp;lt;env&amp;gt;'''.&lt;br /&gt;
&lt;br /&gt;
From version 2019.10 and on, simply use '''conda activate &amp;lt;env&amp;gt;'''.&lt;br /&gt;
&lt;br /&gt;
===Locate and install packages===&lt;br /&gt;
Conda allows you to search for packages that you want to install:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ conda search BeautifulSoup4&lt;br /&gt;
Loading channels: done&lt;br /&gt;
# Name                  Version           Build  Channel&lt;br /&gt;
beautifulsoup4            4.4.0          py27_0  pkgs/free&lt;br /&gt;
beautifulsoup4            4.4.0          py34_0  pkgs/free&lt;br /&gt;
beautifulsoup4            4.4.0          py35_0  pkgs/free&lt;br /&gt;
...&lt;br /&gt;
beautifulsoup4            4.6.3          py35_0  pkgs/main&lt;br /&gt;
beautifulsoup4            4.6.3          py36_0  pkgs/main&lt;br /&gt;
beautifulsoup4            4.6.3          py37_0  pkgs/main&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: Search is case-insensitive&lt;br /&gt;
&lt;br /&gt;
You can install the packages in conda environment using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ conda install beautifulsoup4&lt;br /&gt;
Solving environment: done&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: /home/ravi89/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
  added / updated specs:&lt;br /&gt;
    - beautifulsoup4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be downloaded:&lt;br /&gt;
&lt;br /&gt;
    package                    |            build&lt;br /&gt;
    ---------------------------|-----------------&lt;br /&gt;
    beautifulsoup4-4.6.3       |           py37_0         138 KB&lt;br /&gt;
&lt;br /&gt;
The following NEW packages will be INSTALLED:&lt;br /&gt;
&lt;br /&gt;
    beautifulsoup4: 4.6.3-py37_0&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Downloading and Extracting Packages&lt;br /&gt;
beautifulsoup4-4.6.3 | 138 KB    | ########################################################################### | 100%&lt;br /&gt;
Preparing transaction: done&lt;br /&gt;
Verifying transaction: done&lt;br /&gt;
Executing transaction: done&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Deactivating your virtual environment===&lt;br /&gt;
You can deactivate your virtual environment using '''source deactivate'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ source deactivate&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sharing an environment===&lt;br /&gt;
You may want to share your environment with someone for testing or other purposes. Sharing the environment file for your virtual environment is the most straightforward method; it allows the other person to quickly create an environment identical to yours.&lt;br /&gt;
====Export environment====&lt;br /&gt;
* Activate the virtual environment that you want to export.&lt;br /&gt;
* Export an environment.yml file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export -n test_env &amp;gt; environment.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Now you can send the recently created environment.yml file to the other person.&lt;br /&gt;
&lt;br /&gt;
====Create a virtual environment using environment.yml====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env create -f environment.yml -n test_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
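For reference, here is a hand-written sketch of what a minimal environment.yml might contain. The environment name and package versions are illustrative placeholders; a file exported with '''conda env export''' will list every installed package with exact versions.

```shell
# Write a minimal environment.yml by hand; the name and package
# versions below are illustrative placeholders.
cat > environment.yml << 'EOF'
name: test_env
dependencies:
  - python=3.7
  - beautifulsoup4
EOF
# Then recreate the environment with:
#   conda env create -f environment.yml -n test_env
```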
&lt;br /&gt;
===Delete a conda virtual environment===&lt;br /&gt;
You can use the '''remove''' parameter of conda to delete a conda virtual environment that you don't need:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda remove --name test_env --all&lt;br /&gt;
&lt;br /&gt;
Remove all packages in environment /home/ravi89/.conda/envs/test_env:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: /home/ravi89/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be REMOVED:&lt;br /&gt;
&lt;br /&gt;
    beautifulsoup4:  4.6.3-py37_0&lt;br /&gt;
    ca-certificates: 2018.03.07-0&lt;br /&gt;
    certifi:         2018.8.24-py37_1&lt;br /&gt;
    libedit:         3.1.20170329-h6b74fdf_2&lt;br /&gt;
    libffi:          3.2.1-hd88cf55_4&lt;br /&gt;
    libgcc-ng:       8.2.0-hdf63c60_1&lt;br /&gt;
    libstdcxx-ng:    8.2.0-hdf63c60_1&lt;br /&gt;
    ncurses:         6.1-hf484d3e_0&lt;br /&gt;
    openssl:         1.0.2p-h14c3975_0&lt;br /&gt;
    pip:             10.0.1-py37_0&lt;br /&gt;
    python:          3.7.0-h6e4f718_3&lt;br /&gt;
    readline:        7.0-h7b6447c_5&lt;br /&gt;
    setuptools:      40.4.3-py37_0&lt;br /&gt;
    sqlite:          3.25.2-h7b6447c_0&lt;br /&gt;
    tk:              8.6.8-hbc83047_0&lt;br /&gt;
    wheel:           0.32.1-py37_0&lt;br /&gt;
    xz:              5.2.4-h14c3975_4&lt;br /&gt;
    zlib:            1.2.11-ha838bed_2&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Moving conda directory===&lt;br /&gt;
As you build new conda environments, you may find that they take up a lot of space in your $HOME directory. Here are two methods to relocate them:&lt;br /&gt;
&lt;br /&gt;
Method 1: Move a pre-existing conda directory and create a symlink&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
mv ~/.conda $USER_DATA/&lt;br /&gt;
ln -s $USER_DATA/.conda .conda&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Method 2: Create a &amp;quot;$HOME/.condarc&amp;quot; file in the $HOME directory by running the following code&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;lt;&amp;lt; &amp;quot;EOF&amp;quot; &amp;gt; ~/.condarc&lt;br /&gt;
pkgs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/pkgs&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/envs&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Slurm&amp;diff=6073</id>
		<title>Slurm</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Slurm&amp;diff=6073"/>
		<updated>2020-04-16T23:18:25Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://slurm.schedmd.com/ Slurm] is a queue management system and stands for Simple Linux Utility for Resource Management. Slurm was developed at the Lawrence Livermore National Lab and currently runs some of the largest compute clusters in the world. Slurm is now the primary job manager on Cheaha; it replaces Sun Grid Engine (SGE), the job manager used previously.&lt;br /&gt;
&lt;br /&gt;
Slurm is similar in many ways to GridEngine or most other queue systems. You write a batch script then submit it to the queue manager (scheduler). The queue manager then schedules your job to run on the queue (or '''partition''' in Slurm parlance) that you designate. Below we will provide an outline of how to submit jobs to Slurm, how Slurm decides when to schedule your job, and how to monitor progress.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General Slurm Documentation ==&lt;br /&gt;
The primary source for documentation on Slurm usage and commands can be found at the [http://slurm.schedmd.com/ Slurm] site. If you Google for Slurm questions, you'll often see the Lawrence Livermore pages as the top hits, but these tend to be outdated.&lt;br /&gt;
&lt;br /&gt;
The [https://slurm.schedmd.com/quickstart.html SLURM QuickStart Guide] provides a very useful overview of how SLURM treats a cluster as a pool of resources which you can allocate to get your work done.  The Example section on that page is a very useful orientation to SLURM environments.&lt;br /&gt;
&lt;br /&gt;
The [http://www.ceci-hpc.be/slurm_tutorial.html SLURM Tutorial at CECI], a European Consortium of HPC sites, provides a very good introduction on submitting single threaded, multi-threaded, and MPI jobs. &lt;br /&gt;
&lt;br /&gt;
A great way to get details on Slurm commands is through the man pages available on the Cheaha cluster. For example, if you type the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
man sbatch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you'll get the manual page for the sbatch command.&lt;br /&gt;
&lt;br /&gt;
Cheatsheets for [https://github.com/wwarriner/slurm_cheatsheets/blob/master/sacct_cheat_sheet.pdf &amp;lt;code&amp;gt;sacct&amp;lt;/code&amp;gt;] and [https://github.com/wwarriner/slurm_cheatsheets/blob/master/sbatch_cheat_sheet.pdf &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;] are available at [https://github.com/wwarriner/slurm_cheatsheets GitHub]. These cheatsheets contain some of the more commonly used flags and parameters for the two commands.&lt;br /&gt;
&lt;br /&gt;
== Slurm Partitions ==&lt;br /&gt;
Cheaha has the following Slurm partitions (analogous to SGE queues) defined; the lower the number, the higher the priority.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Jobs '''must request''' the appropriate partition (e.g. ''--partition=short'') to satisfy the job's resource requests (maximum runtime, number of compute nodes, etc.)&lt;br /&gt;
{{Slurm_Partitions}}&lt;br /&gt;
&lt;br /&gt;
== Logging on and Running Jobs from the command line ==&lt;br /&gt;
Once you've gone through the [https://docs.uabgrid.uab.edu/wiki/Cheaha_GettingStarted#Access_.28Cluster_Account_Request.29 account setup procedure] and obtained a suitable [https://docs.uabgrid.uab.edu/wiki/Cheaha_GettingStarted#Client_Configuration terminal application], you can login to the Cheaha system via ssh&lt;br /&gt;
&lt;br /&gt;
  ssh '''BLAZERID'''@cheaha.rc.uab.edu&lt;br /&gt;
&lt;br /&gt;
Alternatively, '''existing users''' could follow these [https://docs.uabgrid.uab.edu/wiki/SSH_Key_Authentication instructions to add SSH keys] and access the new system.&lt;br /&gt;
&lt;br /&gt;
Cheaha (new hardware) runs the CentOS 7 version of the Linux operating system and commands are run under the &amp;quot;bash&amp;quot; shell (the default shell). There are a number of Linux and [http://www.gnu.org/software/bash/manual/bashref.html bash references], [http://cli.learncodethehardway.org/bash_cheat_sheet.pdf cheat sheets] and [http://www.tldp.org/LDP/Bash-Beginners-Guide/html/ tutorials] available on the web.&lt;br /&gt;
&lt;br /&gt;
== Typical Workflow ==&lt;br /&gt;
* Stage data to $USER_SCRATCH (your scratch directory)&lt;br /&gt;
* Determine how to run your code in &amp;quot;batch&amp;quot; mode. Batch mode typically means the ability to run it from the command line without requiring any interaction from the user.&lt;br /&gt;
* Identify the appropriate resources needed to run the job. The following are mandatory resource requests for all jobs on Cheaha:&lt;br /&gt;
** Number of processor cores required by the job&lt;br /&gt;
** Maximum memory (RAM) required per core&lt;br /&gt;
** Maximum runtime&lt;br /&gt;
* Write a job script specifying queuing system parameters, resource requests, and commands to run program&lt;br /&gt;
* Submit script to queuing system (sbatch script.job)&lt;br /&gt;
* Monitor job (squeue)&lt;br /&gt;
* Review the results and resubmit as necessary&lt;br /&gt;
* Clean up the scratch directory by moving or deleting the data off of the cluster&lt;br /&gt;
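As a minimal sketch of the workflow above (file and script names are illustrative, and the submit/monitor steps only work on the cluster, so they are shown commented out):&lt;br /&gt;

```shell
# Use a temporary directory to stand in for your scratch space in this sketch.
workdir=$(mktemp -d)

# Write a minimal job script with a heredoc; 'EOF' is quoted so nothing expands.
cat > "$workdir/myjob.sh" << 'EOF'
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=100
#SBATCH --time=10:00
#SBATCH --partition=express
srun hostname
EOF

# On Cheaha you would then submit and monitor the job:
# sbatch "$workdir/myjob.sh"
# squeue -u $USER
```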
&lt;br /&gt;
== Slurm Job Types ==&lt;br /&gt;
=== Jupyter Job ===&lt;br /&gt;
Cheaha can be used with [[Jupyter]] notebooks.&lt;br /&gt;
&lt;br /&gt;
=== Batch Job ===&lt;br /&gt;
'''TODO: ''' provide an explanation of what makes a batch job and why use that vs an interactive job&lt;br /&gt;
&lt;br /&gt;
For additional information on the '''sbatch''' command execute '''man sbatch''' at the command line to view the manual.&lt;br /&gt;
&lt;br /&gt;
==== Example Batch Job Script ====&lt;br /&gt;
A job consists of '''resource requests''' and '''tasks'''. The Slurm job scheduler interprets lines beginning with '''#SBATCH''' as Slurm arguments. In this example, the job requests to run a single task.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Jobs '''must request''' the appropriate partition (e.g. ''--partition=short'') to satisfy the job's resource requests (maximum runtime, number of compute nodes, etc.)&lt;br /&gt;
&amp;lt;pre&amp;gt;#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=test&lt;br /&gt;
#SBATCH --output=res.out&lt;br /&gt;
#SBATCH --error=res.err&lt;br /&gt;
#&lt;br /&gt;
# Number of tasks needed for this job. Generally, used with MPI jobs&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --partition=express&lt;br /&gt;
#&lt;br /&gt;
# Time format = HH:MM:SS, DD-HH:MM:SS&lt;br /&gt;
#SBATCH --time=10:00&lt;br /&gt;
#&lt;br /&gt;
# Number of CPUs allocated to each task. &lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#&lt;br /&gt;
# Minimum memory required per allocated CPU, in megabytes. &lt;br /&gt;
#SBATCH --mem-per-cpu=100&lt;br /&gt;
#&lt;br /&gt;
# Send mail to the email address when the job fails&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --mail-user=YOUR_EMAIL_ADDRESS&lt;br /&gt;
&lt;br /&gt;
#Set your environment here&lt;br /&gt;
&lt;br /&gt;
#Run your commands here&lt;br /&gt;
srun hostname&lt;br /&gt;
srun sleep 60&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[https://docs.uabgrid.uab.edu/wiki/Cheaha_GettingStarted#Sample_Job_Scripts Click here] for more example SLURM job scripts.&lt;br /&gt;
&lt;br /&gt;
=== Interactive Job ===&lt;br /&gt;
The login node (the host you connect to when you SSH into Cheaha) is intended for submitting jobs and for light preparatory work on job scripts. '''Do not run heavy computations on the login node'''. If you have a heavier workload to prepare for a batch job (e.g. compiling code or other manipulations of data), or your compute application requires interactive control, you should request a dedicated interactive node for this work.&lt;br /&gt;
&lt;br /&gt;
Interactive resources are requested by submitting an &amp;quot;interactive&amp;quot; job to the scheduler. Interactive jobs will provide you a command line on a compute resource that you can use just like you would the command line on the login node. The difference is that the scheduler has dedicated the requested resources to your job and you can run your interactive commands without having to worry about impacting other users on the login node.&lt;br /&gt;
&lt;br /&gt;
Command-line interactive jobs are requested with the '''srun''' command. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command requests 4 cores (--cpus-per-task) for a single task (--ntasks), with 4 GB of RAM per CPU (--mem-per-cpu), for 8 hours (--time).&lt;br /&gt;
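As a sanity check when sizing such a request, the total memory footprint is cpus-per-task times mem-per-cpu; a quick arithmetic sketch:&lt;br /&gt;

```shell
# Total memory for the request above: 4 CPUs at 4096 MB each.
cpus=4
mem_per_cpu=4096
echo "total MB: $((cpus * mem_per_cpu))"
```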
&lt;br /&gt;
More advanced interactive scenarios to support graphical applications are available using [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session VNC] or X11 tunneling (e.g. [http://www.uab.edu/it/software X-Win32 2014 for Windows]).&lt;br /&gt;
&lt;br /&gt;
Interactive jobs that run a graphical application are requested with the '''sinteractive''' command, via a '''Terminal''' in your VNC window.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sinteractive --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Requesting GPUs====&lt;br /&gt;
&lt;br /&gt;
To request an interactive session on one of the GPU nodes (c0089-c0092 have K80s; c0097-c0114 have P100s), add the --gres parameter to the 'srun' or 'sinteractive' command. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;&amp;quot; &amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;&amp;quot; &amp;gt;&lt;br /&gt;
sinteractive --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' &lt;br /&gt;
* If you want to use more than one GPU on the node, please increase the value in --gres=gpu:[1-4]&lt;br /&gt;
* To use the P100s, use the 'pascalnodes' partition; for K80s, use one of the express, short, medium, or long partitions.&lt;br /&gt;
* To request an interactive session using a single GPU, say for code development, you can use the following syntax&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 sinteractive --partition=pascalnodes --gres=gpu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPI Job ===&lt;br /&gt;
'''TODO add MPI information and a job example'''&lt;br /&gt;
&lt;br /&gt;
=== OpenMP / SMP Job ===&lt;br /&gt;
[https://en.wikipedia.org/wiki/OpenMP OpenMP / SMP] jobs are those that use multiple CPU cores on a single compute node.&lt;br /&gt;
&lt;br /&gt;
It is very important to properly structure an SMP job to ensure that the requested CPU cores are assigned to the same compute node. The following example requests 4 CPU cores by setting '''ntasks''' to '''1''' and '''cpus-per-task''' to '''4'''.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --partition=short \&lt;br /&gt;
        --ntasks=1 \&lt;br /&gt;
        --cpus-per-task=4 \&lt;br /&gt;
        --mem-per-cpu=1024 \&lt;br /&gt;
        --time=5:00:00 \&lt;br /&gt;
        --job-name=rsync \&lt;br /&gt;
        --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
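Inside such a session, a multi-threaded program still needs to be told how many threads to use. A common pattern is to set OMP_NUM_THREADS from Slurm's SLURM_CPUS_PER_TASK variable; the fallback value below is so the sketch also runs outside a job:&lt;br /&gt;

```shell
# Slurm exports SLURM_CPUS_PER_TASK inside a job; fall back to 4 here so the
# sketch runs anywhere.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-4}"
echo "OpenMP will use $OMP_NUM_THREADS threads"
```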
&lt;br /&gt;
=== Job Dependencies ===&lt;br /&gt;
&lt;br /&gt;
It is also possible to link job scripts using job dependencies. Visit the following git repository for more detailed information and sample scripts: https://gitlab.rc.uab.edu/rc-training-sessions/job-dependency&lt;br /&gt;
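The basic pattern is to capture the first job's ID and pass it to --dependency on the second submission. A sketch (job IDs and script names are illustrative, and the sbatch calls themselves only work on the cluster):&lt;br /&gt;

```shell
# On the cluster, 'sbatch --parsable first_job.sh' prints just the job ID;
# it is hardcoded here so the sketch runs anywhere.
jobid=12345
# The second job starts only after the first completes successfully (afterok).
cmd="sbatch --dependency=afterok:${jobid} second_job.sh"
echo "$cmd"
```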
&lt;br /&gt;
== Job Status ==&lt;br /&gt;
&lt;br /&gt;
=== SQUEUE ===&lt;br /&gt;
To check your job status, you can use the following command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following fields are displayed when you run '''squeue''':&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;&amp;quot;&amp;gt;&lt;br /&gt;
JOBID - ID assigned to your job by Slurm scheduler&lt;br /&gt;
PARTITION - Partition your job gets, depends upon time requested (express(max 2 hrs), short(max 12 hrs), medium(max 50 hrs), long(max 150 hrs), sinteractive(0-2 hrs))&lt;br /&gt;
NAME - JOB name given by user&lt;br /&gt;
USER - User who started the job&lt;br /&gt;
ST - State your job is in. The typical states are PENDING (PD), RUNNING(R), SUSPENDED(S), COMPLETING(CG), and COMPLETED(CD)&lt;br /&gt;
TIME - Time for which your job has been running&lt;br /&gt;
NODES - Number of nodes your job is running on&lt;br /&gt;
NODELIST - Node(s) on which the job is running&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
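Since squeue output is plain columns, it is easy to summarize with standard tools; for example, counting jobs per state (the sample lines below are illustrative):&lt;br /&gt;

```shell
# Sample squeue-style lines: JOBID PARTITION NAME USER ST TIME NODES NODELIST
sample='123 short job1 user R 0:10 1 c0001
124 short job2 user PD 0:00 1 (Priority)'
# Tally the 5th column (job state) and print one count per state.
echo "$sample" | awk '{count[$5]++} END {for (s in count) print s, count[s]}' | sort
```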
&lt;br /&gt;
For more details on '''squeue''', go [http://slurm.schedmd.com/squeue.html here].&lt;br /&gt;
&lt;br /&gt;
=== SSTAT ===&lt;br /&gt;
The '''sstat''' command shows status and metric information for a running job.&lt;br /&gt;
&lt;br /&gt;
'''NOTE: the job steps must be executed using ''srun''; otherwise ''sstat'' will not display useful output'''&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;&amp;quot;&amp;gt;&lt;br /&gt;
[rcs@login001 ~]$ sstat 256483&lt;br /&gt;
       JobID  MaxVMSize  MaxVMSizeNode  MaxVMSizeTask  AveVMSize     MaxRSS MaxRSSNode MaxRSSTask     AveRSS MaxPages MaxPagesNode   MaxPagesTask   AvePages     MinCPU MinCPUNode MinCPUTask     AveCPU   NTasks AveCPUFreq ReqCPUFreqMin ReqCPUFreqMax ReqCPUFreqGov ConsumedEnergy  MaxDiskRead MaxDiskReadNode MaxDiskReadTask  AveDiskRead MaxDiskWrite MaxDiskWriteNode MaxDiskWriteTask AveDiskWrite &lt;br /&gt;
------------ ---------- -------------- -------------- ---------- ---------- ---------- ---------- ---------- -------- ------------ -------------- ---------- ---------- ---------- ---------- ---------- -------- ---------- ------------- ------------- ------------- -------------- ------------ --------------- --------------- ------------ ------------ ---------------- ---------------- ------------ &lt;br /&gt;
256483.0       1962728K          c0043              1   1960633K     91920K      c0043          3     91867K      67K        c0043              3        50K  00:00.000      c0043          0  00:00.000        8      1.20G       Unknown       Unknown       Unknown              0           1M           c0043               5           1M        0.34M            c0043                5        0.34M &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more details on '''sstat''', go [http://slurm.schedmd.com/sstat.html here].&lt;br /&gt;
&lt;br /&gt;
=== SCONTROL ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scontrol show jobid -dd 123&lt;br /&gt;
&lt;br /&gt;
JobId=123 JobName=SLI&lt;br /&gt;
   UserId=rcuser(1000) GroupId=rcuser(1000)&lt;br /&gt;
   Priority=4294898073 Nice=0 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   DerivedExitCode=0:0&lt;br /&gt;
   RunTime=06:27:02 TimeLimit=08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2016-09-12T14:40:20 EligibleTime=2016-09-12T14:40:20&lt;br /&gt;
   StartTime=2016-09-12T14:40:20 EndTime=2016-09-12T22:40:21&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=medium AllocNode:Sid=login001:123&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=c0003&lt;br /&gt;
   BatchHost=c0003&lt;br /&gt;
   NumNodes=1 NumCPUs=24 CPUs/Task=1 ReqB:S:C:T=0:0:*:*&lt;br /&gt;
   TRES=cpu=24,mem=10000,node=1&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*&lt;br /&gt;
     Nodes=c0003 CPU_IDs=0-23 Mem=10000&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=10000M MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/share/apps/rc/git/rc-sched-scripts/bin/_interactive&lt;br /&gt;
   WorkDir=/scratch/user/rcuser/work/other/rhea/Gray/MERGED&lt;br /&gt;
   StdErr=/dev/null&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/dev/null&lt;br /&gt;
   Power= SICP=0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Job History ==&lt;br /&gt;
TODO: Provide some examples of using the '''sacct''' or our wrapper '''rc-sacct''' to view historical information.&lt;br /&gt;
&lt;br /&gt;
The example below uses the rc-sacct wrapper script; for comparison, here is the equivalent sacct command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sacct --starttime 2016-08-30 \&lt;br /&gt;
      --allusers \&lt;br /&gt;
      --format=User,JobID,Jobname,partition,state,time,start,end,elapsed,MaxRss,MaxVMSize,nnodes,ncpus,nodelist&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;&amp;quot;&amp;gt;&lt;br /&gt;
$ rc-sacct --allusers --starttime 2016-08-30&lt;br /&gt;
&lt;br /&gt;
     User        JobID    JobName  Partition      State  Timelimit               Start                 End    Elapsed     MaxRSS  MaxVMSize   NNodes      NCPUS        NodeList&lt;br /&gt;
--------- ------------ ---------- ---------- ---------- ---------- ------------------- ------------------- ---------- ---------- ---------- -------- ---------- ---------------&lt;br /&gt;
 kxxxxxxx 34308        Connectom+ interacti+    PENDING   08:00:00             Unknown             Unknown   00:00:00                              1          4   None assigned&lt;br /&gt;
 kxxxxxxx 34310        Connectom+ interacti+    PENDING   08:00:00             Unknown             Unknown   00:00:00                              1          4   None assigned&lt;br /&gt;
 dxxxxxxx 35927         PK_htseq1     medium  COMPLETED 2-00:00:00 2016-08-30T09:21:33 2016-08-30T10:06:25   00:44:52                              1          4       c0005&lt;br /&gt;
          35927.batch       batch             COMPLETED            2016-08-30T09:21:33 2016-08-30T10:06:25   00:44:52    307704K    718152K        1          4       c0005&lt;br /&gt;
 bxxxxxxx 35928                SI     medium    TIMEOUT   12:00:00 2016-08-30T09:36:04 2016-08-30T21:36:42   12:00:38                              1          1       c0006&lt;br /&gt;
          35928.batch       batch                FAILED            2016-08-30T09:36:04 2016-08-30T21:36:43   12:00:39     31400K    286532K        1          1       c0006&lt;br /&gt;
          35928.0        hostname             COMPLETED            2016-08-30T09:36:16 2016-08-30T09:36:17   00:00:01      1112K    207252K        1          1       c0006&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional information about the sacct command can be found by running '''man sacct''' or [http://slurm.schedmd.com/sacct.html here].&lt;br /&gt;
&lt;br /&gt;
The rc-sacct wrapper script supports the following arguments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ rc-sacct --help&lt;br /&gt;
&lt;br /&gt;
  Copyright (c) 2016 Mike Hanby, University of Alabama at Birmingham IT Research Computing.&lt;br /&gt;
&lt;br /&gt;
  rc-sacct - version 1.0.0&lt;br /&gt;
&lt;br /&gt;
  Run sacct to display history in a nicely formatted output.&lt;br /&gt;
&lt;br /&gt;
    -r, --starttime                  HH:MM[:SS] [AM|PM]&lt;br /&gt;
                                     MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]&lt;br /&gt;
                                     MM/DD[/YY]-HH:MM[:SS]&lt;br /&gt;
                                     YYYY-MM-DD[THH:MM[:SS]]&lt;br /&gt;
    -a, --allusers                   Display history for all users&lt;br /&gt;
    -u, --user user_list             Display history for all users in the comma-separated user list&lt;br /&gt;
    -f, --format a,b,c               Comma separated list of columns: i.e. --format jobid,elapsed,ncpus,ntasks,state&lt;br /&gt;
        --debug                      Display additional output like internal structures&lt;br /&gt;
    -?, -h, --help                   Display this help message&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Slurm Variables ==&lt;br /&gt;
The following is a list of useful Slurm environment variables (click here for the [http://slurm.schedmd.com/srun.html full list]):&lt;br /&gt;
{{Slurm_Variables}}&lt;br /&gt;
&lt;br /&gt;
== SGE - Slurm ==&lt;br /&gt;
&lt;br /&gt;
This section shows Slurm and SGE equivalent commands&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   SGE                   Slurm  &lt;br /&gt;
---------             ------------&lt;br /&gt;
  qsub                  sbatch   &lt;br /&gt;
  qlogin                sinteractive&lt;br /&gt;
  qdel                   scancel&lt;br /&gt;
  qstat                  squeue&lt;br /&gt;
  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To get more information about individual commands, run '''man SLURM_COMMAND'''. For an extensive list of Slurm-SGE equivalent commands, go [https://docs.uabgrid.uab.edu/wiki/SGE-SLURM here] or see Slurm's official [http://slurm.schedmd.com/rosetta.pdf documentation]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6071</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6071"/>
		<updated>2020-04-01T20:21:10Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
= SSH =&lt;br /&gt;
&lt;br /&gt;
First, SSH into the VM as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing a file on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now SSH with a remote command and the &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; option (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
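Note that attaching fails if no tmux session exists yet; tmux's &amp;lt;code&amp;gt;new-session -A&amp;lt;/code&amp;gt; attaches to a named session if it exists and creates it otherwise, folding the create and attach steps into one command. The sketch below only builds the command string, so it runs without a remote host:&lt;br /&gt;

```shell
# 'tmux new-session -A -s main' attaches to session 'main' or creates it.
# Built as a string here; on a real system you would run the command directly.
session=main
cmd="ssh -t blazerid@cheaha.rc.uab.edu \"tmux new-session -A -s ${session}\""
echo "$cmd"
```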
&lt;br /&gt;
or you can set this up in your SSH config (&amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
  RequestTTY force&lt;br /&gt;
  RemoteCommand tmux attach&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this setting, you can connect with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh cheaha&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Every time you finish editing on the remote host, either Cheaha or the VM, use the detach command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== With tmux Control Mode==&lt;br /&gt;
&lt;br /&gt;
Note: this only works with iTerm2 on macOS.&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;-CC&amp;lt;/code&amp;gt; option with tmux:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will launch a new iTerm2 window containing your tmux session.&lt;br /&gt;
You can create windows and panes with iTerm2 shortcuts.&lt;br /&gt;
&lt;br /&gt;
== [[Visual Studio Code]] ==&lt;br /&gt;
Visual Studio Code has official extensions available to enable and facilitate remote development. Three extensions are available from the Visual Studio marketplace. They support remote development over SSH, in containers, and using Windows Subsystem for Linux (WSL). More detailed information on their use can be found in the documentation. To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
#: [[File:Remote-editing-vscode-palette.png]]&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-choices.png]]&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-file.png]]&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
=== Important Notes on Proper Usage ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;background:#FF0000&amp;quot;&amp;gt;'''IMPORTANT!'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Remote-SSH extension of Visual Studio Code runs on the login node! All subprocesses spawned by Visual Studio Code will also run on the login node by default. As always, we should avoid using computation-heavy processes on the login node. This Visual Studio Code limitation is known, and is due to the way the extension server code is deployed to Cheaha. [https://github.com/microsoft/vscode-remote-release/issues/1722 Issue #1722] has been opened on the relevant GitHub repository. Please click the thumbs-up emoji on the first post there to increase visibility and priority of the issue.&lt;br /&gt;
&lt;br /&gt;
It is possible to use the integrated terminal as normal with default hotkey &amp;quot;Ctrl + `&amp;quot; (backtick). This terminal behaves like the one on Open OnDemand, or other terminals, and can be used to start jobs as any other terminal.&lt;br /&gt;
&lt;br /&gt;
==== Things to Avoid ====&lt;br /&gt;
&lt;br /&gt;
To minimize impact of remote development, avoid any of the following, or anything else that spawns a process. This is not a comprehensive list!&lt;br /&gt;
* folders containing many files (&amp;gt;~1000 files)&lt;br /&gt;
* folders containing large files (&amp;gt;100 MB or so)&lt;br /&gt;
* debugging&lt;br /&gt;
* accessing or running code within Jupyter notebooks.&lt;br /&gt;
&lt;br /&gt;
==== Working with Many or Large Files ====&lt;br /&gt;
&lt;br /&gt;
If you must access a folder with many files or with large files, it is possible to have Visual Studio Code ignore those files or folders using a filter. Open &amp;quot;.vscode/settings.json&amp;quot; in the open folder and add an object like the following (remember the comma following the previous key-value pair, if any). Each key with the value &amp;quot;true&amp;quot; is ignored by Visual Studio Code when indexing or searching files. The keys use typical glob syntax. In the example below the following are excluded:&lt;br /&gt;
&lt;br /&gt;
* any folder containing a subfolder &amp;quot;datasets&amp;quot; is ignored, along with all contained subfolders and files.&lt;br /&gt;
* the folder &amp;quot;ignore_children&amp;quot; in the root folder along with all contained folders and files.&lt;br /&gt;
* the folder &amp;quot;node_modules&amp;quot; in the root folder along with all contained files.&lt;br /&gt;
* the file &amp;quot;LARGE_DATA.sql&amp;quot; in the root folder&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;files.watcherExclude&amp;quot;: {&lt;br /&gt;
    &amp;quot;**/datasets/**/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;ignore_children/**/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;node_modules/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;LARGE_DATA.sql&amp;quot;: true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6070</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6070"/>
		<updated>2020-04-01T20:18:48Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Visual Studio Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== [[Visual Studio Code]] ==&lt;br /&gt;
Visual Studio Code has official extensions available to enable and facilitate remote development. Three extensions are available from the Visual Studio marketplace. They support remote development over SSH, in containers, and using Windows Subsystem for Linux (WSL). More detailed information on their use can be found in the documentation. To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
#: [[File:Remote-editing-vscode-palette.png]]&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-choices.png]]&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-file.png]]&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
=== Important Notes on Proper Usage ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;background:#FF0000&amp;quot;&amp;gt;'''IMPORTANT!'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Remote-SSH extension of Visual Studio Code runs on the login node! All subprocesses spawned by Visual Studio Code will also run on the login node by default. As always, we should avoid using computation-heavy processes on the login node. This Visual Studio Code limitation is known, and is due to the way the extension server code is deployed to Cheaha. [https://github.com/microsoft/vscode-remote-release/issues/1722 Issue #1722] has been opened on the relevant GitHub repository. Please click the thumbs-up emoji on the first post there to increase visibility and priority of the issue.&lt;br /&gt;
&lt;br /&gt;
It is possible to use the integrated terminal as normal with default hotkey &amp;quot;Ctrl + `&amp;quot; (backtick). This terminal behaves like the one on Open OnDemand, or other terminals, and can be used to start jobs as any other terminal.&lt;br /&gt;
&lt;br /&gt;
==== Things to Avoid ====&lt;br /&gt;
&lt;br /&gt;
To minimize impact of remote development, avoid any of the following, or anything else that spawns a process. This is not a comprehensive list!&lt;br /&gt;
* folders containing many files (&amp;gt;~1000 files)&lt;br /&gt;
* folders containing large files (&amp;gt;100 MB or so)&lt;br /&gt;
* debugging&lt;br /&gt;
* accessing or running code within Jupyter notebooks.&lt;br /&gt;
&lt;br /&gt;
==== Working with Many or Large Files ====&lt;br /&gt;
&lt;br /&gt;
If you must access a folder with many files or with large files, it is possible to have Visual Studio Code ignore those files or folders using a filter. Open &amp;quot;.vscode/settings.json&amp;quot; in the open folder and add an object like the following (remember the comma following the previous key-value pair, if any). Each key with the value &amp;quot;true&amp;quot; is ignored by Visual Studio Code when indexing or searching files. The keys use typical glob syntax. In the example below the following are excluded:&lt;br /&gt;
&lt;br /&gt;
* the contents of any folder named &amp;quot;datasets&amp;quot;, anywhere in the tree, including all subfolders and files.&lt;br /&gt;
* the folder &amp;quot;ignore_children&amp;quot; in the root folder along with all contained folders and files.&lt;br /&gt;
* the folder &amp;quot;node_modules&amp;quot; in the root folder along with all contained files.&lt;br /&gt;
* the file &amp;quot;LARGE_DATA.sql&amp;quot; in the root folder&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;files.watcherExclude&amp;quot;: {&lt;br /&gt;
    &amp;quot;**/datasets/**/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;ignore_children/**/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;node_modules/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;LARGE_DATA.sql&amp;quot;: true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
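&lt;br /&gt;
As a sketch, the filter above can be written into a project and checked for valid JSON before use. The scratch directory and the use of python3 as a JSON checker are assumptions for illustration, not part of the Cheaha setup; note that the snippet above is a fragment meant to be merged into an existing settings.json, so this example wraps it in braces.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: write a minimal .vscode/settings.json containing the
# watcherExclude filter from the text, then confirm it parses as JSON.
# A temporary directory stands in for a real project root.
proj=$(mktemp -d)
mkdir -p "$proj/.vscode"
printf '%s\n' \
  '{' \
  '    "files.watcherExclude": {' \
  '        "**/datasets/**/*": true,' \
  '        "ignore_children/**/*": true,' \
  '        "node_modules/*": true,' \
  '        "LARGE_DATA.sql": true' \
  '    }' \
  '}' > "$proj/.vscode/settings.json"
# json.tool exits nonzero if the file is malformed JSON.
python3 -m json.tool "$proj/.vscode/settings.json" > /dev/null && echo "valid JSON"
```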
&lt;br /&gt;
= SSH =&lt;br /&gt;
&lt;br /&gt;
First, SSH into the remote machine as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing files on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now SSH with a remote command and the special option &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can set this up in your SSH config (&amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
  RequestTTY force&lt;br /&gt;
  RemoteCommand tmux attach&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this setting in place, you can connect with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh cheaha&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
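&lt;br /&gt;
The config entry above can be sketched as a small script. This is only an illustration: the scratch file and the placeholder user name your_blazerid are assumptions, and in practice you would edit ~/.ssh/config directly.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: write the cheaha Host entry to a scratch SSH config and
# verify its fields. A temp file is used so no real config is touched.
cfg=$(mktemp)
printf '%s\n' \
  'Host cheaha' \
  '  HostName cheaha.rc.uab.edu' \
  '  User your_blazerid' \
  '  RequestTTY force' \
  '  RemoteCommand tmux attach' > "$cfg"
# With RequestTTY and RemoteCommand set, "ssh cheaha" allocates a TTY
# and attaches to the remote tmux session in one step.
grep -c '^' "$cfg"   # prints 5 (the number of config lines written)
```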
&lt;br /&gt;
Every time you finish editing on the remote system, whether Cheaha or a VM, use the detach command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== With tmux Control Mode==&lt;br /&gt;
&lt;br /&gt;
Note: this only works with iTerm2 on macOS.&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;-CC&amp;lt;/code&amp;gt; option with tmux:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will launch a new iTerm2 window containing your tmux session.&lt;br /&gt;
You can create windows and panes using iTerm2 shortcuts.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6069</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6069"/>
		<updated>2020-04-01T20:17:49Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Visual Studio Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== [[Visual Studio Code]] ==&lt;br /&gt;
To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
#: [[File:Remote-editing-vscode-palette.png]]&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-choices.png]]&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-file.png]]&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
=== Important Notes on Proper Usage ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;background:#FF0000&amp;quot;&amp;gt;'''IMPORTANT!'''&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Remote-SSH extension of Visual Studio Code runs on the login node! All subprocesses spawned by Visual Studio Code will also run on the login node by default. As always, we should avoid running computation-heavy processes on the login node. This limitation of Visual Studio Code is known, and is due to the way the extension server code is deployed to Cheaha. [https://github.com/microsoft/vscode-remote-release/issues/1722 Issue #1722] has been opened on the relevant GitHub repository. Please click the thumbs-up emoji on the first post there to increase the visibility and priority of the issue.&lt;br /&gt;
&lt;br /&gt;
It is possible to use the integrated terminal as normal with default hotkey &amp;quot;Ctrl + `&amp;quot; (backtick). This terminal behaves like the one on Open OnDemand, or other terminals, and can be used to start jobs as any other terminal.&lt;br /&gt;
&lt;br /&gt;
==== Things to Avoid ====&lt;br /&gt;
&lt;br /&gt;
To minimize the impact of remote development, avoid any of the following, or anything else that spawns a process. This is not a comprehensive list!&lt;br /&gt;
* folders containing many files (&amp;gt;~1000 files)&lt;br /&gt;
* folders containing large files (&amp;gt;100 MB or so)&lt;br /&gt;
* debugging&lt;br /&gt;
* accessing or running code within Jupyter notebooks.&lt;br /&gt;
&lt;br /&gt;
==== Working with Many or Large Files ====&lt;br /&gt;
&lt;br /&gt;
If you must access a folder with many files or with large files, it is possible to have Visual Studio Code ignore those files or folders using a filter. Open &amp;quot;.vscode/settings.json&amp;quot; in the open folder and add an object like the following (remember the comma following the previous key-value pair, if any). Each key with the value &amp;quot;true&amp;quot; is ignored by Visual Studio Code when indexing or searching files. The keys use typical glob syntax. In the example below the following are excluded:&lt;br /&gt;
&lt;br /&gt;
* the contents of any folder named &amp;quot;datasets&amp;quot;, anywhere in the tree, including all subfolders and files.&lt;br /&gt;
* the folder &amp;quot;ignore_children&amp;quot; in the root folder along with all contained folders and files.&lt;br /&gt;
* the folder &amp;quot;node_modules&amp;quot; in the root folder along with all contained files.&lt;br /&gt;
* the file &amp;quot;LARGE_DATA.sql&amp;quot; in the root folder&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;files.watcherExclude&amp;quot;: {&lt;br /&gt;
    &amp;quot;**/datasets/**/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;ignore_children/**/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;node_modules/*&amp;quot;: true,&lt;br /&gt;
    &amp;quot;LARGE_DATA.sql&amp;quot;: true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= SSH =&lt;br /&gt;
&lt;br /&gt;
First, SSH into the remote machine as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing files on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now SSH with a remote command and the special option &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can set this up in your SSH config (&amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
  RequestTTY force&lt;br /&gt;
  RemoteCommand tmux attach&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this setting in place, you can connect with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh cheaha&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Every time you finish editing on the remote system, whether Cheaha or a VM, use the detach command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== With tmux Control Mode==&lt;br /&gt;
&lt;br /&gt;
Note: this only works with iTerm2 on macOS.&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;-CC&amp;lt;/code&amp;gt; option with tmux:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will launch a new iTerm2 window containing your tmux session.&lt;br /&gt;
You can create windows and panes using iTerm2 shortcuts.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6067</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6067"/>
		<updated>2020-04-01T19:46:04Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== [[Visual Studio Code]] ==&lt;br /&gt;
To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
#: [[File:Remote-editing-vscode-palette.png]]&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-choices.png]]&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-file.png]]&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;background:#FF0000&amp;quot;&amp;gt;'''IMPORTANT!'''&amp;lt;/span&amp;gt; The Remote-SSH extension of Visual Studio Code runs on the login node! All subprocesses spawned by Visual Studio Code will also run on the login node by default. This is due to a limitation with the deployment of the extension server code used to transfer data from Cheaha to your local machine. As always, avoid using computation-heavy processes on the login node. This includes working within repositories with many files, with large files, debugging programs, or using the Python extension for accessing or running code within Jupyter notebooks. [https://github.com/microsoft/vscode-remote-release/issues/1722 Issue #1722] has been opened on the relevant GitHub repository. Please click the thumbs-up emoji on the first post there to increase visibility and priority of the issue.&lt;br /&gt;
&lt;br /&gt;
= SSH =&lt;br /&gt;
&lt;br /&gt;
First, SSH into the remote machine as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing files on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now ssh with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can set this up in your SSH config (&amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
  RequestTTY force&lt;br /&gt;
  RemoteCommand tmux attach&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this setting in place, you can connect with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh cheaha&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Every time you finish editing on the remote system, whether Cheaha or a VM, use the detach command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== With tmux Control Mode==&lt;br /&gt;
&lt;br /&gt;
Note: this only works with iTerm2 on macOS.&lt;br /&gt;
&lt;br /&gt;
SSH with a remote command and the special option &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6065</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6065"/>
		<updated>2020-04-01T19:33:43Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
== [[Visual Studio Code]] ==&lt;br /&gt;
To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
#: [[File:Remote-editing-vscode-palette.png]]&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-choices.png]]&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-file.png]]&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
== SSH ==&lt;br /&gt;
&lt;br /&gt;
=== With tmux ===&lt;br /&gt;
&lt;br /&gt;
First, SSH into the remote machine as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing files on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This time, SSH with a remote command and the special option &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6063</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6063"/>
		<updated>2020-04-01T19:31:28Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Visual Studio Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== [[Visual Studio Code]] ==&lt;br /&gt;
To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
#: [[File:Remote-editing-vscode-palette.png]]&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-choices.png]]&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
#: [[File:Remote-editing-vscode-ssh-file.png]]&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
= SSH =&lt;br /&gt;
&lt;br /&gt;
== With tmux ==&lt;br /&gt;
&lt;br /&gt;
First, SSH into the remote machine as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing files on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This time, SSH with a remote command and the special option &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:Remote-editing-vscode-ssh-file.png&amp;diff=6062</id>
		<title>File:Remote-editing-vscode-ssh-file.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:Remote-editing-vscode-ssh-file.png&amp;diff=6062"/>
		<updated>2020-04-01T19:30:49Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Example of ssh config file choices in Visual Studio Code.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Example of ssh config file choices in Visual Studio Code.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:Remote-editing-vscode-ssh-choices.png&amp;diff=6061</id>
		<title>File:Remote-editing-vscode-ssh-choices.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:Remote-editing-vscode-ssh-choices.png&amp;diff=6061"/>
		<updated>2020-04-01T19:27:39Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Example of choices available for Remote-SSH: Connect to Host... in Visual Studio Code.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Example of choices available for Remote-SSH: Connect to Host... in Visual Studio Code.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:Remote-editing-vscode-palette.png&amp;diff=6060</id>
		<title>File:Remote-editing-vscode-palette.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:Remote-editing-vscode-palette.png&amp;diff=6060"/>
		<updated>2020-04-01T19:21:48Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Example of finding Remote-SSH: Connecte to Host.. with Visual Studio Code command palette&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Example of finding Remote-SSH: Connect to Host... with the Visual Studio Code command palette&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Visual_Studio_Code&amp;diff=6059</id>
		<title>Visual Studio Code</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Visual_Studio_Code&amp;diff=6059"/>
		<updated>2020-04-01T19:15:41Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Remote Development ==&lt;br /&gt;
&lt;br /&gt;
Visual Studio Code has official extensions available to enable and facilitate remote development. Three extensions are available from the [https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack Visual Studio marketplace]. They support remote development over SSH, in containers, and using Windows Subsystem for Linux (WSL). More detailed information on their use can be found at the [https://code.visualstudio.com/docs/remote/remote-overview documentation].&lt;br /&gt;
&lt;br /&gt;
To use the Remote SSH extension please see [[Remote Editing#Visual Studio Code|Remote Editing]].&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6058</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6058"/>
		<updated>2020-04-01T19:13:35Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Visual Studio Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== [[Visual Studio Code]] ==&lt;br /&gt;
To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
# Set up an SSH config file&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
= SSH =&lt;br /&gt;
&lt;br /&gt;
== With tmux ==&lt;br /&gt;
&lt;br /&gt;
First, SSH into the remote machine as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing files on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This time, SSH with a remote command and the special option &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6057</id>
		<title>Remote Editing</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Remote_Editing&amp;diff=6057"/>
		<updated>2020-04-01T19:12:48Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Visual Studio Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Visual Studio Code =&lt;br /&gt;
To use the SSH extension to access files on Cheaha, please follow the instructions in the documentation for installation. Once it is installed, follow these instructions to set up the SSH config file. The following assumes SSH is installed locally, and that a config file exists in the usual location for your operating system. The commands assume you have an open Visual Studio Code window.&lt;br /&gt;
&lt;br /&gt;
# Open the command palette, default hotkey &amp;quot;Ctrl + Shift + P&amp;quot;.&lt;br /&gt;
# Locate &amp;quot;Remote-SSH: Connect to Host...&amp;quot; by typing part of the command at the prompt.&lt;br /&gt;
# Set up an SSH config file&lt;br /&gt;
# Click the command in the palette prompt.&lt;br /&gt;
# Click &amp;quot;Configure SSH Hosts...&amp;quot;.&lt;br /&gt;
# Choose the location of the existing config file.&lt;br /&gt;
# In the new editor window that opened, add the following lines then save the file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host cheaha&lt;br /&gt;
  HostName cheaha.rc.uab.edu&lt;br /&gt;
  User &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The text &amp;lt;username&amp;gt; should be replaced by the user name you use to access Cheaha. The &amp;quot;User&amp;quot; line is not necessary but can save some time.&lt;br /&gt;
&lt;br /&gt;
= SSH =&lt;br /&gt;
&lt;br /&gt;
== With tmux ==&lt;br /&gt;
&lt;br /&gt;
First, SSH into the remote machine as normal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh blazerid@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are editing files on a VM rather than Cheaha, make sure tmux is installed on that system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt install tmux&lt;br /&gt;
&lt;br /&gt;
# CentOS&lt;br /&gt;
sudo yum install tmux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, create a tmux session and detach right away.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tmux new&lt;br /&gt;
&lt;br /&gt;
# Ctrl+B then D&lt;br /&gt;
^B d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can exit this ssh connection now.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This time, SSH with a remote command and the special option &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; (force pseudo-terminal allocation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -t blazerid@cheaha.rc.uab.edu &amp;quot;tmux -CC attach&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Vscode&amp;diff=6053</id>
		<title>Vscode</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Vscode&amp;diff=6053"/>
		<updated>2020-04-01T18:47:08Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Redirected page to Visual Studio Code&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Visual Studio Code]]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Visual_Studio_Code&amp;diff=6052</id>
		<title>Visual Studio Code</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Visual_Studio_Code&amp;diff=6052"/>
		<updated>2020-04-01T18:45:10Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Created page with &amp;quot;Visual Studio Code  Stub...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Visual Studio Code&lt;br /&gt;
&lt;br /&gt;
Stub...&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Anaconda&amp;diff=6048</id>
		<title>Anaconda</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Anaconda&amp;diff=6048"/>
		<updated>2020-03-24T19:40:59Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Activating a conda virtual environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://conda.io/docs/user-guide/overview.html Conda] is a powerful package manager and environment manager. Conda allows you to maintain distinct environments for your different projects, with dependency packages defined and installed for each project.&lt;br /&gt;
&lt;br /&gt;
===Creating a Conda virtual environment===&lt;br /&gt;
As a first step, direct conda to store files in $USER_DATA to avoid filling up $HOME. Create the '''$HOME/.condarc''' file by running the following code:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;lt;&amp;lt; &amp;quot;EOF&amp;quot; &amp;gt; ~/.condarc&lt;br /&gt;
pkgs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/pkgs&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/envs&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Load one of the Anaconda modules available on Cheaha:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail Anaconda&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------- /share/apps/rc/modules/all ---------------------------------------------&lt;br /&gt;
Anaconda2/4.0.0 Anaconda2/4.2.0 Anaconda3/4.4.0 Anaconda3/5.0.1 Anaconda3/5.1.0 Anaconda3/5.2.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load Anaconda3/5.2.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once you have loaded Anaconda, you can create an environment using the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda create --name test_env&lt;br /&gt;
Solving environment: done&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: /home/ravi89/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
  added / updated specs:&lt;br /&gt;
    - setuptools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be downloaded:&lt;br /&gt;
&lt;br /&gt;
    package                    |            build&lt;br /&gt;
    ---------------------------|-----------------&lt;br /&gt;
    python-3.7.0               |       h6e4f718_3        30.6 MB&lt;br /&gt;
    wheel-0.32.1               |           py37_0          35 KB&lt;br /&gt;
    setuptools-40.4.3          |           py37_0         556 KB&lt;br /&gt;
    ------------------------------------------------------------&lt;br /&gt;
                                           Total:        31.1 MB&lt;br /&gt;
&lt;br /&gt;
The following NEW packages will be INSTALLED:&lt;br /&gt;
&lt;br /&gt;
    ca-certificates: 2018.03.07-0&lt;br /&gt;
    certifi:         2018.8.24-py37_1&lt;br /&gt;
    libedit:         3.1.20170329-h6b74fdf_2&lt;br /&gt;
    libffi:          3.2.1-hd88cf55_4&lt;br /&gt;
    libgcc-ng:       8.2.0-hdf63c60_1&lt;br /&gt;
    libstdcxx-ng:    8.2.0-hdf63c60_1&lt;br /&gt;
    ncurses:         6.1-hf484d3e_0&lt;br /&gt;
    openssl:         1.0.2p-h14c3975_0&lt;br /&gt;
    pip:             10.0.1-py37_0&lt;br /&gt;
    python:          3.7.0-h6e4f718_3&lt;br /&gt;
    readline:        7.0-h7b6447c_5&lt;br /&gt;
    setuptools:      40.4.3-py37_0&lt;br /&gt;
    sqlite:          3.25.2-h7b6447c_0&lt;br /&gt;
    tk:              8.6.8-hbc83047_0&lt;br /&gt;
    wheel:           0.32.1-py37_0&lt;br /&gt;
    xz:              5.2.4-h14c3975_4&lt;br /&gt;
    zlib:            1.2.11-ha838bed_2&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&lt;br /&gt;
Downloading and Extracting Packages&lt;br /&gt;
python-3.7.0         | 30.6 MB   | ########################################################################### | 100%&lt;br /&gt;
wheel-0.32.1         | 35 KB     | ########################################################################### | 100%&lt;br /&gt;
setuptools-40.4.3    | 556 KB    | ########################################################################### | 100%&lt;br /&gt;
Preparing transaction: done&lt;br /&gt;
Verifying transaction: done&lt;br /&gt;
Executing transaction: done&lt;br /&gt;
#&lt;br /&gt;
# To activate this environment, use:&lt;br /&gt;
# &amp;gt; source activate test_env&lt;br /&gt;
#&lt;br /&gt;
# To deactivate an active environment, use:&lt;br /&gt;
# &amp;gt; source deactivate&lt;br /&gt;
#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also specify the packages that you want to install in the conda virtual environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda create --name test_env PACKAGE_NAME&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
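&lt;br /&gt;
Multiple packages, and specific versions, can be listed in the same command. For example (the package names here are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda create --name test_env python=3.7 numpy pandas&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;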
&lt;br /&gt;
===Listing all your conda virtual environments===&lt;br /&gt;
If you forget the name of a virtual environment, you can list them all by running '''conda env list''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda env list&lt;br /&gt;
# conda environments:&lt;br /&gt;
#&lt;br /&gt;
jupyter_test             /home/ravi89/.conda/envs/jupyter_test&lt;br /&gt;
modeller                 /home/ravi89/.conda/envs/modeller&lt;br /&gt;
psypy3                   /home/ravi89/.conda/envs/psypy3&lt;br /&gt;
test                     /home/ravi89/.conda/envs/test&lt;br /&gt;
test_env                 /home/ravi89/.conda/envs/test_env&lt;br /&gt;
test_pytorch             /home/ravi89/.conda/envs/test_pytorch&lt;br /&gt;
tomopy                   /home/ravi89/.conda/envs/tomopy&lt;br /&gt;
base                  *  /share/apps/rc/software/Anaconda3/5.2.0&lt;br /&gt;
DeepNLP                  /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
ubrite-jupyter-base-1.0     /share/apps/rc/software/Anaconda3/5.2.0/envs/ubrite-jupyter-base-1.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: The virtual environment with an asterisk (*) next to it is the one that is currently active.&lt;br /&gt;
&lt;br /&gt;
===Activating a conda virtual environment===&lt;br /&gt;
You can activate your virtual environment for use by running '''source activate''' followed by '''conda activate ENV_NAME'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ source activate&lt;br /&gt;
$ conda activate test_env&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE: Your shell prompt will also include the name of the virtual environment that you activated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT!'''&lt;br /&gt;
&lt;br /&gt;
'''source activate &amp;lt;env&amp;gt;''' is not idempotent. Using it twice with the same environment in a given session can lead to unexpected behavior. The recommended workflow is to use '''source activate''' to source the '''conda activate''' script, followed by '''conda activate &amp;lt;env&amp;gt;'''.&lt;br /&gt;
&lt;br /&gt;
===Locate and install packages===&lt;br /&gt;
Conda allows you to search for packages that you want to install:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ conda search BeautifulSoup4&lt;br /&gt;
Loading channels: done&lt;br /&gt;
# Name                  Version           Build  Channel&lt;br /&gt;
beautifulsoup4            4.4.0          py27_0  pkgs/free&lt;br /&gt;
beautifulsoup4            4.4.0          py34_0  pkgs/free&lt;br /&gt;
beautifulsoup4            4.4.0          py35_0  pkgs/free&lt;br /&gt;
...&lt;br /&gt;
beautifulsoup4            4.6.3          py35_0  pkgs/main&lt;br /&gt;
beautifulsoup4            4.6.3          py36_0  pkgs/main&lt;br /&gt;
beautifulsoup4            4.6.3          py37_0  pkgs/main&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: Search is case-insensitive.&lt;br /&gt;
&lt;br /&gt;
You can install packages into the active conda environment using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ conda install beautifulsoup4&lt;br /&gt;
Solving environment: done&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: /home/ravi89/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
  added / updated specs:&lt;br /&gt;
    - beautifulsoup4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be downloaded:&lt;br /&gt;
&lt;br /&gt;
    package                    |            build&lt;br /&gt;
    ---------------------------|-----------------&lt;br /&gt;
    beautifulsoup4-4.6.3       |           py37_0         138 KB&lt;br /&gt;
&lt;br /&gt;
The following NEW packages will be INSTALLED:&lt;br /&gt;
&lt;br /&gt;
    beautifulsoup4: 4.6.3-py37_0&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Downloading and Extracting Packages&lt;br /&gt;
beautifulsoup4-4.6.3 | 138 KB    | ########################################################################### | 100%&lt;br /&gt;
Preparing transaction: done&lt;br /&gt;
Verifying transaction: done&lt;br /&gt;
Executing transaction: done&lt;br /&gt;
(test_env) $&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Deactivating your virtual environment===&lt;br /&gt;
You can deactivate your virtual environment using '''source deactivate'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(test_env) $ source deactivate&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sharing an environment===&lt;br /&gt;
You may want to share your environment with someone for testing or other purposes. Sharing the environment file for your virtual environment is the most straightforward method, allowing the other person to quickly create an environment identical to yours.&lt;br /&gt;
====Export environment====&lt;br /&gt;
* Activate the virtual environment that you want to export.&lt;br /&gt;
* Export an environment.yml file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export -n test_env &amp;gt; environment.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Now you can send the recently created environment.yml file to the other person.&lt;br /&gt;
&lt;br /&gt;
====Create a virtual environment using environment.yml====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env create -f environment.yml -n test_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
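&lt;br /&gt;
For reference, a minimal environment.yml has roughly the following shape (the name and package list here are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: test_env&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - python=3.7&lt;br /&gt;
  - beautifulsoup4=4.6.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;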
&lt;br /&gt;
===Delete a conda virtual environment===&lt;br /&gt;
You can use the '''remove''' parameter of conda to delete a conda virtual environment that you don't need:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ conda remove --name test_env --all&lt;br /&gt;
&lt;br /&gt;
Remove all packages in environment /home/ravi89/.conda/envs/test_env:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Package Plan ##&lt;br /&gt;
&lt;br /&gt;
  environment location: /home/ravi89/.conda/envs/test_env&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following packages will be REMOVED:&lt;br /&gt;
&lt;br /&gt;
    beautifulsoup4:  4.6.3-py37_0&lt;br /&gt;
    ca-certificates: 2018.03.07-0&lt;br /&gt;
    certifi:         2018.8.24-py37_1&lt;br /&gt;
    libedit:         3.1.20170329-h6b74fdf_2&lt;br /&gt;
    libffi:          3.2.1-hd88cf55_4&lt;br /&gt;
    libgcc-ng:       8.2.0-hdf63c60_1&lt;br /&gt;
    libstdcxx-ng:    8.2.0-hdf63c60_1&lt;br /&gt;
    ncurses:         6.1-hf484d3e_0&lt;br /&gt;
    openssl:         1.0.2p-h14c3975_0&lt;br /&gt;
    pip:             10.0.1-py37_0&lt;br /&gt;
    python:          3.7.0-h6e4f718_3&lt;br /&gt;
    readline:        7.0-h7b6447c_5&lt;br /&gt;
    setuptools:      40.4.3-py37_0&lt;br /&gt;
    sqlite:          3.25.2-h7b6447c_0&lt;br /&gt;
    tk:              8.6.8-hbc83047_0&lt;br /&gt;
    wheel:           0.32.1-py37_0&lt;br /&gt;
    xz:              5.2.4-h14c3975_4&lt;br /&gt;
    zlib:            1.2.11-ha838bed_2&lt;br /&gt;
&lt;br /&gt;
Proceed ([y]/n)? y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Moving conda directory===&lt;br /&gt;
As you build new conda environments, you may find that they take up a lot of space in your $HOME directory. Here are two methods to move them elsewhere:&lt;br /&gt;
&lt;br /&gt;
Method 1: Move a pre-existing conda directory and create a symlink&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
mv ~/.conda $USER_DATA/&lt;br /&gt;
ln -s $USER_DATA/.conda&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Method 2: Create a &amp;quot;$HOME/.condarc&amp;quot; file by running the following code:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;lt;&amp;lt; &amp;quot;EOF&amp;quot; &amp;gt; ~/.condarc&lt;br /&gt;
pkgs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/pkgs&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - $USER_DATA/.conda/envs&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=MariaDB&amp;diff=6007</id>
		<title>MariaDB</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=MariaDB&amp;diff=6007"/>
		<updated>2020-01-10T16:13:54Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Optimization for Single User Access on Cheaha */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Summary=&lt;br /&gt;
&lt;br /&gt;
[https://mariadb.org/about/ MariaDB] is a free and open-source database engine intended to mirror the functionality of [https://www.mysql.com/ MySQL]. Server daemon entities are generally not allowed to be installed and run directly on Cheaha nodes. The proper way to run an SQL server is through a [[Singularity containers|container]]. The user can then connect to the server using an appropriate connection method.&lt;br /&gt;
&lt;br /&gt;
=Optimization for Single User Access on Cheaha=&lt;br /&gt;
&lt;br /&gt;
While MariaDB is a mature and useful system, the default configuration of MariaDB can be inefficient for large, not-in-memory databases, leading to poor performance and longer runtimes. The primary intended use-case for MariaDB is serving database information to many remote users, like with UAB Oracle AdminSys. It is assumed for this page that science-related SQL jobs on Cheaha serve only a single user. That assumption allows us to configure the system for higher performance when querying large databases. There are a few potential fixes for these issues that are available to Cheaha users. Options include changing the default buffer pool size to reduce paging, storing the buffer state on disk to reduce warmup time, leaving a MariaDB terminal open on a long-running job, and requesting access to the large memory nodes on Cheaha. The latter can be done by sending a ticket to [mailto:support@listserv.uab.edu support@listserv.uab.edu].&lt;br /&gt;
&lt;br /&gt;
The flags listed below can also be modified at runtime at the MariaDB terminal. A flag formatted as &amp;lt;code&amp;gt;--flag-name=value&amp;lt;/code&amp;gt; can be changed using &amp;lt;code&amp;gt;SET GLOBAL flag_name=value&amp;lt;/code&amp;gt;. Note that the hyphens have changed to underscores. The information below is only helpful if the user intends to access the same large database multiple times over multiple sessions. If more complicated queries need to be run on large databases, it might be more helpful to think about using strategies and tools developed specifically for big data, such as [https://spark.apache.org/ Apache Spark], or a high-level language with SQL interfaces like [[Python]]. Python can be [https://mariadb.com/resources/blog/how-to-connect-python-programs-to-mariadb/ connected to MariaDB] using the associated [https://anaconda.org/anaconda/mysql-connector-python MySQL Connector Conda package].&lt;br /&gt;
&lt;br /&gt;
Example setup and teardown shell scripts for this kind of access are available on our [https://gitlab.rc.uab.edu/louistw/rc-mysql-script GitLab].&lt;br /&gt;
&lt;br /&gt;
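As noted above, Python can connect to MariaDB through the MySQL Connector package. A minimal sketch is shown below; the host, credentials, and database name are placeholders that depend on your own server setup.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import mysql.connector&lt;br /&gt;
&lt;br /&gt;
# Placeholder connection details -- adjust for your own server&lt;br /&gt;
conn = mysql.connector.connect(host=&amp;quot;localhost&amp;quot;, port=3306, user=&amp;quot;myuser&amp;quot;,&lt;br /&gt;
                               password=&amp;quot;mypassword&amp;quot;, database=&amp;quot;mydb&amp;quot;)&lt;br /&gt;
cursor = conn.cursor()&lt;br /&gt;
cursor.execute(&amp;quot;SELECT VERSION()&amp;quot;)&lt;br /&gt;
print(cursor.fetchone())&lt;br /&gt;
cursor.close()&lt;br /&gt;
conn.close()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;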
==Modifying the Buffer Pool==&lt;br /&gt;
&lt;br /&gt;
Having a small buffer pool means that flushing occurs more frequently. Flushing is when the machine moves data to and from the buffer, which involves moving data between memory and storage. Communication with storage is orders of magnitude slower than communication between processor and memory, so more frequent flushing can reduce performance. MariaDB uses a collection of variables to control how much of a database is in memory at one time, in a buffer pool. Fine control of the InnoDB buffer pool involves a number of variables. Three are relevant for the typical scientific use-case of MariaDB. The flags below are recommendations for use with the &amp;lt;code&amp;gt;mysqld&amp;lt;/code&amp;gt; command. More information can be found [https://mariadb.com/kb/en/innodb-buffer-pool/ here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--innodb_buffer_pool_instances=1&lt;br /&gt;
--innodb_buffer_pool_size=[80% of total allocated memory]&lt;br /&gt;
--innodb_buffer_pool_chunk_size=[80% of total allocated memory]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The number of instances is set to one using the &amp;lt;code&amp;gt;innodb-buffer-pool-instances&amp;lt;/code&amp;gt; flag because there is only one user. Multiple instances are useful when there is concurrent access by many users. The buffer pool size is set to 80% of total allocated memory using the &amp;lt;code&amp;gt;innodb-buffer-pool-size&amp;lt;/code&amp;gt; because MariaDB has an additional 10% for internal buffers. It is important to stay below the memory allocation to avoid accidentally killing the job and destroying hard work.&lt;br /&gt;
&lt;br /&gt;
The MariaDB server must be run inside of a container on Cheaha. If the container can't be modified, then the three above flags can't be used when starting the server daemon &amp;lt;code&amp;gt;mysqld&amp;lt;/code&amp;gt;. Instead they must be modified at runtime in the MariaDB terminal. See the note in the summary for how to change the flags into SQL queries that modify the global variables.&lt;br /&gt;
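&lt;br /&gt;
For example, a hypothetical 8 GB buffer pool size could be set from the MariaDB terminal with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SET GLOBAL innodb_buffer_pool_size = 8589934592;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;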
&lt;br /&gt;
==Storing the Buffer Pool State==&lt;br /&gt;
&lt;br /&gt;
The buffer pool is stored in memory, so it is destroyed when the MariaDB server is shut down. When it is restarted at a later time, the buffered indices stored in the pool must be rebuilt. The nature of shared computing resources means that the job, and the server, are shut down periodically. This means the cache must be rebuilt for each new job, even with the same database. For large databases, this can result in significant and repeated warmup time. It is possible to mitigate this by writing the buffer pool state to disk on shutdown and reading it on startup. The fraction of buffer pool to store may also be controlled, with a default of 40%. The following flags enable loading and saving of the buffer pool. Loading and saving are enabled by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--innodb_buffer_pool_load_at_startup=ON&lt;br /&gt;
--innodb_buffer_pool_dump_at_shutdown=ON&lt;br /&gt;
--innodb_buffer_pool_dump_pct=100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The buffer pool file may be a significant fraction of the size of the original database, which could increase storage requirements. Depending on the use-case and sizes involved, it may not be a good choice to store the entire buffer pool. An alternative is to simply request a VNC job on a partition with a long time limit, and then leave the MariaDB terminal open.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=MariaDB&amp;diff=6006</id>
		<title>MariaDB</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=MariaDB&amp;diff=6006"/>
		<updated>2020-01-10T16:13:35Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: /* Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Summary=&lt;br /&gt;
&lt;br /&gt;
[https://mariadb.org/about/ MariaDB] is a free and open-source database engine intended to mirror the functionality of [https://www.mysql.com/ MySQL]. Server daemon entities are generally not allowed to be installed and run directly on Cheaha nodes. The proper way to run an SQL server is through a [[Singularity containers|container]]. The user can then connect to the server using an appropriate connection method.&lt;br /&gt;
&lt;br /&gt;
=Optimization for Single User Access on Cheaha=&lt;br /&gt;
&lt;br /&gt;
While MariaDB is a mature and useful system, the default configuration of MariaDB can be inefficient for large, not-in-memory databases, leading to poor performance and longer runtimes. The primary intended use-case for MariaDB is serving database information to many remote users, like with UAB Oracle AdminSys. It is assumed for this page that science-related SQL jobs on Cheaha serve only a single user. That assumption allows us to configure the system for higher performance when querying large databases. There are a few potential fixes for these issues that are available to Cheaha users. Options include changing the default buffer pool size to reduce paging, storing the buffer state on disk to reduce warmup time, leaving a MariaDB terminal open on a long-running job, and requesting access to the large memory nodes on Cheaha. The latter can be done by sending a ticket to [mailto:support@listserv.uab.edu support@listserv.uab.edu].&lt;br /&gt;
&lt;br /&gt;
The flags listed below can also be modified at runtime at the MariaDB terminal. A flag formatted as &amp;lt;code&amp;gt;--flag-name=value&amp;lt;/code&amp;gt; can be changed using &amp;lt;code&amp;gt;SET GLOBAL flag_name=value&amp;lt;/code&amp;gt;. Note that the hyphens have changed to underscores. The information below is only helpful if the user intends to access the same large database multiple times over multiple sessions. If more complicated queries need to be run on large databases, it might be more helpful to think about using strategies and tools developed specifically for big data, such as [https://spark.apache.org/ Apache Spark], or a high-level language with SQL interfaces like [[Python]]. Python can be [https://mariadb.com/resources/blog/how-to-connect-python-programs-to-mariadb/ connected to MariaDB] using the associated [https://anaconda.org/anaconda/mysql-connector-python MySQL Connector Conda package].&lt;br /&gt;
&lt;br /&gt;
==Modifying the Buffer Pool==&lt;br /&gt;
&lt;br /&gt;
Having a small buffer pool means that flushing occurs more frequently. Flushing is when the machine moves data to and from the buffer, which involves moving data between memory and storage. Communication with storage is orders of magnitude slower than communication between processor and memory, so more frequent flushing can reduce performance. MariaDB uses a collection of variables to control how much of a database is in memory at one time, in a buffer pool. Fine control of the InnoDB buffer pool involves a number of variables. Three are relevant for the typical scientific use-case of MariaDB. The flags below are recommendations for use with the &amp;lt;code&amp;gt;mysqld&amp;lt;/code&amp;gt; command. More information can be found [https://mariadb.com/kb/en/innodb-buffer-pool/ here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--innodb_buffer_pool_instances=1&lt;br /&gt;
--innodb_buffer_pool_size=[80% of total allocated memory]&lt;br /&gt;
--innodb_buffer_pool_chunk_size=[80% of total allocated memory]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The number of instances is set to one using the &amp;lt;code&amp;gt;innodb-buffer-pool-instances&amp;lt;/code&amp;gt; flag because there is only one user. Multiple instances are useful when there is concurrent access by many users. The buffer pool size is set to 80% of total allocated memory using the &amp;lt;code&amp;gt;innodb-buffer-pool-size&amp;lt;/code&amp;gt; because MariaDB has an additional 10% for internal buffers. It is important to stay below the memory allocation to avoid accidentally killing the job and destroying hard work.&lt;br /&gt;
&lt;br /&gt;
The MariaDB server must be run inside of a container on Cheaha. If the container can't be modified, then the three above flags can't be used when starting the server daemon &amp;lt;code&amp;gt;mysqld&amp;lt;/code&amp;gt;. Instead they must be modified at runtime in the MariaDB terminal. See the note in the summary for how to change the flags into SQL queries that modify the global variables.&lt;br /&gt;
&lt;br /&gt;
==Storing the Buffer Pool State==&lt;br /&gt;
&lt;br /&gt;
The buffer pool is stored in memory, so it is destroyed when the MariaDB server is shut down. When it is restarted at a later time, the buffered indices stored in the pool must be rebuilt. The nature of shared computing resources means that the job, and the server, are shut down periodically. This means the cache must be rebuilt for each new job, even with the same database. For large databases, this can result in significant and repeated warmup time. It is possible to mitigate this by writing the buffer pool state to disk on shutdown and reading it on startup. The fraction of buffer pool to store may also be controlled, with a default of 40%. The following flags enable loading and saving of the buffer pool. Loading and saving are enabled by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--innodb_buffer_pool_load_at_startup=ON&lt;br /&gt;
--innodb_buffer_pool_dump_at_shutdown=ON&lt;br /&gt;
--innodb_buffer_pool_dump_pct=100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The buffer pool file may be a significant fraction of the size of the original database, which could increase storage requirements. Depending on the use-case and sizes involved, it may not be a good choice to store the entire buffer pool. An alternative is to simply request a VNC job on a partition with a long time limit, and then leave the MariaDB terminal open.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=MariaDB&amp;diff=6005</id>
		<title>MariaDB</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=MariaDB&amp;diff=6005"/>
		<updated>2020-01-10T15:59:03Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Moved from MySQL. MySQL now hard redirects here.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Summary=&lt;br /&gt;
&lt;br /&gt;
[https://mariadb.org/about/ MariaDB] is a free and open-source database engine intended to mirror the functionality of [https://www.mysql.com/ MySQL]. Server daemon entities are generally not allowed to be installed and run directly on Cheaha nodes. The proper way to run an SQL server is through a [[Singularity containers|container]]. The user can then connect to the server using an appropriate connection method. Example setup and teardown shell scripts for this kind of access are available on our [https://gitlab.rc.uab.edu/louistw/rc-mysql-script GitLab].&lt;br /&gt;
&lt;br /&gt;
=Optimization for Single User Access on Cheaha=&lt;br /&gt;
&lt;br /&gt;
While MariaDB is a mature and useful system, the default configuration of MariaDB can be inefficient for large, not-in-memory databases, leading to poor performance and longer runtimes. The primary intended use-case for MariaDB is serving database information to many remote users, like with UAB Oracle AdminSys. It is assumed for this page that science-related SQL jobs on Cheaha serve only a single user. That assumption allows us to configure the system for higher performance when querying large databases. There are a few potential fixes for these issues that are available to Cheaha users. Options include changing the default buffer pool size to reduce paging, storing the buffer state on disk to reduce warmup time, leaving a MariaDB terminal open on a long-running job, and requesting access to the large memory nodes on Cheaha. The latter can be done by sending a ticket to [mailto:support@listserv.uab.edu support@listserv.uab.edu].&lt;br /&gt;
&lt;br /&gt;
The flags listed below can also be modified at runtime at the MariaDB terminal. A flag formatted as &amp;lt;code&amp;gt;--flag-name=value&amp;lt;/code&amp;gt; can be changed using &amp;lt;code&amp;gt;SET GLOBAL flag_name=value&amp;lt;/code&amp;gt;. Note that the hyphens have changed to underscores. The information below is only helpful if the user intends to access the same large database multiple times over multiple sessions. If more complicated queries need to be run on large databases, it might be more helpful to think about using strategies and tools developed specifically for big data, such as [https://spark.apache.org/ Apache Spark], or a high-level language with SQL interfaces like [[Python]]. Python can be [https://mariadb.com/resources/blog/how-to-connect-python-programs-to-mariadb/ connected to MariaDB] using the associated [https://anaconda.org/anaconda/mysql-connector-python MySQL Connector Conda package].&lt;br /&gt;
&lt;br /&gt;
==Modifying the Buffer Pool==&lt;br /&gt;
&lt;br /&gt;
Having a small buffer pool means that flushing occurs more frequently. Flushing is when the machine moves data to and from the buffer, which involves moving data between memory and storage. Communication with storage is orders of magnitude slower than communication between processor and memory, so more frequent flushing can reduce performance. MariaDB uses a collection of variables to control how much of a database is in memory at one time, in a buffer pool. Fine control of the InnoDB buffer pool involves a number of variables. Three are relevant for the typical scientific use-case of MariaDB. The flags below are recommendations for use with the &amp;lt;code&amp;gt;mysqld&amp;lt;/code&amp;gt; command. More information can be found [https://mariadb.com/kb/en/innodb-buffer-pool/ here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--innodb_buffer_pool_instances=1&lt;br /&gt;
--innodb_buffer_pool_size=[80% of total allocated memory]&lt;br /&gt;
--innodb_buffer_pool_chunk_size=[80% of total allocated memory]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The number of instances is set to one using the &amp;lt;code&amp;gt;innodb-buffer-pool-instances&amp;lt;/code&amp;gt; flag because there is only one user. Multiple instances are useful when there is concurrent access by many users. The buffer pool size is set to 80% of total allocated memory using the &amp;lt;code&amp;gt;innodb-buffer-pool-size&amp;lt;/code&amp;gt; because MariaDB has an additional 10% for internal buffers. It is important to stay below the memory allocation to avoid accidentally killing the job and destroying hard work.&lt;br /&gt;
&lt;br /&gt;
The MariaDB server must be run inside of a container on Cheaha. If the container can't be modified, then the three above flags can't be used when starting the server daemon &amp;lt;code&amp;gt;mysqld&amp;lt;/code&amp;gt;. Instead they must be modified at runtime in the MariaDB terminal. See the note in the summary for how to change the flags into SQL queries that modify the global variables.&lt;br /&gt;
&lt;br /&gt;
==Storing the Buffer Pool State==&lt;br /&gt;
&lt;br /&gt;
The buffer pool is stored in memory, so it is destroyed when the MariaDB server is shut down. When it is restarted at a later time, the buffered indices stored in the pool must be rebuilt. The nature of shared computing resources means that the job, and the server, are shut down periodically. This means the cache must be rebuilt for each new job, even with the same database. For large databases, this can result in significant and repeated warmup time. It is possible to mitigate this by writing the buffer pool state to disk on shutdown and reading it on startup. The fraction of buffer pool to store may also be controlled, with a default of 40%. The following flags enable loading and saving of the buffer pool. Loading and saving are enabled by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--innodb_buffer_pool_load_at_startup=ON&lt;br /&gt;
--innodb_buffer_pool_dump_at_shutdown=ON&lt;br /&gt;
--innodb_buffer_pool_dump_pct=100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The buffer pool file may be a significant fraction of the size of the original database, which could increase storage requirements. Depending on the use-case and sizes involved, it may not be a good choice to store the entire buffer pool. An alternative is to simply request a VNC job on a partition with a long time limit, and then leave the MariaDB terminal open.&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=MySQL&amp;diff=6004</id>
		<title>MySQL</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=MySQL&amp;diff=6004"/>
		<updated>2020-01-10T15:58:46Z</updated>

		<summary type="html">&lt;p&gt;Wwarr@uab.edu: Redirected to MariaDB&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[MariaDB]]&lt;/div&gt;</summary>
		<author><name>Wwarr@uab.edu</name></author>
	</entry>
</feed>