<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.uabgrid.uab.edu/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ozborn%40uab.edu</id>
	<title>Cheaha - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.uabgrid.uab.edu/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ozborn%40uab.edu"/>
	<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/wiki/Special:Contributions/Ozborn@uab.edu"/>
	<updated>2026-04-19T05:56:39Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.2</generator>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6103</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6103"/>
		<updated>2020-08-28T19:03:02Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: More information on custom building conda environments on Cheaha&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information, see the [http://jupyter.org/documentation Jupyter Notebook documentation].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing provides access to Cheaha via [https://rc.uab.edu On Demand]. To access Jupyter through On Demand, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [https://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load Anaconda3/5.3.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The following should also work for an updated version of Anaconda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load Anaconda3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 4. If you require a GPU, add the following modules to your environment ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda92/toolkit/9.2.88&lt;br /&gt;
module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or see a blue Launch button. This can take about 10-20 seconds, or much longer depending on the resources requested (CPU count and memory).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
&lt;br /&gt;
See the [https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf Conda Cheat Sheet] for some helpful Conda documentation. Additional info (some specific to Cheaha) is below.&lt;br /&gt;
&lt;br /&gt;
== Preparing to build a conda env on Cheaha ==&lt;br /&gt;
Do NOT build your new conda env on the head node. Use an interactive job, started with the command below, to create the new environment.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=2 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your Cheaha .condarc file ==&lt;br /&gt;
This tells conda where to look for installed environments. The file should be in your home directory on Cheaha (/data/user/username/.condarc). Make sure to include the leading '.' in the filename.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/username/python/env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
The file is hidden and will not be visible in the Jupyter file browser. To view or edit it, start a terminal and run 'nano ~/.condarc' or 'vi ~/.condarc'.&lt;br /&gt;
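Once the .condarc file is in place, you can sanity-check that conda actually picked up the settings. A minimal check (assuming an Anaconda module is already loaded):&lt;br /&gt;

```shell
# Show the effective values of the keys configured in .condarc
conda config --show envs_dirs
conda config --show channels
```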
&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Once you are off the head node, you can start building your conda environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda create --name mynlp python=3.7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update an existing environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env update --prefix ./scibert --file scibert.yml  --prune&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
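After creating or updating an environment, activate it before installing anything into it. A sketch using the mynlp environment created above (numpy is just an example package):&lt;br /&gt;

```shell
# Activate the environment, install an example package, then deactivate
conda activate mynlp
conda install numpy
conda deactivate
```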
&lt;br /&gt;
== Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks. Insert these lines into your .yml file. You can edit the file in the Jupyter environment.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
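If the environment still does not appear in the Jupyter kernel list, the kernel can also be registered manually with ipykernel. A sketch (run from inside the activated environment; the display name is an arbitrary label):&lt;br /&gt;

```shell
# Register the active environment as a named Jupyter kernel for this user
python -m ipykernel install --user --name scibert --display-name "Python (scibert)"
```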
&lt;br /&gt;
== Add other packages of interest ==&lt;br /&gt;
Add additional packages of interest to your conda environment, for example &amp;quot;torch&amp;quot; or &amp;quot;transformers&amp;quot;. Pip can be used as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install torch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can list the packages in your conda environment using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== (Optional) Export YAML file containing your environment to cheaha ==&lt;br /&gt;
If you already have a functional conda environment elsewhere (such as on your laptop), you can export it and bring it to Cheaha. The command below exports the &amp;quot;scibert&amp;quot; environment built on another machine. You can use any filename, but keep the .yml extension.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export --no-builds &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the file to Cheaha. You can use the &amp;quot;Upload&amp;quot; button in Jupyter.&lt;br /&gt;
&lt;br /&gt;
To replicate the conda environment on Cheaha, execute the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env create --file scibert.yml  --prefix /data/user/username/python/env/scibert&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use the --name argument instead to specify the environment name, conda will create a subdirectory with the same name as the yml filename. That is confusing, so using --prefix keeps the environment path cleaner.&lt;br /&gt;
&lt;br /&gt;
== Load environment on Jupyter Notebook ==&lt;br /&gt;
[[File:JupyterCustomEnv.jpeg|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the On Demand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The Cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run using the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on Cheaha, log in to Cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job]. &lt;br /&gt;
&lt;br /&gt;
One important note: Cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available via searching from the Start Menu) can run OpenSSH via the ssh command, similar to Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on Cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on Cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start a '''new''' tab/terminal/window on your client machine and log in to Cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* Windows users can find instructions for port forwarding [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port (typically but not always 8888) and a compute node on Cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link in your local browser, using the URL generated by jupyter notebook but '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
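The hostname substitution above can also be scripted. A small sketch (the token value is a placeholder, not a real token):&lt;br /&gt;

```shell
# Rewrite the compute-node hostname in the notebook URL to localhost
url='http://c0047:8888/?token=PLACEHOLDER_TOKEN'
local_url=$(printf '%s' "$url" | sed -E 's#//c[0-9]+:#//localhost:#')
echo "$local_url"
```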
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname) --NotebookApp.iopub_data_rate_limit=1.0e10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Openstack_Tutorial&amp;diff=6076</id>
		<title>UAB Openstack Tutorial</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Openstack_Tutorial&amp;diff=6076"/>
		<updated>2020-04-21T16:41:51Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Create Application Credentials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= UAB Openstack Tutorial =&lt;br /&gt;
&lt;br /&gt;
This tutorial is meant as a guide for deploying VMs in UAB's OpenStack environment.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
1. Send email to support@listserv.uab.edu and request an account&lt;br /&gt;
&lt;br /&gt;
2. Log into ruffner.rc.uab.edu to get application credentials&lt;br /&gt;
&lt;br /&gt;
3. Use terraform to create and deploy a VM to UAB OpenStack&lt;br /&gt;
&lt;br /&gt;
4. Provision your VM with ansible&lt;br /&gt;
&lt;br /&gt;
== Steps ==&lt;br /&gt;
&lt;br /&gt;
=== Create Application Credentials ===&lt;br /&gt;
See https://gitlab.rc.uab.edu/rrand11/terraform-openstack&lt;br /&gt;
&lt;br /&gt;
1. In Openstack, go to '''Identity''' -&amp;gt; '''Application Credentials'''&lt;br /&gt;
&lt;br /&gt;
2. Click '''Create Application Credential'''&lt;br /&gt;
&lt;br /&gt;
3. Name the credential, add a description, and check the box making it unrestricted (Leave the rest blank. It is important not to add an expiration date.)&lt;br /&gt;
&lt;br /&gt;
4. Download the credentials as an RC file.&lt;br /&gt;
&lt;br /&gt;
5. Save credentials RC file in your terraform-first-instance directory (echo $TERRAFORM_DIR).&lt;br /&gt;
&lt;br /&gt;
=== Using Terraform to Create and Deploy VM ===&lt;br /&gt;
1. This may be easiest to do from a Cheaha node.&lt;br /&gt;
&lt;br /&gt;
2. Download Terraform https://www.terraform.io/downloads.html&lt;br /&gt;
&lt;br /&gt;
3. See https://gitlab.rc.uab.edu/jelaiw/ccts-bmi-incubator for some templates. You may want to git clone the repository.&lt;br /&gt;
&lt;br /&gt;
4. See /Users/ozborn/code/repo/ccts-bmi-incubator/openstack/mice.tf for an example, shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        # Test web server for CCTS Informatics&lt;br /&gt;
        resource &amp;quot;openstack_compute_instance_v2&amp;quot; &amp;quot;mice&amp;quot; {&lt;br /&gt;
            name = &amp;quot;mice&amp;quot;&lt;br /&gt;
            image_name = &amp;quot;CentOS-7-x86_64-GenericCloud-1905&amp;quot;&lt;br /&gt;
            flavor_name = &amp;quot;m1.medium&amp;quot;&lt;br /&gt;
            key_pair = var.admin_key_pair&lt;br /&gt;
            security_groups = [&amp;quot;default&amp;quot;, &amp;quot;web&amp;quot;]&lt;br /&gt;
        # Work around race condition.&lt;br /&gt;
        # See https://github.com/terraform-providers/terraform-provider-openstack/issues/775.&lt;br /&gt;
            network {&lt;br /&gt;
                uuid = openstack_networking_subnet_v2.foo_subnet.network_id&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        resource &amp;quot;openstack_compute_floatingip_associate_v2&amp;quot; &amp;quot;mice_fip&amp;quot; {&lt;br /&gt;
            floating_ip = var.mice_floating_ip&lt;br /&gt;
            instance_id = openstack_compute_instance_v2.mice.id&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        # See https://www.terraform.io/docs/providers/openstack/r/networking_secgroup_rule_v2.html.&lt;br /&gt;
        resource &amp;quot;openstack_networking_secgroup_v2&amp;quot; &amp;quot;web&amp;quot; {&lt;br /&gt;
            name = &amp;quot;web&amp;quot;&lt;br /&gt;
            description = &amp;quot;A security group for managing rules and access to a standard HTTP web server.&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        resource &amp;quot;openstack_networking_secgroup_rule_v2&amp;quot; &amp;quot;web_std_port&amp;quot; {&lt;br /&gt;
            direction = &amp;quot;ingress&amp;quot;&lt;br /&gt;
            ethertype = &amp;quot;IPv4&amp;quot;&lt;br /&gt;
            protocol = &amp;quot;tcp&amp;quot;&lt;br /&gt;
            port_range_min = 80&lt;br /&gt;
            port_range_max = 80&lt;br /&gt;
            remote_ip_prefix = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
            security_group_id = openstack_networking_secgroup_v2.web.id&lt;br /&gt;
        }&lt;br /&gt;
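With the .tf file and the application-credentials RC file in place, the standard Terraform workflow applies. A sketch (the RC filename is a placeholder for whatever file you downloaded in the credentials step):&lt;br /&gt;

```shell
# Load the OpenStack application credentials, then initialize,
# preview, and apply the Terraform configuration in this directory
source your-app-credentials.rc   # placeholder filename
terraform init
terraform plan
terraform apply
```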
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===  Using Ansible to Provision VM ===&lt;br /&gt;
1. Take a look at /Users/ozborn/code/repo/ccts-bmi-incubator/openstack/setup-httpd.yml for Apache web server provisioning&lt;br /&gt;
&lt;br /&gt;
2. Run ansible-configure after doing ansible-lint to verify the setup&lt;br /&gt;
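As a sketch of that step (the inventory filename is an assumption; it should list your VM's floating IP):&lt;br /&gt;

```shell
# Lint the playbook first, then run it against the new VM
ansible-lint setup-httpd.yml
ansible-playbook -i inventory.ini setup-httpd.yml   # inventory.ini is hypothetical
```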
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Current Bugs Feb 2020 (per JPR) ==&lt;br /&gt;
There are two bugs that we are working on in the web UI:&lt;br /&gt;
 &lt;br /&gt;
1. Launching instances fails unless &amp;quot;Create new volume&amp;quot; is set to &amp;quot;no&amp;quot; on the source tab.&lt;br /&gt;
&lt;br /&gt;
2. Attaching volumes to instances fails in both the UI and CLI.&lt;br /&gt;
&lt;br /&gt;
== Networking Setup (per JPR) ==&lt;br /&gt;
The assigned IP addresses (floating public) will be in the range 192.168.16.128-250. These should be mapped to 164.111.161.x, where x repeats the last octet of the assigned 192 number. The networking is set up to allow ingress from on-campus only; the instances can go out to anywhere, though.&lt;br /&gt;
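The last-octet mapping described above can be expressed with shell parameter expansion, for example:&lt;br /&gt;

```shell
# Map a floating 192.168.16.x address to its public 164.111.161.x twin
floating=192.168.16.135
public="164.111.161.${floating##*.}"   # keep only the last octet
echo "$public"
```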
&lt;br /&gt;
== References (per JPR) ==&lt;br /&gt;
We don't have official getting-started docs for now; we pretty much just follow the Bright Cluster Manager docs for testing the OpenStack API from the CLI (Section 1.4 of https://support.brightcomputing.com/manuals/8.2/openstack-deployment-manual.pdf).&lt;br /&gt;
&lt;br /&gt;
Louis has created some notes in a readme that may be useful as well:&lt;br /&gt;
&lt;br /&gt;
https://gitlab.rc.uab.edu/louistw/cluster-installation-note/blob/master/openstack.md&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Openstack_Tutorial&amp;diff=6035</id>
		<title>UAB Openstack Tutorial</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Openstack_Tutorial&amp;diff=6035"/>
		<updated>2020-02-28T17:29:54Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Add info on getting creds&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= UAB Openstack Tutorial =&lt;br /&gt;
&lt;br /&gt;
This tutorial is meant as a guide for deploying VMs in UAB's OpenStack environment.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
1. Send email to support@listserv.uab.edu and request an account&lt;br /&gt;
&lt;br /&gt;
2. Log into ruffner.rc.uab.edu to get application credentials&lt;br /&gt;
&lt;br /&gt;
3. Use terraform to create and deploy a VM to UAB OpenStack&lt;br /&gt;
&lt;br /&gt;
4. Provision your VM with ansible&lt;br /&gt;
&lt;br /&gt;
== Steps ==&lt;br /&gt;
&lt;br /&gt;
=== Create Application Credentials ===&lt;br /&gt;
See https://gitlab.rc.uab.edu/rrand11/terraform-openstack&lt;br /&gt;
&lt;br /&gt;
1. In Openstack, go to '''Identity''' -&amp;gt; '''Application Credentials'''&lt;br /&gt;
&lt;br /&gt;
2. Click '''Create Application Credential'''&lt;br /&gt;
&lt;br /&gt;
3. Name the credential, add a description, and check the box making it unrestricted (Leave the rest blank. It is important not to add an expiration date.)&lt;br /&gt;
&lt;br /&gt;
4. Download the credentials as an RC file.&lt;br /&gt;
&lt;br /&gt;
5. Save credentials RC file in your terraform-first-instance directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Using Terraform to Create and Deploy VM ===&lt;br /&gt;
1. This may be easiest to do from a Cheaha node.&lt;br /&gt;
&lt;br /&gt;
2. Download Terraform https://www.terraform.io/downloads.html&lt;br /&gt;
&lt;br /&gt;
3. See https://gitlab.rc.uab.edu/jelaiw/ccts-bmi-incubator for some templates. May want to git clone.&lt;br /&gt;
&lt;br /&gt;
4. See /Users/ozborn/code/repo/ccts-bmi-incubator/openstack/mice.tf for an example, shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        # Test web server for CCTS Informatics&lt;br /&gt;
        resource &amp;quot;openstack_compute_instance_v2&amp;quot; &amp;quot;mice&amp;quot; {&lt;br /&gt;
            name = &amp;quot;mice&amp;quot;&lt;br /&gt;
            image_name = &amp;quot;CentOS-7-x86_64-GenericCloud-1905&amp;quot;&lt;br /&gt;
            flavor_name = &amp;quot;m1.medium&amp;quot;&lt;br /&gt;
            key_pair = var.admin_key_pair&lt;br /&gt;
            security_groups = [&amp;quot;default&amp;quot;, &amp;quot;web&amp;quot;]&lt;br /&gt;
        # Work around race condition.&lt;br /&gt;
        # See https://github.com/terraform-providers/terraform-provider-openstack/issues/775.&lt;br /&gt;
            network {&lt;br /&gt;
                uuid = openstack_networking_subnet_v2.foo_subnet.network_id&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        resource &amp;quot;openstack_compute_floatingip_associate_v2&amp;quot; &amp;quot;mice_fip&amp;quot; {&lt;br /&gt;
            floating_ip = var.mice_floating_ip&lt;br /&gt;
            instance_id = openstack_compute_instance_v2.mice.id&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        # See https://www.terraform.io/docs/providers/openstack/r/networking_secgroup_rule_v2.html.&lt;br /&gt;
        resource &amp;quot;openstack_networking_secgroup_v2&amp;quot; &amp;quot;web&amp;quot; {&lt;br /&gt;
            name = &amp;quot;web&amp;quot;&lt;br /&gt;
            description = &amp;quot;A security group for managing rules and access to a standard HTTP web server.&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        resource &amp;quot;openstack_networking_secgroup_rule_v2&amp;quot; &amp;quot;web_std_port&amp;quot; {&lt;br /&gt;
            direction = &amp;quot;ingress&amp;quot;&lt;br /&gt;
            ethertype = &amp;quot;IPv4&amp;quot;&lt;br /&gt;
            protocol = &amp;quot;tcp&amp;quot;&lt;br /&gt;
            port_range_min = 80&lt;br /&gt;
            port_range_max = 80&lt;br /&gt;
            remote_ip_prefix = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
            security_group_id = openstack_networking_secgroup_v2.web.id&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===  Using Ansible to Provision VM ===&lt;br /&gt;
1. Take a look at /Users/ozborn/code/repo/ccts-bmi-incubator/openstack/setup-httpd.yml for Apache web server provisioning&lt;br /&gt;
&lt;br /&gt;
2. ansible-configure after doing ansible-lint to verify setup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Current Bugs Feb 2020 (per JPR) ==&lt;br /&gt;
There are two bugs that we are working on in the web ui:&lt;br /&gt;
 &lt;br /&gt;
1. launching instances fails unless &amp;quot;Create new volume&amp;quot; is set to &amp;quot;no&amp;quot; on the source tab&lt;br /&gt;
&lt;br /&gt;
2. attaching volumes to instances fails in both the ui and cli. &lt;br /&gt;
&lt;br /&gt;
== Networking Setup (per JPR) ==&lt;br /&gt;
The assigned IP addresses (floating public) will be in the range 192.168.16.128-250. These should be mapped to 164.111.161.x, where x repeats the last octet of the assigned 192 number. The networking is set up to allow ingress from on-campus only; the instances can go out to anywhere, though.&lt;br /&gt;
&lt;br /&gt;
== References (per JPR) ==&lt;br /&gt;
We don't have official getting-started docs for now; we pretty much just follow the Bright Cluster Manager docs for testing the OpenStack API from the CLI (Section 1.4 of https://support.brightcomputing.com/manuals/8.2/openstack-deployment-manual.pdf).&lt;br /&gt;
&lt;br /&gt;
Louis has created some notes in a readme that may be useful as well:&lt;br /&gt;
&lt;br /&gt;
https://gitlab.rc.uab.edu/louistw/cluster-installation-note/blob/master/openstack.md&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Openstack_Tutorial&amp;diff=6034</id>
		<title>UAB Openstack Tutorial</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Openstack_Tutorial&amp;diff=6034"/>
		<updated>2020-02-28T17:17:28Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Initial Page Creation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= UAB Openstack Tutorial =&lt;br /&gt;
&lt;br /&gt;
This tutorial is meant as a guide for deploying VMs in UAB's OpenStack environment.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
1. Send email to support@listserv.uab.edu and request an account&lt;br /&gt;
&lt;br /&gt;
2. Log into ruffner.rc.uab.edu to get application credentials&lt;br /&gt;
&lt;br /&gt;
3. Use terraform to create and deploy a VM to UAB OpenStack&lt;br /&gt;
&lt;br /&gt;
4. Provision your VM with ansible&lt;br /&gt;
&lt;br /&gt;
== Using Terraform to Create and Deploy VM ==&lt;br /&gt;
1. May be easiest to do from a cheaha node&lt;br /&gt;
2. Download Terraform https://www.terraform.io/downloads.html&lt;br /&gt;
3. See https://gitlab.rc.uab.edu/jelaiw/ccts-bmi-incubator for some templates. May want to git clone.&lt;br /&gt;
4. See /Users/ozborn/code/repo/ccts-bmi-incubator/openstack/mice.tf for an example&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        # Test web server for CCTS Informatics&lt;br /&gt;
        resource &amp;quot;openstack_compute_instance_v2&amp;quot; &amp;quot;mice&amp;quot; {&lt;br /&gt;
            name = &amp;quot;mice&amp;quot;&lt;br /&gt;
            image_name = &amp;quot;CentOS-7-x86_64-GenericCloud-1905&amp;quot;&lt;br /&gt;
            flavor_name = &amp;quot;m1.medium&amp;quot;&lt;br /&gt;
            key_pair = var.admin_key_pair&lt;br /&gt;
            security_groups = [&amp;quot;default&amp;quot;, &amp;quot;web&amp;quot;]&lt;br /&gt;
        # Work around race condition.&lt;br /&gt;
        # See https://github.com/terraform-providers/terraform-provider-openstack/issues/775.&lt;br /&gt;
            network {&lt;br /&gt;
                uuid = openstack_networking_subnet_v2.foo_subnet.network_id&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        resource &amp;quot;openstack_compute_floatingip_associate_v2&amp;quot; &amp;quot;mice_fip&amp;quot; {&lt;br /&gt;
            floating_ip = var.mice_floating_ip&lt;br /&gt;
            instance_id = openstack_compute_instance_v2.mice.id&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        # See https://www.terraform.io/docs/providers/openstack/r/networking_secgroup_rule_v2.html.&lt;br /&gt;
        resource &amp;quot;openstack_networking_secgroup_v2&amp;quot; &amp;quot;web&amp;quot; {&lt;br /&gt;
            name = &amp;quot;web&amp;quot;&lt;br /&gt;
            description = &amp;quot;A security group for managing rules and access to a standard HTTP web server.&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        resource &amp;quot;openstack_networking_secgroup_rule_v2&amp;quot; &amp;quot;web_std_port&amp;quot; {&lt;br /&gt;
            direction = &amp;quot;ingress&amp;quot;&lt;br /&gt;
            ethertype = &amp;quot;IPv4&amp;quot;&lt;br /&gt;
            protocol = &amp;quot;tcp&amp;quot;&lt;br /&gt;
            port_range_min = 80&lt;br /&gt;
            port_range_max = 80&lt;br /&gt;
            remote_ip_prefix = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
            security_group_id = openstack_networking_secgroup_v2.web.id&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==  Using Ansible to Provision VM ==&lt;br /&gt;
1. Take a look at /Users/ozborn/code/repo/ccts-bmi-incubator/openstack/setup-httpd.yml for Apache web server provisioning&lt;br /&gt;
&lt;br /&gt;
2. ansible-configure after doing ansible-lint to verify setup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Current Bugs Feb 2020 (per JPR) ==&lt;br /&gt;
There are two bugs that we are working on in the web ui:&lt;br /&gt;
 &lt;br /&gt;
1. launching instances fails unless &amp;quot;Create new volume&amp;quot; is set to &amp;quot;no&amp;quot; on the source tab&lt;br /&gt;
&lt;br /&gt;
2. attaching volumes to instances fails in both the ui and cli. &lt;br /&gt;
&lt;br /&gt;
== Networking Setup (per JPR) ==&lt;br /&gt;
The assigned IP addresses (floating public) will be in the range 192.168.16.128-250. These should be mapped to 164.111.161.x, where x repeats the last octet of the assigned 192 number. The networking is set up to allow ingress from on-campus only; the instances can go out to anywhere, though.&lt;br /&gt;
&lt;br /&gt;
== References (per JPR) ==&lt;br /&gt;
We don't have official getting-started docs for now; we pretty much just follow the Bright Cluster Manager docs for testing the OpenStack API from the CLI (Section 1.4 of https://support.brightcomputing.com/manuals/8.2/openstack-deployment-manual.pdf).&lt;br /&gt;
&lt;br /&gt;
Louis has created some notes in a readme that may be useful as well:&lt;br /&gt;
&lt;br /&gt;
https://gitlab.rc.uab.edu/louistw/cluster-installation-note/blob/master/openstack.md&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6020</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6020"/>
		<updated>2020-02-08T01:19:40Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Build your conda environment */  How to update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing provides access to Cheaha via [http://rc.uab.edu On Demand]. To access Jupyter through On Demand, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load Anaconda3/5.3.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The following should also work for an updated version of Anaconda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load Anaconda3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 4. If you require a GPU, add the following modules to your environment ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda92/toolkit/9.2.88&lt;br /&gt;
module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it and copy the resulting file to cheaha. The example below exports the scibert environment, which was set up on a different server. You can use any filename, but keep the .yml extension.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export --no-builds &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
This file should be in your home directory on cheaha.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
&lt;br /&gt;
==  Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build the environment on the head node; use an interactive job to create it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To build a new environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env create --file scibert.yml --name scibert&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To update an existing environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env update --prefix ./scibert --file scibert.yml  --prune&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
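If the new environment does not show up in the Jupyter kernel list, it can also be registered by hand with ipykernel (a sketch; the kernel and display names here are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate scibert&lt;br /&gt;
python -m ipykernel install --user --name scibert --display-name "Python (scibert)"&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;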
== Load environment on Jupyter Notebook ==&lt;br /&gt;
[[File:JupyterCustomEnv.jpeg|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
host=$(hostname)  # set the compute-node hostname used below&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions can be found [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
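For example, if the notebook reported node c0047 and port 8888 (both values are illustrative), the command would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 8888:c0047:8888 BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;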
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate large IO data transfer, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6019</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6019"/>
		<updated>2020-02-07T02:33:32Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: --no-builds option&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on Jupyter Notebook, see the [http://jupyter.org/documentation documentation].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you need to run on a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can take about 10-20 seconds, or much longer, depending on the resources requested (CPU count and memory).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it and copy the resulting file to cheaha. The example below exports the scibert environment, which was set up on a different server. You can use any filename, but keep the .yml extension.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export --no-builds &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
This file should be in your home directory on cheaha.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
&lt;br /&gt;
==  Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build the environment on the head node; use an interactive job to create it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env create --file scibert.yml --name scibert&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
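If the new environment does not show up in the Jupyter kernel list, it can also be registered by hand with ipykernel (a sketch; the kernel and display names here are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate scibert&lt;br /&gt;
python -m ipykernel install --user --name scibert --display-name "Python (scibert)"&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;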
== Load environment on Jupyter Notebook ==&lt;br /&gt;
[[File:JupyterCustomEnv.jpeg|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
host=$(hostname)  # set the compute-node hostname used below&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions can be found [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
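For example, if the notebook reported node c0047 and port 8888 (both values are illustrative), the command would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 8888:c0047:8888 BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;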
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate large IO data transfer, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6016</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6016"/>
		<updated>2020-01-15T03:26:45Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Build your conda environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on Jupyter Notebook, see the [http://jupyter.org/documentation documentation].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you need to run on a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can take about 10-20 seconds, or much longer, depending on the resources requested (CPU count and memory).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it and copy the resulting file to cheaha. The example below exports the scibert environment, which was set up on a different server.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
This file should be in your home directory on cheaha.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
&lt;br /&gt;
==  Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build the environment on the head node; use an interactive job to create it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env create --file scibert.yml --name scibert&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
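If the new environment does not show up in the Jupyter kernel list, it can also be registered by hand with ipykernel (a sketch; the kernel and display names here are only illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate scibert&lt;br /&gt;
python -m ipykernel install --user --name scibert --display-name "Python (scibert)"&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;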
== Load environment on Jupyter Notebook ==&lt;br /&gt;
[[File:JupyterCustomEnv.jpeg|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
host=$(hostname)  # set the compute-node hostname used below&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions can be found [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
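For example, if the notebook reported node c0047 and port 8888 (both values are illustrative), the command would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 8888:c0047:8888 BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;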
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate large IO data transfer, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6015</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6015"/>
		<updated>2020-01-15T03:25:04Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Build your conda environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on Jupyter Notebook, see the [http://jupyter.org/documentation documentation].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you need to run on a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can take about 10-20 seconds, or much longer, depending on the resources requested (CPU count and memory).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it and copy the resulting file to cheaha. The example below exports the scibert environment, which was set up on a different server.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
This file should be in your home directory on cheaha.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
&lt;br /&gt;
==  Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build the environment on the head node; use an interactive job to create it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Load environment on Jupyter Notebook ==&lt;br /&gt;
[[File:JupyterCustomEnv.jpeg|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default; on updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now start a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and/or NLP libraries (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate large IO data transfers, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
If your job needs more memory, request a larger --mem-per-cpu value, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:JupyterCustomEnv.jpeg&amp;diff=6014</id>
		<title>File:JupyterCustomEnv.jpeg</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:JupyterCustomEnv.jpeg&amp;diff=6014"/>
		<updated>2020-01-15T03:22:35Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: File showing what a custom conda environment looks like in Jupyter On Demand&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File showing what a custom conda environment looks like in Jupyter On Demand&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6013</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6013"/>
		<updated>2020-01-15T03:21:49Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Set up your cheaha .condarc file */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require running on a '''''GPU''''', please add the following to your environment. ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it for transfer to cheaha. The example below exports the scibert environment, which I set up on a different server.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
This file should be in your home directory on cheaha.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
&lt;br /&gt;
==  Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build the environment on the head node; use an interactive job to create it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the On Demand option instead and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default; on updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and relogin to cheaha, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and/or NLP libraries (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate large IO data transfers, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6012</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6012"/>
		<updated>2020-01-15T03:21:10Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Add ipkernel to your conda environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require running on a '''''GPU''''', please add the following to your environment. ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it for transfer to cheaha. The example below exports the scibert environment, which I set up on a different server.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
==  Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build the environment on the head node; use an interactive job to create it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the On Demand option instead and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default; on updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and relogin to cheaha, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and/or NLP libraries (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate large IO data transfers, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6011</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6011"/>
		<updated>2020-01-15T03:20:47Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Export YAML file containing your environment to cheaha */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require running on a '''''GPU''''', please add the following to your environment. ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it for transfer to cheaha. The example below exports the scibert environment, which I set up on a different server.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
==  Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda package is what makes your environment show up in Jupyter notebooks:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build the environment on the head node; use an interactive job to create it.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the On Demand option instead and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default; on updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and relogin to cheaha, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
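The substitution in step 4 is purely textual; as a sketch (the hostname c0047 and the token value here are placeholders, not real values):&lt;br /&gt;

```python
# Sketch of the step-4 substitution: swap the compute-node hostname
# for localhost, keeping the port and token unchanged.
notebook_url = "http://c0047:8888/?token=abc123"  # placeholder token

def localize(url: str, node: str) -> str:
    """Return the URL with the compute-node hostname replaced by localhost."""
    return url.replace(node, "localhost", 1)

print(localize(notebook_url, "c0047"))  # -> http://localhost:8888/?token=abc123
```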
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and NLP libraries (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6010</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6010"/>
		<updated>2020-01-15T03:20:27Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Export YAML file containing your environment to cheaha */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require running on a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it to cheaha. The example below exports the scibert environment.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Set up your cheaha .condarc file ==&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
== Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda module is what makes your environment show up in Jupyter notebooks:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build it on the head node; use an interactive job to create the new environment.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
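Once inside the interactive job, a minimal sketch of the build itself (assuming the YAML file from the export step; the file name is an illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load Anaconda3&lt;br /&gt;
conda env create -f scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;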
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; native ssh from Mac or Linux machines should work. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably for proxying connections to cheaha.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* For Windows users, you can find instructions for port forwarding [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser, as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and NLP libraries (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6009</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6009"/>
		<updated>2020-01-15T03:19:02Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Adding Custom Conda Environments to Jupyter */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require running on a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it to cheaha.&lt;br /&gt;
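A minimal sketch of this export (the environment name scibert and the copy destination are assumptions; substitute your own environment name and BlazerID):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
scp scibert.yml BLAZERID@cheaha.rc.uab.edu:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;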
== Set up your cheaha .condarc file ==&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
== Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda module is what makes your environment show up in Jupyter notebooks:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Build your conda environment ==&lt;br /&gt;
Do NOT build it on the head node; use an interactive job to create the new environment.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
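Once inside the interactive job, a minimal sketch of the build itself (assuming a YAML file exported from your working environment; the file name is an illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load Anaconda3&lt;br /&gt;
conda env create -f scibert.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;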
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; native ssh from Mac or Linux machines should work. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably for proxying connections to cheaha.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* For Windows users, you can find instructions for port forwarding [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser, as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and NLP libraries (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6008</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=6008"/>
		<updated>2020-01-15T03:17:20Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Added Custom Jupyter Conda Environments&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require running on a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Adding Custom Conda Environments to Jupyter = &lt;br /&gt;
== Export YAML file containing your environment to cheaha ==&lt;br /&gt;
Wherever your working environment is, export it to cheaha.&lt;br /&gt;
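A minimal sketch of this export (the environment name scibert and the copy destination are assumptions; substitute your own environment name and BlazerID):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda env export &amp;gt; scibert.yml&lt;br /&gt;
scp scibert.yml BLAZERID@cheaha.rc.uab.edu:~/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;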
== Set up your cheaha .condarc file ==&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
envs_dirs:&lt;br /&gt;
  - /data/user/ozborn/Conda_Env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use the data directory rather than the home directory for your conda environments, as they can get quite large.&lt;br /&gt;
== Add ipykernel to your conda environment ==&lt;br /&gt;
Adding this conda module is what makes your environment show up in Jupyter notebooks:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
name: scibert&lt;br /&gt;
channels:&lt;br /&gt;
  - defaults&lt;br /&gt;
dependencies:&lt;br /&gt;
  - ipykernel=5.1.2&lt;br /&gt;
  - _libgcc_mutex=0.1=main&lt;br /&gt;
  - alabaster=0.7.12=py37_0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; native ssh from Mac or Linux machines should work. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably for proxying connections to cheaha.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* For Windows users, you can find instructions for port forwarding [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser, as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and NLP libraries (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5998</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5998"/>
		<updated>2019-10-28T02:08:41Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require running on a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
[[File:LaunchJupyter.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or get a blue Launch button. This can happen in about 10-20 seconds or may take much longer depending on the resources (CPU count and memory requested).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead, and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run using the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job]. &lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now start a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, you can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your local browser, '''substituting localhost for c00XX'''. Make sure you use the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
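The substitution above can be sketched in Python; the example URL and token are placeholders, not real values.&lt;br /&gt;

```python
# Sketch: rewrite the compute-node URL printed by Jupyter so it goes
# through the local SSH tunnel (localhost) instead of the compute node.
from urllib.parse import urlsplit, urlunsplit

def localize(url):
    parts = urlsplit(url)
    # Keep the port and the token query string; swap only the hostname.
    return urlunsplit((parts.scheme, "localhost:%d" % parts.port,
                       parts.path, parts.query, parts.fragment))

print(localize("http://c0047:8888/?token=abc123"))
# -> http://localhost:8888/?token=abc123
```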
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
If you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname) --NotebookApp.iopub_data_rate_limit=1.0e10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
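Note that --mem-per-cpu is per CPU: with --cpus-per-task=4, the request above allocates 4 &amp;times; 16384 MB in total. A quick check of that arithmetic:&lt;br /&gt;

```python
# --mem-per-cpu is multiplied by --cpus-per-task to get the job's total memory.
cpus_per_task = 4
mem_per_cpu_mb = 16384
total_mb = cpus_per_task * mem_per_cpu_mb
print(total_mb // 1024)  # total in GB
# -> 64
```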
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:LaunchJupyter.png&amp;diff=5997</id>
		<title>File:LaunchJupyter.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:LaunchJupyter.png&amp;diff=5997"/>
		<updated>2019-10-28T02:07:11Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Button to press.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Button to press.&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5996</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5996"/>
		<updated>2019-10-28T02:02:24Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Adding numbering&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. For more information on Jupyter Notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
== 1. Click [http://rc.uab.edu On Demand] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 2. Select Interactive App and pick Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 3. Load in Anaconda ==&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
== 4. If you require a '''''GPU''''', add the following to your environment ==&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
== 5. Click Launch ==&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or the Launch button turns green. This can take about 10-20 seconds or much longer, depending on the resources requested (CPU count and memory).&lt;br /&gt;
&lt;br /&gt;
== 6. Connect to Jupyter Notebook ==&lt;br /&gt;
&lt;br /&gt;
== 7. Test Pytorch with a new notebook == &lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the On Demand option instead and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run using the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job]. &lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now start a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, you can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your local browser, '''substituting localhost for c00XX'''. Make sure you use the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
If you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname) --NotebookApp.iopub_data_rate_limit=1.0e10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5995</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5995"/>
		<updated>2019-10-28T01:57:22Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. For more information on Jupyter Notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
3. Load in Anaconda&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
4. If you require a '''''GPU''''', add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
5. Click Launch&lt;br /&gt;
&lt;br /&gt;
Wait until you receive an email or the Launch button turns green. This can take about 10-20 seconds or much longer, depending on the resources requested (CPU count and memory).&lt;br /&gt;
&lt;br /&gt;
6. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
7. Test Pytorch with a new notebook&lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the On Demand option instead and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run using the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job]. &lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now start a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, you can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your local browser, '''substituting localhost for c00XX'''. Make sure you use the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
If you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname) --NotebookApp.iopub_data_rate_limit=1.0e10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5994</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5994"/>
		<updated>2019-10-28T00:05:30Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Updated GPU instructions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. For more information on Jupyter Notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it, follow the steps below.&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
3. Load in Anaconda&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
4. If you require a '''''GPU''''', add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the pascalnodes argument:&lt;br /&gt;
&lt;br /&gt;
[[File:PascalNodes.png|500px]]&lt;br /&gt;
&lt;br /&gt;
5. Click Launch&lt;br /&gt;
&lt;br /&gt;
6. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
7. Test Pytorch with a new notebook&lt;br /&gt;
&lt;br /&gt;
[[File:TestPytorch.png|500px]]&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the On Demand option instead and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run using the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job]. &lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now start a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, you can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser as shown in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your local browser, '''substituting localhost for c00XX'''. Make sure you use the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
If you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$(hostname) --NotebookApp.iopub_data_rate_limit=1.0e10&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:PascalNodes.png&amp;diff=5993</id>
		<title>File:PascalNodes.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:PascalNodes.png&amp;diff=5993"/>
		<updated>2019-10-28T00:03:36Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Arguments for getting a GPU&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Arguments for getting a GPU&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:TestPytorch.png&amp;diff=5992</id>
		<title>File:TestPytorch.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:TestPytorch.png&amp;diff=5992"/>
		<updated>2019-10-28T00:03:18Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Shows successful pytorch run on pascalnodes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows successful pytorch run on pascalnodes&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:PascalNodes.pdf&amp;diff=5991</id>
		<title>File:PascalNodes.pdf</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:PascalNodes.pdf&amp;diff=5991"/>
		<updated>2019-10-27T23:57:35Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Shows arguments for starting a job using PascaleNodes to get a GPU&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows arguments for starting a job using pascalnodes to get a GPU&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:TestPytorch.pdf&amp;diff=5990</id>
		<title>File:TestPytorch.pdf</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:TestPytorch.pdf&amp;diff=5990"/>
		<updated>2019-10-27T23:57:03Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Shows success test of PyTorch on Jupter Notebook Oct 2019&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows a successful test of PyTorch on a Jupyter Notebook, Oct 2019&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5989</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5989"/>
		<updated>2019-10-24T16:19:56Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook] is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. For more information on Jupyter Notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
3. Load in Anaconda&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
4. If you require running on a '''''GPU''''', please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the GPU argument:&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
5. Click Launch&lt;br /&gt;
&lt;br /&gt;
6. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = &lt;br /&gt;
'''''(no longer required as of August 2019; use the OnDemand option instead and this only as a fallback)'''''&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
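&lt;br /&gt;
As a minimal sketch, a personal conda environment can be built and registered as a notebook kernel roughly as follows (the environment name ''myenv'' and the package list are placeholders, not a site standard):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda create -n myenv python=3.6 ipykernel&lt;br /&gt;
conda activate myenv&lt;br /&gt;
python -m ipykernel install --user --name myenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;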
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and relogin to cheaha, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
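&lt;br /&gt;
For example, if the notebook reported node c0047 and port 8888, the tunnel command would be (BLAZERID remains your own login):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 8888:c0047:8888 BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;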
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide a URL including a port (typically but not always 8888) and a compute node on cheaha (for example c0047) that looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the URL generated by jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and/or NLP libraries (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5988</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5988"/>
		<updated>2019-10-24T16:17:27Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter by Proxy */  Warning added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
3. Load in Anaconda&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
4. If you require running on a '''GPU''', please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the GPU argument:&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
5. Click Launch&lt;br /&gt;
&lt;br /&gt;
6. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy = (no longer required as of August 2019; use the OnDemand option instead and this only as a fallback)&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and relogin to cheaha, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide a URL including a port (typically but not always 8888) and a compute node on cheaha (for example c0047) that looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the URL generated by jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and/or NLP libraries (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5987</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5987"/>
		<updated>2019-10-24T16:16:28Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */  Cleanup&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
3. Load in Anaconda&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
The following should also work for an updated version of Anaconda.&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
4. If you require running on a '''GPU''', please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the GPU argument:&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
5. Click Launch&lt;br /&gt;
&lt;br /&gt;
6. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and relogin to cheaha, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide a URL including a port (typically but not always 8888) and a compute node on cheaha (for example c0047) that looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the URL generated by jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and/or NLP libraries (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5986</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5986"/>
		<updated>2019-10-24T16:14:41Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
3. Load in Anaconda&lt;br /&gt;
'''&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. If running on a GPU, please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
Additionally, you will need to request a GPU as shown below by&lt;br /&gt;
including the GPU argument:&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
5. Click Launch&lt;br /&gt;
&lt;br /&gt;
6. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and relogin to cheaha, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, port-forwarding instructions are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide a URL including a port (typically but not always 8888) and a compute node on cheaha (for example c0047) that looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the URL generated by jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&amp;amp;token=73da89e0eabdeb7d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional Deep Learning and/or NLP libraries (pytorch, spacy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5985</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5985"/>
		<updated>2019-10-22T16:31:02Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. If running on a GPU, please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
 module load Anaconda3/5.3.1&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
An example screenshot, including the added GPU argument, is shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
4. Click Launch&lt;br /&gt;
&lt;br /&gt;
5. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note is that cheaha only supports OpenSSH; you should be able to use native ssh from Mac or Linux machines. Windows 10 supports OpenSSH as well, but it is not enabled by default. On updated Windows 10 machines, a '''Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* Windows users can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
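The port-forwarding step can be sketched with hypothetical values (node c0047, port 8888, and BLAZERID as a placeholder, not a real account):&lt;br /&gt;

```shell
# Sketch only, with hypothetical values: the notebook reported node c0047 and
# port 8888; BLAZERID is a placeholder for your actual BlazerID.
node=c0047
port=8888
blazerid=BLAZERID
# Using the same port on both ends means the notebook URL later needs only its
# hostname changed (c0047 -> localhost), not its port.
cmd="ssh -L ${port}:${node}:${port} ${blazerid}@cheaha.rc.uab.edu"
echo "$cmd"
```

Running the printed command in a new local terminal opens the tunnel; it is the same form as the ssh -L command above.&lt;br /&gt;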
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047). The output looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
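The hostname substitution in step 4 can be sketched as a one-line rewrite of the URL printed in step 3 (the token here is the example value from above, not a real token):&lt;br /&gt;

```shell
# Sketch: swap the compute-node hostname for localhost, keeping port and token.
node_url='http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c'
local_url=$(printf '%s' "$node_url" | sed 's|//c[0-9]*:|//localhost:|')
echo "$local_url"
```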
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit, as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
If your job needs more memory, request a larger allocation, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5984</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5984"/>
		<updated>2019-10-22T16:26:33Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. If running on a GPU, please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
 module load Anaconda3/5.3.0&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
An example screenshot, including the added GPU argument, is shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
4. Click Launch&lt;br /&gt;
&lt;br /&gt;
5. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not proxy connections reliably on cheaha.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* Windows users can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047). The output looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit, as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5983</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5983"/>
		<updated>2019-10-22T16:15:08Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Updated Anaconda environment&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. If running on a GPU, please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
 module load cuda92/toolkit/9.2.88&lt;br /&gt;
 module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
 module load Anaconda/5.3.0&lt;br /&gt;
&lt;br /&gt;
'''&lt;br /&gt;
&lt;br /&gt;
An example screenshot, including the added GPU argument, is shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
4. Click Launch&lt;br /&gt;
&lt;br /&gt;
5. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not proxy connections reliably on cheaha.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* Windows users can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047). The output looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit, as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5932</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5932"/>
		<updated>2019-09-17T15:31:11Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: GPU environment description&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. If running on a GPU, please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
'''module load cuda92/toolkit/9.2.88&lt;br /&gt;
&lt;br /&gt;
module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
&lt;br /&gt;
module load Anaconda3'''&lt;br /&gt;
&lt;br /&gt;
An example screenshot, including the added GPU argument, is shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
4. Click Launch&lt;br /&gt;
&lt;br /&gt;
5. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not proxy connections reliably on cheaha.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* Windows users can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047). The output looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit, as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:JupyterNotebookGPUEnvironment.png&amp;diff=5931</id>
		<title>File:JupyterNotebookGPUEnvironment.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:JupyterNotebookGPUEnvironment.png&amp;diff=5931"/>
		<updated>2019-09-17T15:29:21Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Arguments for using GPUs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Arguments for using GPUs&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5930</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5930"/>
		<updated>2019-09-17T15:29:00Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. If running on a GPU, please add the following to your environment.&lt;br /&gt;
&lt;br /&gt;
module load cuda92/toolkit/9.2.88&lt;br /&gt;
&lt;br /&gt;
module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
&lt;br /&gt;
module load Anaconda3&lt;br /&gt;
&lt;br /&gt;
Example shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookGPUEnvironment.png|500px]]&lt;br /&gt;
&lt;br /&gt;
4. Click Launch&lt;br /&gt;
&lt;br /&gt;
5. Connect to Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not proxy connections reliably on cheaha.&lt;br /&gt;
&lt;br /&gt;
The Jupyter notebook is built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start up a '''new''' tab/terminal/window on your client machine and log in to cheaha again, using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the Jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port the notebook is running on, for example 8888&lt;br /&gt;
* Windows users can find port-forwarding instructions [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and print a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047). The output looks something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by the Jupyter notebook in your client machine's browser, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit, as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU, add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5929</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5929"/>
		<updated>2019-09-17T15:24:11Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */  Cleanup starting Jupyter&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand]. To access it:&lt;br /&gt;
&lt;br /&gt;
1. Click [http://rc.uab.edu On Demand]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select Interactive App and pick Jupyter Notebook&lt;br /&gt;
&lt;br /&gt;
[[File:JupyterNotebookStart.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. If running on a GPU, please add the following to your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
module load cuda92/toolkit/9.2.88&lt;br /&gt;
&lt;br /&gt;
module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
&lt;br /&gt;
module load Anaconda3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start a '''new''' tab/terminal/window on your client machine and log in to cheaha again using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, instructions for port forwarding are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
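As a concrete illustration, the tunnel command can be assembled from the example values above (node c0047, port 8888; BLAZERID is a placeholder for your own login). This is a sketch for clarity, not an additional required step:

```shell
# Build the port-forwarding command from the example values used on this page.
# NODE and PORT must match the compute node and port reported by your notebook.
NODE=c0047
PORT=8888
TUNNEL_CMD="ssh -L ${PORT}:${NODE}:${PORT} BLAZERID@cheaha.rc.uab.edu"
echo "$TUNNEL_CMD"   # run this command in the new terminal on your client machine
```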
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
If your job needs more memory, request a larger allocation with --mem-per-cpu (in MB), for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=File:JupyterNotebookStart.png&amp;diff=5928</id>
		<title>File:JupyterNotebookStart.png</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=File:JupyterNotebookStart.png&amp;diff=5928"/>
		<updated>2019-09-17T15:15:56Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: Image of starting Jupyter Notebook on cheaha&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Image of starting Jupyter Notebook on cheaha&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5927</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5927"/>
		<updated>2019-09-17T15:13:02Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Jupyter On Demand */ - environment arguments&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand].&lt;br /&gt;
&lt;br /&gt;
If running on a GPU, please add the following to your environment:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda92/toolkit/9.2.88&lt;br /&gt;
module load CUDA/9.2.88-GCC-7.3.0-2.30&lt;br /&gt;
module load Anaconda3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start a '''new''' tab/terminal/window on your client machine and log in to cheaha again using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, instructions for port forwarding are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
If your job needs more memory, request a larger allocation with --mem-per-cpu (in MB), for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5923</id>
		<title>Jupyter</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=Jupyter&amp;diff=5923"/>
		<updated>2019-08-31T22:21:47Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: On Demand added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://jupyter.org/ Jupyter Notebook]  is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. For more information on jupyter notebook, click [http://jupyter.org/documentation here].&lt;br /&gt;
&lt;br /&gt;
= Jupyter On Demand =&lt;br /&gt;
As of 2019, UAB Research Computing allows access to cheaha via [http://rc.uab.edu On Demand].&lt;br /&gt;
= Jupyter by Proxy =&lt;br /&gt;
&lt;br /&gt;
The cheaha cluster supports Jupyter notebooks for data analysis, but such jobs should be run through the SLURM job submission system to avoid overloading the head node. To run a Jupyter Notebook on cheaha, log in to cheaha from your client machine and start an [https://docs.uabgrid.uab.edu/wiki/Slurm#Interactive_Job interactive job].&lt;br /&gt;
&lt;br /&gt;
One important note: cheaha only supports OpenSSH, so you should be able to use native ssh from Mac or Linux machines. Windows 10 also supports OpenSSH, but it is not enabled by default. On updated Windows 10 machines, '''a Developer Command Prompt''' (available by searching from the Start Menu) can run OpenSSH via the ssh command, just as on Mac and Linux. Another option for Windows machines is installing Cygwin. PuTTY has been [[Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems|tested]], but does not work reliably on cheaha for proxying connections.&lt;br /&gt;
&lt;br /&gt;
Jupyter notebooks on cheaha are built with [[Anaconda]], a free and open-source distribution of Python and R for scientific computing. If you need additional packages, you can create your own [[Python_Virtual_Environment]] just for that purpose.&lt;br /&gt;
&lt;br /&gt;
== 1. Start the Jupyter Notebook ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME --pty /bin/bash&lt;br /&gt;
module load Anaconda3/5.2.0&lt;br /&gt;
unset XDG_RUNTIME_DIR&lt;br /&gt;
jupyter notebook --no-browser --ip=$host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A headless Jupyter notebook should now be running on a compute node. The next step is to proxy this connection to your local machine.&lt;br /&gt;
&lt;br /&gt;
== 2. Proxy Connection Locally ==&lt;br /&gt;
Now, start a '''new''' tab/terminal/window on your client machine and log in to cheaha again using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -L 88XX:c00XX:88XX BLAZERID@cheaha.rc.uab.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Note:'''&lt;br /&gt;
* '''c00XX''' is the compute node where you started the jupyter notebook, for example c0047&lt;br /&gt;
* '''88XX''' is the port on which the notebook is running, for example 8888&lt;br /&gt;
* For Windows users, instructions for port forwarding are available [https://docs.uabgrid.uab.edu/wiki/Setting_Up_VNC_Session#Port-forwarding_from_Windows_Systems here]&lt;br /&gt;
&lt;br /&gt;
== 3. Copy notebook URL ==&lt;br /&gt;
After running the jupyter notebook command, the server should start in headless mode and provide you with a URL that includes a port number (typically, but not always, 8888) and a compute node on cheaha (for example c0047), looking something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Copy/paste this URL into your browser when you connect for the first time,&lt;br /&gt;
    to login with a token:&lt;br /&gt;
        http://c0047:8888/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy the URL shown above into your clipboard/buffer for pasting into the browser in step 4.&lt;br /&gt;
&lt;br /&gt;
== 4. Access Notebook through Local Browser via Proxy Connection ==&lt;br /&gt;
Now open the link generated by jupyter notebook in the browser on your client machine, '''substituting localhost for c00XX'''. Make sure you have the correct port as well.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://localhost:88XX/?token=73da89e0eabdeb9d6dc1241a55754634d4e169357f60626c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A Jupyter notebook should then open in your browser connected to the compute node.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Options ==&lt;br /&gt;
&lt;br /&gt;
=== DeepNLP option (development in progress) ===&lt;br /&gt;
To use additional libraries related to deep learning and/or NLP (PyTorch, spaCy), run the following after loading Anaconda3/5.2.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda activate /share/apps/rc/software/Anaconda3/5.2.0/envs/DeepNLP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Heavy Data IO option ===&lt;br /&gt;
Additionally, if you anticipate heavy data IO, adjust the run command to set a higher data rate limit as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jupyter notebook --no-browser --ip=$host --NotebookApp.iopub_data_rate_limit=1.0e10 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Memory Heavy option ===&lt;br /&gt;
If your job needs more memory, request a larger allocation with --mem-per-cpu (in MB), for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=16384 --time=08:00:00 --partition=medium --job-name=POSTag --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPU Option ===&lt;br /&gt;
Finally, if your job requires a GPU then add the [https://docs.uabgrid.uab.edu/wiki/Slurm#Requesting_for_GPUs gres and partition arguments] as shown below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4096 --time=08:00:00 --partition=pascalnodes --job-name=JOB_NAME --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5887</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5887"/>
		<updated>2019-01-30T17:38:14Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* NLP Discovered Patients */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool at UAB used to assist researchers and clinicians in identifying populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents via Natural Language Processing. The user documentation described here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Currently this is very site specific and depends on various backend systems, EHR vendors, etc... UAB specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to view de-identified data (18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Vocabulary ====&lt;br /&gt;
* '''Code''' - Structured data in the ''REGISTRY_PATIENT_CODES'' table including ICD-10-CM codes, DRG codes, MULTUM medication codes, etc...&lt;br /&gt;
* '''NLP''' - UMLS CUIs stored (currently) in the medics schema &lt;br /&gt;
* '''Encounter''' - Encounter data stored in ''REGISTRY_ENCOUNTER''&lt;br /&gt;
* '''Detection''' - Indicates that detected patient will be added as candidates to a registry&lt;br /&gt;
* '''Intervention''' - Like detection, but the Intervention property will be set on the patient regardless of status&lt;br /&gt;
* '''Collection''' - The data type will be copied to the ''REGISTRY_PATIENT_CODES'' table or ''REGISTRY_ENCOUNTER'' from the appropriate ETL table&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Process ====&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, the configuration information is stored in the cvterm and cvtermprop tables for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry specific events occur based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server-side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
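As an illustration only (the actual logic lives in loadAll.bsh and the backend clients, not in this hypothetical function), the Diagnosis (Code_Detection) transitions in this section can be sketched as:

```python
# Hypothetical sketch of the Diagnosis (Code_Detection) transition rules;
# the real implementation is the loadAll.bsh-controlled backend process.
def code_detection_transition(status, review_status):
    """Return (new_status, new_review_status) for a patient when a
    PATIENT_CODE entry matching the diagnosis criteria is found."""
    if status is None:
        # Unknown patient: add as candidate, never reviewed
        return "CANDIDATE", "NEVER REVIEWED"
    if status in ("Rejected", "Provisional"):
        # Flag rejected and provisional candidates for review
        return status, "NEEDS REVIEW"
    # Accepted/Candidate: ignore diagnosis criteria, no change
    return status, review_status
```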
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !!  Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY _PATIENT _CVTERM !!  Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY _PATIENT _CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare date of code/encounter with the end(if not null) or start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS  || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
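The nlp_* value fields in the table above (e.g. nlp_intervention|C98764323,C0012345,C0654321) follow a simple action|CUI-list convention. A minimal parsing sketch, assuming only that format; the helper name parse_nlp_value is hypothetical, not part of the PheDRS codebase:

```python
def parse_nlp_value(value):
    """Split a CVTERMPROP value of the form 'action|CUI,CUI,...' into
    its action prefix and its list of UMLS CUIs."""
    action, _, cui_part = value.partition("|")
    # Drop empty fragments so trailing commas do not produce empty CUIs
    cuis = [c for c in cui_part.split(",") if c]
    return action, cuis
```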
&lt;br /&gt;
== Evidence Code Ontology ==&lt;br /&gt;
The evidence code ontology is used to describe how a patient's status in the registry (ACCEPTED, REJECTED, CANDIDATE OR PROVISIONAL) has been decided.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Evidence Code&lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Short form&lt;br /&gt;
! Interpretation&lt;br /&gt;
|-&lt;br /&gt;
| Referred&lt;br /&gt;
| REF&lt;br /&gt;
| Current patient registry status has been assigned manually&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm Discovered Patient&lt;br /&gt;
| ALG&lt;br /&gt;
| Current patient registry status assigned by an algorithm&lt;br /&gt;
|-&lt;br /&gt;
| NLP Discovered Patient&lt;br /&gt;
| NLP&lt;br /&gt;
| Current patient registry status has been assigned on the basis of NLP processed text.&lt;br /&gt;
|-&lt;br /&gt;
| Code discovered patient&lt;br /&gt;
| DIAG&lt;br /&gt;
| Current patient registry status has been assigned on the basis of a structured code (e.g., ICD-10-CM)&lt;br /&gt;
|-&lt;br /&gt;
| Encounter Discovered Patient&lt;br /&gt;
| ENC&lt;br /&gt;
| Current patient registry status has been assigned on the basis of an encounter&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Registry Cvtermprop Data Loading Values for Evidence Updating ===&lt;br /&gt;
==== Code Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop must have a property in cvtermprop starting with &amp;quot;Code&amp;quot; and ending with &amp;quot;Detection&amp;quot;&lt;br /&gt;
* Registry cvtermprop must have a type_id with appropriate ontology and a value that ends with &amp;quot;diagnosis criteria&amp;quot; or &amp;quot;inclusion criteria&amp;quot;&lt;br /&gt;
==== Encounter Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop value must have criteria for &amp;quot;Encounter Intervention&amp;quot; or &amp;quot;Encounter Detection&amp;quot; and non-null SQL clause in the value field&lt;br /&gt;
==== NLP Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop must have a property in cvtermprop starting with &amp;quot;NLP&amp;quot; AND ending with (&amp;quot;Detection&amp;quot; or &amp;quot;Intervention&amp;quot; or &amp;quot;Inclusion&amp;quot;)&lt;br /&gt;
* Registry cvtermprop must have a type_id with appropriate ontology AND a value that ends with (&amp;quot;detection NLP criteria&amp;quot; or &amp;quot;inclusion NLP criteria&amp;quot; or &amp;quot;intervention NLP criteria&amp;quot;)&lt;br /&gt;
TODO - make separate CV for this&lt;br /&gt;
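The matching rules above can be sketched as a single classification function. classify_evidence and its (prop, value) string inputs are assumptions for illustration only, not part of the PheDRS codebase:

```python
def classify_evidence(prop, value):
    """Return the evidence short form (DIAG, ENC, NLP) implied by a
    registry cvtermprop property/value pair, or None if no rule matches."""
    # Code Discovered Patients (DIAG)
    if prop.startswith("Code") and prop.endswith("Detection"):
        if value.endswith("diagnosis criteria") or value.endswith("inclusion criteria"):
            return "DIAG"
    # Encounter Discovered Patients (ENC): criteria name plus a non-null SQL clause
    if prop in ("Encounter Intervention", "Encounter Detection") and value:
        return "ENC"
    # NLP Discovered Patients (NLP)
    if prop.startswith("NLP") and prop.endswith(("Detection", "Intervention", "Inclusion")):
        if value.endswith(("detection NLP criteria", "inclusion NLP criteria", "intervention NLP criteria")):
            return "NLP"
    return None
```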
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5886</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5886"/>
		<updated>2019-01-30T17:31:06Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Evidence Code Ontology */ Updating NLP&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. The user documentation described here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Currently, installation is highly site-specific and depends on various backend systems, EHR vendors, and so on. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry, with full read and write access. A manager can add new users to the registry as registrars or managers.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars who are no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to view de-identified data (the 18 HIPAA Safe Harbor items removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
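The Encounter Collection and Encounter Intervention value fields above are SQL WHERE-clause fragments that begin with AND. A hypothetical sketch of how such a stored fragment might be appended to a base encounter query; build_encounter_query and the base SELECT are illustrative assumptions, not the actual PheDRS implementation:

```python
def build_encounter_query(fragment):
    """Append a registry-specific cvtermprop SQL fragment (which starts
    with AND) to an assumed base encounter query."""
    base = "SELECT * FROM SOURCE_ENCOUNTERS WHERE 1=1"
    return base + " " + fragment.strip()
```

Because the stored value always begins with AND, the base query's trailing WHERE 1=1 lets the fragment be concatenated without further rewriting.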
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Vocabulary ====&lt;br /&gt;
* '''Code''' - Structured data in the ''REGISTRY_PATIENT_CODES'' table including ICD-10-CM codes, DRG codes, MULTUM medication codes, etc...&lt;br /&gt;
* '''NLP''' - UMLS CUIs stored (currently) in the medics schema &lt;br /&gt;
* '''Encounter''' - Encounter data stored in ''REGISTRY_ENCOUNTER''&lt;br /&gt;
* '''Detection''' - Indicates that detected patients will be added as candidates to a registry&lt;br /&gt;
* '''Intervention''' - Like detection, but the Intervention property will be set on the patient regardless of status&lt;br /&gt;
* '''Collection''' - The data type will be copied to the ''REGISTRY_PATIENT_CODES'' table or ''REGISTRY_ENCOUNTER'' from the appropriate ETL table&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Process ====&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, the configuration information is stored in the cvterm and cvtermprop tables for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry specific events occur based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !!  Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY _PATIENT _CVTERM !!  Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY _PATIENT _CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or the start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility; MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry-specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS  || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
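The Encounter Collection value format described in the table above pairs a REGISTRY_ENCOUNTER column name with a regular expression, using a single pipe within a pair and a double pipe between pairs. A sketch of parsing that format, assuming it exactly; parse_collection_value is a hypothetical helper name:

```python
import re

def parse_collection_value(value):
    """Parse 'COLUMN|REGEX||COLUMN|REGEX...' into a dict mapping
    REGISTRY_ENCOUNTER column names to compiled regular expressions."""
    rules = {}
    for pair in value.split("||"):
        if not pair:
            continue  # skip empty fragments from trailing separators
        column, _, pattern = pair.partition("|")
        rules[column] = re.compile(pattern)
    return rules
```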
&lt;br /&gt;
== Evidence Code Ontology ==&lt;br /&gt;
The evidence code ontology is used to describe how a patient's status in the registry (ACCEPTED, REJECTED, CANDIDATE OR PROVISIONAL) has been decided.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Evidence Code&lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Short form&lt;br /&gt;
! Interpretation&lt;br /&gt;
|-&lt;br /&gt;
| Referred&lt;br /&gt;
| REF&lt;br /&gt;
| Current patient registry status has been assigned manually&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm Discovered Patient&lt;br /&gt;
| ALG&lt;br /&gt;
| Current patient registry status assigned by an algorithm&lt;br /&gt;
|-&lt;br /&gt;
| NLP Discovered Patient&lt;br /&gt;
| NLP&lt;br /&gt;
| Current patient registry status has been assigned on the basis of NLP processed text.&lt;br /&gt;
|-&lt;br /&gt;
| Code discovered patient&lt;br /&gt;
| DIAG&lt;br /&gt;
| Current patient registry status has been assigned on the basis of a structured code (e.g., ICD-10-CM)&lt;br /&gt;
|-&lt;br /&gt;
| Encounter Discovered Patient&lt;br /&gt;
| ENC&lt;br /&gt;
| Current patient registry status has been assigned on the basis of an encounter&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Registry Cvtermprop Data Loading Values for Evidence Updating ===&lt;br /&gt;
==== Code Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop must have a property in cvtermprop starting with &amp;quot;Code&amp;quot; and ending with &amp;quot;Detection&amp;quot;&lt;br /&gt;
* Registry cvtermprop must have a type_id with appropriate ontology and a value that ends with &amp;quot;diagnosis criteria&amp;quot; or &amp;quot;inclusion criteria&amp;quot;&lt;br /&gt;
==== Encounter Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop value must have criteria for &amp;quot;Encounter Intervention&amp;quot; or &amp;quot;Encounter Detection&amp;quot; and non-null SQL clause in the value field&lt;br /&gt;
==== NLP Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop must have a property in cvtermprop starting with &amp;quot;NLP&amp;quot; AND ending with (&amp;quot;Detection&amp;quot; or &amp;quot;Intervention&amp;quot;)&lt;br /&gt;
* Registry cvtermprop must have a type_id with appropriate ontology AND a value that ends with (&amp;quot;NLP criteria&amp;quot; or &amp;quot;inclusion NLP criteria&amp;quot; or &amp;quot;intervention NLP criteria&amp;quot;)&lt;br /&gt;
TODO - make separate CV for this&lt;br /&gt;
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5885</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5885"/>
		<updated>2019-01-30T17:19:45Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Evidence Code Ontology */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. The user documentation described here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Currently, installation is highly site-specific and depends on various backend systems, EHR vendors, and so on. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry, with full read and write access. A manager can add new users to the registry as registrars or managers.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars who are no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to view de-identified data (the 18 HIPAA Safe Harbor items removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Vocabulary ====&lt;br /&gt;
* '''Code''' - Structured data in the ''REGISTRY_PATIENT_CODES'' table including ICD-10-CM codes, DRG codes, MULTUM medication codes, etc...&lt;br /&gt;
* '''NLP''' - UMLS CUIs stored (currently) in the medics schema &lt;br /&gt;
* '''Encounter''' - Encounter data stored in ''REGISTRY_ENCOUNTER''&lt;br /&gt;
* '''Detection''' - Indicates that detected patients will be added as candidates to a registry&lt;br /&gt;
* '''Intervention''' - Like detection, but the Intervention property will be set on the patient regardless of status&lt;br /&gt;
* '''Collection''' - The data type will be copied to the ''REGISTRY_PATIENT_CODES'' table or ''REGISTRY_ENCOUNTER'' from the appropriate ETL table&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Process ====&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, the configuration information is stored in the cvterm and cvtermprop tables for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry specific events occur based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !!  Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY _PATIENT _CVTERM !!  Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY _PATIENT _CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or the start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility; MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry-specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERMPROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS  || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
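The pipe-delimited value fields shown above (e.g. nlp_detection|C98764323,C0012345,C0654321) separate a criteria name from a comma-separated list of UMLS CUIs. A minimal parsing sketch in Python follows; the function name is illustrative and not part of the PheDRS codebase:&lt;br /&gt;

```python
def parse_criteria_value(value):
    """Split a CVTERMPROP value such as 'nlp_detection|C0012345,C0654321'
    into its criteria name and its list of UMLS CUIs."""
    criteria, _, cui_part = value.partition("|")
    cuis = [c.strip() for c in cui_part.split(",") if c.strip()]
    return criteria, cuis

name, cuis = parse_criteria_value("nlp_detection|C98764323,C0012345,C0654321")
# name is "nlp_detection"; cuis holds the three CUI strings
```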
&lt;br /&gt;
== Evidence Code Ontology ==&lt;br /&gt;
The evidence code ontology describes how a patient's status in the registry (ACCEPTED, REJECTED, CANDIDATE, or PROVISIONAL) was decided.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Evidence Code&lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Short form&lt;br /&gt;
! Interpretation&lt;br /&gt;
|-&lt;br /&gt;
| Referred&lt;br /&gt;
| REF&lt;br /&gt;
| Current patient registry status has been assigned manually&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm Discovered Patient&lt;br /&gt;
| ALG&lt;br /&gt;
| Current patient registry status has been assigned by an algorithm&lt;br /&gt;
|-&lt;br /&gt;
| NLP Discovered Patient&lt;br /&gt;
| NLP&lt;br /&gt;
| Current patient registry status has been assigned on the basis of NLP processed text.&lt;br /&gt;
|-&lt;br /&gt;
| Code discovered patient&lt;br /&gt;
| DIAG&lt;br /&gt;
| Current patient registry status has been assigned on the basis of a structured code (e.g., ICD-10-CM)&lt;br /&gt;
|-&lt;br /&gt;
| Encounter Discovered Patient&lt;br /&gt;
| ENC&lt;br /&gt;
| Current patient registry status has been assigned on the basis of an encounter&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Registry Cvtermprop Data Loading Values for Evidence Updating ===&lt;br /&gt;
==== Code Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop value must end with &amp;quot;diagnosis criteria&amp;quot; or &amp;quot;inclusion criteria&amp;quot;&lt;br /&gt;
==== Encounter Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop value must have criteria for &amp;quot;Encounter Intervention&amp;quot; or &amp;quot;Encounter Detection&amp;quot; and non-null SQL clause in the value field&lt;br /&gt;
==== NLP Discovered Patients ====&lt;br /&gt;
* Registry cvtermprop value must end with &amp;quot;NLP criteria&amp;quot; or &amp;quot;inclusion NLP criteria&amp;quot;&lt;br /&gt;
TODO - make separate CV for this&lt;br /&gt;
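The suffix conventions above can be captured in a small lookup. A hedged sketch in Python; the function name and the case-insensitive matching are assumptions, not the actual implementation:&lt;br /&gt;

```python
def evidence_code_for(value):
    """Map a registry cvtermprop value string to an evidence code short
    form using the suffix conventions for code- and NLP-discovered
    patients. Returns None when no convention matches."""
    v = value.lower()
    # Check NLP first: "inclusion NLP criteria" also ends in "nlp criteria"
    if v.endswith("nlp criteria"):
        return "NLP"
    if v.endswith("diagnosis criteria") or v.endswith("inclusion criteria"):
        return "DIAG"
    return None
```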
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5884</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5884"/>
		<updated>2019-01-30T16:59:05Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Developer Guide to PheDRS Data Loading Criteria */ Updating vocabulary meaning.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest that match clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents processed with Natural Language Processing. The user documentation here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project-specific information is found in the appropriate git repo wikis and Markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB-specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Currently this is very site-specific and depends on various backend systems, EHR vendors, etc. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars who are no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
Viewer allowed to view de-identified data (18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
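The Encounter Collection and Encounter Intervention values above are SQL fragments beginning with AND that the back end appends to a base encounter query. A minimal sketch of that composition in Python; the base SELECT and the SOURCE_ENCOUNTERS table name are placeholders, since the real query is site-specific:&lt;br /&gt;

```python
def build_encounter_query(criteria_sql):
    """Append a registry's CVTERMPROP SQL fragment (which begins with
    'AND ...') to a base encounter query. SOURCE_ENCOUNTERS and the
    WHERE 1=1 idiom are illustrative, not the actual source system."""
    base = "SELECT * FROM SOURCE_ENCOUNTERS WHERE 1=1"
    return base + " " + criteria_sql.strip()

query = build_encounter_query("AND (SRC_ADMIT_DT_TM >= (SYSDATE-730))")
```

The WHERE 1=1 base clause lets any number of AND fragments be appended without tracking whether a WHERE keyword has already been emitted.&lt;br /&gt;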
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Vocabulary ====&lt;br /&gt;
* '''Code''' - Structured data in the ''REGISTRY_PATIENT_CODES'' table including ICD-10-CM codes, DRG codes, MULTUM medication codes, etc...&lt;br /&gt;
* '''NLP''' - UMLS CUIs stored (currently) in the medics schema &lt;br /&gt;
* '''Encounter''' - Encounter data stored in ''REGISTRY_ENCOUNTER''&lt;br /&gt;
* '''Detection''' - Indicates that detected patient will be added as candidates to a registry&lt;br /&gt;
* '''Intervention''' - Like detection, but the Intervention property will be set on the patient regardless of status&lt;br /&gt;
* '''Collection''' - The data type will be copied to the ''REGISTRY_PATIENT_CODES'' table or ''REGISTRY_ENCOUNTER'' from the appropriate ETL table&lt;br /&gt;
&lt;br /&gt;
==== Data Loading Process ====&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, the configuration information is stored in the cvterm and cvtermprop for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry-specific events occur, based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server-side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
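The Diagnosis (Code_Detection) rows of the table that follows amount to a simple status-transition rule, sketched here in Python. The function is illustrative only and is not the loadAll.bsh implementation:&lt;br /&gt;

```python
def apply_code_detection(status, review_status):
    """Transition rule for a PATIENT_CODE match against diagnosis
    criteria: unknown patients become candidates; rejected or
    provisional patients keep their status but are flagged for review;
    accepted or candidate patients are left unchanged."""
    if status is None:
        return "CANDIDATE", "NEVER REVIEWED"
    if status in ("Rejected", "Provisional"):
        return status, "NEEDS REVIEW"
    return status, review_status  # Accepted / Candidate: no change
```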
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !!  Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY _PATIENT _CVTERM !!  Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY _PATIENT _CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or the start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility; MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry-specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERMPROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS  || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Evidence Code Ontology ==&lt;br /&gt;
The evidence code ontology describes how a patient's status in the registry (ACCEPTED, REJECTED, CANDIDATE, or PROVISIONAL) was decided.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Evidence Code&lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Short form&lt;br /&gt;
! Interpretation&lt;br /&gt;
|-&lt;br /&gt;
| Referred&lt;br /&gt;
| REF&lt;br /&gt;
| Current patient registry status has been assigned manually&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm Discovered Patient&lt;br /&gt;
| ALG&lt;br /&gt;
| Current patient registry status has been assigned by an algorithm&lt;br /&gt;
|-&lt;br /&gt;
| NLP Discovered Patient&lt;br /&gt;
| NLP&lt;br /&gt;
| Current patient registry status has been assigned on the basis of NLP processed text.&lt;br /&gt;
|-&lt;br /&gt;
| Code discovered patient&lt;br /&gt;
| DIAG&lt;br /&gt;
| Current patient registry status has been assigned on the basis of a structured code (e.g., ICD-10-CM)&lt;br /&gt;
|-&lt;br /&gt;
| Encounter Discovered Patient&lt;br /&gt;
| ENC&lt;br /&gt;
| Current patient registry status has been assigned on the basis of an encounter&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5875</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5875"/>
		<updated>2019-01-24T20:39:55Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Phenotype Detection Registry System (PheDRS) User Documentation */  Evidence Codes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest that match clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents processed with Natural Language Processing. The user documentation here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project-specific information is found in the appropriate git repo wikis and Markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB-specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Currently this is very site-specific and depends on various backend systems, EHR vendors, etc. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars who are no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
Viewer allowed to view de-identified data (18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, the configuration information is stored in the cvterm and cvtermprop for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry-specific events occur, based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server-side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !!  Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY _PATIENT _CVTERM !!  Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY _PATIENT _CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS  || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
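The Encounter Collection value format above (&amp;quot;COLUMN_NAME|REGEX||COLUMN_NAME|REGEX…&amp;quot;) can be split into per-column rules. The sketch below is illustrative only; the column names and sample value are assumptions, not actual PheDRS configuration.&lt;br /&gt;

```python
import re

def parse_encounter_collection(value):
    """Split a CVTERMPROP value of the form COL|REGEX||COL|REGEX into
    (column_name, compiled_regex) pairs."""
    pairs = []
    for chunk in value.split("||"):
        if not chunk:
            continue
        # Only the first "|" separates column from pattern; the pattern may contain "|"
        column, _, pattern = chunk.partition("|")
        pairs.append((column, re.compile(pattern)))
    return pairs

# Hypothetical value field for a COPD registry
rules = parse_encounter_collection(
    "REASON_FOR_VISIT_TXT|.*COPD.*||DISCHARGE_DISPOSITION|HOME"
)
```

A loader could then test each REGISTRY_ENCOUNTER column value against its compiled pattern to decide which encounters to collect.&lt;br /&gt;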
&lt;br /&gt;
&lt;br /&gt;
== Evidence Code Ontology ==&lt;br /&gt;
The evidence code ontology is used to describe how a patient's status in the registry (ACCEPTED, REJECTED, CANDIDATE, or PROVISIONAL) was decided.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Evidence Code&lt;br /&gt;
|-&lt;br /&gt;
! Name&lt;br /&gt;
! Short form&lt;br /&gt;
! Interpretation&lt;br /&gt;
|-&lt;br /&gt;
| Referred&lt;br /&gt;
| REF&lt;br /&gt;
| Current patient registry status has been assigned manually&lt;br /&gt;
|-&lt;br /&gt;
| Algorithm Discovered Patient&lt;br /&gt;
| ALG&lt;br /&gt;
| Current patient registry status assigned by an algorithm&lt;br /&gt;
|-&lt;br /&gt;
| NLP Discovered Patient&lt;br /&gt;
| NLP&lt;br /&gt;
| Current patient registry status has been assigned on the basis of NLP processed text.&lt;br /&gt;
|-&lt;br /&gt;
| Code discovered patient&lt;br /&gt;
| DIAG&lt;br /&gt;
| Current patient registry status has been assigned on the basis of a structured code (e.g., ICD-10-CM)&lt;br /&gt;
|-&lt;br /&gt;
| Encounter Discovered Patient&lt;br /&gt;
| ENC&lt;br /&gt;
| Current patient registry status has been assigned on the basis of an encounter&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
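For client code that needs to display status provenance, the evidence codes above reduce to a simple lookup. This is a sketch derived from the table, not an actual PheDRS API; the variable name is hypothetical.&lt;br /&gt;

```python
# Evidence codes from the table above, mapped to their interpretations
EVIDENCE_CODES = {
    "REF": "Referred: status assigned manually",
    "ALG": "Algorithm Discovered Patient",
    "NLP": "NLP Discovered Patient: status assigned from NLP-processed text",
    "DIAG": "Code discovered patient (e.g., ICD-10-CM)",
    "ENC": "Encounter Discovered Patient",
}

def describe_evidence(short_form):
    """Return a human-readable interpretation for an evidence code."""
    return EVIDENCE_CODES.get(short_form, "Unknown evidence code")
```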
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5874</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5874"/>
		<updated>2019-01-24T20:28:59Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Installation / Setup */ TODO&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a UAB tool that helps researchers and clinicians identify populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. The user documentation described here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project-specific information is found in the appropriate git repo wikis and markdown files&lt;br /&gt;
&lt;br /&gt;
2. UAB-specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Currently, installation is highly site-specific and depends on various backend systems, EHR vendors, etc. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
Viewer allowed to view de-identified data (the 18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
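The Encounter Collection and Encounter Intervention values above are SQL fragments beginning with &amp;quot;AND …&amp;quot;, which suggests a backend client appends them to a base encounter query. The sketch below illustrates that composition; the base query and function name are assumptions, not the actual PheDRS implementation.&lt;br /&gt;

```python
# Hypothetical base query; PheDRS's real base encounter SELECT is not shown here
BASE_QUERY = "SELECT * FROM ENCOUNTERS WHERE REGISTRY_ID = :registry_id"

def build_encounter_query(base, stored_fragment):
    """Append a stored 'AND ...' criteria fragment to the base WHERE clause."""
    return base + " " + stored_fragment.strip()

# Fragment taken from the Encounter Collection example row above
query = build_encounter_query(
    BASE_QUERY,
    "AND (SRC_ADMIT_DT_TM >= (SYSDATE-730))",
)
```

Because fragments are stored with a leading AND, they can be chained onto any query that already has a WHERE clause.&lt;br /&gt;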
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, the configuration information is stored in the cvterm and cvtermprop entries for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry specific events occur based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server-side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
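The Diagnosis (Code_Detection) rows in the table below can be read as a small state transition: new patients become CANDIDATE / NEVER REVIEWED, Rejected or Provisional patients are flagged NEEDS REVIEW, and Accepted or Candidate patients are left unchanged. The following is a minimal sketch of that logic, not the actual loadAll.bsh implementation; the function name and string values are assumptions.&lt;br /&gt;

```python
def apply_code_detection(status, review_status):
    """Return (new_status, new_review_status) for a patient whose PATIENT_CODE
    entry matches the registry's diagnosis criteria in CVTERMPROP."""
    if status is None:
        # Patient not yet in the registry: add as candidate, never reviewed
        return ("CANDIDATE", "NEVER REVIEWED")
    if status in ("Rejected", "Provisional"):
        # Flag rejected and provisional candidates for review
        return (status, "NEEDS REVIEW")
    # Accepted / Candidate: ignore diagnosis criteria, no change
    return (status, review_status)
```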
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !! Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY_PATIENT_CVTERM !! Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY_PATIENT_CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS  || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5870</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5870"/>
		<updated>2019-01-18T20:19:00Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Developer Guide to PheDRS Data Loading Criteria */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a UAB tool that helps researchers and clinicians identify populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. The user documentation described here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project-specific information is found in the appropriate git repo wikis and markdown files&lt;br /&gt;
&lt;br /&gt;
2. UAB-specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
Currently, installation is highly site-specific and depends on various backend systems, EHR vendors, etc. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
Viewer allowed to view de-identified data (the 18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, the configuration information is stored in the cvterm and cvtermprop entries for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry specific events occur based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server-side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !! Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY_PATIENT_CVTERM !! Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY_PATIENT_CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS  || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5869</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5869"/>
		<updated>2019-01-18T19:54:48Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Developer Guide to PheDRS Data Loading Criteria */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. The documentation here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
Installation is currently very site specific, depending on various backend systems, EHR vendors, and so on. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to see only de-identified data (the 18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
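The &amp;quot;Encounter Collection&amp;quot; and &amp;quot;Encounter Intervention&amp;quot; values above are WHERE-clause fragments that the loader appends to a base query. A minimal Python sketch of that append step follows; the base query and table name SOURCE_ENCOUNTERS are assumptions for illustration, and only the appended fragment comes from the example value above.&lt;br /&gt;

```python
# Sketch of appending a stored CVTERMPROP fragment to a base query.
# The base query and SOURCE_ENCOUNTERS are hypothetical; the fragment
# follows the Encounter Collection example value in the table above.
base_query = "SELECT * FROM SOURCE_ENCOUNTERS WHERE 1=1"
cvtermprop_value = (
    " AND FORMATTED_MEDICAL_RECORD_NBR IN "
    "(SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)"
)
# The loader simply concatenates the stored tail onto the base query.
full_query = base_query + cvtermprop_value
```

Because the stored value is appended verbatim, it must begin with AND and reference only columns present in the loader's base query.&lt;br /&gt;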
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, this configuration is stored in the cvterm and cvtermprop tables for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry specific events occur based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !! Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY_PATIENT_CVTERM !! Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY_PATIENT_CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED contains CUI with intervention criteria || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or the start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility; MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading (deprecated) || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
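For Code_Collection, the CVTERMPROP value is a regular expression over code_values, as described in the table above. A small Python sketch of how such a pattern might filter codes follows; the pattern and the code list are illustrative assumptions, not a real registry configuration.&lt;br /&gt;

```python
import re

# Illustrative only: the pattern and candidate codes below are
# assumptions, not a real registry configuration. The stored value
# acts as a regular expression selecting which code_values to collect.
pattern = re.compile(r"J4[0-4].*")  # e.g. an ICD-10-CM COPD code range
candidate_codes = ["J44.1", "J45.909", "J43.9", "I10"]
collected = [c for c in candidate_codes if pattern.fullmatch(c)]
# collected now holds only the codes matching the registry pattern
```

Each data source gets its own CVTERMPROP row, so different sources can use different collection patterns.&lt;br /&gt;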
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5868</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5868"/>
		<updated>2019-01-18T19:51:27Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Developer Guide for Registry Configuration Properties */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. The documentation here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
Installation is currently very site specific, depending on various backend systems, EHR vendors, and so on. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to see only de-identified data (the 18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Developer Guide to PheDRS Data Loading Criteria ===&lt;br /&gt;
This table details how the PheDRS Data Loading Criteria ontology is used by backend clients (controlled by loadAll.bsh) to set patient status and review status. Like other registry configuration criteria, this configuration is stored in the cvterm and cvtermprop tables for that registry.&lt;br /&gt;
&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry specific events occur based on registry configuration.&lt;br /&gt;
&lt;br /&gt;
The server side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !! Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY_PATIENT_CVTERM !! Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY_PATIENT_CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED contains CUI with intervention criteria || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or the start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility; MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP (which contains WHERE clause SQL) are loaded into REGISTRY_ENCOUNTERS || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
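The NLP detection and intervention value fields above use a pipe-separated criteria name followed by comma-separated UMLS CUIs. A minimal Python sketch of parsing that format, using the Value Field Example from the table above:&lt;br /&gt;

```python
# Parse an NLP value field of the form shown in the table above:
# a criteria name, a pipe, then comma-separated UMLS CUIs.
value = "nlp_detection|C98764323,C0012345,C0654321"
criteria_name, cui_part = value.split("|", 1)
cuis = cui_part.split(",")
# criteria_name identifies the rule; cuis lists the target concepts
```

Multiple CUIs within one value are comma-separated (contextual CUIs), while additional independent CUIs appear as further CVTERMPROP rows.&lt;br /&gt;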
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5867</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5867"/>
		<updated>2019-01-18T19:44:47Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Developer Guide for Registry Configuration Properties */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest matching clinical criteria, drawing on both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. The documentation here is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
Installation is currently very site specific, depending on various backend systems, EHR vendors, and so on. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to see only de-identified data (the 18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
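The Encounter Collection and Encounter Intervention values above are SQL fragments that the back end appends to a base encounter query. A minimal sketch of that composition (the base query and function name here are hypothetical illustrations, not part of PheDRS):

```python
# Hypothetical sketch: combining a registry's appended SQL criteria
# (stored in the CVTERMPROP value field, beginning with "AND") with
# an assumed base encounter query. All names here are illustrative.
BASE_ENCOUNTER_QUERY = "SELECT * FROM ENCOUNTERS WHERE REGISTRY_ID = :rid"

def build_encounter_query(appended_criteria: str) -> str:
    """Append the registry-specific WHERE-clause fragment verbatim."""
    return BASE_ENCOUNTER_QUERY + " " + appended_criteria.strip()

query = build_encounter_query("AND (SRC_ADMIT_DT_TM >= (SYSDATE-730))")
print(query)
```

Because the fragment is appended verbatim, it must start with a conjunction (`AND`) so that it composes with the base query's existing WHERE clause.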
&lt;br /&gt;
=== Developer Guide for Registry Configuration Properties ===&lt;br /&gt;
This table details the relationship between the PHEDRS back-end processing script and registry-specific information in the PHEDRS schema (encoded mostly in the cvterm and cvtermprop tables).&lt;br /&gt;
The goal is for the back-end script to read registry-specific parameters from the database in order to detect patients and flag events that require registry review.&lt;br /&gt;
These scripts detect patients for the registry (updating status_id) and flag patients for review (updating review_status_id) when registry-specific events occur, based on the registry configuration.&lt;br /&gt;
The server-side process will never set the patient registry status to &amp;quot;Under Review&amp;quot; or &amp;quot;Reviewed&amp;quot;.&lt;br /&gt;
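The Diagnosis (Code_Detection) status transitions described in the table below can be sketched as follows (a hypothetical illustration of the logic, not the actual back-end code):

```python
# Hypothetical sketch of the Diagnosis (Code_Detection) transitions:
# a matching PATIENT_CODE makes a new (NULL-status) patient a
# CANDIDATE, re-flags Rejected/Provisional patients for review, and
# leaves Accepted/Candidate patients unchanged.
def code_detection(registry_status, review_status):
    if registry_status is None:
        return "CANDIDATE", "NEVER REVIEWED"
    if registry_status in ("Rejected", "Provisional"):
        return registry_status, "NEEDS REVIEW"
    return registry_status, review_status  # no change for Accepted/Candidate
```

Note that the function never returns &quot;Under Review&quot; or &quot;Reviewed&quot;, consistent with the rule that the server-side process never sets those statuses.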
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !!  Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY_PATIENT _CVTERM !!  Output Patient Registry Status Change !! Output Patient Review Status !! Output REGISTRY_PATIENT _CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare date of code/encounter with the end(if not null) or start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  || MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  || is set to a non-zero value || &lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  |||| ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || (which contains WHERE clause SQL) are loaded into || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || REGISTRY_ENCOUNTERS || &lt;br /&gt;
|-&lt;br /&gt;
|  || || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||&lt;br /&gt;
|-&lt;br /&gt;
| Document Loading || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
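The NLP criteria value fields shown above (e.g. nlp_detection|C98764323,C0012345,C0654321) pack a criteria type and a comma-separated CUI list into a single string. A minimal parsing sketch (the function name is hypothetical):

```python
# Hypothetical sketch: splitting an NLP criteria value field of the
# form "nlp_detection|C98764323,C0012345,C0654321" into its criteria
# type and the list of UMLS CUIs.
def parse_nlp_value(value: str):
    criteria_type, cui_part = value.split("|", 1)
    return criteria_type, cui_part.split(",")
```

Splitting on the first &quot;|&quot; only keeps the sketch robust if a CUI list itself were ever to contain additional delimiters.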
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5866</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5866"/>
		<updated>2019-01-18T19:38:32Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Developer Guide for Registry Configuration Properties */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest that match clinical criteria drawn from both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. This user documentation is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
Currently installation is highly site specific, depending on the backend systems, EHR vendors, etc. in use. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to view only de-identified data (18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Developer Guide for Registry Configuration Properties ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Developer Guide for Registry Configuration Properties&lt;br /&gt;
|-&lt;br /&gt;
! CRITERIA NAME&lt;br /&gt;
! EVENT DESCRIPTION&lt;br /&gt;
! Input Patient Registry Status&lt;br /&gt;
! Input Patient Review Status&lt;br /&gt;
! Input Required Registry Patient Cvterm/s&lt;br /&gt;
! Output Patient Registry Status&lt;br /&gt;
! Output Patient Review Status&lt;br /&gt;
! Output Change to Registry Patient Cvterm/s&lt;br /&gt;
! Action Description&lt;br /&gt;
! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
|Diagnosis (Code_Detection)&lt;br /&gt;
|PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP&lt;br /&gt;
|NULL	&lt;br /&gt;
|NULL	&lt;br /&gt;
|NULL	&lt;br /&gt;
|CANDIDATE	&lt;br /&gt;
|NEVER REVIEWED	&lt;br /&gt;
|ANY	&lt;br /&gt;
|Add candidate	&lt;br /&gt;
|CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Event Description !!  Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY_PATIENT _CVTERM !!  Output Patient Registry Status !! Output Patient Review Status !! Output REGISTRY_PATIENT _CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||   || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  NLP_HITS_EXTENDED contains CUI with detection criteria|| NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention ||  NLP_HITS_EXTENDED || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMS corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) with value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  || contains CUI with || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  || intervention criteria || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare date of code/encounter with the end(if not null) or start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) ||  PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP|| All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG Codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility || Custom registry specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  || MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  || is set to a non-zero value || &lt;br /&gt;
|-&lt;br /&gt;
|  ||   ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection ||  None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (data source) and value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  || || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless patient is in registry || &lt;br /&gt;
|-&lt;br /&gt;
|  || ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection || COPD,CRCP(?) || None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || COPD || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERM_PROP || CVTERMPROP has cvterm_id (registry_id) and type_id (sourcesystem_cd) and value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || (which contains WHERE clause SQL) are loaded into || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention || REGISTRY_ENCOUNTERS || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Flag patients for intervention || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Document Loading || COPD || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
	<entry>
		<id>https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5865</id>
		<title>PheDRS</title>
		<link rel="alternate" type="text/html" href="https://docs.uabgrid.uab.edu/w/index.php?title=PheDRS&amp;diff=5865"/>
		<updated>2019-01-18T19:33:32Z</updated>

		<summary type="html">&lt;p&gt;Ozborn@uab.edu: /* Developer Guide for Registry Configuration Properties */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phenotype Detection Registry System (PheDRS) User Documentation =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The Phenotype Detection Registry System (PheDRS) is a tool used at UAB to help researchers and clinicians identify populations of interest that match clinical criteria drawn from both structured data (billing codes, medications, labs) and unstructured documents analyzed with Natural Language Processing. This user documentation is intended for managers and users of the PheDRS system.&lt;br /&gt;
&lt;br /&gt;
Additional documentation for PheDRS is found in the following locations:&lt;br /&gt;
&lt;br /&gt;
1. Project specific information is found in the appropriate git repo wikis and markdown text&lt;br /&gt;
&lt;br /&gt;
2. UAB specific developer documentation is found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
== Installation / Setup ==&lt;br /&gt;
Currently installation is highly site specific, depending on the backend systems, EHR vendors, etc. in use. UAB-specific site setup can be found in the UAB_BioMedInformatics\PHEDRS Box directory&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
PheDRS supports the following user roles:&lt;br /&gt;
&lt;br /&gt;
=== Manager ===&lt;br /&gt;
The manager of a registry. Full read and write access to the registry. Can add new users as registrars or managers to the registry.&lt;br /&gt;
=== Registrar ===&lt;br /&gt;
A user with read and write access to the registry.&lt;br /&gt;
=== Administrator ===&lt;br /&gt;
System administrator responsible for maintaining the entire PheDRS system including all registries. &lt;br /&gt;
=== Viewer ===&lt;br /&gt;
Read only access to the registry.&lt;br /&gt;
=== Inactive ===&lt;br /&gt;
Former registrars no longer active.&lt;br /&gt;
=== De-ID Viewer ===&lt;br /&gt;
A viewer allowed to view only de-identified data (18 HIPAA Safe Harbor identifiers removed).&lt;br /&gt;
&lt;br /&gt;
== Registry Configuration ==&lt;br /&gt;
The registry is configured by the registry manager, often in conjunction with the system administrator.&lt;br /&gt;
&lt;br /&gt;
=== Registry Property Descriptions and Examples ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Registry Configuration: Cvtermprop parameters&lt;br /&gt;
|-&lt;br /&gt;
! CV NAME&lt;br /&gt;
! CV ID&lt;br /&gt;
! Example of cvterm name for TYPE_ID&lt;br /&gt;
! Meaning and Use of VALUE field for this type&lt;br /&gt;
! Example of Value Field&lt;br /&gt;
|-&lt;br /&gt;
| UAB ICD-10-CM Codes&lt;br /&gt;
| 9&lt;br /&gt;
| Panlobular emphysema&lt;br /&gt;
| If patient has this billing code, assign as candidate to registry&lt;br /&gt;
| UAB COPD Registry ICD-10-CM diagnosis criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB MS DRG Codes&lt;br /&gt;
| 12&lt;br /&gt;
| COPD w CC&lt;br /&gt;
| If patient has this DRG code, then add anchor cvterm property to this code&lt;br /&gt;
| DRG Codes COPD Anchor Encounter criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| COPD w MCC&lt;br /&gt;
| If patient has this DRG code, then add patient to the registry as &amp;quot;accepted&amp;quot;&lt;br /&gt;
| UAB COPD Registry DRG Codes inclusion criteria&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Data Loading Criteria&lt;br /&gt;
| 23&lt;br /&gt;
| Default NLP Pipeline Collection&lt;br /&gt;
| This is the ID# of the default pipeline for displaying documents in this registry. Unenforced Foreign Key to db_id=14&lt;br /&gt;
| 2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Collection&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry&lt;br /&gt;
|  AND (SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-730)) AND FORMATTED_MEDICAL_RECORD_NBR IN (SELECT FORMATTED_UAB_MRN FROM REGISTRY_PATIENT WHERE REGISTRY_ID=2861)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Encounter Intervention&lt;br /&gt;
| Encounters matching the appended SQL criteria are added to the registry AND candidate is flagged for review&lt;br /&gt;
| AND SRC_ADMIT_DT_TM &amp;gt;= (SYSDATE-30) AND (   (reason_for_visit_txt LIKE '%COPD%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHITIS%') OR (UPPER(reason_for_visit_txt) LIKE '%BRONCHIECTASIS%') OR (UPPER(reason_for_visit_txt) LIKE '%EMPHYSEMA%')  )&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS MetaData CV&lt;br /&gt;
| 24&lt;br /&gt;
| Patient Review Cutoff Date&lt;br /&gt;
| Do not retrieve any data (codes, encounters, documents) from source system before this date&lt;br /&gt;
| 1/1/17&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Registry Application Title&lt;br /&gt;
| Use this text here for the name of the registry&lt;br /&gt;
| COPD Registry Control Panel&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Display Codes Database Identifier&lt;br /&gt;
| DB_ID permissible to display in &amp;quot;Diagnoses and Drgs&amp;quot; Panel&lt;br /&gt;
| 8&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| PHEDRS Tab CV&lt;br /&gt;
| 25&lt;br /&gt;
| REGISTRY_INPATIENT_REVIEW&lt;br /&gt;
| Indicates that this tab should be displayed in this registry. Value indicates display order from LEFT to RIGHT (1 for leftmost tab)&lt;br /&gt;
| 1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| Optum UAB APR DRG Codes 2017&lt;br /&gt;
| 26&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| UAB WH CLN REF Codeset 69&lt;br /&gt;
| 27&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Developer Guide for Registry Configuration Properties ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Developer Guide for Registry Configuration Properties&lt;br /&gt;
|-&lt;br /&gt;
! CRITERIA NAME&lt;br /&gt;
! EVENT DESCRIPTION&lt;br /&gt;
! Input Patient Registry Status&lt;br /&gt;
! Input Patient Review Status&lt;br /&gt;
! Input Required Registry Patient Cvterm/s&lt;br /&gt;
! Output Patient Registry Status&lt;br /&gt;
! Output Patient Review Status&lt;br /&gt;
! Output Change to Registry Patient Cvterm/s&lt;br /&gt;
! Action Description&lt;br /&gt;
! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
|Diagnosis (Code_Detection)&lt;br /&gt;
|PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP&lt;br /&gt;
|NULL	&lt;br /&gt;
|NULL	&lt;br /&gt;
|NULL	&lt;br /&gt;
|CANDIDATE	&lt;br /&gt;
|NEVER REVIEWED	&lt;br /&gt;
|ANY	&lt;br /&gt;
|Add candidate	&lt;br /&gt;
|CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|}&lt;br /&gt;
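&lt;br /&gt;
The Diagnosis (Code_Detection) rows read as a state-transition rule on the patient's registry and review statuses. A minimal sketch (function and status spellings are illustrative, not the actual implementation):&lt;br /&gt;

```python
# Illustrative sketch of the Diagnosis (Code_Detection) transitions
# described above. Status strings and the function name are assumptions
# for illustration; they are not the real registry code.

def apply_diagnosis_rule(registry_status, review_status):
    """Apply the Diagnosis rule when a PATIENT_CODE entry matches the
    diagnosis criteria in CVTERMPROP.

    Returns the (output registry status, output review status) pair."""
    if registry_status is None:
        # Patient unknown to the registry: add as a candidate.
        return "CANDIDATE", "NEVER REVIEWED"
    if registry_status in ("REJECTED", "PROVISIONAL"):
        # Flag rejected and provisional candidates for review.
        return registry_status, "NEEDS REVIEW"
    # Accepted/candidate patients: ignore diagnosis criteria.
    return registry_status, review_status

print(apply_diagnosis_rule(None, None))            # ('CANDIDATE', 'NEVER REVIEWED')
print(apply_diagnosis_rule("REJECTED", "ANY"))     # ('REJECTED', 'NEEDS REVIEW')
```

&lt;br /&gt;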
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Criteria Name !! Registries !! Event Description !! Input Patient Registry Status !! Input Patient Review Status !! Input REGISTRY_PATIENT_CVTERM !! Output Patient Registry Status !! Output Patient Review Status !! Output REGISTRY_PATIENT_CVTERM !! Action Description !! Implementation Notes&lt;br /&gt;
|-&lt;br /&gt;
| Diagnosis (Code_Detection) ||  || PATIENT_CODE entry that matches diagnosis criteria in CVTERMPROP || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to codes have a CVTERMPROP set on them of type registry_name (from cv_id 11) with a value containing the text &amp;quot;diagnosis criteria&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Plan is to swap CVTERMs such that CVTERM_ID is the registry and the TYPE_ID is the code&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore diagnosis criteria on accepted patients || &lt;br /&gt;
|-&lt;br /&gt;
| NLP_Detection ||  || NLP_HITS_EXTENDED contains CUI with detection criteria || NULL || NULL || NULL || CANDIDATE || NEVER REVIEWED || ANY || Add candidate || CVTERMs corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) and a value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Rejected|Provisional || ANY || ANY || NO CHANGE || NEEDS REVIEW || ANY || Flag rejected and provisional candidates for review || Value Field Example: nlp_detection|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Accepted|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || ANY || Ignore detection criteria on accepted patients || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
| NLP_Intervention || None || NLP_HITS_EXTENDED contains CUI with intervention criteria || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || CVTERMs corresponding to registries have a CVTERMPROP set on them with cvterm_id (registry_id), type_id (cvterm in NLP Pipeline Parameter Ontology) and a value containing a UMLS CUI&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Value Field Example: nlp_intervention|C98764323,C0012345,C0654321&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || Multiple CUIs are supported as additional rows in CVTERMPROP. Contextual CUIs are supported via commas&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || Compare the date of the code/encounter with the end_date (if not null) or start_date of the REGISTRY_PATIENT_CVTERM entry&lt;br /&gt;
|-&lt;br /&gt;
| Inclusion (Code Inclusion) || COPD || PATIENT_CODE entry that matches inclusion criteria in CVTERMPROP || All except Accepted || ANY || ANY || ACCEPTED || NEEDS REVIEW ||  || DRG codes generate automatic registry inclusion || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Accepted || ANY || ANY || NO CHANGE || NEEDS REVIEW ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Eligibility || CRCP || Custom || ANY || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Machine learning of patient eligibility; MACHINE_ACCEPTANCE_SCORE in REGISTRY_PATIENT is set to a non-zero value || Custom registry-specific implementation that should result in a publication… :)&lt;br /&gt;
|-&lt;br /&gt;
| Code_Collection || CRCP, MM || None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Specify data to collect for the registry || CVTERMPROP has cvterm_id (registry_id), type_id (data source) and a value which is a regular expression specifying code_values to collect from REGISTRY_PATIENT_CODES&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Do not collect data unless the patient is in the registry || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Collection || COPD, CRCP(?) || None (daily) || Rejected|Candidate || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Collect specified encounter data for people in the registry || CVTERMPROP has cvterm_id (registry_id), type_id (sourcesystem_cd) and a value in the format below&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE ||  || REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX||REGISTRY_ENCOUNTER_COLUMN_NAME|REGEX….&lt;br /&gt;
|-&lt;br /&gt;
| Encounter Intervention || COPD || Encounter || ANY || ANY || Requires Intervention || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE || Encounters matching criteria specified in CVTERMPROP (which contains WHERE-clause SQL) are loaded into REGISTRY_ENCOUNTERS; flag patients for intervention || CVTERMPROP has cvterm_id (registry_id), type_id (sourcesystem_cd) and a value in SQL format&lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Intervention Complete &amp;lt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Any other current valid cvterms || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || Requires Intervention ||  || &lt;br /&gt;
|-&lt;br /&gt;
|  ||  ||  || ANY || ANY || Intervention Complete &amp;gt; Encounter Date || NO CHANGE (CANDIDATE IF NULL) || NO CHANGE || NO CHANGE ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Encounter Detection || None || Encounter ||  ||  ||  ||  ||  ||  ||  || &lt;br /&gt;
|-&lt;br /&gt;
| Document Loading || COPD || Existence of documents || Provisional|Accepted || ANY || ANY || NO CHANGE || NO CHANGE || NO CHANGE || Load NLP_DOCS from source systems for registry patients || Should be a default analysis type that is used for CVTERMPROP&lt;br /&gt;
|}&lt;br /&gt;
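&lt;br /&gt;
The Implementation Notes above describe two pipe-delimited CVTERMPROP value formats: NLP criteria such as nlp_detection|C98764323,C0012345,C0654321, and Encounter Collection values of the form COLUMN_NAME|REGEX||COLUMN_NAME|REGEX. A minimal parsing sketch (the helper names are illustrative, not part of any confirmed API):&lt;br /&gt;

```python
# Hedged sketch of parsing the CVTERMPROP value formats described in the
# Implementation Notes above. Function names are illustrative only.

def parse_cui_value(value):
    """Parse an NLP criteria value such as
    'nlp_detection|C98764323,C0012345,C0654321' into (criteria, [CUIs])."""
    criteria, _, cuis = value.partition("|")
    return criteria, [c for c in cuis.split(",") if c]

def parse_encounter_collection(value):
    """Parse an Encounter Collection value of the form
    'COLUMN_NAME|REGEX||COLUMN_NAME|REGEX' into {column: regex}.
    Note: this naive split would break on a regex that itself
    contains '||'."""
    pairs = {}
    for chunk in value.split("||"):
        if not chunk:
            continue
        column, _, regex = chunk.partition("|")
        pairs[column] = regex
    return pairs

print(parse_cui_value("nlp_detection|C98764323,C0012345,C0654321"))
print(parse_encounter_collection("ENCOUNTER_TYPE|^INPATIENT$||DRG_CODE|^19[0-2]$"))
```

&lt;br /&gt;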
&lt;br /&gt;
== Back End Workflow / Architecture ==&lt;br /&gt;
&lt;br /&gt;
== Web Service Documentation ==&lt;br /&gt;
=== RegistryWS ===&lt;br /&gt;
=== UDAS ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper References ==&lt;/div&gt;</summary>
		<author><name>Ozborn@uab.edu</name></author>
	</entry>
</feed>