HPCS Biweekly Meeting with Bob, 7 Feb 2008

Agenda for HPCS Meeting with Bob Feb 7, 2008, 9am

1.	UABgrid development

o	Compute power expansion -- this includes the 10GigE network interfaces and the ASA effort. It also includes access to resources like ODU and LSU.

o	Application migration - clearly this is the r-group effort for now.

o	The survey you are doing of CTSA participants could probably be counted under this track, though it's more about user and needs identification. Moving forward -- DLS to schedule a meeting with the rest of the faculty bioinformatics experts -- Tony, Elliott, David Allison, etc. (from the CTSA table) -- and include Bob and/or Keith(?)

2.	UAB Research IT Infrastructure Improvements

Categories can be derived from looking at the clusters-networks <-> uabgrid <-> user-applications services stack that needs to be developed.

o	10 GE Research Network

i.	Host interfaces -- schedule a meeting of the Research Network people, including Doug and Phil, to order a NIC for each head node ASAP. JPR to finalize the meeting and to include DLS and Bob (or Keith?) in the list of required attendees.

ii. Software Development and documentation efforts

o	Next steps for configuring and ordering the new HPC cluster, to be located in either Rust or BEC?

3.	JPR update on r-group work (see attachment at end):

o	Tapan, Jelai, and JPR are planning the next steps and timeline.

o	The goal for the r-group effort is to be "running the SSG methodological workflow at capacity".

o	This means being able to run their workflow via the grid with enough assembled compute power (UAB, ASA, LSU) to meet their demand.

o	There should be plenty of measurable outcomes.

o	The most important next step is to begin migrating one of their methodological analysis workflows.

o	SSG is willing to commit more effort to this (see my email from earlier), but an exact timeframe isn't available yet.

o	I estimate 2 months to reach the first example workflow run milestone, based on the pace of previous work.

4.	ASA -- UABgrid operational status? What next, and what is the timeline?

o	I've contacted ASA for an update, but we've hit a limitation of Globus with ASA's new configuration: running the Globus software on a node separate from the scheduler.

o	ASA has moved to having all their clusters managed under a common scheduler and, understandably, wants to keep general accounts off that node.

o	There are notes from the Globus project, but we'll need to adapt them.

o	This may require a local test deployment to better understand the configuration. Clearly this will take time.

o	I'd like to discuss whether bringing Mike Hanby into the loop might be a good way to add manpower, since ENG has expressed interest in a similar configuration.

o	There is still effort in coordinating all this, and then there are project trust issues, e.g. ENG strategic goals which may conflict.

o	From the ASA side, I haven't sensed any lack of interest in enabling this; they just seem time-strapped, like our end.

5.	Upcoming Activities:

o	OGF22 (Feb 25-28, Boston, http://www.ogf.org/OGF22/program.php) -- registration discount period closes this Friday.
	Event Registration: http://www.ogf.org/OGF22/registration.php
	Hotel Reservations: http://www.ogf.org/OGF22/lodging.php

o	Propose supporting a CIS graduate student (Enis -- co-author on several UABgrid projects) to attend OGF22 for UAB and to present the UAB experiences of implementing meta-scheduling on UABgrid to the Grid Meta-Scheduling group session.

o	The IT intern working in HPCS will graduate in spring with an MS in computer science, and she would like to continue working on UABgrid development and applications. Can IS support her for 1 year on a practical training (OPT) visa?

Subject: Re: R on the Grid presentation Date: Wed, 06 Feb 2008 13:40:53 -0600 From: John-Paul Robinson  To: Tapan Mehta  CC: Jelai Wang 

Thanks for the update from your end, Tapan. I think your email is a good overview of our collaboration opportunities. I've organized my thoughts on these below.

The presentation went well and the conference offered a lot of good dialogs and opportunities for further exploration.

Regarding the Confluence/Jira work, I think it would be easiest to focus on Confluence to start. The wiki is used by the Internet2 community and has received the most attention on integration with Shibboleth (the authn/z infrastructure component used by UAB and UABgrid). The level of integration has been basic so far, but the vendor is in dialog with others from Internet2 about additional features. We should be able to demonstrate functionality and base additional work on the results.

You had mentioned that your group could likely set up a distinct instance of Confluence to explore this configuration. I think this is the best approach near term. I don't have experience running/installing Confluence. There is some Apache-side configuration needed in addition to Confluence configuration. We can discuss this further as we refine a test plan.
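To give a sense of the Apache-side configuration mentioned above: in a typical Shibboleth SP deployment it amounts to protecting the Confluence context path with mod_shib and proxying to the app server. This is only a sketch under assumed names -- the /confluence path and the port 8080 backend are illustrative, not our actual deployment:

```apache
# Hypothetical sketch: a Confluence instance behind Apache, protected by
# the Shibboleth SP (mod_shib). Paths and ports are illustrative only.

# Forward requests to the Confluence app server (backend port is an assumption)
ProxyPass        /confluence http://localhost:8080/confluence
ProxyPassReverse /confluence http://localhost:8080/confluence

# Require an established Shibboleth session before the request is forwarded
<Location /confluence>
  AuthType shibboleth
  ShibRequestSetting requireSession 1
  require valid-user
</Location>
```

Confluence would then need to map the authenticated identity (REMOTE_USER or a header populated by mod_shib) to a local account; that mapping is the piece the Internet2 integration work addresses.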

As part of the r-group presentation, I had hoped to have the project web site updated. I still need to update our project site with our progress to date and outline our steps going forward. My goal is to use this site more heavily to coordinate and report progress. I had hoped to complete this work prior to the presentation, but I'll be taking care of it now that I'm back in town. This should be a good process for us to contribute to the project effectively in the near term.

As an overview, I see several collaboration opportunities:

1) Running R on the grid -- this is the most important issue. Based on our discussions prior to the presentation, I think our best goal for this effort is to make it possible for SSG to run the methodological analysis workflow at full capacity.  This seems to be the area where you could most benefit from more compute power.  It is a goal that is easily measured and has a good scope.  From there we can branch to wall-clock performance increases and other workflow tuning.  Is this understanding of your problem space accurate? The next step here will be to take one of the methodological analysis workflows and see if we can run it via the grid.

2) Making R more accessible -- this is an important issue but is harder to scope than the first one. There are multiple solutions, and it will take more of an effort to define "accessible".  I'm operating on the assumption that improved access means a non-commandline interface for launching workflows, usable by a larger set of the community.  I'm assuming it would be either a desktop or web-based interface launching cluster jobs, ideally across the grid.  We are looking at GridSphere, a popular portal used by grid communities, as a potential solution, but we'd need to discuss this more before pursuing any particular approach.  There are many potential directions, and we'd be in a better position to pursue them if issue #1 is a success.  This is not to say we can't start discussing potential solutions and exploring options.

3) Creating a useful project infrastructure -- also an important issue, but one with longer-term factors. The UABgrid is intended to be a collaboration environment where groups can use default services or incorporate their own technology components. I've been using Trac for my projects because of source code access and cost.  The source code access gives me unlimited freedom for exploring integration.  The zero cost allows me to not worry about licensing issues impacting deployment. As is typical for any application environment, and almost to an extreme for web applications, many groups will prefer different tools -- most often, the first good tool they find that has decent representation in their larger community.  The goal of UABgrid's authn/z infrastructure is to support the use of whatever tool is preferred by a group.  I think Confluence is a strong contender for a large set of users. I'd like to see if it can be integrated with UABgrid based on the work already done by folks in Internet2. I'd also like to have a good example of how a group like SSG can choose a tool they prefer and have it be integrated with other elements of UABgrid (e.g. tool choice shouldn't be an either/or proposition). Finally, because Confluence will likely be popular with others, understanding how it could be deployed so it can be offered as a common solution (much like Trac) to other groups would be helpful.

I hope this overview has been useful. I'll be working to update the r-group site related to topic #1 this week. The overview above should also provide us with enough material to discuss and some loose categories to sort ideas into.

Let me know if you have any questions about my comments. I think this is a great start for our planning.

Thanks,

~jpr