UABgrid Documentation:Community Portal
HPC Services Plans
HPC Services is the division within the IT Infrastructure Services organization focused on HPC support for research and other HPC activities. Its support areas include HPC Cluster Support, Networking & Infrastructure, Middleware, and Academic Research Support. Here, "Research" means specifically assisting or collaborating with grant activities that require IT resources; it may also include acquiring and managing high performance computing resources, such as Beowulf clusters and network storage arrays. HPC Services participates in institutional strategic planning and self-study related to academic IT, and represents the Office of the Vice President of Information Technology to IT-related academic campus committees and to regional and national technology research organizations and committees as requested.
Note: The term HPC means high performance computing, which has many definitions available on the web. At UAB, HPC generally refers to “computational facilities substantially more powerful than current desktop computers (PCs and workstations) …by an order of magnitude or better.” See http://parallel.hpc.unsw.edu.au/rks/docs/hpc-intro/node3.html for more on this usage of HPC.
HPC Project Five Year Plan as of Summer 2006
As a result of discussions among IT, CIS, and ETL to determine the best methods and associated costs for interconnecting HPC clusters in the campus buildings BEC and CH, a preliminary draft of the scope and a five-year plan for HPC at UAB was prepared. To ensure the growth and stability of IT support for research computing, and to build broad support among academic researchers for a workable model, the mission of IT Academic Computing has been revised and merged into a more focused unit within IT Network & Infrastructure Services under the name HPC Services. See the Office of VP of IT Organization Chart.
- Scope: Building upon the existing UAB HPC resources in CIS and ETL, IT and campus researchers are setting a goal to establish a UAB HPC Data Center, whose operations will be managed by IT Infrastructure and which will include additional machine room space designed for HPC and equipped with a new cluster. The UAB HPC Data Center and HPC resources will be used by researchers throughout UAB, the UA System, and other State of Alabama universities and research entities, in conjunction with the Alabama Supercomputer Authority. Oversight of the UAB HPC resources will be provided by a committee made up of UAB Deans, Department Heads, Faculty, and the VPIT. Daily administration of this shared resource will be provided by the Department of Network and Infrastructure Services.
- Integrate the design, construction, and staffing of an HPC Data Center with overall IT plans.
- Secure funding for a new xxxxTeraFlop HPC Cluster. For example, HPCS will continue working with campus researchers in submitting proposals.
- Preliminary Timeline
- FY2007: Rename Academic Computing to HPC Services (HPCS) and merge HPCS with Network and Infrastructure to leverage the HPC-related talents and resources of both organizations.
- FY2007: Connect the existing HPC clusters to each other and to the 10 Gb backbone.
- FY2007: Bring up a pilot grid identity management system – GridShib (HPCS, Network Services)
- FY2007: Enable Grid Meta Scheduling (HPCS, CIS, ETL)
- FY2007: Establish Grid connectivity with SURA, UAS, and ASA.
- FY2007: Develop shared HPC resource policies.
- FY2008: Increase support staff as needed by reassigning legacy mainframe technical resources.
- FY2008: Develop requirements for expansion or replacement of older HPC clusters (xxxx TeraFlops).
- FY2008: Using the HPC requirements (xxxx TeraFlops), begin design of the HPC Data Center.
- FY2009: Secure funding for a new HPC cluster (xxxx TeraFlops).
- FY2010: Complete HPC Data Center Infrastructure.
- FY2010: Secure final funding for expansion or replacement of older HPC clusters.
- FY2011: Procure and deploy a new HPC cluster (xxxx TeraFlops).
HPC Services Goals for FY2007
- GOAL 1: UAB Grid Computing Project
- Bring up a pilot of grid identity management using GridShib software, which incorporates Shibboleth into the core grid software Globus;
- Enable a grid meta-scheduling capability in collaboration with CIS and ETL, so that UAB users see a single interface for submitting HPC jobs that run on the primary clusters in ETL and CIS;
- Explore expanding the campus model for HPC to other campuses of UA System and to the Alabama Supercomputing Center.
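As an illustration of the single-interface submission goal above: the Globus Toolkit underlying GridShib accepts jobs described in RSL (Resource Specification Language), so a user-facing meta-scheduler could take one job description and route it to whichever cluster has capacity. The sketch below uses standard RSL attributes; the executable path, argument, and queue name are hypothetical examples, not actual UAB configuration.

```
& (executable = "/usr/local/bin/simulate")  (* hypothetical application *)
  (arguments  = "input.dat")
  (count      = 16)                         (* number of processes requested *)
  (jobType    = mpi)
  (queue      = "shared")                   (* hypothetical queue name *)
  (stdout     = "simulate.out")
  (stderr     = "simulate.err")
```

Under this model the user never names a specific cluster; the meta-scheduler maps the request onto the ETL or CIS resource, which is what makes the "single interface" possible.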
- GOAL 2: InCommon / Shibboleth Project
- Work with Infrastructure and Network Services to coordinate new and expanding campus applications using Shibboleth;
- Evaluate establishing a second pilot Shibboleth application with other members of InCommon;
- Establish UAB grid as a UAB application offered to InCommon members; and
- Evaluate establishing pilot Shibboleth applications as an advanced technology demonstration of inter-institutional user authentication and authorization for access to a common workspace supporting calendar, document sharing, data sharing, and desktop communication technologies.
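For context on what "Shibboleth-enabling" a campus application involves: on an Apache-hosted service provider, protection is typically switched on with a few directives that require a Shibboleth session before the application is reached. A minimal sketch follows; the /secure path is a hypothetical example, and the actual attribute-release and federation settings would live in the SP's own configuration files.

```apache
# Require a Shibboleth session for anything under /secure (hypothetical path)
<Location /secure>
  AuthType shibboleth
  ShibRequireSession On
  require valid-user
</Location>
```

Because authentication is handled by the user's home institution's identity provider, the same protected application can serve other InCommon members without UAB issuing them local accounts.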
- GOAL 3: External IT Group Participation
- Participate in external IT groups within Alabama, the region, and the US, such as UA System Collaborative Technology activities, the Alabama Regional Optical Network, Internet2, SURA grid, EDUCAUSE, the Global Grid Forum, and Supercomputing.