Slides from Jisc panel session at HPC & Big Data 2016 with contributions from the Francis Crick Institute, QMUL and King's College London covering their use of the Jisc shared data centre and the eMedLab project
Shared services - the future of HPC and big data facilities for UK research
1. Shared services – the future of HPC & big data facilities for UK research?
Martin Hamilton, Jisc
David Fergusson & Bruno Silva, Francis Crick Institute
Andreas Biternas, King’s College London
Thomas King, Queen Mary University of London
Photo credit: CC-BY-NC-ND Jisc
HPC & Big Data 2016
2. Shared services for HPC & big data
1. About Jisc
– Who, why and what?
– Success stories
2. Recent developments
3. Personal perspectives & panel discussion
– David Fergusson & Bruno Silva, Francis Crick Institute
– Andreas Biternas, King’s College London
– Thomas King, Queen Mary University of London
4. 1. About Jisc
Jisc is the UK higher education, further education and
skills sectors’ not-for-profit organisation for digital
services and solutions. This is what we do:
› Operate shared digital infrastructure and services
for universities and colleges
› Negotiate sector-wide deals, e.g. with IT vendors
and commercial publishers
› Provide trusted advice and practical assistance
8. 1. About Jisc
[Janet network map: external peering and transit — Netflix, Google, Akamai, Amazon, Microsoft, BBC, Limelight, LINX, GÉANT and others — via London, Leeds, Manchester, Glasgow and Edinburgh at 1, 10 and 100Gbit/s. Total external connectivity ≈ 1Tbit/s.]
12. 2. Recent developments
www.jisc.ac.uk/financial-x-ray
Financial X-Ray
› Easily understand and compare
overall costs for services
› Develop business cases for
changes to IT infrastructure
› Mechanism for dialogue
between finance and IT
departments
› Highlight comparative cost of
shared and commercial third
party services
13. 2. Recent developments
Assent (formerly Project
Moonshot)
› Single, unifying technology that enables you to effectively manage and control access to a wide range of web and non-web services and applications.
› These include cloud infrastructures, High
Performance Computing, Grid Computing
and commonly deployed services such as
email, file store, remote access and
instant messaging
www.jisc.ac.uk/assent
14. 2. Recent developments
Equipment sharing
› Brokered industry access to £60m
public investment in HPC
› Piloting the Kit-Catalogue software,
helping institutions to share details
of high value equipment
› Newcastle University alone is
sharing £16m+ of >£20K value
equipment via Kit-Catalogue
Photo credit: HPC Midlands
http://bit.ly/jiscsharing
15. 2. Recent developments
http://bit.ly/jiscsharing
Equipment sharing
› Working with EPSRC and University of
Southampton to operationalise
equipment.data as a national service
› 45 organisations sharing details of over
12,000 items of equipment
› Conservative estimate: £240m value
› Evidencing utilisation & sharing?
16. 2. Recent developments
Janet Reach:
› £4M funding from BIS to work
towards a Janet which is "open and
accessible" to industry
› Provides industry access to university
e-infrastructure facilities to facilitate
further investment in science,
engineering and technology with the
active participation of business and
industry
› Modelled on Innovate UK
competition process
bit.ly/janetreach
17. 2. Recent developments
Janet Reach (continued)
bit.ly/jisc-hpc
18. 2. Recent developments
Research Data
Management Shared
Service
› Procurement under way
› Aiming to pilot for 24 months
starting this summer
› 13 pilot institutions
› Research Data Network
› Find out more:
researchdata.jiscinvolve.org
19. 2. Recent developments
Research Data
Discovery Service
› Alpha!
› Uses CKAN to aggregate
research data from institutions
› Test system has 16.7K datasets
from 14 organisations so far
› Search and browse:
ckan.data.alpha.jisc.ac.uk
23. 3. Personal perspectives
› David Fergusson
› Head of Scientific Computing
› Bruno Silva
› HPC Lead
› Francis Crick Institute
24. eMedLab:
Merging HPC and Cloud for
Biomedical Research
Dr Bruno Silva
eMedLab Service Operations Manager
HPC Lead - The Francis Crick Institute
bruno.silva@crick.ac.uk 01/12/2015
30. Winning bid
• 6048 cores (E5-2695v2)
• 252 IBM Flex servers, each with
• 24 cores
• 512GB RAM per compute server
• 240GB SSD (2x120GB RAID0)
• 2x10Gb Ethernet
• 3:1 Mellanox Ethernet fabric
• IBM GSS26 – Scratch 1.2PB
• IBM GSS24 – General Purpose (Bulk) 4.3PB
• Cloud OS – OpenStack
32. Benchmark results (preliminary)
• Aggregate HPL (one run per server – embarrassingly parallel)
• Peak 460Gflops*252 = 116Tflops
• Max – 94%
• Min – 84%
• VM ≈ Bare metal HPL runs (16 core)
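The peak figure above is consistent with the node specification quoted earlier in the deck (24 E5-2695v2 cores per server). A quick sanity check — the 2.4GHz clock and 8 double-precision flops per core per cycle are assumptions from the Ivy Bridge CPU model, not from the slides:

```python
# Sanity check of the aggregate HPL figure quoted on the slide above.
# Assumed (from the E5-2695v2 spec): 2.4GHz base clock, 8 DP flops/cycle.
cores_per_server = 24          # two 12-core E5-2695v2 sockets
clock_ghz = 2.4
flops_per_cycle = 8
servers = 252

peak_per_server_gflops = cores_per_server * clock_ghz * flops_per_cycle
aggregate_tflops = peak_per_server_gflops * servers / 1000

print(f"peak per server ≈ {peak_per_server_gflops:.0f} Gflops")
print(f"aggregate peak  ≈ {aggregate_tflops:.0f} Tflops")
# The measured 84-94% efficiency band then brackets the delivered HPL rate:
print(f"delivered ≈ {0.84 * aggregate_tflops:.0f}-{0.94 * aggregate_tflops:.0f} Tflops")
```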
33. Benchmark results (preliminary – bare metal only)
• Storage throughput (gpfsperf, GB/s):
Bulk file system –
Create sequential: 16M 100
Read sequential: 16M 88, 512K 86
Read random: 16M 131, 512K 22
Write sequential: 16M 96, 512K 97
Write random: 16M 89, 512K 60
Scratch file system –
Create sequential: 16M 141
Read sequential: 16M 84, 512K 83
Read random: 16M 107, 512K 20
Write sequential: 16M 137, 512K 137
Write random: 16M 125, 512K 28
37. Projects
• Principal Investigator / Project lead
• Reports to eMedLab governance
• Controls who has access to project resources
• Project Systems Administrator
• Institutional resource and / or
• Specialised research team member(s)
• Works closely with eMedLab support
• Researchers
• Those who utilise the software and data available in eMedLab for the project
39. Federated Institutional support
Operations Team Support
(Support to facilitators and Systems Administrators)
Institutional Support
(direct support to research)
Tickets
Training
Documentation
44. Pilot Projects
• Chela James - Gene discovery, rapid genome sequencing,
somatic mutation analysis and high-definition phenotyping
[Diagram: OpenStack VM lifecycle — a VM image (installed OS) combined with a “flavour” (CPU, RAM, disk) yields VM instances 1…N on a project network; instances can be started, stopped, held or checkpointed, and are accessed via the Horizon console, SSH (external IP or tunnel), web interfaces, etc.]
45. Pilot Projects
• Peter Van Loo – Scalable, Collaborative, Cancer Genomics
Cluster
elasticluster
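The elasticluster tool named above drives OpenStack to stand up a throwaway compute cluster for a project. A sketch of the kind of configuration involved — every name, credential and ID below is a placeholder, not an eMedLab value, and the exact keys and Ansible group names vary between elasticluster versions:

```ini
# Illustrative elasticluster configuration (all values are placeholders)
[cloud/emedlab]
provider=openstack
auth_url=https://keystone.example.ac.uk:5000/v3
username=demo
password=secret
project_name=cancer-genomics

[login/ubuntu]
image_user=ubuntu
user_key_name=elasticluster
user_key_private=~/.ssh/id_rsa
user_key_public=~/.ssh/id_rsa.pub

[setup/slurm]
provider=ansible
frontend_groups=slurm_master
compute_groups=slurm_worker

[cluster/genomics]
cloud=emedlab
login=ubuntu
setup=slurm
security_group=default
image_id=<ubuntu-image-uuid>
flavor=m1.large
frontend_nodes=1
compute_nodes=16
ssh_to=frontend
```

`elasticluster start genomics` would then provision the frontend and compute VMs and configure them with Ansible.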
50. Challenges - Support
• High Barrier to entry
• Provide environments that resemble HPC or Desktop, or more intuitive interfaces
• Engender new thinking about workflows
• Promote Planning and Resource management
• Train support staff as well as researchers
• Resource-intensive support
• Promote community-based support and documentation
• Provide basic common tools and templates
• Upskill and mobilise local IT staff in departments
• Move IT support closer to the research project – Research Technologist
52. Challenges - Integration
• Suitability of POSIX Parallel file systems for Cloud Storage
• Working closely with IBM
• Copy-on-write feature of SS (GPFS) is quite useful for fast instance creation
• SS actually has quite a lot of the scaffolding required for a good object store
• Presentation of SS or NAS to VMs requires an additional AAAI layer
• Working closely with Red Hat and OCF to deliver IdM
• Presentation of SS to VMs introduces stability problems that could be worked around with additional SS licenses and some bespoke scripting
• Non-standard Network and Storage architecture
• Additional effort by vendors to ensure a stable, performant, up-to-date infrastructure – great efforts by everyone involved!
• Network re-design
54. Challenges - Performance
• File System Block Re-Mapping
• SS performs extremely well with 16MB blocks – we want to leverage this
• Hypervisor overhead (not all cores used for compute)
• Minimise number of cores “wasted” on cloud management
• On the other hand fewer cores means more memory bandwidth
• VM IO performance potentially affected by virtual network stack
• Leverage features available in the Mellanox NICs such as RoCE, SR-IOV, and
offload capabilities
55. Challenges – Performance
Block Re-Mapping
• SS (GPFS) is very good at handling many small files – by design
• VMs perform random IO reads and a few writes with their storage
• VM storage (and Cinder storage pools) are very large files on top of GPFS
• VM block size does not match SS (GPFS) block size
(storage throughput table as on slide 33)
56. Challenges – Performance
Block Re-Mapping
• Idea: turn random into sequential IO
• Have a GPFS standing
(storage throughput table as on slide 33)
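The re-mapping idea can be illustrated with a toy model: 32 VM-sized IOs fit in one file system block, and sorting queued requests by offset turns scattered block accesses into sequential full-block streams. The two block sizes come from the slides; everything else is illustrative:

```python
# Toy model of the block-size mismatch: VMs issue 512KiB IOs while the
# underlying Spectrum Scale (GPFS) file system uses 16MiB blocks.
import random

GPFS_BLOCK = 16 * 2**20   # 16 MiB file system block (from the slides)
VM_IO      = 512 * 2**10  # 512 KiB IO as issued by a VM (from the slides)

# 32 VM-sized IOs fit in one block; a random 512KiB read served by a
# full-block fetch transfers up to 32x the data actually requested.
ios_per_block = GPFS_BLOCK // VM_IO

# Re-mapping idea: sorting queued IOs by offset groups requests that share
# a block, so each block is streamed once instead of re-read after seeks.
random.seed(42)
offsets = [random.randrange(0, 64 * GPFS_BLOCK, VM_IO) for _ in range(512)]
ordered = sorted(offsets)
runs = 1 + sum(1 for a, b in zip(ordered, ordered[1:])
               if b // GPFS_BLOCK != a // GPFS_BLOCK)

print(f"{ios_per_block} VM IOs per GPFS block; "
      f"{runs} sequential block runs after sorting {len(offsets)} IOs")
```

After sorting, the number of block “runs” equals the number of distinct blocks touched, i.e. each block is visited exactly once.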
58. Challenges - Presentation
• Access to eMedLab through VPN only
• Increases security
• Limits upload throughput
• Rigid, non-standard networking
• Immediately provides a secure environment with complete separation
• Projects only need to add VMs to the existing network
• Very inflexible, limits the possibility of a shared ecosystem of “public”
services
• Introduces great administration overheads when creating new projects –
space for improvement
62. Challenges - Security
• Presentation of SS shared storage to VMs raises security concerns
• VMs will have root access – even with root squash, a user can sidestep identity checks
• Re-export SS with a server-side authentication NAS protocol
• Alternatively, abstract shared storage with another service such as iRODS
• Ability of OpenStack users to maintain security of VMs
• Particularly problematic when deploying “from scratch” systems
• A competent, dedicated PSA mitigates this
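A server-side-authentication re-export addresses this because, under plain NFS with AUTH_SYS, root inside a VM can simply claim any UID — root squash only remaps UID 0. A hypothetical /etc/exports line showing the shape of such a re-export (path, network and security flavour are placeholders, not eMedLab’s configuration):

```text
# /etc/exports — re-export a Spectrum Scale fileset with Kerberos security,
# so the NFS server authenticates users instead of trusting client UIDs
/gpfs/projects  10.10.0.0/16(rw,sync,root_squash,sec=krb5)
```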
64. Challenges - Allocation
• Politics and Economics of “unscheduled” cloud
• Resource allocation in rigid portions of infrastructure (large, medium, small)
• Onus of resource utilisation is with Project team
• A charging model may have to be introduced to promote good behaviour
• The infrastructure supplier does not care about efficiency, as long as cost is recovered
• Scheduling over unallocated portions of infrastructure may help maximise utilisation
• Benefits applications that function as Directed Acyclic Graphs (DAGs)
• Private cloud is finite and limited
• Once it is fully allocated, projects will be on a waiting list, rather than a queue
• Cloud bursting can “de-limit” the cloud, if funding permits it
• This would be a talk on its own.
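Dependency ordering for such DAG-shaped workloads is straightforward to express, for example with Python’s stdlib topological sorter; the pipeline stage names below are hypothetical, not an eMedLab workload:

```python
# Scheduling a DAG-shaped workload: each stage lists its prerequisites, and
# a topological sort yields an order in which idle capacity can be used.
from graphlib import TopologicalSorter  # stdlib in Python 3.9+

pipeline = {
    "align":    {"fetch"},           # alignment needs the raw data
    "qc":       {"fetch"},           # QC also only needs the raw data
    "variants": {"align"},           # variant calling needs alignments
    "report":   {"variants", "qc"},  # final report needs everything
}
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # "fetch" comes first, "report" last
```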
66. Future Developments
• VM and Storage performance analysis
• Create optimal settings recommendations for Project Systems Administrators and Ops team
• Revisit Network configuration
• Provide a simpler, more standard OpenStack environment
• Simplify service delivery, account creation, other administrative tasks
• Research Data Management for Shared Data
• Could be a service within the VM services ecosystem
• iRODS is a possibility
• Explore potential of Scratch
• Integration with Assent (Moonshot tech)
• Access to infrastructure through remote credentials and local authorisation
• First step to securely sharing data across sites (Safe Share project)
67. Conclusions
• eMedLab is ground-breaking in terms of:
• Institutional collaboration around a shared infrastructure
• Federated support model
• Large scale High Performance Computing Cloud (it can be done!)
• Enabling large-scale, highly customisable workloads for biomedical research
• Linux cluster still required (POSIX legacy applications)
• SS guarantees this flexibility at very high performance
• We can introduce Bare Metal (Ironic) if needed for a highly versatile platform
• Automated scheduling of granular workloads
• Can be done inside the Cloud
• True partnership – OCF, Red Hat, IBM, Lenovo, and Mellanox
• Partnership working very well
• All vendors highly invested in eMedLab’s success
68. The Technical Design Group
• Mike Atkins – UCL (Project Manager)
• Andy Cafferkey – EBI
• Richard Christie – QMUL (Chair)
• Pete Clapham – Sanger
• David Fergusson – the Crick
• Thomas King – QMUL
• Richard Passey – UCL
• Bruno Silva – the Crick
69. Institutional Support Teams
UCL:
Facilitator: David Wong
PSA: Faruque Sarker
Crick:
Facilitator: David Fergusson/Bruno Silva
PSA: Adam Huffman, Luke Raimbach, John Bouquiere
LSHTM:
Facilitator: Jackie Stewart
PSA: Steve Whitbread, Kuba Purebski
70. Institutional Support Teams
Sanger:
Facilitator: Tim Cutts, Josh Randall
PSA: Peter Clapham, James Beal
EMBL-EBI:
Facilitator: Steven Newhouse/Andy Cafferkey
PSA: Gianni Dalla Torre
QMUL:
Tom King
71. Operations Team
Thomas Jones (UCL) Pete Clapham (Sanger)
William Hay (UCL) James Beale (Sanger)
Luke Sudbery (UCL)
Tom King (QMUL)
Bruno Silva (Ops Manager, Crick)
Adam Huffman (Crick) Andy Cafferkey (EMBL-EBI)
Luke Raimbach (Crick) Rich Boyce (EMBL-EBI)
Stefan Boeing (Data Manager, Crick) David Ocana (EMBL-EBI)
73. VM
Image
Installing OS
CPU RAM Disk
“Flavours”
VM
Instanc
e
1
VM
Instanc
e
N
Network
Start/Stop/Hold/Checkpoint
Instance
Horizon Console
SSH - External IP
SSH – Tunnel
Web interface, etc…
76. [Excerpt from the bid document, Fig 1: example research themes to be studied in the Academy Labs; by exploiting the commonalities underlying the datasets, we shall build tools and algorithms that cut across the spectrum of diseases. The figure layers storage, compute, security and networking; access to infrastructure; tools & analytics; genomic, imaging and clinical datasets; and cancer, rare and cardiovascular diseases — with information flow links to GSK, Sarah Cannon, DDN, Intel, IBM and Aridhia; the Farr Institute, Genomics England and UCLH BRC; and ELIXIR, ENCODE, 1000 Genomes and Ensembl (proposed and external funding).]
77. [Truncated excerpt from the bid document, Fig 3: eMedLab as a private, secure collaborative space — partner projects from eMedLab, EBI, FARR@UCLP and King’s Health Partners sharing data and resources within a common infrastructure.]
resources!al
78. Winning bid
• Standard Compute cluster
• Ethernet network fabric
• Spectrum Scale storage
• Cloud OS
79. Initial requirements
• Hardware geared towards very high data throughput work – capable of running an HPC cluster and a Cloud based on VMs
• Cloud OS (open source and commercial option)
• Tiered storage system for:
• High performance data processing
• Data Sharing
• Project storage
• VM storage
80. Bid responses – interesting facts
• Majority providing OpenStack as the Cloud OS
• Half included an HPC and a Cloud environment
• One provided a VMware-based solution
• One provided an OpenStack-only solution
• Half of the tender responses offered Lustre
• One provided Ceph for VM storage
82. Challenges of having a server farm in the centre of London
Andreas Biternas
Faculty of Natural and Mathematical Sciences
King’s College HPC infrastructure in the Jisc shared data centre
83. Problems and costs of having a server farm on the Strand campus
• Cost of space: roughly £25k per square metre in Strand;
• Power:
• Expensive switches and UPS which require annual maintenance;
• Unreliable power supply due to high demand in the centre of London;
• Cooling:
• Expensive cooling system similar to the one in the Virtus DC;
• High cost of running and maintaining the system;
• Weight: due to the age of the building, there are strict weight restrictions as an auditorium is below the server farm (!);
• Noise pollution: there is strong noise from the server farm up to 2 floors below;
84. King’s College infrastructure in the Virtus DC
• Total 25 cabinets with ~200 racks in Data Hall 1:
• 16 cabinets for the HPC clusters ADA + Rosalind;
• Rest: King’s central IT infrastructure – fileservers, firewalls etc.;
• Rosalind, a consortium between the Faculty of Natural and Mathematical Sciences, the South London and Maudsley NHS Foundation Trust BRC (Biomedical Research Centre) and Guy’s and St Thomas’ NHS Foundation Trust BRC;
• Rosalind has around 5000 cores, ~150 Teraflops, with HPC and Cloud parts using OpenStack;
85. Features of Virtus Datacentre
• Power:
• Two Redundant central power connections;
• UPS & onsite power generator;
• Two redundant PSU in each rack ;
• Cooling:
• Chilled water system cooled via fresh air;
• Configured as hot and cold aisles;
• Services:
• Remote hands;
• Installation and maintenance;
• Office and storage space, and wifi;
• Secure access control environment;
86. Connectivity with the Virtus Datacentre
• Better internet connection;
• No “single” connections;
• Fully resilient network;
• The bandwidth requirements of large data sets were being met;
87. Costs of the Virtus Datacentre
• Due to the contract with Jisc, tenants (Francis Crick Institute, Queen Mary University of London, King’s College London etc.) have special rates;
• Costs:
• Standard fee for each rack, which includes the costs of space, cooling, connectivity etc.;
• Power consumed by each rack at normal market (education) prices;
88. 3. Personal perspectives
› Thomas King
› Head of Research Infrastructure
› Queen Mary University of London
90. Who are we?
20,000 students and 4,000 staff
5 campuses in London
3 faculties
Humanities & Social Sciences
Science & Engineering
Barts & the London School of Medicine and Dentistry
92. Old World IT
Small central provision
Many independent teams, with heavily overlapping services and bespoke solutions
21 machine rooms
93. IT Transformation Programme 2012-15
Centralisation of staff and services ~200 people
Consolidation into two data centres
On-site ~20 racks
Off-site facility within fibre channel latency distances
Highly virtualised environment
Enterprise services run in active-active
Jisc Janet6 upgrades
94. Research IT
Services we support –
HPC
Research Storage
Hardware hosting
Clinical and secure systems
Enterprise virtualisation is not what we’re after
Five nines is not our issue – bang for buck
No room at the inn
Build our own on-site?
The OAP home
95. Benefits of shared data centre
Buying power and tenants’ association
Better PUE than a smaller on-site DC
contribution to sustainability commitment
Transparent costing for power use
Network redundancy – L2 and L3 of the Jisc network
Collaboration – it’s all about the data
Cloudier projects
Emotional detachment from blinking LEDs
Direction of funding – GridPP, Environmental Omics Cloud
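The PUE point can be made concrete with a back-of-the-envelope comparison; the IT load and PUE values below are illustrative assumptions, not measured Virtus or on-site figures:

```python
# Annual facility energy for a given IT load and PUE (PUE = total facility
# power / IT power, so cooling/UPS overhead scales with the IT load).
HOURS_PER_YEAR = 8760

def annual_energy_kwh(it_load_kw: float, pue: float) -> float:
    """Total facility energy (kWh/year) for a given IT load and PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR

onsite = annual_energy_kwh(200, 2.0)   # small legacy machine room (assumed)
shared = annual_energy_kwh(200, 1.3)   # modern shared DC (assumed)
print(f"saving ≈ {onsite - shared:,.0f} kWh/year for the same 200kW IT load")
```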
96. That’s all folks…
Except where otherwise noted, this
work is licensed under CC-BY
Martin Hamilton
Futurist, Jisc, London
@martin_hamilton
martin.hamilton@jisc.ac.uk
HPC & Big Data 2016