Record-breaking Petascale CycleCloud HPC Production Run
156,000-core Cluster (1.21 PetaFLOPS) Accelerates Schrödinger Materials Science and Green Energy
November 2013
Cycle Computing
Cycle Computing believes utility high performance computing accelerates invention
Records broken, Science done
On November 3rd, we ran a “MegaRun” cluster that had:
• 156,314 cores and 1.21 PetaFLOPS of theoretical peak compute power
• 2.3 million core-hours of work, totaling 264 years of computing, completed in 18 hours
• Executed world-wide, across all 8 public AWS Regions (5 continents)
• Roughly $68 million to purchase as hardware, done on CycleCloud with Spot Instances for just $33K

THE SCIENCE
• Finding organic photovoltaic compounds that are more efficient and easier to manufacture, to help reduce the US’s reliance on fossil fuels
• Designing, synthesizing, and testing a new material physically can take a year of a scientist’s time and hundreds of thousands of dollars in equipment, chemicals, etc. With Schrödinger Materials Science’s tools on Cycle and AWS Spot Instances, it cost $0.16 per molecule (the arithmetic is checked in the sketch below)
• The run analyzed 205,000 compounds in total
• This is exactly the kind of science outlined in the White House’s Materials Genome Initiative
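The headline numbers above are internally consistent. As a quick sanity check (the arithmetic is ours; the input figures are the deck’s), a few lines of Python make the unit conversions explicit:

    # Sanity-check the MegaRun headline figures quoted above.
    core_hours = 2_312_959   # total compute performed
    cores      = 156_314     # peak cluster size
    cost_usd   = 33_000      # total Spot spend
    molecules  = 205_000     # compounds screened

    print(core_hours / 24)        # ~96,373 compute-days
    print(core_hours / 24 / 365)  # ~264 compute-years
    print(core_hours / cores)     # ~14.8 h of work per core, consistent with
                                  # an 18-hour wall clock including ramp-up
    print(cost_usd / molecules)   # ~$0.16 per molecule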
Challenge of Materials Science

Traditional Materials Design
•  Design, synthesis, and analysis are challenging for an arbitrary material
•  Low hit rate for viable materials
•  Total molecule cost:
   •  Time: a year for a grad student
   •  $100,000s in equipment, chemicals, etc.

With Schrödinger Computational Chemistry & Cycle
•  Schrödinger Materials Science tools simulate accurate properties in hours
•  Simulation guides the researcher’s intuition
•  Focus physical analysis on promising materials
•  Total cost:
   •  Time to enumerate molecules: minutes to hours
   •  $0.16 per molecule in infrastructure using AWS Spot Instances
Designing Solar Materials

The challenge is efficiency
•  Need to efficiently turn photons from the sun into electricity

The number of possible materials is limitless
•  Need to separate the right compounds from the useless ones
•  If the 20th century was the century of silicon, the 21st will be all organic

How do we find the right material without spending the entire 21st century looking for it?
The Challenge for the Scientist

Dr. Mark Thompson, Professor of Chemistry, USC:
“Solar energy has the potential to replace some of our dependence on fossil fuels, but only if the solar panels can be made very inexpensively and have reasonable to high efficiencies. Organic solar cells have this potential.”

Challenge: run a virtual screen of 205,000 molecules in a continuing analysis of possible materials for organic solar cells
The right needle in the right haystack

Before: a trade-off between compute time and sampling, forcing a coarse screen over small samples
Now: higher-quality analysis of more materials → better results
Solution: Utility HPC

On-demand compute power is transformative for users, but hard to put into production:
•  Big opportunity to help Manufacturing, Life Science, Energy, and Financial companies
•  Rise of BigData, compute, and Monte Carlo problems that power modern business and science
•  Applications, like the Schrödinger Materials Science tools, offer a compelling alternative to physically testing products
•  Amazon Web Services makes infrastructure easily accessible
•  AWS Spot Instances decrease the cost of compute
•  Science & engineering face faster time-to-market and increased agility requirements
•  Capital efficiency (OpEx replacing CapEx) is an organizational goal
Why isn’t everyone doing this?

Because it is really complicated, and really hard to orchestrate technical applications, securely, at scale.

We’re the first and only ones doing this, including the well-publicized 2,000-, 4,000-, 10,000-, 30,000-, and 50,000-core clusters in 2010-2013.

Clients include: Johnson & Johnson, Schrödinger, Pfizer, Novartis, Genentech, HGST, Pacific Life Insurance, Hartford Insurance Group …
Cycle Computing Makes Utility HPC a Reality

Easily orchestrates complex workloads and data access to local and Cloud HPC:
•  Scales from 100 to 1,000,000 cores
•  Handles errors and reliability
•  Schedules data movement
•  Secures, encrypts, and audits
•  Provides reporting and chargeback
•  Automates Spot bidding (sketched below)
•  Supports Enterprise operations
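The deck does not say how CycleCloud’s bidding works, only that it is automated. As a loose illustration of the idea, the hedged sketch below spreads Spot requests across whichever of the eight 2013-era AWS regions are currently cheapest, capped by a cost ceiling; get_spot_price() and request_instances() are hypothetical stand-ins, not Cycle’s or AWS’s real interfaces.

    # Hypothetical sketch of automated multi-region Spot bidding.
    # get_spot_price() and request_instances() are stand-ins for a real
    # cloud API (e.g. EC2 spot-price history and spot-request calls).
    REGIONS = ["us-east-1", "us-west-1", "us-west-2", "eu-west-1",
               "ap-northeast-1", "ap-southeast-1", "ap-southeast-2",
               "sa-east-1"]

    def get_spot_price(region: str) -> float:
        raise NotImplementedError  # query the current Spot price in region

    def request_instances(region: str, bid: float, count: int) -> None:
        raise NotImplementedError  # place a Spot request in region

    def place_bids(needed: int, ceiling: float, markup: float = 1.2) -> None:
        """Fill `needed` instances from the cheapest regions first,
        bidding a small markup over market but never above `ceiling`."""
        for price, region in sorted((get_spot_price(r), r) for r in REGIONS):
            if needed <= 0 or price > ceiling:
                break
            count = min(needed, 2500)  # per-region slice; 16,788/8 ≈ 2,100
            request_instances(region, bid=min(price * markup, ceiling),
                              count=count)
            needed -= count

A production bidder must also react to price spikes and reclaimed instances mid-run; the sketch omits that, which is part of why “why isn’t everyone doing this?” has the answer it does.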
Challenge:
205,000 compounds
totaling 2,312,959 core-hours,
or 264 core-years
Solution: “MegaRun” Cluster

New record: MegaRun is the largest dedicated cloud HPC cluster to date on public cloud.

•  Schrödinger Materials Science tools: a set of automated workflows that enable organic semiconductor materials to be simulated accurately
•  CycleCloud: HPC clusters at small to massive scale (application deployment, job/data-aware routing, error handling)
•  Jupiter: Cycle’s massively scalable, resilient cloud scheduler
•  Chef: automated configuration at scale
•  Multi-Region AWS Spot Instances: massive server resource capacity across all public regions of AWS
205,000 molecules, 264 years of computing:
•  16,788 Spot Instances, 156,314 cores!
•  156,314 cores = 1.21 PetaFLOPS (Rpeak), equivalent to #29 on the June 2013 Top500
•  Done in 18 hours: access to a $68M system for $33k
8-Region Deployment

US-East, US-West-1, US-West-2, EU, Tokyo, Singapore, Australia, Brazil
Jupiter Scheduler

•  Makes large cloud regions work together
•  Spans many regions/datacenters to resiliently route work with minimal scheduling overhead
•  Batch/MPI schedulers get 10k cores doing 100k jobs; Jupiter seeks to get millions of cores doing tens of millions of tasks
•  Currently 100k’s of cores doing 1M tasks on large runs
•  Can survive machine, availability-zone, and region failure while still executing the full workload (see the failover sketch below)
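Jupiter itself is proprietary, so the deck gives only its properties. As a hedged sketch of the failover property in the last bullet, the snippet below keeps routing tasks to whichever regions are still healthy and requeues work lost to failures; is_healthy() and run_task() are hypothetical stand-ins, not Cycle’s real interfaces.

    # Hypothetical sketch of region-resilient task routing in the spirit
    # of the Jupiter properties above.
    from collections import deque

    def is_healthy(region: str) -> bool:
        raise NotImplementedError  # probe a region / availability zone

    def run_task(region: str, task) -> bool:
        raise NotImplementedError  # dispatch; False means the task was lost

    def drain(tasks, regions) -> None:
        """Round-robin tasks over healthy regions, requeueing any task
        lost to a failure so the full workload still completes."""
        queue = deque(tasks)
        while queue:
            live = [r for r in regions if is_healthy(r)]
            if not live:
                raise RuntimeError("no healthy regions; back off and retry")
            for region in live:
                if not queue:
                    break
                task = queue.popleft()
                if not run_task(region, task):
                    queue.append(task)  # survive machine/zone/region failure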
Resilient Workload Scheduling
MegaRun – Facts and Figures

Metric                     Count
Compute Hours of Work      2,312,959 hours
Compute Days of Work       96,373 days
Compute Years of Work      264 years
Molecule Count             205,000 materials
Run Time                   < 18 hours
Max Scale (cores)          156,314 cores across 8 regions
Max Scale (instances)      16,788 instances
Accelerated Time to Result

Cluster Scale                                       Cost        Run-time
156,000-core CycleCloud                             $33,000     ~18 hours
300-core internal cluster (stopping all other work) $132,000    ~10.5 months
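The 10.5-month row follows directly from the workload size quoted earlier; a quick check (our arithmetic, using only figures from the deck):

    # Check the 300-core row against the 2,312,959 core-hour workload.
    wall_hours = 2_312_959 / 300    # ~7,710 hours at full utilization
    print(wall_hours / 24 / 30.4)   # ~10.6 months, matching "~10.5 months"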
CycleCloud scaling: 156,000 cores, 16,788 instances (chart)
8 Public Regions across AWS (map)
Ramping up to full capacity (chart)
Solution:
205,000 compounds, 264 core-years,
156k core Utility HPC cluster
in 18 hours
for $0.16/molecule using
Schrödinger Materials Science tools,
Cycle & AWS Spot Instances
