Classifying Simulation and Data Intensive Applications and the HPC-Big Data Convergence
WBDB 2015: Seventh Workshop on Big Data Benchmarking
Geoffrey Fox, Shantenu Jha, Judy Qiu; December 14, 2015
gcf@indiana.edu
http://www.dsc.soic.indiana.edu/, http://spidal.org/, http://hpc-abds.org/kaleidoscope/
Department of Intelligent Systems Engineering
School of Informatics and Computing, Digital Science Center
Indiana University Bloomington
NIST Big Data Initiative
Led by Chaitan Baru, Bob Marcus, Wo Chang
and
Big Data Application Analysis
NBD-PWG (NIST Big Data Public Working Group)
Subgroups & Co-Chairs
• There were 5 Subgroups
– Note mainly industry
• Requirements and Use Cases Sub Group
– Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco
• Definitions and Taxonomies SG
– Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD
• Reference Architecture Sub Group
– Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented Intelligence
• Security and Privacy Sub Group
– Arnab Roy, CSA/Fujitsu; Nancy Landreville, U. MD; Akhil Manchanda, GE
• Technology Roadmap Sub Group
– Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data Tactics
• See http://bigdatawg.nist.gov/usecases.php
• and http://bigdatawg.nist.gov/V1_output_docs.php
Use Case Template
• 26 fields completed for 51 apps
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
• Now an online form
Online Use Case Form
http://hpc-abds.org/kaleidoscope/survey/
51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as 3 V’s, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation(4): National Archives and Records Administration, Census Bureau
• Commercial(8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search,
Digital Materials, Cargo shipping (as in UPS)
• Defense(3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences(10): Medical records, Graph and Probabilistic analysis,
Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media(6): Driving Car, Geolocate images/cameras, Twitter, Crowd
Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research(4): Metadata, Collaboration, Language Translation, Light source
experiments
• Astronomy and Physics(5): Sky Surveys including comparison to simulation, Large Hadron
Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science(10): Radar Scattering in Atmosphere, Earthquake,
Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation
datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to
watersheds), AmeriFlux and FLUXNET gas sensors
• Energy(1): Smart grid
26 Features for each use case
Biased to science
Features and Examples
51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or subjects of application and often both
• Decision makers like researchers or doctors (users of application)
• Items such as Images, EMR, Sequences below; observations or contents of online store
– Images or “Electronic Information nuggets”
– EMR: Electronic Medical Records (often similar to people parallelism)
– Protein or Gene Sequences;
– Material properties, Manufactured Object specifications, etc., in custom dataset
– Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope or credit card data or atmosphere
• (Complex) Nodes in RDF Graph
• Simple nodes as in a learning network
• Tweets, Blogs, Documents, Web Pages, etc.
– And characters/words in them
• Files or data to be backed up, moved or assigned metadata
• Particles/cells/mesh points as in parallel simulations
Features of 51 Use Cases I
• PP (26) “All” Pleasingly Parallel or Map Only
• MR (18) Classic MapReduce MR (add MRStat below for full count)
• MRStat (7) Simple version of MR where the key computation is a simple reduction, as found in statistical summaries such as histograms and averages (see the sketch after this list)
• MRIter (23) Iterative MapReduce or MPI (Spark, Twister)
• Graph (9) Complex graph data structure needed in analysis
• Fusion (11) Integrate diverse data to aid discovery/decision making;
could involve sophisticated algorithms or could just be a portal
• Streaming (41) Some data comes in incrementally and is processed
this way
• Classify (30) Classification: divide data into categories
• S/Q (12) Index, Search and Query
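As a concrete illustration, here is a minimal MRStat-style sketch in Python (an assumed example, not code from the talk): the map phase emits one value per record, and the reduce phase is only a simple statistical summary, here a histogram and a mean.

```python
from collections import Counter

def map_phase(records):
    # Map: emit one numeric observation per record.
    for r in records:
        yield r["value"]

def reduce_phase(values):
    # Reduce: simple statistics only, the hallmark of MRStat.
    values = list(values)
    histogram = Counter(int(v) for v in values)  # coarse bucket counts
    mean = sum(values) / len(values)
    return histogram, mean

records = [{"value": v} for v in (1.2, 1.7, 2.1, 2.4, 1.9)]
hist, mean = reduce_phase(map_phase(records))
```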
Features of 51 Use Cases II
• CF (4) Collaborative Filtering for recommender engines
• LML (36) Local Machine Learning (Independent for each parallel entity) –
application could have GML as well
• GML (23) Global Machine Learning: Deep Learning, Clustering, LDA, PLSI, MDS,
– Large-scale optimizations as in Variational Bayes, MCMC, Lifted Belief Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can call this EGO or Exascale Global Optimization, with scalable parallel algorithms
• Workflow (51) Universal
• GIS (16) Geotagged data and often displayed in ESRI, Microsoft Virtual
Earth, Google Earth, GeoServer etc.
• HPC(5) Classic large-scale simulation of cosmos, materials, etc. generating
(visualization) data
• Agent (2) Simulations of models of data-defined macroscopic entities
represented as agents
Local and Global Machine Learning
• Many applications use LML or Local Machine Learning, where machine learning (often from R) is run separately on every data item, such as on every image
• But others are GML Global Machine Learning, where machine learning is a single algorithm run over all data items (over all nodes in the computer)
– e.g. maximum likelihood or χ² with a sum over the N data items – documents, sequences, items to be sold, images, etc., and often links (point-pairs).
– Graph analytics is typically GML
• Covering clustering/community detection, mixture models, topic
determination, Multidimensional scaling, (Deep) Learning Networks
• PageRank is “just” parallel linear algebra
• Note many Mahout algorithms are sequential – partly because MapReduce is limiting, partly because the parallelism is unclear
– MLlib (Spark based) is better
• SVM and Hidden Markov Models do not use large scale parallelization in
practice?
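To make the LML/GML distinction concrete, a hedged Python sketch with made-up data and stand-in models: LML fits an independent result per item and is pleasingly parallel, while GML (here a toy k-means standing in for clustering) optimizes one objective over all items and so requires global communication.

```python
import numpy as np

def local_ml(item):
    # LML: an independent "model" per data item (stand-in computation).
    return item.mean()

def global_ml(items, k=2, iters=10):
    # GML: one algorithm over ALL items, here a toy k-means.
    X = np.vstack(items)
    centers = X[np.random.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

items = [np.random.rand(5) for _ in range(100)]
local_results = [local_ml(it) for it in items]  # runs independently per item
centers = global_ml(items)                      # couples every item together
```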
13 Image-based Use Cases
• 13-15 Military Sensor Data Analysis/ Intelligence PP, LML, GIS, MR
• 7: Pathology Imaging/ Digital Pathology: PP, LML, MR for search; becoming terabyte 3D images, Global Classification
• 18&35: Computational Bioimaging (Light Sources): PP, LML Also
materials
• 26: Large-scale Deep Learning: GML Stanford ran 10 million images and
11 billion parameters on a 64 GPU HPC; vision (drive car), speech, and
Natural Language Processing
• 27: Organizing large-scale, unstructured collections of photos: GML
Fit position and camera direction to assemble 3D photo ensemble
• 36: Catalina Real-Time Transient Synoptic Sky Survey (CRTS): PP,
LML followed by classification of events (GML)
• 43: Radar Data Analysis for CReSIS Remote Sensing of Ice Sheets: PP,
LML to identify glacier beds; GML for full ice-sheet
• 44: UAVSAR Data Processing, Data Product Delivery, and Data
Services: PP to find slippage from radar images
• 45, 46: Analysis of Simulation visualizations: PP LML ?GML find paths,
classify orbits, classify patterns that signal earthquakes, instabilities,
climate, turbulence
Internet of Things and Streaming Apps
• It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco)
billion devices on the Internet by 2020.
• The cloud is the natural controller of, and resource provider for, the Internet of Things.
• Smart phones/watches, Wearable devices (Smart People), “Intelligent River”
“Smart Homes and Grid” and “Ubiquitous Cities”, Robotics.
• The majority of use cases are streaming – experimental science gathers data in a stream, sometimes batched as in a field trip. A sample is below
• 10: Cargo Shipping Tracking as in UPS, Fedex PP GIS LML
• 13: Large Scale Geospatial Analysis and Visualization PP GIS LML
• 28: Truthy: Information diffusion research from Twitter Data PP MR for
Search, GML for community determination
• 39: Particle Physics: Analysis of LHC Large Hadron Collider Data:
Discovery of Higgs particle PP for event Processing, Global statistics
• 50: DOE-BER AmeriFlux and FLUXNET Networks PP GIS LML
• 51: Consumption forecasting in Smart Grids PP GIS LML
Big Data - Big Simulation (Exascale) Convergence
• Let's distinguish Data and Model (e.g. machine learning analytics) in Big Data problems
• Then almost always Data is large but Model varies
– E.g. LDA with many topics or deep learning has large model
– Clustering or Dimension reduction can be quite small
• Simulations can also be considered as Data and Model
– Model is solving particle dynamics or partial differential
equations
– Data could be small when just boundary conditions or
– Data large with data assimilation (weather forecasting) or
when data visualizations produced by simulation
• Data often static between iterations (unless streaming), model
varies between iterations
Big Data Patterns – the Ogres
Classification
Classifying Big Data Applications
• “Benchmarks”, “kernels”, “algorithms” and “mini-apps” can serve multiple purposes
• Motivate hardware and software features
– e.g. collaborative filtering algorithm parallelizes well with MapReduce
and suggests using Hadoop on a cloud
– e.g. deep learning on images dominated by matrix operations; needs
CUDA&MPI and suggests HPC cluster
• Benchmark sets are designed to cover key features of systems, in terms of the features and sizes of “important” applications
• Take the 51 use cases → derive specific features; each use case has multiple features
• Generalize and systematize as Ogres with features termed “facets”
• 50 Facets divided into 4 sets or views where each view has “similar” facets
– Allow one to study coverage of benchmark sets
• Discuss Data and Model together, since benchmarks are built around problems that combine them, but separating them gives insight and allows better understanding of Big Data - Big Simulation “convergence”
7 Computational Giants of
NRC Massive Data Analysis Report
1) G1: Basic Statistics e.g. MRStat
2) G2: Generalized N-Body Problems
3) G3: Graph-Theoretic Computations
4) G4: Linear Algebraic Computations
5) G5: Optimizations e.g. Linear Programming
6) G6: Integration e.g. LDA and other GML
7) G7: Alignment Problems e.g. BLAST
http://www.nap.edu/catalog.php?record_id=18374
Big Data Models?
HPC Benchmark Classics
• Linpack or HPL: Parallel LU factorization
for solution of linear equations
• NPB version 1: Mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient (sketched after this list)
– FT: Fast Fourier Transform
– IS: Integer sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss Seidel
Simulation Models
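For concreteness, a minimal sketch of the CG kernel idea (an assumed illustration, not the NPB code itself): conjugate gradients solving Ax = b for a symmetric positive definite A.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

M = np.random.rand(50, 50)
A = M @ M.T + 50 * np.eye(50)  # symmetric positive definite test matrix
x = conjugate_gradient(A, np.random.rand(50))
```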
13 Berkeley Dwarfs
1) Dense Linear Algebra
2) Sparse Linear Algebra
3) Spectral Methods
4) N-Body Methods
5) Structured Grids
6) Unstructured Grids
7) MapReduce
8) Combinational Logic
9) Graph Traversal
10) Dynamic Programming
11) Backtrack and
Branch-and-Bound
12) Graphical Models
13) Finite State Machines
The first 6 of these correspond to Colella's original (classic simulations); Monte Carlo was dropped.
N-body methods are a subset of Particle in Colella.
Note it is a little inconsistent in that MapReduce is a programming model while spectral method is a numerical method.
Need multiple facets!
These are largely Models, for Data or Simulation.
4 Ogre Views and 50 Facets
[Diagram: the 50 facets arranged in four views.]
Problem Architecture View: Pleasingly Parallel; Classic MapReduce; Map-Collective; Map Point-to-Point; Map Streaming; Shared Memory; Single Program Multiple Data; Bulk Synchronous Parallel; Fusion; Dataflow; Agents; Workflow
Execution View: Performance Metrics; Flops per Byte (Memory or I/O); Execution Environment (Core libraries); Volume; Velocity; Variety; Veracity; Communication Structure; Data Abstraction; Metric = M / Non-Metric = N; O(N²) = NN / O(N) = N; Regular = R / Irregular = I; Dynamic = D / Static = S; Iterative / Simple
Data Source and Style View: SQL/NoSQL/NewSQL; Enterprise Data Model; Files/Objects; HDFS/Lustre/GPFS; Archived/Batched/Streaming; Shared/Dedicated/Transient/Permanent; Metadata/Provenance; Internet of Things; HPC Simulations; Geospatial Information System
Processing View: Micro-benchmarks; Local Analytics; Global Analytics; Base Statistics; Recommendations; Search/Query/Index; Classification; Learning; Optimization Methodology; Streaming; Alignment; Linear Algebra Kernels; Graph Algorithms; Visualization
Dwarfs and Ogres give
Convergence Diamonds
• Macropatterns or Problem Architecture View:
Unchanged
• Execution View: Significant changes to separate Data
and Model and add characteristics of Simulation models
• Data Source and Style View: Unchanged – present but
less important for Simulations compared to big data
• Processing View: becomes Big Data Processing View
• Add Big Simulation Processing View: includes specifics
of key simulation kernels – includes NAS Parallel
Benchmarks and Berkeley Dwarfs
Facets of the Ogres
Problem Architecture
Meta or Macro Aspects of Ogres
Valid for Big Data or Big Simulations as describes Problem
which is Model-Data combination
Problem Architecture View of Ogres (Meta or MacroPatterns)
i. Pleasingly Parallel – as in BLAST, protein docking, and some (bio-)imagery, including Local Analytics or Machine Learning – ML or filtering that is pleasingly parallel, as in bio-imagery and radar images (pleasingly parallel but with sophisticated local analytics)
ii. Classic MapReduce: Search, Index and Query and Classification algorithms like
collaborative filtering (G1 for MRStat in Features, G7)
iii. Map-Collective: Iterative maps + communication dominated by “collective” operations
as in reduction, broadcast, gather, scatter. Common datamining pattern
iv. Map-Point to Point: Iterative maps + communication dominated by many small point to
point messages as in graph algorithms
v. Map-Streaming: Describes streaming, steering and assimilation problems
vi. Shared Memory: Some problems are asynchronous and are easier to parallelize on
shared rather than distributed memory – see some graph algorithms
vii. SPMD: Single Program Multiple Data, common parallel programming feature
viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases
ix. Fusion: Knowledge discovery often involves fusion of multiple methods.
x. Dataflow: Important application features often occurring in composite Ogres
xi. Use Agents: as in epidemiology (swarm approaches). This is Model
xii. Workflow: All applications often involve orchestration (workflow) of multiple
components
Relation of Problem and Machine Architecture
• Problem is Model plus Data
• In my old papers (especially book Parallel Computing Works!), I discussed
computing as multiple complex systems mapped into each other
Problem → Numerical formulation → Software → Hardware
• Each of these 4 systems has an architecture that can be described in
similar language
• One gets an easy programming model if architecture of problem matches
that of Software
• One gets good performance if architecture of hardware matches that of
software and problem
• So “MapReduce” can be used as architecture of software (programming
model) or “Numerical formulation of problem”
6 Forms of MapReduce cover “all” circumstances
Describes the architecture of
- Problem (Model reflecting data)
- Machine
- Software
Data Analysis Problem Architectures
 1) Pleasingly Parallel PP or “map-only” in MapReduce
 BLAST Analysis; Local Machine Learning
 2A) Classic MapReduce MR, Map followed by reduction
 High Energy Physics (HEP) Histograms; Web search; Recommender Engines
 2B) Simple version of classic MapReduce MRStat
 Final reduction is just simple statistics
 3) Iterative MapReduce MRIter
 Expectation Maximization, Clustering, Linear Algebra, PageRank
 4A) Map Point to Point Communication
 Classic MPI; PDE Solvers and Particle Dynamics; Graph processing (Graph)
 4B) GPU (Accelerator) enhanced 4A) – especially for deep learning
 5) Map + Streaming + some sort of Communication
 Images from Synchrotron sources; Telescopes; Internet of Things IoT
 Apache Storm is (Map + Dataflow) +Streaming
 Data assimilation is (Map + Point to Point Communication) + Streaming
 6) Shared memory allowing parallel threads, which are tricky to program but have lower latency
 Difficult-to-parallelize asynchronous parallel Graph Algorithms
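The first three of these architectures can be sketched as code shapes (assumed Python pseudocode, not any specific framework's API):

```python
from collections import Counter

def map_only(data, f):
    # 1) Pleasingly Parallel: independent work per item, no communication.
    return [f(x) for x in data]

def classic_mapreduce(data, f, reduce_fn, init):
    # 2A/2B) Map followed by a single reduction (MRStat when the
    # reduction is simple statistics).
    out = init
    for x in data:
        out = reduce_fn(out, f(x))
    return out

def iterative_mapreduce(state, data, step, iters):
    # 3) MRIter: each iteration is itself a MapReduce, as in EM
    # clustering or PageRank.
    for _ in range(iters):
        state = step(state, data)
    return state

# Example: word counts via classic MapReduce.
docs = ["big data", "big simulation"]
counts = classic_mapreduce(docs, lambda d: Counter(d.split()),
                           lambda acc, c: acc + c, Counter())
```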
Ogre Facets
Execution Features View
Many similar Features for Big Data and
Simulations
Needs split into 1) Model, 2) Data, 3) Problem (the combination);
add a) difference/differential operator in model and b) multi-scale/hierarchical model
One View of Ogres has Facets that are
micropatterns or Execution Features
i. Performance Metrics; property found by benchmarking Ogre
ii. Flops per byte; memory or I/O
iii. Execution Environment; Core libraries needed: matrix-matrix/vector algebra, conjugate
gradient, reduction, broadcast; Cloud, HPC etc.
iv. Volume: property of an Ogre instance: a) Data Volume and b) Model Size
v. Velocity: qualitative property of Ogre with value associated with instance. Mainly Data
vi. Variety: important property especially of composite Ogres; Data and Model separately
vii. Veracity: important property of “mini-apps” but not kernels; Data and Model separately
viii. Model Communication Structure; Interconnect requirements; Is communication BSP,
Asynchronous, Pub-Sub, Collective, Point to Point?
ix. Is Data and/or Model (graph) static or dynamic?
x. Much Data and many Models consist of a set of interconnected entities; is this regular, as a set of pixels, or a complicated irregular graph?
xi. Are Models Iterative or not?
xii. Data Abstraction: key-value, pixel, graph(G3), vector, bags of words or items; Model can
have same or different abstractions e.g. mesh points, finite element, Convolutional Network
xiii. Are data points in metric or non-metric spaces? Data drives Model?
xiv. Is Model algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2)
Comparison of Data Analytics with Simulation I
• Simulations produce big data as visualization of results – they are data
source
– Or consume often smallish data to define a simulation problem
– HPC simulation in weather data assimilation is data + model
• Pleasingly parallel often important in both
• Both are often SPMD and BSP
• Non-iterative MapReduce is major big data paradigm
– not a common simulation paradigm, except where “Reduce” summarizes pleasingly parallel execution as in some Monte Carlos
• Big Data often has large collective communication
– Classic simulations have a lot of smallish point-to-point messages (contrast sketched below)
• Simulations are often characterized by difference or differential operators
• Simulations dominantly use sparse (nearest-neighbor) data structures
– Some important data analytics involves full matrix algorithms but
– “Bag of words (users, rankings, images..)” algorithms are sparse, as is PageRank
[Figure: “Force Diagrams” for macromolecules and for Facebook.]
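A hedged mpi4py sketch of this contrast (an assumed example, not code from the talk): analytics often performs one large collective over the model, while a classic simulation exchanges many small point-to-point boundary messages with neighbors.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Big-data style: one large collective summing model contributions.
local_model = np.random.rand(1000)
global_model = np.empty_like(local_model)
comm.Allreduce(local_model, global_model, op=MPI.SUM)

# Simulation style: small point-to-point halo exchange with neighbors.
left, right = (rank - 1) % size, (rank + 1) % size
boundary = np.array([float(rank)])
halo = np.empty(1)
comm.Sendrecv(boundary, dest=right, recvbuf=halo, source=left)
```

Run under mpiexec with several ranks to see both communication patterns.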
Comparison of Data Analytics with Simulation II
• There are similarities between some graph problems and particle
simulations with a strange cutoff force.
– Both Map-Communication
• Note many big data problems are “long range force” (as in gravitational
simulations) as all points are linked.
– Easiest to parallelize. Often full matrix algorithms
– e.g. in DNA sequence studies, distance(i, j) is defined by BLAST, Smith-Waterman, etc., between all sequences i, j (a sketch follows this list)
– Opportunity for “fast multipole” ideas in big data. See NRC report
• In image-based deep learning, neural network weights are block sparse
(corresponding to links to pixel blocks) but can be formulated as full matrix
operations on GPUs and MPI in blocks.
• In HPC benchmarking, Linpack is being challenged by a new sparse conjugate gradient benchmark, HPCG, while I am diligently using non-sparse conjugate gradient solvers in clustering and multidimensional scaling.
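A minimal sketch of this “long range force” pattern (an assumed example using a stand-in Euclidean metric rather than BLAST or Smith-Waterman scores): every pair of items is linked, so the computation fills a full O(N²) matrix.

```python
import numpy as np

def pairwise_distances(features):
    # Full N x N distance matrix: the big-data analogue of all-pairs
    # forces in a gravitational simulation.
    X = np.asarray(features)
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

D = pairwise_distances(np.random.rand(100, 8))  # 100 "sequences", 8 features
```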
Facets of the Ogres
Big Data Processing View
Largely Model Properties
Could produce a separate set of facets for
Simulation Processing View
Facets in Processing (runtime) View of Ogres I
i. Micro-benchmarks: Ogres that exercise simple features of hardware such as communication, disk I/O, CPU and memory performance
ii. Local Analytics executed on a single core or perhaps node
iii. Global Analytics requiring iterative programming models (G5,G6) across
multiple nodes of a parallel system
iv. Optimization Methodology: overlapping categories
i. Nonlinear Optimization (G6)
ii. Machine Learning
iii. Maximum Likelihood or χ² minimizations
iv. Expectation Maximization (often Steepest descent)
v. Combinatorial Optimization
vi. Linear/Quadratic Programming (G5)
vii. Dynamic Programming
v. Visualization is a key application capability, with algorithms like MDS useful, but is itself part of a “mini-app” or composite Ogre
vi. Alignment (G7) as in BLAST compares samples with repository
Facets in Processing (run time) View of Ogres II
vii. Streaming divided into 5 categories depending on event size and
synchronization and integration
– Set of independent events where precise time sequencing unimportant.
– Time series of connected small events where time ordering important.
– Set of independent large events where each event needs parallel processing with time
sequencing not critical
– Set of connected large events where each event needs parallel processing with time
sequencing critical.
– Stream of connected small or large events to be integrated in a complex way.
viii. Basic Statistics (G1): MRStat in NIST problem features
ix. Search/Query/Index: Classic database which is well studied (Baru, Rabl tutorial)
x. Recommender Engine: core to many e-commerce, media businesses;
collaborative filtering key technology
xi. Classification: assigning items to categories based on many methods
– MapReduce is good for Alignment, Basic Statistics, S/Q/I, Recommenders, Classification
xii. Deep Learning of growing importance due to success in speech recognition etc.
xiii. Problem set up as a graph (G3) as opposed to vector, grid, bag of words etc.
xiv. Using Linear Algebra Kernels: much machine learning uses linear algebra kernels
Facets of the Ogres
Data Source and Style Aspects
add streaming from Processing view here
Present but often less important for
Simulations (that use and produce data)
Data Source and Style View of Ogres I
i. SQL, NewSQL or NoSQL: NoSQL includes Document, Column, Key-value, Graph and Triple stores; NewSQL is SQL redone to exploit NoSQL performance
ii. Other Enterprise data systems: 10 examples from NIST integrate
SQL/NoSQL
iii. Set of Files or Objects: as managed in iRODS and extremely common in
scientific research
iv. File systems, Object, Blob and Data-parallel (HDFS) raw storage: separated from computing or colocated? HDFS vs. Lustre vs. OpenStack Swift vs. GPFS
v. Archive/Batched/Streaming: Streaming is incremental update of datasets
with new algorithms to achieve real-time response (G7); Before data gets
to compute system, there is often an initial data gathering phase which is
characterized by a block size and timing. Block size varies from month
(Remote Sensing, Seismic) to day (genomic) to seconds or lower (Real
time control, streaming)
Data Source and Style View of Ogres II
vi. Shared/Dedicated/Transient/Permanent: qualitative
property of data; Other characteristics are needed for
permanent auxiliary/comparison datasets and these could be
interdisciplinary, implying nontrivial data movement/replication
vii. Metadata/Provenance: a clear qualitative property, important for the data collection process but not for kernels
viii. Internet of Things: 24 to 50 billion devices on the Internet by 2020
ix. HPC simulations: generate major (visualization) output that
often needs to be mined
x. Using GIS: Geographical Information Systems provide
attractive access to geospatial data
Note: 10 use cases from Bob Marcus (who led the NIST effort); samples follow
2. Perform real-time analytics on data source streams and notify users when specified events occur
[Diagram, built with Storm, Kafka, HBase, Zookeeper: streaming data sources feed a filter that identifies events; users specify the filter; identified events are posted to a repository, selected events are posted to an archive, and users fetch the streamed data.]
5. Perform interactive analytics on data in analytics-optimized database
[Diagram: data (streaming, batch, ...) lands in storage (HDFS, HBase) and is analyzed interactively with Hadoop, Spark, Giraph, Pig, Mahout and R.]
5A. Perform interactive analytics on observational scientific data
[Diagram: scientific data is recorded in the “field”, accumulated locally with initial computing, then moved by batch transport or direct transfer to a primary analysis data system running Grid or Many-Task software, Hadoop, Spark, Giraph, Pig, ..., with data storage in HDFS, HBase or file collections and analysis via science analysis codes, Mahout and R; streaming Twitter data feeds social networking science. NIST examples include LHC, Remote Sensing, Astronomy and Bioinformatics.]
Benchmarks and Ogres
Benchmarks/Mini-apps spanning Facets
• Look at NSF SPIDAL Project, NIST 51 use cases, Baru-Rabl review
• Catalog facets of benchmarks and choose entries to cover “all facets”
• Micro Benchmarks: SPEC, EnhancedDFSIO (HDFS), Terasort,
Wordcount, Grep, MPI, Basic Pub-Sub ….
• SQL and NoSQL Data systems, Search, Recommenders: TPC (TPC-C to TPCx-HS for Hadoop), BigBench, Yahoo Cloud Serving, Berkeley Big Data, HiBench, BigDataBench, CloudSuite, LinkBench
– includes MapReduce cases Search, Bayes, Random Forests, Collaborative Filtering
• Spatial Query: select from image or earth data
• Alignment: Biology as in BLAST
• Streaming: Online classifiers, Cluster tweets, Robotics, Industrial Internet of
Things, Astronomy; BGBenchmark.
• Pleasingly parallel (Local Analytics): as in initial steps of LHC, Pathology,
Bioimaging (differ in type of data analysis)
• Global Analytics: Outlier, Clustering, LDA, SVM, Deep Learning, MDS,
PageRank, Levenberg-Marquardt, Graph 500 entries
• Workflow and Composite (analytics on xSQL) linking above
Summary
Classification of Big Data Applications
and Big Data Exascale convergence
Big Data and (Exascale) Simulation Convergence I
• Our approach to Convergence is built around two ideas that avoid addressing the hardware directly, since with modern DevOps technology it is not hard to retarget applications between different hardware systems.
• Rather, we approach Convergence through applications and software. This talk has described the Convergence Diamonds that unify Big Simulation and Big Data applications and so allow one to more easily identify good approaches to implementing Big Data and Exascale applications in a uniform fashion. This is summarized on Slides III and IV.
• The software approach builds on the HPC-ABDS High Performance Computing enhanced Apache Big Data Software Stack concept described in Slide II (http://dsc.soic.indiana.edu/publications/HPC-ABDSDescribed_final.pdf, http://hpc-abds.org/kaleidoscope/)
• This arranges key HPC and ABDS software together in 21 layers showing where HPC and ABDS overlap. It, for example, introduces a communication layer to allow ABDS runtimes like Hadoop, Storm, Spark and Flink to use the richest high-performance capabilities shared with MPI. Generally it proposes how to use HPC and ABDS software together.
– The layered architecture offers some protection against rapid ABDS technology change (for ABDS independent of HPC)
Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies
Cross-Cutting Functions:
1) Message and Data Protocols: Avro, Thrift, Protobuf
2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups
3) Security & Privacy: InCommon, Eduroam, OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML, OAuth
4) Monitoring: Ambari, Ganglia, Nagios, Inca
17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad,
Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA),
Jitterbit, Talend, Pentaho, Apatar, Docker Compose
16) Application and Analytics: Mahout , MLlib , MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, Azure Machine
Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL(Intel), Caffe, Torch, Theano, DL4j, H2O, IBM
Watson, Oracle PGX, GraphLab, GraphX, IBM System G, GraphBuilder(Intel), TinkerPop, Google Fusion Tables, CINET, NWB, Elasticsearch, Kibana
Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree, DC.js
15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud
Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero,
OODT, Agave, Atmosphere
15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq,
Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
14B) Streams: Storm, S4, Samza, Granules, Google MillWheel, Amazon Kinesis, LinkedIn Databus, Facebook Puma/Ptail/Scribe/ODS, Azure Stream
Analytics, Floe
14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, MR-MPI, Stratosphere (Apache Flink), Reef, Hama,
Giraph, Pregel, Pegasus, Ligra, GraphChi, Galois, Medusa-GPU, MapGraph, Totem
13) Inter process communication Collectives, point-to-point, publish-subscribe: MPI, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ,
NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Marionette Collective, Public Cloud: Amazon SNS, Lambda, Google Pub Sub,
Azure Queues, Event Hubs
12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan
12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
12) Extraction Tools: UIMA, Tika
11C) SQL(NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal
Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB
11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB,
Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J,
Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame
Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm,
Torque, Globus Tools, Pilot Jobs
8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS
Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
6) DevOps: Docker (Machine, Swarm), Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat,
Sahara, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes,
Buildstep, Gitreceive, OpenTOSCA, Winery, CloudML, Blueprints, Terraform, DevOpSlang, Any2Api
5) IaaS Management from HPC to hypervisors: Xen, KVM, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula,
Eucalyptus, Nimbus, CloudStack, CoreOS, rkt, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds
Networking: Google Cloud DNS, Amazon Route 53
21 layers, over 350 software packages (May 15, 2015). Green implies HPC integration.
Big Data and (Exascale) Simulation Convergence II
Things to do for Big Data and (Exascale)
Simulation Convergence III
• Converge Applications: Separate data and model to classify
Applications and Benchmarks across Big Data and Big
Simulations to give Convergence Diamonds with many
facets
– Indicated how to extend Big Data Ogres to Big Simulations
by looking separately at model and data in Ogres
– Diamonds will have five views or collections of facets:
Problem Architecture; Execution; Data Source and Style;
Big Data Processing; Big Simulation Processing
– Facets cover data, model or their combination – the
problem or application
– Note Simulation Processing View has similarities to old
parallel computing benchmarks
Things to do for Big Data and (Exascale)
Simulation Convergence IV
• Convergence Benchmarks: we will use benchmarks that cover the facets of the
convergence diamonds i.e. cover big data and simulations;
– As we separate data and model, compute intensive simulation benchmarks (e.g.
solve partial differential equation) will be linked with data analytics (the model in
big data)
– IU focus SPIDAL (Scalable Parallel Interoperable Data Analytics Library) with
high performance clustering, dimension reduction, graphs, image processing as
well as MLlib will be linked to core PDE solvers to explore the communication
layer of parallel middleware
– Maybe integrating data and simulation is an interesting idea in benchmark sets
• Convergence Programming Model
– Note parameter servers used in machine learning will be mimicked by collective operators invoked on distributed parameter (model) storage (a sketch follows this list)
– E.g. Harp as Hadoop HPC Plug-in
– There should be interest in using Big Data software systems to support exascale
simulations
– Streaming solutions from IoT to analysis of astronomy and LHC data will drive
high performance versions of Apache streaming systems
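As a hedged illustration of the parameter-server point above (an assumed mpi4py example in the spirit of Harp, not Harp's actual API): each worker computes a local gradient on its data shard, and an allreduce plays the parameter server's role of aggregating the update.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

def sgd_step(params, X, y, lr=0.01):
    # Local gradient of a least-squares loss on this worker's shard.
    local_grad = 2 * X.T @ (X @ params - y) / len(y)
    # The collective stands in for the parameter-server push/pull.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    return params - lr * global_grad / comm.Get_size()

params = np.zeros(4)
X, y = np.random.rand(32, 4), np.random.rand(32)  # this worker's shard
for _ in range(100):
    params = sgd_step(params, X, y)
```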
Things to do for Big Data and (Exascale)
Simulation Convergence V
• Converge Language: Make Java run as fast as C++ (Java
Grande) for computing and communication – see following
slide
– Surprising that there is so much Big Data work in industry, yet basic high-performance Java methodology and tools are missing
– Needs some work, as there is no agreed OpenMP for Java parallel threads
– OpenMPI supports Java but needs enhancements to get the best performance on needed collectives (for both C++ and Java)
– Convergence Language Grande should support Python,
Java (Scala), C/C++ (Fortran)
Java MPI performs better than Threads I
128 24-core Haswell nodes
Default MPI is much worse than threads
Optimized MPI using shared-memory, node-based messaging is much better than threads
Java MPI performs better than Threads II
128 24-core Haswell nodes
[Chart: speedup on the 200K dataset.]
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Visualising and forecasting stocks using Dash
Visualising and forecasting stocks using DashVisualising and forecasting stocks using Dash
Visualising and forecasting stocks using Dash
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 

Classifying Simulation and Data Intensive Applications and the HPC-Big Data Convergence

  • 1. Classifying Simulation and Data Intensive Applications and the HPC-Big Data Convergence WBDB 2015 Seventh Workshop on Big Data Benchmarking 1 Geoffrey Fox, Shantenu Jha, Judy Qiu December 14, 2015 gcf@indiana.edu http://www.dsc.soic.indiana.edu/, http://spidal.org/ http://hpc-abds.org/kaleidoscope/ Department of Intelligent Systems Engineering School of Informatics and Computing, Digital Science Center Indiana University Bloomington 12/14/2015
  • 2. NIST Big Data Initiative Led by Chaitin Baru, Bob Marcus, Wo Chang And Big Data Application Analysis 12/14/2015 2
  • 3. NBD-PWG (NIST Big Data Public Working Group) Subgroups & Co-Chairs • There were 5 Subgroups – Note mainly industry • Requirements and Use Cases Sub Group – Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco • Definitions and Taxonomies SG – Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD • Reference Architecture Sub Group – Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented Intelligence • Security and Privacy Sub Group – Arnab Roy, CSA/Fujitsu Nancy Landreville, U. MD Akhil Manchanda, GE • Technology Roadmap Sub Group – Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data Tactics • See http://bigdatawg.nist.gov/usecases.php • and http://bigdatawg.nist.gov/V1_output_docs.php 312/14/2015
  • 4. Use Case Template • 26 fields completed for 51 apps • Government Operation: 4 • Commercial: 8 • Defense: 3 • Healthcare and Life Sciences: 10 • Deep Learning and Social Media: 6 • The Ecosystem for Research: 4 • Astronomy and Physics: 5 • Earth, Environmental and Polar Science: 10 • Energy: 1 • Now an online form 412/14/2015
  • 7. 51 Detailed Use Cases: Contributed July-September 2013 Covers goals, data features such as 3 V’s, software, hardware • http://bigdatawg.nist.gov/usecases.php • https://bigdatacoursespring2014.appspot.com/course (Section 5) • Government Operation(4): National Archives and Records Administration, Census Bureau • Commercial(8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search, Digital Materials, Cargo shipping (as in UPS) • Defense(3): Sensors, Image surveillance, Situation Assessment • Healthcare and Life Sciences(10): Medical records, Graph and Probabilistic analysis, Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity • Deep Learning and Social Media(6): Driving Car, Geolocate images/cameras, Twitter, Crowd Sourcing, Network Science, NIST benchmark datasets • The Ecosystem for Research(4): Metadata, Collaboration, Language Translation, Light source experiments • Astronomy and Physics(5): Sky Surveys including comparison to simulation, Large Hadron Collider at CERN, Belle Accelerator II in Japan • Earth, Environmental and Polar Science(10): Radar Scattering in Atmosphere, Earthquake, Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to watersheds), AmeriFlux and FLUXNET gas sensors • Energy(1): Smart grid 7 26 Features for each use case Biased to science 12/14/2015
  • 9. 51 Use Cases: What is Parallelism Over? • People: either the users (but see below) or subjects of application and often both • Decision makers like researchers or doctors (users of application) • Items such as Images, EMR, Sequences below; observations or contents of online store – Images or “Electronic Information nuggets” – EMR: Electronic Medical Records (often similar to people parallelism) – Protein or Gene Sequences; – Material properties, Manufactured Object specifications, etc., in custom dataset – Modelled entities like vehicles and people • Sensors – Internet of Things • Events such as detected anomalies in telescope or credit card data or atmosphere • (Complex) Nodes in RDF Graph • Simple nodes as in a learning network • Tweets, Blogs, Documents, Web Pages, etc. – And characters/words in them • Files or data to be backed up, moved or assigned metadata • Particles/cells/mesh points as in parallel simulations 9 12/14/2015
  • 10. Features of 51 Use Cases I • PP (26) “All” Pleasingly Parallel or Map Only • MR (18) Classic MapReduce MR (add MRStat below for full count) • MRStat (7) Simple version of MR where key computations are simple reduction as found in statistical averages such as histograms and averages • MRIter (23) Iterative MapReduce or MPI (Spark, Twister) • Graph (9) Complex graph data structure needed in analysis • Fusion (11) Integrate diverse data to aid discovery/decision making; could involve sophisticated algorithms or could just be a portal • Streaming (41) Some data comes in incrementally and is processed this way • Classify (30) Classification: divide data into categories • S/Q (12) Index, Search and Query 1012/14/2015
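To make the MRStat facet concrete, here is a minimal sketch (plain Python, no Hadoop; the records and function names are illustrative, not from the talk) of a map stage that processes items independently, followed by a reduce stage that is only a simple statistical reduction, a histogram and a mean:

```python
from collections import Counter

def map_stage(records):
    # Map: extract one numeric feature per record, fully independently
    # (pleasingly parallel; each record could run on a different worker).
    return [len(r) for r in records]

def reduce_stat(values, bin_width=3):
    # MRStat reduce: the reduction is just simple statistics.
    mean = sum(values) / len(values)
    histogram = Counter(v // bin_width for v in values)
    return mean, histogram

records = ["GATTACA", "GGCAT", "AACCGGTT", "TTAG", "CCGGA"]
mean, histogram = reduce_stat(map_stage(records))
print(mean, dict(histogram))
```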
  • 11. Features of 51 Use Cases II • CF (4) Collaborative Filtering for recommender engines • LML (36) Local Machine Learning (independent for each parallel entity) – application could have GML as well • GML (23) Global Machine Learning: Deep Learning, Clustering, LDA, PLSI, MDS, – Large Scale Optimizations as in Variational Bayes, MCMC, Lifted Belief Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can call this EGO or Exascale Global Optimization with scalable parallel algorithms • Workflow (51) Universal • GIS (16) Geotagged data, often displayed in ESRI, Microsoft Virtual Earth, Google Earth, GeoServer etc. • HPC (5) Classic large-scale simulation of cosmos, materials, etc. generating (visualization) data • Agent (2) Simulations of models of data-defined macroscopic entities represented as agents 1112/14/2015
  • 12. Local and Global Machine Learning • Many applications use LML or Local Machine Learning where machine learning (often from R) is run separately on every data item such as on every image • But others are GML Global Machine Learning where machine learning is a single algorithm run over all data items (over all nodes in the computer) – maximum likelihood or χ² with a sum over the N data items – documents, sequences, items to be sold, images etc. and often links (point-pairs). – Graph analytics is typically GML • Covering clustering/community detection, mixture models, topic determination, Multidimensional scaling, (Deep) Learning Networks • PageRank is “just” parallel linear algebra • Note many Mahout algorithms are sequential – partly as MapReduce is limited; partly because parallelism unclear – MLLib (Spark based) is better • SVM and Hidden Markov Models do not use large-scale parallelization in practice? 1212/14/2015
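A small sketch of the LML/GML distinction (NumPy; the data and threshold are made up): the local step runs independently per item, while the global step fits one model, here a two-center 1-D k-means, whose every iteration needs a reduction over all items:

```python
import numpy as np

images = [np.random.rand(8, 8) for _ in range(100)]   # stand-in data items

# LML: the same analysis applied to every item independently;
# no communication between items is ever needed.
local_flags = [img.mean() > 0.5 for img in images]

# GML: one global model over all items (1-D k-means with 2 centers);
# each iteration sums over the whole dataset, i.e. a global reduction.
x = np.array([img.mean() for img in images])
centers = np.array([x.min(), x.max()])
for _ in range(10):
    assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([x[assign == k].mean() if (assign == k).any()
                        else centers[k] for k in (0, 1)])
print(sum(local_flags), centers)
```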
  • 13. 13 Image-based Use Cases • 13-15 Military Sensor Data Analysis/ Intelligence PP, LML, GIS, MR • 7:Pathology Imaging/ Digital Pathology: PP, LML, MR for search becoming terabyte 3D images, Global Classification • 18&35: Computational Bioimaging (Light Sources): PP, LML Also materials • 26: Large-scale Deep Learning: GML Stanford ran 10 million images and 11 billion parameters on a 64 GPU HPC; vision (drive car), speech, and Natural Language Processing • 27: Organizing large-scale, unstructured collections of photos: GML Fit position and camera direction to assemble 3D photo ensemble • 36: Catalina Real-Time Transient Synoptic Sky Survey (CRTS): PP, LML followed by classification of events (GML) • 43: Radar Data Analysis for CReSIS Remote Sensing of Ice Sheets: PP, LML to identify glacier beds; GML for full ice-sheet • 44: UAVSAR Data Processing, Data Product Delivery, and Data Services: PP to find slippage from radar images • 45, 46: Analysis of Simulation visualizations: PP LML ?GML find paths, classify orbits, classify patterns that signal earthquakes, instabilities, climate, turbulence 138/5/2015
  • 14. Internet of Things and Streaming Apps • It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco) billion devices on the Internet by 2020. • The cloud is the natural controller of, and resource provider for, the Internet of Things. • Smart phones/watches, Wearable devices (Smart People), “Intelligent River”, “Smart Homes and Grid” and “Ubiquitous Cities”, Robotics. • The majority of use cases are streaming – experimental science gathers data in a stream – sometimes batched as in a field trip. Below is a sample: • 10: Cargo Shipping Tracking as in UPS, Fedex PP GIS LML • 13: Large Scale Geospatial Analysis and Visualization PP GIS LML • 28: Truthy: Information diffusion research from Twitter Data PP MR for Search, GML for community determination • 39: Particle Physics: Analysis of LHC Large Hadron Collider Data: Discovery of Higgs particle PP for event Processing, Global statistics • 50: DOE-BER AmeriFlux and FLUXNET Networks PP GIS LML • 51: Consumption forecasting in Smart Grids PP GIS LML 148/5/2015
  • 15. Big Data - Big Simulation (Exascale) Convergence • Let’s distinguish Data and Model (e.g. machine learning analytics) in Big Data problems • Then almost always Data is large but Model varies – E.g. LDA with many topics or deep learning has a large model – Clustering or Dimension reduction can be quite small • Simulations can also be considered as Data and Model – Model is solving particle dynamics or partial differential equations – Data could be small when just boundary conditions, or – Data large with data assimilation (weather forecasting) or when data visualizations are produced by the simulation • Data often static between iterations (unless streaming), model varies between iterations 1512/14/2015
  • 16. Big Data Patterns – the Ogres Classification 12/14/2015 16
  • 17. Classifying Big Data Applications • “Benchmarks”, “kernels”, “algorithms” and “mini-apps” can serve multiple purposes • Motivate hardware and software features – e.g. collaborative filtering algorithm parallelizes well with MapReduce and suggests using Hadoop on a cloud – e.g. deep learning on images is dominated by matrix operations; needs CUDA & MPI and suggests an HPC cluster • Benchmark sets are designed to cover key features of systems in terms of features and sizes of “important” applications • Take 51 use cases → derive specific features; each use case has multiple features • Generalize and systematize as Ogres with features termed “facets” • 50 Facets divided into 4 sets or views where each view has “similar” facets – Allows one to study coverage of benchmark sets • Discuss Data and Model together as Ogres are built around problems which combine them, but we can get insight by separating them, and this allows better understanding of Big Data - Big Simulation “convergence” 1712/14/2015
  • 18. 7 Computational Giants of NRC Massive Data Analysis Report 1) G1: Basic Statistics e.g. MRStat 2) G2: Generalized N-Body Problems 3) G3: Graph-Theoretic Computations 4) G4: Linear Algebraic Computations 5) G5: Optimizations e.g. Linear Programming 6) G6: Integration e.g. LDA and other GML 7) G7: Alignment Problems e.g. BLAST 1812/14/2015 http://www.nap.edu/catalog.php?record_id=18374 Big Data Models?
  • 19. HPC Benchmark Classics • Linpack or HPL: Parallel LU factorization for solution of linear equations • NPB version 1: Mainly classic HPC solver kernels – MG: Multigrid – CG: Conjugate Gradient – FT: Fast Fourier Transform – IS: Integer sort – EP: Embarrassingly Parallel – BT: Block Tridiagonal – SP: Scalar Pentadiagonal – LU: Lower-Upper symmetric Gauss Seidel 1912/14/2015 Simulation Models
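Since the CG kernel recurs throughout this classification (NPB, HPCG, and the clustering/MDS solvers mentioned later), here is a minimal conjugate gradient sketch in NumPy, assuming a symmetric positive definite matrix; it is dominated by exactly the matvec, dot-product and vector-update operations these benchmarks measure:

```python
import numpy as np

def conjugate_gradient(A, b, max_iters=100, tol=1e-10):
    # Solve A x = b for symmetric positive definite A.
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iters):
        Ap = A @ p                # the dominant matvec kernel
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

M = np.random.rand(50, 50)
A = M @ M.T + 50 * np.eye(50)     # construct an SPD test matrix
b = np.random.rand(50)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))
```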
  • 20. 13 Berkeley Dwarfs 1) Dense Linear Algebra 2) Sparse Linear Algebra 3) Spectral Methods 4) N-Body Methods 5) Structured Grids 6) Unstructured Grids 7) MapReduce 8) Combinational Logic 9) Graph Traversal 10) Dynamic Programming 11) Backtrack and Branch-and-Bound 12) Graphical Models 13) Finite State Machines 2012/14/2015 First 6 of these correspond to Colella’s original (classic simulations); Monte Carlo dropped. N-body methods are a subset of Particle in Colella. Note it is a little inconsistent in that MapReduce is a programming model and spectral method is a numerical method. Need multiple facets! Largely Models for Data or Simulation
  • 21. 12/14/2015 21 [Slide figure: “4 Ogre Views and 50 Facets”, showing four views each with numbered facets – Problem Architecture View: Pleasingly Parallel, Classic MapReduce, Map-Collective, Map Point-to-Point, Map Streaming, Shared Memory, Single Program Multiple Data, Bulk Synchronous Parallel, Fusion, Dataflow, Agents, Workflow; Data Source and Style View: SQL/NoSQL/NewSQL, Enterprise Data Model, Files/Objects, HDFS/Lustre/GPFS, Archived/Batched/Streaming, Shared/Dedicated/Transient/Permanent, Metadata/Provenance, Internet of Things, HPC Simulations, Geospatial Information System; Execution View: Performance Metrics, Flops per Byte / Memory I/O, Execution Environment / Core libraries, Volume, Velocity, Variety, Veracity, Communication Structure, Data Abstraction, Metric or Non-Metric, O(N²) or O(N), Regular or Irregular, Dynamic or Static, Iterative or Simple; Processing View: Micro-benchmarks, Local Analytics, Global Analytics, Base Statistics, Search/Query/Index, Recommendations, Classification, Learning, Optimization Methodology, Streaming, Alignment, Linear Algebra Kernels, Graph Algorithms, Visualization]
  • 22. Dwarfs and Ogres give Convergence Diamonds • Macropatterns or Problem Architecture View: Unchanged • Execution View: Significant changes to separate Data and Model and add characteristics of Simulation models • Data Source and Style View: Unchanged – present but less important for Simulations compared to big data • Processing View: becomes Big Data Processing View • Add Big Simulation Processing View: includes specifics of key simulation kernels – includes NAS Parallel Benchmarks and Berkeley Dwarfs 2212/14/2015
  • 23. Facets of the Ogres Problem Architecture Meta or Macro Aspects of Ogres Valid for Big Data or Big Simulations as describes Problem which is Model-Data combination 12/14/2015 23
  • 24. Problem Architecture View of Ogres (Meta or MacroPatterns) i. Pleasingly Parallel – as in BLAST, Protein docking, some (bio-)imagery including Local Analytics or Machine Learning – ML or filtering pleasingly parallel, as in bio-imagery, radar images (pleasingly parallel but sophisticated local analytics) ii. Classic MapReduce: Search, Index and Query and Classification algorithms like collaborative filtering (G1 for MRStat in Features, G7) iii. Map-Collective: Iterative maps + communication dominated by “collective” operations as in reduction, broadcast, gather, scatter. Common datamining pattern iv. Map-Point to Point: Iterative maps + communication dominated by many small point to point messages as in graph algorithms v. Map-Streaming: Describes streaming, steering and assimilation problems vi. Shared Memory: Some problems are asynchronous and are easier to parallelize on shared rather than distributed memory – see some graph algorithms vii. SPMD: Single Program Multiple Data, common parallel programming feature viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases ix. Fusion: Knowledge discovery often involves fusion of multiple methods. x. Dataflow: Important application features often occurring in composite Ogres xi. Use Agents: as in epidemiology (swarm approaches) This is Model xii. Workflow: All applications often involve orchestration (workflow) of multiple components 2412/14/2015
  • 25. Relation of Problem and Machine Architecture • Problem is Model plus Data • In my old papers (especially book Parallel Computing Works!), I discussed computing as multiple complex systems mapped into each other: Problem → Numerical formulation → Software → Hardware • Each of these 4 systems has an architecture that can be described in similar language • One gets an easy programming model if the architecture of the problem matches that of the software • One gets good performance if the architecture of the hardware matches that of the software and problem • So “MapReduce” can be used as architecture of software (programming model) or “Numerical formulation of problem” 2512/14/2015
  • 26. 6 Forms of MapReduce cover “all” circumstances Describes - Problem (Model reflecting data) - Machine - Software Architecture 2612/14/2015
  • 27. Data Analysis Problem Architectures • 1) Pleasingly Parallel PP or “map-only” in MapReduce – BLAST Analysis; Local Machine Learning • 2A) Classic MapReduce MR, Map followed by reduction – High Energy Physics (HEP) Histograms; Web search; Recommender Engines • 2B) Simple version of classic MapReduce MRStat – Final reduction is just simple statistics • 3) Iterative MapReduce MRIter – Expectation maximization, Clustering, Linear Algebra, PageRank • 4A) Map Point to Point Communication – Classic MPI; PDE Solvers and Particle Dynamics; Graph processing Graph • 4B) GPU (Accelerator) enhanced 4A) – especially for deep learning • 5) Map + Streaming + some sort of Communication – Images from Synchrotron sources; Telescopes; Internet of Things IoT – Apache Storm is (Map + Dataflow) + Streaming – Data assimilation is (Map + Point to Point Communication) + Streaming • 6) Shared memory allowing parallel threads which are tricky to program but lower latency – Difficult to parallelize asynchronous parallel Graph Algorithms 278/5/2015
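As an illustration of architecture 3) Iterative MapReduce (MRIter), here is a toy PageRank in plain Python (the graph is made up; frameworks like Spark or Twister would distribute the map and reduce phases across nodes):

```python
# MRIter sketch: each iteration is a map (emit rank shares along out-links)
# followed by a reduce (sum shares per node), repeated until convergence.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
rank = {n: 1.0 / len(links) for n in links}
damping = 0.85

for _ in range(20):
    contribs = {}                              # map phase output
    for node, outs in links.items():
        for target in outs:
            contribs[target] = contribs.get(target, 0.0) + rank[node] / len(outs)
    rank = {n: (1 - damping) / len(links)      # reduce phase, then iterate
            + damping * contribs.get(n, 0.0) for n in links}
print(rank)
```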
  • 28. Ogre Facets Execution Features View Many similar Features for Big Data and Simulations Needs split into 1) Model 2) Data 3) Problem (the combination); add a) difference/differential operator in model and b) multi-scale/hierarchical model 8/5/2015 28
  • 29. One View of Ogres has Facets that are micropatterns or Execution Features i. Performance Metrics; property found by benchmarking Ogre ii. Flops per byte; memory or I/O iii. Execution Environment; Core libraries needed: matrix-matrix/vector algebra, conjugate gradient, reduction, broadcast; Cloud, HPC etc. iv. Volume: property of an Ogre instance: a) Data Volume and b) Model Size v. Velocity: qualitative property of Ogre with value associated with instance. Mainly Data vi. Variety: important property especially of composite Ogres; Data and Model separately vii. Veracity: important property of “mini-apps” but not kernels; Data and Model separately viii. Model Communication Structure; Interconnect requirements; Is communication BSP, Asynchronous, Pub-Sub, Collective, Point to Point? ix. Is Data and/or Model (graph) static or dynamic? x. Much Data and/or Models consist of a set of interconnected entities; is this regular as a set of pixels or is it a complicated irregular graph? xi. Are Models Iterative or not? xii. Data Abstraction: key-value, pixel, graph (G3), vector, bags of words or items; Model can have same or different abstractions e.g. mesh points, finite element, Convolutional Network xiii. Are data points in metric or non-metric spaces? Data drives Model? xiv. Is Model algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2) 298/5/2015
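Facet xiv (O(N²) versus O(N) models) in a few lines of NumPy; the point sets are illustrative. All-pairs computations like generalized N-body or all-vs-all sequence comparison scale as N², while per-item passes scale as N:

```python
import numpy as np

pts = np.random.rand(200, 3)

# O(N^2) model: all-pairs distances (G2, generalized N-body style).
diff = pts[:, None, :] - pts[None, :, :]
pairwise = np.sqrt((diff ** 2).sum(axis=-1))      # 200 x 200 matrix

# O(N) model: one pass per point, e.g. distance to a single centroid.
centroid = pts.mean(axis=0)
per_point = np.linalg.norm(pts - centroid, axis=1)
print(pairwise.shape, per_point.shape)
```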
  • 30. Comparison of Data Analytics with Simulation I • Simulations produce big data as visualization of results – they are a data source – Or consume often smallish data to define a simulation problem – HPC simulation in weather data assimilation is data + model • Pleasingly parallel often important in both • Both are often SPMD and BSP • Non-iterative MapReduce is a major big data paradigm – not a common simulation paradigm except where “Reduce” summarizes pleasingly parallel execution as in some Monte Carlos • Big Data often has large collective communication – Classic simulation has a lot of smallish point-to-point messages • Simulations characterized often by difference or differential operators • Simulation dominantly sparse (nearest neighbor) data structures – Some important data analytics involve full matrix algorithms but – “Bag of words (users, rankings, images..)” algorithms are sparse, as is PageRank
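The contrast between big data's large collectives and simulation's many small point-to-point messages can be sketched with mpi4py (run under e.g. `mpiexec -n 4 python demo.py`; the payloads are stand-ins):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
partial = float(rank)                       # stand-in for a partial result

# Big data / GML style: one large collective combining every node's model.
total = comm.allreduce(partial, op=MPI.SUM)

# Classic simulation style: small point-to-point messages to neighbors
# (a 1-D halo exchange around a ring of ranks).
left, right = (rank - 1) % size, (rank + 1) % size
halo = comm.sendrecv(partial, dest=right, source=left)
print(rank, total, halo)
```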
  • 31. “Force Diagrams” for macromolecules and Facebook
  • 32. Comparison of Data Analytics with Simulation II • There are similarities between some graph problems and particle simulations with a strange cutoff force. – Both Map-Communication • Note many big data problems are “long range force” (as in gravitational simulations) as all points are linked. – Easiest to parallelize. Often full matrix algorithms – e.g. in DNA sequence studies, distance (i, j) defined by BLAST, Smith-Waterman, etc., between all sequences i, j. – Opportunity for “fast multipole” ideas in big data. See NRC report • In image-based deep learning, neural network weights are block sparse (corresponding to links to pixel blocks) but can be formulated as full matrix operations on GPUs and MPI in blocks. • In HPC benchmarking, Linpack is being challenged by a new sparse conjugate gradient benchmark HPCG, while I am diligently using non-sparse conjugate gradient solvers in clustering and Multi-dimensional scaling.
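The sparse-versus-full-matrix point in one sketch (NumPy/SciPy; sizes are arbitrary): a nearest-neighbor stencil operator stays sparse, while a dense operator of the same shape does O(n²) work but maps well onto BLAS/GPU kernels:

```python
import numpy as np
from scipy import sparse

n = 1000
# Simulation-style operator: sparse nearest-neighbor (tridiagonal) stencil.
A_sparse = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

# Analytics-style operator: dense full matrix, as when block-sparse deep
# learning weights are treated as full matrices for GPU efficiency.
A_dense = np.random.rand(n, n)

x = np.random.rand(n)
y_sparse = A_sparse @ x      # O(nnz) work, memory-bandwidth bound
y_dense = A_dense @ x        # O(n^2) work, BLAS/GPU friendly
print(y_sparse[:3], y_dense[:3])
```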
  • 33. Facets of the Ogres Big Data Processing View Largely Model Properties Could produce a separate set of facets for Simulation Processing View 8/5/2015 33
  • 34. Facets in Processing (runtime) View of Ogres I i. Micro-benchmarks: ogres that exercise simple features of hardware such as communication, disk I/O, CPU, memory performance ii. Local Analytics executed on a single core or perhaps node iii. Global Analytics requiring iterative programming models (G5, G6) across multiple nodes of a parallel system iv. Optimization Methodology: overlapping categories i. Nonlinear Optimization (G6) ii. Machine Learning iii. Maximum Likelihood or χ² minimizations iv. Expectation Maximization (often Steepest descent) v. Combinatorial Optimization vi. Linear/Quadratic Programming (G5) vii. Dynamic Programming v. Visualization is a key application capability with algorithms like MDS useful, but is itself part of a “mini-app” or composite Ogre vi. Alignment (G7) as in BLAST compares samples with repository 348/5/2015
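A minimal instance of the Optimization Methodology facet, assuming unit measurement errors and a made-up linear dataset: steepest descent on a χ² objective for a straight-line fit:

```python
import numpy as np

# Fit y = a*x + b by steepest descent on chi^2 = sum((y - a*x - b)^2).
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.05 * np.random.randn(50)

a, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    resid = y - (a * x + b)
    grad_a = -2.0 * (resid * x).sum()    # d(chi^2)/da
    grad_b = -2.0 * resid.sum()          # d(chi^2)/db
    a -= lr * grad_a / len(x)
    b -= lr * grad_b / len(x)
print(a, b)    # should approach 2.0 and 1.0
```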
  • 35. Facets in Processing (run time) View of Ogres II vii. Streaming divided into 5 categories depending on event size and synchronization and integration – Set of independent events where precise time sequencing unimportant. – Time series of connected small events where time ordering important. – Set of independent large events where each event needs parallel processing with time sequencing not critical – Set of connected large events where each event needs parallel processing with time sequencing critical. – Stream of connected small or large events to be integrated in a complex way. viii. Basic Statistics (G1): MRStat in NIST problem features ix. Search/Query/Index: Classic database which is well studied (Baru, Rabl tutorial) x. Recommender Engine: core to many e-commerce, media businesses; collaborative filtering key technology xi. Classification: assigning items to categories based on many methods – MapReduce good in Alignment, Basic statistics, S/Q/I, Recommender, Classification xii. Deep Learning of growing importance due to success in speech recognition etc. xiii. Problem set up as a graph (G3) as opposed to vector, grid, bag of words etc. xiv. Using Linear Algebra Kernels: much machine learning uses linear algebra kernels 358/5/2015
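Facet x (Recommender Engine) in miniature, with a made-up user-item rating matrix: item-item cosine similarity plus a weighted average predicts a missing rating, the core of collaborative filtering:

```python
import numpy as np

R = np.array([[5, 3, 0, 1],      # rows: users, cols: items, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / (np.outer(norms, norms) + 1e-9)   # item-item cosine

user, item = 1, 2                 # predict user 1's rating of item 2
rated = R[user] > 0
pred = (sim[item, rated] @ R[user, rated]) / (sim[item, rated].sum() + 1e-9)
print(round(pred, 2))
```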
  • 36. Facets of the Ogres Data Source and Style Aspects add streaming from Processing view here Present but often less important for Simulations (that use and produce data) 12/14/2015 36
  • 37. Data Source and Style View of Ogres I i. SQL, NewSQL or NoSQL: NoSQL includes Document, Column, Key-value, Graph, Triple store; NewSQL is SQL redone to exploit NoSQL performance ii. Other Enterprise data systems: 10 examples from NIST integrate SQL/NoSQL iii. Set of Files or Objects: as managed in iRODS and extremely common in scientific research iv. File systems, Object, Blob and Data-parallel (HDFS) raw storage: Separated from computing or colocated? HDFS vs. Lustre vs. OpenStack Swift vs. GPFS v. Archive/Batched/Streaming: Streaming is incremental update of datasets with new algorithms to achieve real-time response (G7); Before data gets to compute system, there is often an initial data gathering phase which is characterized by a block size and timing. Block size varies from month (Remote Sensing, Seismic) to day (genomic) to seconds or lower (Real time control, streaming) 3712/14/2015
  • 38. Data Source and Style View of Ogres II vi. Shared/Dedicated/Transient/Permanent: qualitative property of data; Other characteristics are needed for permanent auxiliary/comparison datasets and these could be interdisciplinary, implying nontrivial data movement/replication vii. Metadata/Provenance: Clear qualitative property but not for kernels as important aspect of data collection process viii. Internet of Things: 24 to 50 Billion devices on Internet by 2020 ix. HPC simulations: generate major (visualization) output that often needs to be mined x. Using GIS: Geographical Information Systems provide attractive access to geospatial data Note: 10 use cases from Bob Marcus (who led the NIST effort) follow 3812/14/2015
  • 39. 2. Perform real time analytics on data source streams and notify users when specified events occur 3912/14/2015 [Slide figure: streaming architecture built from Storm, Kafka, HBase and Zookeeper – streaming data is fetched and passed through a filter identifying events; users specify the filter; selected events are posted to a repository and archived, and identified events are surfaced to users]
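A plain-Python sketch of the filter/notify loop in this diagram (the event source, threshold and repository are stand-ins for what Kafka, a user-specified filter and HBase would provide in the real stack):

```python
import random

def stream(n=1000):
    # Stand-in for a Kafka topic or Storm spout delivering events.
    for _ in range(n):
        yield {"sensor": random.randint(1, 3), "value": random.gauss(0, 1)}

def matches(event, threshold=2.5):
    # The user-specified filter identifying events of interest.
    return abs(event["value"]) > threshold

repository = []                        # stand-in for HBase / the archive
for event in stream():
    if matches(event):
        repository.append(event)       # post selected event
        print("notify:", event)        # notify subscribed users
print(len(repository), "events identified")
```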
  • 40. 5. Perform interactive analytics on data in analytics-optimized database 4012/14/2015 [Slide figure: Hadoop, Spark, Giraph, Pig and analytics libraries (Mahout, R) running over data storage in HDFS/HBase, fed by data arriving as streaming and batch]
  • 41. 5A. Perform interactive analytics on observational scientific data 4112/14/2015 [Slide figure: scientific data is recorded in the “field”, accumulated with initial local computing, then transported (batch or direct transfer) to a primary analysis system running Grid or Many Task software, Hadoop, Spark, Giraph, Pig, with data storage in HDFS, HBase or file collections and analysis code such as Mahout and R; streaming Twitter data feeds social networking science; NIST examples include LHC, Remote Sensing, Astronomy and Bioinformatics]
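The interactive-analytics pattern of slides 40 and 41 might look like this with PySpark (a sketch assuming a Spark installation; the HDFS path and the string filter are illustrative):

```python
from pyspark import SparkContext

sc = SparkContext(appName="interactive-analytics")

# Data landed in HDFS by the streaming/batch ingest shown in the diagram.
events = sc.textFile("hdfs:///data/events/*.csv")

# An interactive query: filter and count, executed in parallel over HDFS.
error_count = events.filter(lambda line: "ERROR" in line).count()
print(error_count)
sc.stop()
```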
  • 43. Benchmarks/Mini-apps spanning Facets • Look at NSF SPIDAL Project, NIST 51 use cases, Baru-Rabl review • Catalog facets of benchmarks and choose entries to cover “all facets” • Micro Benchmarks: SPEC, EnhancedDFSIO (HDFS), Terasort, Wordcount, Grep, MPI, Basic Pub-Sub …. • SQL and NoSQL Data systems, Search, Recommenders: TPC (TPC-C to TPCx-HS for Hadoop), BigBench, Yahoo Cloud Serving, Berkeley Big Data, HiBench, BigDataBench, Cloudsuite, Linkbench – includes MapReduce cases Search, Bayes, Random Forests, Collaborative Filtering • Spatial Query: select from image or earth data • Alignment: Biology as in BLAST • Streaming: Online classifiers, Cluster tweets, Robotics, Industrial Internet of Things, Astronomy; BGBenchmark. • Pleasingly parallel (Local Analytics): as in initial steps of LHC, Pathology, Bioimaging (differ in type of data analysis) • Global Analytics: Outlier, Clustering, LDA, SVM, Deep Learning, MDS, PageRank, Levenberg-Marquardt, Graph 500 entries • Workflow and Composite (analytics on xSQL) linking above 12/14/2015 43
  • 44. Summary Classification of Big Data Applications and Big Data Exascale convergence 4412/14/2015
  • 45. Big Data and (Exascale) Simulation Convergence I • Our approach to Convergence is built around two ideas that avoid addressing the hardware directly, as with modern DevOps technology it isn’t hard to retarget applications between different hardware systems. • Rather we approach Convergence through applications and software. This talk has described the Convergence Diamonds that unify Big Simulation and Big Data applications and so allow one to more easily identify good approaches to implement Big Data and Exascale applications in a uniform fashion. This is summarized on Slides III and IV • The software approach builds on the HPC-ABDS High Performance Computing enhanced Apache Big Data Software Stack concept described in Slide II (http://dsc.soic.indiana.edu/publications/HPC-ABDSDescribed_final.pdf, http://hpc-abds.org/kaleidoscope/ ) • This arranges key HPC and ABDS software together in 21 layers showing where HPC and ABDS overlap. It, for example, introduces a communication layer to allow ABDS runtimes like Hadoop, Storm, Spark and Flink to use the richest high performance capabilities shared with MPI. Generally it proposes how to use HPC and ABDS software together. – Layered Architecture offers some protection against rapid ABDS technology change (for ABDS independent of HPC) 4512/14/2015
  • 46. 46 Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies Cross- Cutting Functions 1) Message and Data Protocols: Avro, Thrift, Protobuf 2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups 3) Security & Privacy: InCommon, Eduroam OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML OAuth 4) Monitoring: Ambari, Ganglia, Nagios, Inca 17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad, Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA), Jitterbit, Talend, Pentaho, Apatar, Docker Compose 16) Application and Analytics: Mahout , MLlib , MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, Azure Machine Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL(Intel), Caffe, Torch, Theano, DL4j, H2O, IBM Watson, Oracle PGX, GraphLab, GraphX, IBM System G, GraphBuilder(Intel), TinkerPop, Google Fusion Tables, CINET, NWB, Elasticsearch, Kibana Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree, DC.js 15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero, OODT, Agave, Atmosphere 15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq, Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird 14B) Streams: Storm, S4, Samza, Granules, Google MillWheel, Amazon Kinesis, LinkedIn Databus, Facebook Puma/Ptail/Scribe/ODS, Azure Stream Analytics, Floe 14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, MR-MPI, Stratosphere (Apache Flink), Reef, Hama, Giraph, Pregel, Pegasus, Ligra, GraphChi, Galois, Medusa-GPU, MapGraph, Totem 13) Inter process communication Collectives, point-to-point, publish-subscribe: MPI, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ, NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Marionette Collective, Public Cloud: Amazon SNS, Lambda, Google Pub Sub, Azure Queues, Event Hubs 12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan 12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC 12) Extraction Tools: UIMA, Tika 11C) SQL(NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB 11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB, Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J, Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame Public Cloud: Azure Table, Amazon Dynamo, Google DataStore 11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet 10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST 9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, 
Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm, Torque, Globus Tools, Pilot Jobs 8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage 7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis 6) DevOps: Docker (Machine, Swarm), Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat, Sahara, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes, Buildstep, Gitreceive, OpenTOSCA, Winery, CloudML, Blueprints, Terraform, DevOpSlang, Any2Api 5) IaaS Management from HPC to hypervisors: Xen, KVM, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula, Eucalyptus, Nimbus, CloudStack, CoreOS, rkt, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds Networking: Google Cloud DNS, Amazon Route 53 21 layers Over 350 Software Packages May 15 2015 Green implies HPC Integration Big Data and (Exascale) Simulation Convergence II
  • 47. Things to do for Big Data and (Exascale) Simulation Convergence III • Converge Applications: Separate data and model to classify Applications and Benchmarks across Big Data and Big Simulations to give Convergence Diamonds with many facets – Indicated how to extend Big Data Ogres to Big Simulations by looking separately at model and data in Ogres – Diamonds will have five views or collections of facets: Problem Architecture; Execution; Data Source and Style; Big Data Processing; Big Simulation Processing – Facets cover data, model or their combination – the problem or application – Note Simulation Processing View has similarities to old parallel computing benchmarks 4712/14/2015
  • 48. Things to do for Big Data and (Exascale) Simulation Convergence IV • Convergence Benchmarks: we will use benchmarks that cover the facets of the convergence diamonds i.e. cover big data and simulations; – As we separate data and model, compute intensive simulation benchmarks (e.g. solve partial differential equation) will be linked with data analytics (the model in big data) – IU focus SPIDAL (Scalable Parallel Interoperable Data Analytics Library) with high performance clustering, dimension reduction, graphs, image processing as well as MLlib will be linked to core PDE solvers to explore the communication layer of parallel middleware – Maybe integrating data and simulation is an interesting idea in benchmark sets • Convergence Programming Model – Note parameter servers used in machine learning will be mimicked by collective operators invoked on distributed parameter (model) storage – E.g. Harp as Hadoop HPC Plug-in – There should be interest in using Big Data software systems to support exascale simulations – Streaming solutions from IoT to analysis of astronomy and LHC data will drive high performance versions of Apache streaming systems 4812/14/2015
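The parameter-server-via-collectives idea can be sketched with mpi4py and NumPy (run under MPI; the gradient is a stand-in for a minibatch gradient): a single Allreduce replaces the push/pull to a parameter server and leaves an identical model on every worker:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
model = np.zeros(4)                          # replicated model parameters

for step in range(100):
    local_grad = np.random.randn(4)          # stand-in minibatch gradient
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)   # the collective
    model -= 0.01 * global_grad / comm.Get_size()         # same update everywhere
print(comm.Get_rank(), model)
```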
  • 49. Things to do for Big Data and (Exascale) Simulation Convergence V • Converge Language: Make Java run as fast as C++ (Java Grande) for computing and communication – see following slide – Surprising that there is so much Big Data work in industry but basic high-performance Java methodology and tools are missing – Needs some work as there is no agreed OpenMP for Java parallel threads – OpenMPI supports Java but needs enhancements to get best performance on needed collectives (for C++ and Java) – Convergence Language Grande should support Python, Java (Scala), C/C++ (Fortran) 4912/14/2015
  • 50. Java MPI performs better than Threads I 128 24-core Haswell nodes; default MPI is much worse than threads, while optimized MPI using shared-memory node-based messaging is much better than threads 5012/14/2015
  • 51. Java MPI performs better than Threads II 128 24-core Haswell nodes 5112/14/2015 [Chart: speedup on a 200K-point dataset]

Editor's Notes

  Big dwarfs are Ogres. Implement Ogres in ABDS+.