Multi-faceted Classification of Big Data
Uses and Proposed Architecture
Integrating High Performance Computing
and the Apache Stack
Sixth International Workshop on Cloud Data Management
CloudDB 2014
Chicago, March 31, 2014
Geoffrey Fox
gcf@indiana.edu
http://www.infomall.org
School of Informatics and Computing
Digital Science Center
Indiana University Bloomington
Abstract
• We introduce the NIST collection of 51 use cases and describe
their scope over industry, government and research areas. We
look at their structure from several points of view or facets,
covering problem architecture, analytics kernels, microsystem
usage (such as flops/byte), application class (GIS,
expectation maximization) and, very importantly, data source.
• We then propose that in many cases it is wise to combine the
well-known commodity best-practice (often Apache) Big Data
Stack (with ~120 software subsystems) with high performance
computing technologies.
• We describe this and give early results based on clustering
run under different paradigms.
• We identify key layers where HPC-Apache integration is
particularly important: file systems, cluster resource
management, file and object data management, inter-process
and thread communication, analytics libraries, workflow and
monitoring.
NIST Big Data Use Cases
NIST Requirements and Use Case Subgroup
• Part of NIST Big Data Public Working Group (NBD-PWG) June-September 2013
http://bigdatawg.nist.gov/
• Leaders of activity
– Wo Chang, NIST
– Robert Marcus, ET-Strategies
– Chaitanya Baru, UC San Diego
The focus is to form a community of interest from industry, academia,
and government, with the goal of developing a consensus list of Big
Data requirements across all stakeholders. This includes gathering and
understanding various use cases from diversified application domains.
Tasks
• Gather use case input from all stakeholders
• Derive Big Data requirements from each use case.
• Analyze/prioritize a list of challenging general requirements that may delay or
prevent adoption of Big Data deployment
• Develop a set of general patterns capturing the “essence” of use cases (to do)
• Work with Reference Architecture to validate requirements and reference
architecture by explicitly implementing some patterns based on use cases
Big Data Definition
• More consensus on Data Science definition than that of Big Data
• Big Data refers to digital data volume, velocity and/or variety that:
• Enable novel approaches to frontier questions previously
inaccessible or impractical using current or conventional methods;
and/or
• Exceed the storage capacity or analysis capability of current or
conventional methods and systems; and
• Differentiate by storing and analyzing population data rather than
samples.
• Need management requiring scalability across coupled
horizontal resources
• Everybody says their data is big (!) Perhaps how it is used is most
important
What is Data Science?
• I was impressed by the number of NIST working group members who
were self-declared data scientists
• I was also impressed by the universal adoption among participants of
Apache technologies – see later
• McKinsey says there are lots of jobs (1.65M by 2018 in USA) but
that’s not enough! Is this a field – what is it and what is its core?
• The emergence of the 4th, or data-driven, paradigm of science
illustrates its significance – http://research.microsoft.com/en-us/collaboration/fourthparadigm/
• Discovery is guided by data rather than by a model
• The End of (traditional) Science (http://www.wired.com/wired/issue/16-07,
September 2008) is a famous statement of this
• Another example is recommender systems in Netflix, e-commerce
etc., where pure data (user ratings of movies or products) allows an
empirical prediction of what users like; a minimal sketch follows
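As a minimal illustration of that "pure data" empirical prediction, here is a user-based collaborative-filtering sketch in Python/numpy; the tiny ratings matrix and function are invented for illustration and are far simpler than production recommenders.

    import numpy as np

    ratings = np.array([        # rows = users, columns = movies; 0 = unrated
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    def predict(user, item):
        # Similarity-weighted average of the ratings given to `item` by
        # users who actually rated it (cosine similarity on rating vectors).
        mask = ratings[:, item] > 0
        others = ratings[mask]
        norms = np.linalg.norm(others, axis=1) * np.linalg.norm(ratings[user])
        sims = (others @ ratings[user]) / np.where(norms == 0, 1, norms)
        return float(sims @ others[:, item] / max(sims.sum(), 1e-9))

    print(round(predict(user=0, item=2), 2))  # ~2: user 0's taste cluster dislikes item 2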
Data Science Definition
• Data Science is the extraction of actionable knowledge directly from data
through a process of discovery, or of hypothesis formulation and hypothesis
testing.
• A Data Scientist is a practitioner who has sufficient knowledge of the
overlapping regimes of expertise in business needs, domain knowledge,
analytical skills and programming expertise to manage the end-to-end
scientific method process through each stage in the big data lifecycle.
Use Case Template
• 26 fields completed for 51 areas
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as 3 V’s, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation(4): National Archives and Records Administration, Census Bureau
• Commercial(8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search,
Digital Materials, Cargo shipping (as in UPS)
• Defense(3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences(10): Medical records, Graph and Probabilistic analysis,
Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media(6): Driving Car, Geolocate images/cameras, Twitter, Crowd
Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research(4): Metadata, Collaboration, Language Translation, Light source
experiments
• Astronomy and Physics(5): Sky Surveys including comparison to simulation, Large Hadron
Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science(10): Radar Scattering in Atmosphere, Earthquake,
Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate
simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry
(microbes to watersheds), AmeriFlux and FLUXNET gas sensors
• Energy(1): Smart grid
26 features recorded for each use case; the collection is biased toward science.
Part of Property Summary Table (shown as an image in the original slides)
3: Census Bureau Statistical Survey
Response Improvement (Adaptive Design)
• Application: Survey costs are increasing as survey response declines. The goal of this
work is to use advanced “recommendation system techniques” that are open and
scientifically objective, using data mashed up from several sources and historical
survey para-data (administrative data about the survey) to drive operational
processes in an effort to increase quality and reduce the cost of field surveys.
• Current Approach: About a petabyte of data coming from surveys and other
government administrative sources. Data can be streamed, with approximately 150
million records transmitted as field data continuously during the decennial
census. All data must be both confidential and secure. All processes must be
auditable for security and confidentiality as required by various legal statutes. Data
quality should be high and statistically checked for accuracy and reliability
throughout the collection process. Use Hadoop, Spark, Hive, R, SAS, Mahout,
Allegrograph, MySQL, Oracle, Storm, BigMemory, Cassandra, Pig software.
• Futures: Analytics needs to be developed which give statistical estimations that
provide more detail, on a more near real time basis for less cost. The reliability of
estimated statistics from such “mashed up” sources still must be evaluated.
Government
7: Netflix Movie Service
• Application: Allow streaming of user selected movies to satisfy multiple objectives (for
different stakeholders) -- especially retaining subscribers. Find best possible ordering of a
set of videos for a user (household) within a given context in real-time; maximize movie
consumption. Digital movies stored in cloud with metadata; user profiles and rankings for
a small fraction of movies for each user. Use multiple criteria – content-based
recommender system; user-based recommender system; diversity. Refine algorithms
continuously with A/B testing.
• Current Approach: Recommender systems and streaming video delivery are core Netflix
technologies. Recommender systems are always personalized and use logistic/linear
regression, elastic nets, matrix factorization, clustering, latent Dirichlet allocation,
association rules, gradient boosted decision trees etc. Winner of Netflix competition (to
improve ratings by 10%) combined over 100 different algorithms. Uses SQL, NoSQL,
MapReduce on Amazon Web Services. Netflix recommender systems have features in
common with e-commerce sites such as Amazon. Streaming video has features in common with
other content providing services like iTunes, Google Play, Pandora and Last.fm.
• Futures: Very competitive business. Need to be aware of other companies and trends in
both content (which movies are hot) and technology. Need to investigate new business
initiatives such as Netflix-sponsored content. A minimal matrix-factorization sketch
follows below.
Commercial
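Since the slide names matrix factorization among the core recommender techniques, here is a hedged sketch of its simplest form: latent user and item factors fit to observed ratings by stochastic gradient descent. The toy data, dimensions and hyperparameters are illustrative, not Netflix's actual system.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k = 50, 40, 5
    # Toy observed ratings as (user, item, rating) triples.
    obs = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
           for _ in range(500)]

    U = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    V = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    lr, reg = 0.02, 0.05                          # learning rate, L2 penalty

    for epoch in range(30):
        for u, i, r in obs:
            err = r - U[u] @ V[i]                   # prediction error
            U[u] += lr * (err * V[i] - reg * U[u])  # SGD steps with regularization
            V[i] += lr * (err * U[u] - reg * V[i])

    u, i, r = obs[0]
    print(f"observed {r}, fitted {U[u] @ V[i]:.2f}")  # fit on one observed triple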
15: Intelligence Data
Processing and Analysis
• Application: Allow Intelligence Analysts to a) Identify relationships between entities
(people, organizations, places, equipment) b) Spot trends in sentiment or intent for either
general population or leadership group (state, non-state actors) c) Find location of and
possibly timing of hostile actions (including implantation of IEDs) d) Track the location and
actions of (potentially) hostile actors e) Ability to reason against and derive knowledge
from diverse, disconnected, and frequently unstructured (e.g. text) data sources f) Ability
to process data close to the point of collection and allow data to be shared easily to/from
individual soldiers, forward deployed units, and senior leadership in garrison.
• Current Approach: Software includes Hadoop, Accumulo (Big Table), Solr, Natural
Language Processing, Puppet (for deployment and security) and Storm running on
medium-size clusters. Data size ranges from 10s of terabytes to 100s of petabytes, with an
imagery intelligence device gathering a petabyte in a few hours. Dismounted warfighters
would have at most 1-100s of gigabytes (typically handheld data storage).
• Futures: Data currently exists in disparate silos which must be accessible through a
semantically integrated data space. Wide variety of data types, sources, structures, and
quality which will span domains and requires integrated search and reasoning. Most
critical data is either unstructured or imagery/video which requires significant processing
to extract entities and information. Network quality, provenance and security are essential.
Defense
26: Large-scale Deep Learning
• Application: Large models (e.g., neural networks with more neurons and connections) combined with
large datasets are increasingly the top performers in benchmark tasks for vision, speech, and Natural
Language Processing. One needs to train a deep neural network from a large (>>1TB) corpus of data
(typically imagery, video, audio, or text). Such training procedures often require customization of the
neural network architecture, learning criteria, and dataset pre-processing. In addition to the
computational expense demanded by the learning algorithms, the need for rapid prototyping and
ease of development is extremely high.
• Current Approach: The largest applications so far are to image recognition and scientific studies of
unsupervised learning, with 10 million images and up to 11 billion parameters on a 64-GPU HPC
Infiniband cluster. Both supervised (using existing classified images) and unsupervised
applications are pursued.
• Futures: Large datasets of 100TB or more may be
necessary in order to exploit the representational power
of the larger models. Training a self-driving car could take
100 million images at megapixel resolution. Deep
Learning shares many characteristics with the broader
field of machine learning. The paramount requirements
are high computational throughput for mostly dense
linear algebra operations, and extremely high productivity
for researcher exploration. One needs integration of high
performance libraries with high level (Python) prototyping
environments.
Deep Learning
Social Networking
35: Light source beamlines
• Application: Samples are exposed to X-rays from light sources in a variety of
configurations depending on the experiment. Detectors (essentially high-speed
digital cameras) collect the data. The data are then analyzed to reconstruct a
view of the sample or process being studied.
• Current Approach: A variety of commercial and open source software is used for
data analysis – examples including Octopus for Tomographic Reconstruction,
Avizo (http://vsg3d.com) and FIJI (a distribution of ImageJ) for Visualization and
Analysis. Data transfer is accomplished using physical transport of portable
media (severely limits performance) or using high-performance GridFTP,
managed by Globus Online or workflow systems such as SPADE.
• Futures: Camera resolution is continually increasing. Data transfer to large-scale
computing facilities is becoming necessary because of the computational power
required to conduct the analysis on time scales useful to the experiment. Large
number of beamlines (e.g. 39 at LBNL ALS) means that total data load is likely to
increase significantly and require a generalized infrastructure for analyzing
gigabytes per second of data from many beamline detectors at multiple
facilities.
Research Ecosystem
36: Catalina Real-Time Transient Survey (CRTS):
a digital, panoramic, synoptic sky survey I
• Application: The survey explores the variable universe in the visible light regime, on time
scales ranging from minutes to years, by searching for variable and transient sources. It
discovers a broad variety of astrophysical objects and phenomena, including various types
of cosmic explosions (e.g., Supernovae), variable stars, phenomena associated with
accretion to massive black holes (active galactic nuclei) and their relativistic jets, high
proper motion stars, etc. The data are collected from 3 telescopes (2 in Arizona and 1 in
Australia), with additional ones expected in the near future (in Chile).
• Current Approach: The survey generates up to ~ 0.1 TB on a clear night with a total of
~100 TB in current data holdings. The data are preprocessed at the telescope, and
transferred to Univ. of Arizona and Caltech, for further analysis, distribution, and archiving.
The data are processed in real time, and detected transient events are published
electronically through a variety of dissemination mechanisms, with no proprietary
withholding period (CRTS has a completely open data policy). Further data analysis
includes classification of the detected transient events, additional observations using
other telescopes, scientific interpretation, and publishing. In this process, it makes a
heavy use of archival data (several PBs) from a wide variety of geographically
distributed resources connected through the Virtual Observatory (VO) framework.
Astronomy & Physics
36: Catalina Real-Time Transient Survey (CRTS):
a digital, panoramic, synoptic sky survey II
• Futures: CRTS is a scientific and methodological testbed and precursor of larger surveys to
come, notably the Large Synoptic Survey Telescope (LSST), expected to operate in the 2020s
and selected as the highest-priority ground-based instrument in the 2010 Astronomy and
Astrophysics Decadal Survey. LSST will gather about 30 TB per night.
Astronomy & Physics
47: Atmospheric Turbulence - Event
Discovery and Predictive Analytics
• Application: This builds data mining on top of reanalysis products including the North
American Regional Reanalysis (NARR) and the Modern-Era Retrospective-Analysis for
Research (MERRA) from NASA, the latter described earlier. The analytics correlate
aircraft reports of turbulence (either from pilot reports or from automated aircraft
measurements of eddy dissipation rates) with recently completed atmospheric re-analyses.
This is of value to the aviation industry and to weather forecasters. There are no standards for
re-analysis products, which complicates the system; MapReduce is being investigated. The
reanalysis data is hundreds of terabytes and slowly updated, whereas the turbulence data is
smaller in size and implemented as a streaming service.
Earth, Environmental
and Polar Science
• Current Approach: Current 200TB dataset can
be analyzed with MapReduce or the like using
SciDB or other scientific database.
• Futures: The dataset will reach 500TB in 5
years. The initial turbulence case can be
extended to other ocean/atmosphere
phenomena but the analytics would be
different in each case.
(Figure: typical NASA image of turbulent waves.)
51: Consumption forecasting in
Smart Grids
• Application: Predict energy consumption for customers, transformers, sub-
stations and the electrical grid service area using smart meters providing
measurements every 15-mins at the granularity of individual consumers within
the service area of smart power utilities. Combine Head-end of smart meters
(distributed), Utility databases (Customer Information, Network topology;
centralized), US Census data (distributed), NOAA weather data (distributed),
Micro-grid building information system (centralized), Micro-grid sensor network
(distributed). This generalizes to real-time data-driven analytics for time series
from cyber-physical systems.
• Current Approach: GIS based visualization. Data is around 4 TB a year for a city
with 1.4M sensors in Los Angeles. Uses R/Matlab, Weka, Hadoop software.
Significant privacy issues requiring anonymization by aggregation. Combine real
time and historic data with machine learning for predicting consumption.
• Futures: Widespread deployment of Smart Grids with new analytics integrating
diverse data and supporting curtailment requests. Mobile applications for client
interactions.
Energy
10 Suggested Generic Use Cases
1) Multiple users performing interactive queries and updates on a database
with basic availability and eventual consistency (BASE)
2) Perform real time analytics on data source streams and notify users when
specified events occur
3) Move data from external data sources into a highly horizontally scalable
data store, transform it using highly horizontally scalable processing (e.g.
Map-Reduce), and return it to the horizontally scalable data store (ELT)
4) Perform batch analytics on the data in a highly horizontally scalable data
store using highly horizontally scalable processing (e.g. MapReduce) with a
user-friendly interface (e.g. SQL-like); see the MapReduce sketch after this list
5) Perform interactive analytics on data in analytics-optimized database
6) Visualize data extracted from a horizontally scalable Big Data store
7) Move data from a highly horizontally scalable data store into a traditional
Enterprise Data Warehouse
8) Extract, process, and move data from data stores to archives
9) Combine data from Cloud databases and on-premises data stores for
analytics, data mining, and/or machine learning
10) Orchestrate multiple sequential and parallel data transformations and/or
analytic processing using a workflow manager
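As referenced in use case 4, here is a minimal sketch of the horizontally scalable MapReduce pattern behind use cases 3 and 4, using the classic word count. This simulates map, shuffle and reduce in plain Python; a real deployment would run the same two functions on Hadoop or a similar engine.

    from collections import defaultdict
    from itertools import chain

    def map_phase(record):
        """Emit (key, value) pairs; runs independently on each input split."""
        return [(word.lower(), 1) for word in record.split()]

    def shuffle(pairs):
        """Group values by key; done by the framework between map and reduce."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        """Combine all values for one key; also runs in parallel across keys."""
        return key, sum(values)

    records = ["big data meets HPC", "big data stack", "HPC stack"]
    pairs = chain.from_iterable(map_phase(r) for r in records)
    print(dict(reduce_phase(k, v) for k, v in shuffle(pairs).items()))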
10 Security & Privacy Use Cases
• Consumer Digital Media Usage
• Nielsen Homescan
• Web Traffic Analytics
• Health Information Exchange
• Personal Genetic Privacy
• Pharma Clinical Trial Data Sharing
• Cyber-security
• Aviation Industry
• Military - Unmanned Vehicle sensor data
• Education - “Common Core” Student Performance Reporting
• Need to integrate 10 “generic” and 10 “security & privacy” with
51 “full use cases”
NIST Big Data Reference Architecture
(Architecture diagram. Main components: System Orchestrator; Data Provider; Data Consumer; a Big Data Application Provider with Collection, Curation, Analytics, Visualization and Access activities; and a Big Data Framework Provider comprising Processing Frameworks (analytic tools, etc.), Platforms (databases, etc.), and Infrastructures of physical and virtual resources (networking, computing, etc.), each either horizontally scalable (e.g. VM clusters) or vertically scalable. Management and Security & Privacy cut across all components. The diagram is organized along an Information Value Chain and an IT Value Chain, with arrows denoting data flow, service use, and software/analytics-tools transfer.)
Requirements Extraction Process
• Two-step process is used for requirement extraction:
1) Extract specific requirements and map to reference architecture
based on each application’s characteristics such as:
a) data sources (data size, file formats, rate of growth, at rest or in motion, etc.)
b) data lifecycle management (curation, conversion, quality check, pre-analytic
processing, etc.)
c) data transformation (data fusion/mashup, analytics),
d) capability infrastructure (software tools, platform tools, hardware resources
such as storage and networking), and
e) data usage (processed results in text, table, visual, and other formats).
f) all architecture components informed by Goals and use case description
g) Security & Privacy has a direct mapping
2) Aggregate all specific requirements into high-level generalized
requirements which are vendor-neutral and technology agnostic.
Size of Process
• The draft use case and requirements report is 264 pages
– How much web and how much publication?
• 35 General Requirements
• 437 Specific Requirements
– 8.6 per use case, 12.5 per general requirement
• Data Sources: 3 General 78 Specific
• Transformation: 4 General 60 Specific
• Capability (Infrastructure): 6 General 133 Specific
• Data Consumer: 6 General 55 Specific
• Security & Privacy: 2 General 45 Specific
• Lifecycle: 9 General 43 Specific
• Other: 5 General 23 Specific
• Not clearly useful – prefer to identify common “structure/kernels”
Significant Web Resources
• Index to all use cases http://bigdatawg.nist.gov/usecases.php
– This links to individual submissions and other
processed/collected information
• List of specific requirements versus use case
http://bigdatawg.nist.gov/uc_reqs_summary.php
• List of general requirements versus architecture component
http://bigdatawg.nist.gov/uc_reqs_gen.php
• List of general requirements versus architecture component with
record of use cases giving requirement
http://bigdatawg.nist.gov/uc_reqs_gen_ref.php
• List of architecture component and specific requirements plus use
case constraining this component
http://bigdatawg.nist.gov/uc_reqs_gen_detail.php
Would like to capture the “essence of
these use cases” as
“small” kernels or mini-apps,
or classify applications into patterns.
Do this from an HPC background, not a database viewpoint;
e.g. focus on cases with detailed analytics.
Section 5 of my class
https://bigdatacoursespring2014.appspot.com/preview classifies the
51 use cases with Ogre facets.
What are “mini-Applications”
• Use for benchmarks of computers and software (is my
parallel compiler any good?)
• In parallel computing, this is well established
– Linpack for measuring performance to rank machines in Top500
(changing?)
– NAS Parallel Benchmarks (originally a pencil and paper
specification to allow optimal implementations; then MPI library)
– Other specialized Benchmark sets keep changing and used to
guide procurements
• The last 2 NSF hardware solicitations had NO preset benchmarks –
perhaps because there is no agreement on key applications for clouds and
data-intensive applications
– Berkeley dwarfs capture different structures that any approach
to parallel computing must address
– Templates used to capture parallel computing patterns
• I’ll let experts comment on database benchmarks like TPC
HPC Benchmark Classics
• Linpack or HPL: Parallel LU factorization for solution of
linear equations
• NPB version 1: Mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient
– FT: Fast Fourier Transform
– IS: Integer sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss Seidel
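To illustrate one of these kernels, here is a minimal conjugate gradient loop in Python/numpy for a symmetric positive-definite system; NPB's CG benchmark is essentially this iteration over a specified sparse matrix. The matrix, size and tolerance below are toy values for illustration.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x              # residual
        p = r.copy()               # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)  # step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p   # next A-conjugate direction
            rs = rs_new
        return x

    n = 100
    M = np.random.default_rng(1).standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)       # symmetric positive-definite by construction
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))  # ~0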
7 Original Berkeley Dwarfs (Colella)
1. Structured Grids (including locally structured
grids, e.g. Adaptive Mesh Refinement)
2. Unstructured Grids
3. Fast Fourier Transform
4. Dense Linear Algebra
5. Sparse Linear Algebra
6. Particles
7. Monte Carlo
(Note: these are “vaguer” than the NPB kernels.)
13 Berkeley Dwarfs
• Dense Linear Algebra
• Sparse Linear Algebra
• Spectral Methods
• N-Body Methods
• Structured Grids
• Unstructured Grids
• MapReduce
• Combinational Logic
• Graph Traversal
• Dynamic Programming
• Backtrack and Branch-and-Bound
• Graphical Models
• Finite State Machines
The first 6 of these correspond to
Colella’s original list;
Monte Carlo was dropped, and
N-body methods are a subset of
Particles.
Note the list is a little inconsistent, in that
MapReduce is a programming
model while spectral methods are a
numerical method.
We need multiple facets!
Distributed Computing MetaPatterns I, II and III
Jha, Cole, Katz, Parashar, Rana, Weissman
(Pattern diagrams appear in the original slides.)
Core Analytics Facet of Ogres (microPattern)
i. Search/Query
ii. Local Machine Learning – pleasingly parallel
iii. Summarizing statistics
iv. Recommender Systems (Collaborative Filtering)
v. Outlier Detection (iORCA)
vi. Clustering (many methods),
vii. LDA (Latent Dirichlet Allocation) or variants like PLSI (Probabilistic
Latent Semantic Indexing),
viii. SVM and Linear Classifiers (Bayes, Random Forests),
ix. PageRank (find the leading eigenvector of a sparse matrix; see the sketch after this list),
x. SVD (Singular Value Decomposition),
xi. Learning Neural Networks (Deep Learning),
xii. MDS (Multidimensional Scaling),
xiii. Graph Structure Algorithms (seen in search of RDF Triple stores),
xiv. Network Dynamics - Graph simulation Algorithms (epidemiology)
(On the original slide, brackets group several of these kernels under Matrix Algebra and Global Optimization.)
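As promised above, a minimal sketch of kernel ix: PageRank computed by power iteration as the leading eigenvector of the damped, column-stochastic link matrix. The four-page graph and damping factor are illustrative.

    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page -> pages it links to
    n, d = 4, 0.85                                   # number of pages, damping factor

    # Column-stochastic matrix M: M[j, i] = 1/outdegree(i) if i links to j.
    M = np.zeros((n, n))
    for i, outs in links.items():
        for j in outs:
            M[j, i] = 1.0 / len(outs)

    rank = np.full(n, 1.0 / n)
    for _ in range(100):                             # power iteration
        new = (1 - d) / n + d * (M @ rank)
        if np.abs(new - rank).sum() < 1e-12:
            break
        rank = new
    print(rank)   # pages 0 and 2 dominate in this toy graph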
Problem Architecture Facet of Ogres (Meta or MacroPattern)
i. Pleasingly Parallel – as in Blast, Protein docking, some
(bio-)imagery
ii. Local Analytics or Machine Learning – ML or filtering
pleasingly parallel as in bio-imagery, radar images (really
just pleasingly parallel but sophisticated local analytics)
iii. Global Analytics or Machine Learning seen in LDA,
Clustering etc. with parallel ML over nodes of system
iv. SPMD (Single Program Multiple Data)
v. Bulk Synchronous Processing: well-defined compute-
communication phases
vi. Fusion: Knowledge discovery often involves fusion of
multiple methods.
vii. Workflow (often used in fusion)
18: Computational
Bioimaging
• Application: Data delivered from bioimaging is increasingly automated, higher
resolution, and multi-modal. This has created a data analysis bottleneck that, if
resolved, can advance biosciences discovery through Big Data techniques.
• Current Approach: The current piecemeal analysis approach does not scale to a
situation where a single scan on emerging machines is 32TB and medical
diagnostic imaging is annually around 70 PB, even excluding cardiology. One
needs a web-based one-stop-shop for high performance, high throughput image
processing for producers and consumers of models built on bio-imaging data.
• Futures: Goal is to solve that bottleneck with extreme scale computing with
community-focused science gateways to support the application of massive data
analysis toward massive imaging data sets. Workflow components include data
acquisition, storage, enhancement, minimizing noise, segmentation of regions of
interest, crowd-based selection and extraction of features, and object
classification, organization, and search. Use ImageJ, OMERO, VolRover,
advanced segmentation and feature detection software.
Healthcare
Life Sciences
Largely Local Machine Learning
27: Organizing large-scale, unstructured
collections of consumer photos I
• Application: Produce 3D reconstructions of scenes using collections
of millions to billions of consumer images, where neither the scene
structure nor the camera positions are known a priori. Use resulting
3d models to allow efficient browsing of large-scale photo
collections by geographic position. Geolocate new images by
matching to 3d models. Perform object recognition on each image.
3d reconstruction is posed as a robust non-linear least squares
optimization problem where observed relations between images
are constraints and the unknowns are the 6-d camera pose of each image
and the 3-d position of each point in the scene (an example objective is sketched below).
• Current Approach: Hadoop cluster with 480 cores processing data
of initial applications. Note over 500 billion images on Facebook
and over 5 billion on Flickr with over 500 million images added to
social media sites each day.
Deep Learning
Social Networking
Global Machine Learning after Initial Local steps
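As a concrete form of the optimization just described, here is a standard robust bundle-adjustment objective consistent with the slide's wording; the notation (poses c_i, points p_j, image measurements x_ij, projection function π, robust loss ρ) is mine, not from the talk:

\[
\min_{\{c_i\},\,\{p_j\}} \; \sum_{(i,j)\,\in\,\text{observations}} \rho\!\left( \left\| x_{ij} - \pi(c_i,\, p_j) \right\|^{2} \right),
\qquad c_i \in \mathbb{R}^{6},\; p_j \in \mathbb{R}^{3}.
\]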
27: Organizing large-scale, unstructured
collections of consumer photos II
• Futures: Need many analytics including feature extraction, feature
matching, and large-scale probabilistic inference, which appear in many
or most computer vision and image processing problems, including
recognition, stereo resolution, and image denoising. Need to visualize
large-scale 3-d reconstructions, and navigate large-scale collections of
images that have been aligned to maps.
Deep Learning
Social Networking
Global Machine Learning after Initial Local steps
This Facet of Ogres has Features
• These core analytics/kernels can be classified by features
like
• (a) Flops per byte;
• (b) Communication interconnect requirements;
• (c) Is the application (graph) constant or dynamic?
• (d) Most applications consist of a set of interconnected
entities; is this regular, as a set of pixels, or a
complicated irregular graph?
• (e) Is communication BSP or asynchronous? In the latter case
shared memory may be attractive;
• (f) Are algorithms iterative or not?
• (g) Are data points in metric or non-metric spaces?
Application Class Facet of Ogres
• (a) Search and query
• (b) Maximum Likelihood
• (c) χ² minimization
• (d) Expectation Maximization (often steepest descent)
• (e) Global Optimization (Variational Bayes)
• (f) Agents, as in epidemiology (swarm approaches)
• (g) GIS (Geographical Information Systems)
• Not as essential as the other facets; standard forms of (b)-(d) are sketched below
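For reference, standard textbook forms of classes (b)-(d); the notation is mine, not from the slides. Maximum likelihood and χ² minimization:

\[
\hat\theta_{\mathrm{ML}} = \arg\max_{\theta} \sum_i \log p(x_i \mid \theta),
\qquad
\chi^2(\theta) = \sum_i \frac{\left( y_i - f(x_i;\theta) \right)^2}{\sigma_i^2}
\]

for data y_i with errors σ_i and model f(x_i; θ). Expectation Maximization alternates

\[
Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{z \sim p(z \mid x,\, \theta^{(t)})}\left[ \log p(x, z \mid \theta) \right]
\qquad\text{and}\qquad
\theta^{(t+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(t)}),
\]

where each iteration increases the likelihood, in line with the slide's note that these problems are often attacked by steepest-descent-style updates.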
Data Source Facet of Ogres
• (i) SQL,
• (ii) NOSQL based,
• (iii) Other Enterprise data systems (10 examples from Bob Marcus)
• (iv) Set of Files (as managed in iRODS),
• (v) Internet of Things,
• (vi) Streaming and
• (vii) HPC simulations.
• Before data gets to the compute system, there is often an initial data
gathering phase characterized by a block size and timing. Block
size varies from a month (remote sensing, seismic) to a day (genomic) to
seconds or lower (real-time control, streaming)
• There are storage/compute system styles: Shared, Dedicated,
Permanent, Transient
• Other characteristics are the need for permanent auxiliary/comparison
datasets; these could be interdisciplinary, implying nontrivial data
movement/replication
Lessons / Insights
• Ogres classify Big Data applications by multiple
facets – each with several exemplars and features
– Guide to breadth and depth of Big Data
– Does your architecture/software support all the ogres?
• Add database exemplars
• In parallel computing, the simple analytic kernels
dominate mindshare even though they are agreed to be limited
HPC-ABDS
Integrating High Performance Computing
with Apache Big Data Stack
Enhanced Apache Big Data Stack (ABDS)
• ~120 capabilities
• >40 Apache projects
• Green layers (in the original stack figure) have strong HPC
integration opportunities
• Goal: the functionality of ABDS with the performance of HPC
Broad Layers in HPC-ABDS
• Workflow-Orchestration
• Application and Analytics
• High level Programming
• Basic Programming model and runtime
– SPMD, Streaming, MapReduce, MPI
• Inter process communication
– Collectives, point to point, publish-subscribe
• In memory databases/caches
• Object-relational mapping
• SQL and NoSQL, File management
• Data Transport
• Cluster Resource Management (Yarn, Slurm, SGE)
• File systems (HDFS, Lustre …)
• DevOps (Puppet, Chef …)
• IaaS Management from HPC to hypervisors (OpenStack)
• Cross Cutting
– Message Protocols
– Distributed Coordination
– Security & Privacy
– Monitoring
Getting High Performance on Data
Analytics (e.g. Mahout, R …)
• On the systems side, we have two principles
– The Apache Big Data Stack with ~120 projects has important broad
functionality with a vital large support organization
– HPC, including MPI, has striking success in delivering high performance,
albeit with a fragile sustainability model
• There are key systems abstractions which are levels in HPC-ABDS software
stack where Apache approach needs careful integration with HPC
– Resource management
– Storage
– Programming model -- horizontal scaling parallelism
– Collective and Point to Point communication
– Support of iteration
– Data interface (not just key-value)
• In application areas, we define application abstractions to support
– Graphs/network
– Geospatial
– Images etc.
(Performance chart comparing paradigms on an identical K-means computation; communication increases down this list.)
• Mahout on Hadoop MapReduce – slow due to MapReduce
• Python – slow, as scripting
• Spark – iterative MapReduce, but non-optimal communication
• Harp – Hadoop plug-in with ~MPI collectives
• MPI – fastest, as C not Java
4 Forms of MapReduce
(a) Map Only (Input → map → Output): pleasingly parallel, e.g. BLAST analysis, parametric sweeps
(b) Classic MapReduce (Input → map → reduce): High Energy Physics (HEP) histograms, distributed search
(c) Iterative MapReduce (Input → map → reduce, with iterations): expectation maximization, clustering e.g. K-means, linear algebra, PageRank
(d) Loosely Synchronous (classic MPI): PDE solvers and particle dynamics
Forms (a)-(c) are the domain of MapReduce and its iterative extensions (science clouds, Giraph); (d) is classic MPI. MPI is Map followed by Point-to-Point or Collective communication, as in style (c) plus (d).
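To make form (c) concrete, here is a minimal plain-Python simulation of iterative MapReduce for K-means: each iteration runs one map (assignment) plus one reduce (centroid update), with a driver loop supplying the iterations. The data and helper names are illustrative; real runs would use Hadoop, Harp or Spark.

    from collections import defaultdict
    import numpy as np

    def assign(point, centroids):            # the Map task
        return int(np.argmin(np.linalg.norm(centroids - point, axis=1)))

    def kmeans(points, centroids, iters=20):
        for _ in range(iters):               # the "Iterations" loop of form (c)
            groups = defaultdict(list)       # shuffle: group points by centroid id
            for p in points:
                groups[assign(p, centroids)].append(p)
            # Reduce task per key: new centroid = mean of its assigned points.
            centroids = np.array([np.mean(groups[i], axis=0) if groups[i] else c
                                  for i, c in enumerate(centroids)])
        return centroids

    rng = np.random.default_rng(0)
    points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    print(kmeans(points, centroids=points[[0, -1]].copy()))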
Map Collective Model (Judy Qiu)
• Generalizes Iterative MapReduce
• Combines MPI and MapReduce ideas
• Implements collectives optimally on Infiniband, Azure, Amazon ……
(Diagram: Input → map → Generalized Reduce, with an initial collective step, a final collective step, and an Iterate loop.)
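A hedged sketch of the same K-means computation in map-collective style, where an MPI allreduce replaces the MapReduce shuffle. This uses the real mpi4py API, but the file name, data and loop counts are illustrative (run with e.g. mpiexec -n 4 python kmeans_allreduce.py).

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rng = np.random.default_rng(comm.Get_rank())
    points = rng.normal(0, 1, (1000, 2))          # each rank holds its own shard
    centroids = np.array([[-1.0, 0.0], [1.0, 0.0]])
    k = len(centroids)

    for _ in range(10):
        # Local "map": partial sums and counts per centroid on this rank's shard.
        sums = np.zeros((k, 2))
        counts = np.zeros(k)
        for p in points:
            i = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
            sums[i] += p
            counts[i] += 1
        # Collective step: allreduce replaces the MapReduce shuffle/reduce.
        sums = comm.allreduce(sums, op=MPI.SUM)
        counts = comm.allreduce(counts, op=MPI.SUM)
        centroids = sums / np.maximum(counts, 1)[:, None]

    if comm.Get_rank() == 0:
        print(centroids)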
Major Analytics Architectures in Use Cases
• Pleasingly Parallel including local machine learning as in parallel
over images and apply image processing to each image --
Hadoop
• Search including collaborative filtering and motif finding
implemented using classic MapReduce (Hadoop) or non-iterative Giraph
• Iterative MapReduce using Collective Communication
(clustering) – Hadoop with Harp, Spark …..
• Iterative Giraph (MapReduce) with point-to-point
communication (most graph algorithms such as maximum
clique, connected component, finding diameter, community
detection)
– Vary in difficulty of finding partitioning (classic parallel load balancing)
• Shared memory thread based (event driven) graph algorithms
(shortest path, Betweenness centrality)
HPC-ABDS Hourglass
(Hourglass figure: the ~120 HPC-ABDS software projects at the top, system abstractions/standards at the narrow waist, and high performance applications at the base.)
HPC-ABDS system (middleware):
• HPC Yarn for resource management
• Horizontally scalable parallel programming model
• Collective and point-to-point communication
• Support of iteration
System abstractions/standards:
• Data format
• Storage
Application abstractions/standards:
Graphs, networks, images, geospatial ….
SPIDAL (Scalable Parallel Interoperable Data Analytics Library),
or high performance Mahout, R, Matlab …..
Integrating Yarn with HPC
Using Optimal “Collective” Operations
• Twister4Azure Iterative MapReduce with enhanced collectives
– Map-AllReduce primitive and MapReduce-MergeBroadcast.
• Strong Scaling on Kmeans for up to 256 cores on Azure
Collectives improve traditional
MapReduce
• This is Kmeans running within basic Hadoop but
with optimal AllReduce collective operations
• Running on an Infiniband Linux cluster
• In the chart below, shaded areas are compute only; there Hadoop on the HPC
cluster is fastest
• Areas above the shading are overheads; there Twister4Azure (T4A) is smallest,
and T4A with the AllReduce collective has the lowest overhead
• Note that even on Azure, Java (orange) is faster than T4A C# for compute
(Chart: “Kmeans and (Iterative) MapReduce”. Time(s), 0-1400, versus Num. Cores x Num. Data Points (32 x 32M, 64 x 64M, 128 x 128M, 256 x 256M), comparing Hadoop AllReduce, Hadoop MapReduce, Twister4Azure AllReduce, Twister4Azure Broadcast, Twister4Azure, and HDInsight (Azure Hadoop).)
Harp Architecture
(Diagram: at the framework level, Harp sits as a plug-in beside MapReduce V2, both running on the YARN resource manager; applications are either MapReduce applications or Map-Collective applications.)
Features of Harp Hadoop Plug in
• Hadoop Plugin (on Hadoop 1.2.1 and Hadoop
2.2.0)
• Hierarchical data abstraction on arrays, key-values
and graphs for easy programming expressiveness.
• Collective communication model to support
various communication operations on the data
abstractions.
• Caching, with buffer management for the memory
allocation required by computation and
communication
• BSP style parallelism
• Fault tolerance with check-pointing
 
High Performance Computing and Big Data
High Performance Computing and Big Data High Performance Computing and Big Data
High Performance Computing and Big Data Geoffrey Fox
 
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Geoffrey Fox
 
Big Data HPC Convergence
Big Data HPC ConvergenceBig Data HPC Convergence
Big Data HPC ConvergenceGeoffrey Fox
 
Data Science and Online Education
Data Science and Online EducationData Science and Online Education
Data Science and Online EducationGeoffrey Fox
 
High Performance Processing of Streaming Data
High Performance Processing of Streaming DataHigh Performance Processing of Streaming Data
High Performance Processing of Streaming DataGeoffrey Fox
 
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...Geoffrey Fox
 
Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel Geoffrey Fox
 
Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Geoffrey Fox
 
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...Geoffrey Fox
 
Data Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityData Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityGeoffrey Fox
 
What is the "Big Data" version of the Linpack Benchmark? ; What is “Big Data...
What is the "Big Data" version of the Linpack Benchmark?; What is “Big Data...What is the "Big Data" version of the Linpack Benchmark?; What is “Big Data...
What is the "Big Data" version of the Linpack Benchmark? ; What is “Big Data...Geoffrey Fox
 
Experience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyExperience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyGeoffrey Fox
 
Cloud Services for Big Data Analytics
Cloud Services for Big Data AnalyticsCloud Services for Big Data Analytics
Cloud Services for Big Data AnalyticsGeoffrey Fox
 
Matching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software ArchitecturesMatching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software ArchitecturesGeoffrey Fox
 
Big Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationBig Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationGeoffrey Fox
 
Comparing Big Data and Simulation Applications and Implications for Software ...
Comparing Big Data and Simulation Applications and Implications for Software ...Comparing Big Data and Simulation Applications and Implications for Software ...
Comparing Big Data and Simulation Applications and Implications for Software ...Geoffrey Fox
 
HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC
HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC
HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC Geoffrey Fox
 

Mais de Geoffrey Fox (20)

AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
 
Next Generation Grid: Integrating Parallel and Distributed Computing Runtimes...
Next Generation Grid: Integrating Parallel and Distributed Computing Runtimes...Next Generation Grid: Integrating Parallel and Distributed Computing Runtimes...
Next Generation Grid: Integrating Parallel and Distributed Computing Runtimes...
 
High Performance Computing and Big Data
High Performance Computing and Big Data High Performance Computing and Big Data
High Performance Computing and Big Data
 
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
 
Big Data HPC Convergence
Big Data HPC ConvergenceBig Data HPC Convergence
Big Data HPC Convergence
 
Data Science and Online Education
Data Science and Online EducationData Science and Online Education
Data Science and Online Education
 
High Performance Processing of Streaming Data
High Performance Processing of Streaming DataHigh Performance Processing of Streaming Data
High Performance Processing of Streaming Data
 
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
 
Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel 
 
Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...
 
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
 
Data Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityData Science Curriculum at Indiana University
Data Science Curriculum at Indiana University
 
What is the "Big Data" version of the Linpack Benchmark? ; What is “Big Data...
What is the "Big Data" version of the Linpack Benchmark?; What is “Big Data...What is the "Big Data" version of the Linpack Benchmark?; What is “Big Data...
What is the "Big Data" version of the Linpack Benchmark? ; What is “Big Data...
 
Experience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyExperience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC Technology
 
Cloud Services for Big Data Analytics
Cloud Services for Big Data AnalyticsCloud Services for Big Data Analytics
Cloud Services for Big Data Analytics
 
Matching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software ArchitecturesMatching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software Architectures
 
Big Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationBig Data and Clouds: Research and Education
Big Data and Clouds: Research and Education
 
Comparing Big Data and Simulation Applications and Implications for Software ...
Comparing Big Data and Simulation Applications and Implications for Software ...Comparing Big Data and Simulation Applications and Implications for Software ...
Comparing Big Data and Simulation Applications and Implications for Software ...
 
HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC
HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC
HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC
 
Remarks on MOOC's
Remarks on MOOC'sRemarks on MOOC's
Remarks on MOOC's
 

Último

Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?Antenna Manufacturer Coco
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdfChristopherTHyatt
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 

Último (20)

Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 

Multi-faceted Classification of Big Data Use Cases and Proposed Architecture Integrating High Performance Computing and the Apache Stack 

  • 6. What is Data Science? • Discovery is guided by data rather than by a model • The End of (traditional) science http://www.wired.com/wired/issue/16-07 is famous here • Another example is recommender systems in Netflix, e-commerce etc., where pure data (user ratings of movies or products) allows an empirical prediction of what users like
  • 8. Data Science Definition • Data Science is the extraction of actionable knowledge directly from data through a process of discovery, hypothesis formulation, and hypothesis analysis. • A Data Scientist is a practitioner who has sufficient knowledge of the overlapping regimes of expertise in business needs, domain knowledge, analytical skills and programming expertise to manage the end-to-end scientific method process through each stage in the big data lifecycle.
  • 9. Use Case Template • 26 fields completed for 51 areas • Government Operation: 4 • Commercial: 8 • Defense: 3 • Healthcare and Life Sciences: 10 • Deep Learning and Social Media: 6 • The Ecosystem for Research: 4 • Astronomy and Physics: 5 • Earth, Environmental and Polar Science: 10 • Energy: 1
  • 10. 51 Detailed Use Cases: Contributed July-September 2013. Covers goals, data features such as the 3 V's, software, hardware; 26 features recorded for each use case; coverage is biased to science • http://bigdatawg.nist.gov/usecases.php • https://bigdatacoursespring2014.appspot.com/course (Section 5) • Government Operation (4): National Archives and Records Administration, Census Bureau • Commercial (8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search, Digital Materials, Cargo shipping (as in UPS) • Defense (3): Sensors, Image surveillance, Situation Assessment • Healthcare and Life Sciences (10): Medical records, Graph and Probabilistic analysis, Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity • Deep Learning and Social Media (6): Driving Car, Geolocate images/cameras, Twitter, Crowd Sourcing, Network Science, NIST benchmark datasets • The Ecosystem for Research (4): Metadata, Collaboration, Language Translation, Light source experiments • Astronomy and Physics (5): Sky Surveys including comparison to simulation, Large Hadron Collider at CERN, Belle Accelerator II in Japan • Earth, Environmental and Polar Science (10): Radar Scattering in Atmosphere, Earthquake, Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to watersheds), AmeriFlux and FLUXNET gas sensors • Energy (1): Smart grid
  • 11. Part of Property Summary Table
  • 12. 3: Census Bureau Statistical Survey Response Improvement (Adaptive Design) (Government) • Application: Survey costs are increasing as survey response declines. The goal of this work is to use advanced “recommendation system techniques” that are open and scientifically objective, using data mashed up from several sources and historical survey paradata (administrative data about the survey) to drive operational processes in an effort to increase quality and reduce the cost of field surveys. • Current Approach: About a petabyte of data coming from surveys and other government administrative sources. Data can be streamed, with approximately 150 million records transmitted as field data streamed continuously during the decennial census. All data must be both confidential and secure. All processes must be auditable for security and confidentiality as required by various legal statutes. Data quality should be high and statistically checked for accuracy and reliability throughout the collection process. Use Hadoop, Spark, Hive, R, SAS, Mahout, Allegrograph, MySQL, Oracle, Storm, BigMemory, Cassandra, Pig software. • Futures: Analytics need to be developed which give statistical estimations that provide more detail, on a more near-real-time basis, for less cost. The reliability of estimated statistics from such “mashed up” sources still must be evaluated.
  • 13. 7: Netflix Movie Service (Commercial) • Application: Allow streaming of user-selected movies to satisfy multiple objectives (for different stakeholders), especially retaining subscribers. Find the best possible ordering of a set of videos for a user (household) within a given context in real time; maximize movie consumption. Digital movies are stored in the cloud with metadata; user profiles and rankings exist for a small fraction of movies for each user. Use multiple criteria: a content-based recommender system, a user-based recommender system, and diversity. Refine algorithms continuously with A/B testing. • Current Approach: Recommender systems and streaming video delivery are core Netflix technologies. Recommender systems are always personalized and use logistic/linear regression, elastic nets, matrix factorization, clustering, latent Dirichlet allocation, association rules, gradient boosted decision trees etc. The winner of the Netflix competition (to improve ratings by 10%) combined over 100 different algorithms. Uses SQL, NoSQL, MapReduce on Amazon Web Services. Netflix recommender systems have features in common with e-commerce sites like Amazon. Streaming video has features in common with other content-providing services like iTunes, Google Play, Pandora and Last.fm. • Futures: Very competitive business. Need to be aware of other companies and trends in both content (which movies are hot) and technology. Need to investigate new business initiatives such as Netflix-sponsored content.
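To make the recommender techniques in this use case concrete, here is a minimal matrix-factorization sketch in Python/NumPy. It is illustrative only (a tiny dense ratings array with NaN for unrated titles), not Netflix's production system, which as noted combines over 100 algorithms.

```python
# Minimal sketch of a matrix-factorization recommender; data and
# hyperparameters are illustrative, not Netflix's actual system.
import numpy as np

def factorize(ratings, k=2, steps=2000, lr=0.01, reg=0.1):
    """Learn user/item factors P, Q so that P @ Q.T approximates ratings.
    ratings: dense array with np.nan marking unrated entries."""
    rng = np.random.default_rng(0)
    n_users, n_items = ratings.shape
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    observed = [(u, i) for u in range(n_users) for i in range(n_items)
                if not np.isnan(ratings[u, i])]
    for _ in range(steps):
        for u, i in observed:
            err = ratings[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])  # SGD step on user factors
            Q[i] += lr * (err * P[u] - reg * Q[i])  # SGD step on item factors
    return P, Q

# 4 users x 5 movies, nan = unseen; the model fills in the missing cells.
R = np.array([[5, 4, np.nan, 1, np.nan],
              [4, np.nan, 4, 1, 1],
              [1, 1, np.nan, 5, 4],
              [np.nan, 1, 1, 4, np.nan]], dtype=float)
P, Q = factorize(R)
print(np.round(P @ Q.T, 2))  # predicted ratings, including unseen cells
```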
  • 14. 15: Intelligence Data Processing and Analysis (Defense) • Application: Allow intelligence analysts to a) identify relationships between entities (people, organizations, places, equipment); b) spot trends in sentiment or intent for either the general population or a leadership group (state, non-state actors); c) find the location, and possibly timing, of hostile actions (including implantation of IEDs); d) track the location and actions of (potentially) hostile actors; e) reason against and derive knowledge from diverse, disconnected, and frequently unstructured (e.g. text) data sources; f) process data close to the point of collection and allow data to be shared easily to/from individual soldiers, forward deployed units, and senior leadership in garrison. • Current Approach: Software includes Hadoop, Accumulo (Big Table), Solr, Natural Language Processing, Puppet (for deployment and security) and Storm, running on medium-size clusters. Data size is in the 10s of terabytes to 100s of petabytes, with an imagery intelligence device gathering a petabyte in a few hours. Dismounted warfighters would have at most 1–100s of gigabytes (typically handheld data storage). • Futures: Data currently exists in disparate silos, which must be accessible through a semantically integrated data space. A wide variety of data types, sources, structures, and quality will span domains and requires integrated search and reasoning. The most critical data is either unstructured or imagery/video, which requires significant processing to extract entities and information. Network quality, provenance and security are essential.
  • 15. 26: Large-scale Deep Learning (Deep Learning and Social Media) • Application: Large models (e.g., neural networks with more neurons and connections) combined with large datasets are increasingly the top performers in benchmark tasks for vision, speech, and Natural Language Processing. One needs to train a deep neural network from a large (>>1TB) corpus of data (typically imagery, video, audio, or text). Such training procedures often require customization of the neural network architecture, learning criteria, and dataset pre-processing. In addition to the computational expense demanded by the learning algorithms, the need for rapid prototyping and ease of development is extremely high. • Current Approach: The largest applications so far are to image recognition and scientific studies of unsupervised learning, with 10 million images and up to 11 billion parameters on a 64-GPU HPC InfiniBand cluster. Both supervised (using existing classified images) and unsupervised applications are used. • Futures: Large datasets of 100TB or more may be necessary in order to exploit the representational power of the larger models. Training a self-driving car could take 100 million images at megapixel resolution. Deep Learning shares many characteristics with the broader field of machine learning. The paramount requirements are high computational throughput for mostly dense linear algebra operations, and extremely high productivity for researcher exploration. One needs integration of high performance libraries with high-level (Python) prototyping environments.
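A hedged illustration of the "mostly dense linear algebra" point: the toy NumPy network below trains one hidden layer on synthetic data, and every expensive line is a matrix multiply. The shapes here are tiny stand-ins for the billion-parameter models the slide describes.

```python
# Toy one-hidden-layer network trained by SGD on synthetic data; shows
# that deep learning workloads reduce to dense matmuls (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((256, 64))               # batch of inputs
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)   # synthetic labels

W1, b1 = 0.1 * rng.standard_normal((64, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.standard_normal((32, 1)), np.zeros(1)

lr = 0.1
for step in range(500):
    h = np.maximum(X @ W1 + b1, 0)               # dense matmul + ReLU
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))         # dense matmul + sigmoid
    dp = (p - y) / len(X)                        # gradient of mean cross-entropy
    dW2 = h.T @ dp                               # another dense matmul
    dh = (dp @ W2.T) * (h > 0)
    dW1 = X.T @ dh                               # and another
    W2 -= lr * dW2; b2 -= lr * dp.sum(0)
    W1 -= lr * dW1; b1 -= lr * dh.sum(0)
print("train accuracy:", ((p > 0.5) == y).mean())
```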
  • 16. 35: Light source beamlines (Research Ecosystem) • Application: Samples are exposed to X-rays from light sources in a variety of configurations, depending on the experiment. Detectors (essentially high-speed digital cameras) collect the data. The data are then analyzed to reconstruct a view of the sample or process being studied. • Current Approach: A variety of commercial and open source software is used for data analysis – examples include Octopus for tomographic reconstruction, and Avizo (http://vsg3d.com) and FIJI (a distribution of ImageJ) for visualization and analysis. Data transfer is accomplished using physical transport of portable media (which severely limits performance) or using high-performance GridFTP, managed by Globus Online or workflow systems such as SPADE. • Futures: Camera resolution is continually increasing. Data transfer to large-scale computing facilities is becoming necessary because of the computational power required to conduct the analysis on time scales useful to the experiment. The large number of beamlines (e.g. 39 at the LBNL ALS) means that the total data load is likely to increase significantly and require a generalized infrastructure for analyzing gigabytes per second of data from many beamline detectors at multiple facilities.
  • 17. 36: Catalina Real-Time Transient Survey (CRTS): a digital, panoramic, synoptic sky survey I (Astronomy & Physics) • Application: The survey explores the variable universe in the visible light regime, on time scales ranging from minutes to years, by searching for variable and transient sources. It discovers a broad variety of astrophysical objects and phenomena, including various types of cosmic explosions (e.g., supernovae), variable stars, phenomena associated with accretion onto massive black holes (active galactic nuclei) and their relativistic jets, high proper motion stars, etc. The data are collected from 3 telescopes (2 in Arizona and 1 in Australia), with additional ones expected in the near future (in Chile). • Current Approach: The survey generates up to ~0.1 TB on a clear night, with a total of ~100 TB in current data holdings. The data are preprocessed at the telescope and transferred to Univ. of Arizona and Caltech for further analysis, distribution, and archiving. The data are processed in real time, and detected transient events are published electronically through a variety of dissemination mechanisms, with no proprietary withholding period (CRTS has a completely open data policy). Further data analysis includes classification of the detected transient events, additional observations using other telescopes, scientific interpretation, and publishing. In this process it makes heavy use of the archival data (several PBs) from a wide variety of geographically distributed resources connected through the Virtual Observatory (VO) framework.
  • 18. 36: Catalina Real-Time Transient Survey (CRTS): a digital, panoramic, synoptic sky survey II (Astronomy & Physics) • Futures: CRTS is a scientific and methodological testbed and precursor of larger surveys to come, notably the Large Synoptic Survey Telescope (LSST), expected to operate in the 2020s and selected as the highest-priority ground-based instrument in the 2010 Astronomy and Astrophysics Decadal Survey. LSST will gather about 30 TB per night.
  • 19. 47: Atmospheric Turbulence – Event Discovery and Predictive Analytics (Earth, Environmental and Polar Science) • Application: This builds datamining on top of reanalysis products, including the North American Regional Reanalysis (NARR) and the Modern-Era Retrospective-Analysis for Research (MERRA) from NASA, where the latter was described earlier. The analytics correlate aircraft reports of turbulence (either from pilot reports or from automated aircraft measurements of eddy dissipation rates) with recently completed atmospheric re-analyses. This is of value to the aviation industry and to weather forecasters. There are no standards for re-analysis products, complicating the system, where MapReduce is being investigated. The reanalysis data is hundreds of terabytes and slowly updated, whereas the turbulence data is smaller in size and implemented as a streaming service. • Current Approach: The current 200TB dataset can be analyzed with MapReduce or the like using SciDB or another scientific database. • Futures: The dataset will reach 500TB in 5 years. The initial turbulence case can be extended to other ocean/atmosphere phenomena, but the analytics would be different in each case. (Figure: typical NASA image of turbulent waves.)
  • 20. 51: Consumption forecasting in Smart Grids (Energy) • Application: Predict energy consumption for customers, transformers, substations and the electrical grid service area, using smart meters providing measurements every 15 minutes at the granularity of individual consumers within the service area of smart power utilities. Combine head-ends of smart meters (distributed), utility databases (customer information, network topology; centralized), US Census data (distributed), NOAA weather data (distributed), micro-grid building information systems (centralized), and micro-grid sensor networks (distributed). This generalizes to real-time data-driven analytics for time series from cyber-physical systems. • Current Approach: GIS-based visualization. Data is around 4 TB a year for a city with 1.4M sensors in Los Angeles. Uses R/Matlab, Weka, Hadoop software. Significant privacy issues requiring anonymization by aggregation. Combine real-time and historic data with machine learning for predicting consumption. • Futures: Widespread deployment of Smart Grids with new analytics integrating diverse data and supporting curtailment requests. Mobile applications for client interactions.
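A minimal sketch of the forecasting idea, assuming synthetic 15-minute readings for a single meter and a plain lagged least-squares model; real deployments fold in the weather, census and topology sources listed above.

```python
# Forecast the next 15-minute reading of one synthetic smart meter from
# its recent lags with ordinary least squares (illustrative assumption:
# a single meter with a daily cycle plus noise).
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(96 * 30)                        # 30 days of 15-minute slots
load = 2 + np.sin(2 * np.pi * t / 96) + 0.1 * rng.standard_normal(t.size)

lags = 4                                      # predict from the last hour
X = np.array([load[i - lags:i] for i in range(lags, len(load))])
y = load[lags:]
A = np.column_stack([X, np.ones(len(X))])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit

forecast = np.append(load[-lags:], 1.0) @ coef
print(f"forecast for the next 15-minute slot: {forecast:.3f}")
```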
  • 21. 10 Suggested Generic Use Cases 1) Multiple users performing interactive queries and updates on a database with basic availability and eventual consistency (BASE) 2) Perform real-time analytics on data source streams and notify users when specified events occur (sketched in the code below) 3) Move data from external data sources into a highly horizontally scalable data store, transform it using highly horizontally scalable processing (e.g. Map-Reduce), and return it to the horizontally scalable data store (ELT) 4) Perform batch analytics on the data in a highly horizontally scalable data store using highly horizontally scalable processing (e.g. MapReduce) with a user-friendly interface (e.g. SQL-like) 5) Perform interactive analytics on data in an analytics-optimized database 6) Visualize data extracted from a horizontally scalable Big Data store 7) Move data from a highly horizontally scalable data store into a traditional Enterprise Data Warehouse 8) Extract, process, and move data from data stores to archives 9) Combine data from Cloud databases and on-premises data stores for analytics, data mining, and/or machine learning 10) Orchestrate multiple sequential and parallel data transformations and/or analytic processing using a workflow manager
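Generic use case 2 reduces to a simple loop; the random "stream" and threshold here are stand-ins, and production systems run the same pattern inside engines such as Storm.

```python
# Minimal sketch of generic use case 2: watch a stream and notify when a
# specified event occurs. The event source and threshold are stand-ins.
import random

def stream():                       # stand-in for a real event source
    while True:
        yield random.gauss(100.0, 5.0)

THRESHOLD = 112.0
for n, reading in enumerate(stream()):
    if reading > THRESHOLD:         # the "specified event"
        print(f"alert at event {n}: reading {reading:.1f} > {THRESHOLD}")
        break
```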
  • 22. 10 Security & Privacy Use Cases • Consumer Digital Media Usage • Nielsen Homescan • Web Traffic Analytics • Health Information Exchange • Personal Genetic Privacy • Pharma Clinical Trial Data Sharing • Cyber-security • Aviation Industry • Military – Unmanned Vehicle sensor data • Education – “Common Core” Student Performance Reporting • Need to integrate the 10 “generic” and 10 “security & privacy” use cases with the 51 “full use cases”
  • 23. NIST Big Data Reference Architecture (diagram). Main components: System Orchestrator; Data Provider; Data Consumer; Big Data Application Provider with Collection, Curation, Analytics, Visualization and Access; and Big Data Framework Provider with Processing Frameworks (analytic tools, etc.), Platforms (databases, etc.) and Infrastructures (physical and virtual resources for networking, computing, etc.), each horizontally scalable (VM clusters) and vertically scalable. Management and Security & Privacy cut across all components; the two axes are the Information Value Chain and the IT Value Chain, with data and software flowing between providers and consumers.
  • 24. Requirements Extraction Process • A two-step process is used for requirement extraction: 1) Extract specific requirements and map to the reference architecture based on each application's characteristics such as: a) data sources (data size, file formats, rate of growth, at rest or in motion, etc.); b) data lifecycle management (curation, conversion, quality check, pre-analytic processing, etc.); c) data transformation (data fusion/mashup, analytics); d) capability infrastructure (software tools, platform tools, hardware resources such as storage and networking); e) data usage (processed results in text, table, visual, and other formats); f) all architecture components informed by goals and the use case description; g) Security & Privacy has a direct map 2) Aggregate all specific requirements into high-level generalized requirements which are vendor-neutral and technology-agnostic.
  • 25. Size of Process • The draft use case and requirements report is 264 pages – How much web and how much publication? • 35 General Requirements • 437 Specific Requirements – 8.6 per use case, 12.5 per general requirement • Data Sources: 3 General 78 Specific • Transformation: 4 General 60 Specific • Capability (Infrastructure): 6 General 133 Specific • Data Consumer: 6 General 55 Specific • Security & Privacy: 2 General 45 Specific • Lifecycle: 9 General 43 Specific • Other: 5 General 23 Specific • Not clearly useful – prefer to identify common “structure/kernels”
  • 26. Significant Web Resources • Index to all use cases http://bigdatawg.nist.gov/usecases.php – This links to individual submissions and other processed/collected information • List of specific requirements versus use case http://bigdatawg.nist.gov/uc_reqs_summary.php • List of general requirements versus architecture component http://bigdatawg.nist.gov/uc_reqs_gen.php • List of general requirements versus architecture component with record of use cases giving requirement http://bigdatawg.nist.gov/uc_reqs_gen_ref.php • List of architecture component and specific requirements plus use case constraining this component http://bigdatawg.nist.gov/uc_reqs_gen_detail.php
  • 27. Would like to capture the “essence of these use cases” as “small” kernels (mini-apps), or classify applications into patterns. Do this from an HPC background, not a database viewpoint, e.g. focus on cases with detailed analytics. Section 5 of my class https://bigdatacoursespring2014.appspot.com/preview classifies the 51 use cases with ogre facets.
  • 28. What are “mini-Applications” • Use for benchmarks of computers and software (is my parallel compiler any good?) • In parallel computing, this is well established – Linpack for measuring performance to rank machines in Top500 (changing?) – NAS Parallel Benchmarks (originally a pencil and paper specification to allow optimal implementations; then MPI library) – Other specialized Benchmark sets keep changing and used to guide procurements • Last 2 NSF hardware solicitations had NO preset benchmarks – perhaps as no agreement on key applications for clouds and data intensive applications – Berkeley dwarfs capture different structures that any approach to parallel computing must address – Templates used to capture parallel computing patterns • I’ll let experts comment on database benchmarks like TPC
  • 29. HPC Benchmark Classics • Linpack or HPL: Parallel LU factorization for solution of linear equations • NPB version 1: Mainly classic HPC solver kernels – MG: Multigrid – CG: Conjugate Gradient – FT: Fast Fourier Transform – IS: Integer sort – EP: Embarrassingly Parallel – BT: Block Tridiagonal – SP: Scalar Pentadiagonal – LU: Lower-Upper symmetric Gauss-Seidel
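For readers less familiar with the NPB kernels, the heart of CG is the conjugate gradient iteration below, shown in NumPy on a small random symmetric positive definite matrix rather than the official NPB sparse matrix.

```python
# Conjugate gradient solve of A x = b for symmetric positive definite A.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)     # make it SPD and well conditioned
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```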
  • 30. 7 Original Berkeley Dwarfs (Colella) 1. Structured Grids (including locally structured grids, e.g. Adaptive Mesh Refinement) 2. Unstructured Grids 3. Fast Fourier Transform 4. Dense Linear Algebra 5. Sparse Linear Algebra 6. Particles 7. Monte Carlo • Note these are “vaguer” than the NPB kernels.
  • 31. 13 Berkeley Dwarfs • Dense Linear Algebra • Sparse Linear Algebra • Spectral Methods • N-Body Methods • Structured Grids • Unstructured Grids • MapReduce • Combinational Logic • Graph Traversal • Dynamic Programming • Backtrack and Branch-and-Bound • Graphical Models • Finite State Machines • The first 6 of these correspond to Colella's original list; Monte Carlo was dropped, and N-body methods are a subset of Particles. Note it is a little inconsistent in that MapReduce is a programming model while spectral method is a numerical method. Need multiple facets!
  • 32. Distributed Computing MetaPatterns I Jha, Cole, Katz, Parashar, Rana, Weissman
  • 33. Distributed Computing MetaPatterns II Jha, Cole, Katz, Parashar, Rana, Weissman
  • 34. Distributed Computing MetaPatterns III Jha, Cole, Katz, Parashar, Rana, Weissman
  • 35. Core Analytics Facet of Ogres (microPattern) i. Search/Query ii. Local Machine Learning – pleasingly parallel iii. Summarizing statistics iv. Recommender Systems (Collaborative Filtering) v. Outlier Detection (iORCA) vi. Clustering (many methods) vii. LDA (Latent Dirichlet Allocation) or variants like PLSI (Probabilistic Latent Semantic Indexing) viii. SVM and Linear Classifiers (Bayes, Random Forests) ix. PageRank (find leading eigenvector of sparse matrix) x. SVD (Singular Value Decomposition) xi. Learning Neural Networks (Deep Learning) xii. MDS (Multidimensional Scaling) xiii. Graph Structure Algorithms (seen in search of RDF Triple stores) xiv. Network Dynamics – Graph simulation Algorithms (epidemiology). The slide groups several of these under Matrix Algebra and Global Optimization.
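Item ix shows how small these kernels can be: PageRank is a power iteration for the leading eigenvector of the link matrix. A dense toy version with an assumed four-page link graph:

```python
# PageRank by power iteration; the four-page link graph is a made-up example.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}    # page -> outgoing links
n, d = 4, 0.85                                 # pages, damping factor
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)          # column-stochastic transitions

rank = np.full(n, 1.0 / n)
for _ in range(100):                           # power iteration
    rank = d * (M @ rank) + (1 - d) / n
print(np.round(rank / rank.sum(), 3))          # page 2 dominates, as expected
```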
  • 36. Problem Architecture Facet of Ogres (Meta or MacroPattern) i. Pleasingly Parallel – as in BLAST, protein docking, some (bio-)imagery ii. Local Analytics or Machine Learning – ML or filtering pleasingly parallel, as in bio-imagery, radar images (really just pleasingly parallel but with sophisticated local analytics) iii. Global Analytics or Machine Learning, seen in LDA, Clustering etc., with parallel ML over nodes of the system iv. SPMD (Single Program Multiple Data) v. Bulk Synchronous Processing: well-defined compute-communication phases vi. Fusion: Knowledge discovery often involves fusion of multiple methods. vii. Workflow (often used in fusion)
  • 37. 18: Computational Bioimaging (Healthcare & Life Sciences; largely Local Machine Learning) • Application: Data delivered from bioimaging is increasingly automated, higher resolution, and multi-modal. This has created a data analysis bottleneck that, if resolved, can advance biosciences discovery through Big Data techniques. • Current Approach: The current piecemeal analysis approach does not scale to the situation where a single scan on emerging machines is 32TB and medical diagnostic imaging is annually around 70 PB even excluding cardiology. One needs a web-based one-stop shop for high performance, high throughput image processing for producers and consumers of models built on bio-imaging data. • Futures: The goal is to solve that bottleneck with extreme-scale computing, with community-focused science gateways to support the application of massive data analysis toward massive imaging data sets. Workflow components include data acquisition, storage, enhancement, noise minimization, segmentation of regions of interest, crowd-based selection and extraction of features, object classification, organization, and search. Use ImageJ, OMERO, VolRover, and advanced segmentation and feature detection software.
  • 38. 27: Organizing large-scale, unstructured collections of consumer photos I (Deep Learning and Social Media; Global Machine Learning after initial local steps) • Application: Produce 3D reconstructions of scenes using collections of millions to billions of consumer images, where neither the scene structure nor the camera positions are known a priori. Use the resulting 3D models to allow efficient browsing of large-scale photo collections by geographic position. Geolocate new images by matching to 3D models. Perform object recognition on each image. 3D reconstruction is posed as a robust non-linear least squares optimization problem where observed relations between images are constraints and the unknowns are the 6-D camera pose of each image and the 3-D position of each point in the scene. • Current Approach: Hadoop cluster with 480 cores processing data of initial applications. Note there are over 500 billion images on Facebook and over 5 billion on Flickr, with over 500 million images added to social media sites each day.
  • 39. 27: Organizing large-scale, unstructured collections of consumer photos II (Deep Learning and Social Media; Global Machine Learning after initial local steps) • Futures: Need many analytics, including feature extraction, feature matching, and large-scale probabilistic inference, which appear in many or most computer vision and image processing problems, including recognition, stereo resolution, and image denoising. Need to visualize large-scale 3-D reconstructions, and navigate large-scale collections of images that have been aligned to maps.
  • 40. This Facet of Ogres has Features • These core analytics/kernels can be classified by features like: • (a) Flops per byte; • (b) Communication interconnect requirements; • (c) Is the application (graph) constant or dynamic; • (d) Most applications consist of a set of interconnected entities – is this regular, as a set of pixels, or a complicated irregular graph; • (e) Is communication BSP or asynchronous (in the latter case shared memory may be attractive); • (f) Are algorithms iterative or not; • (g) Are data points in metric or non-metric spaces
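Feature (a) is easy to estimate with back-of-envelope arithmetic. The sketch below contrasts dense matrix multiplication (flops/byte grows with n, so compute-bound) with sparse matrix-vector multiply (flops/byte well under 1, so bandwidth-bound), assuming double-precision values, 4-byte indices, and no cache reuse beyond the obvious.

```python
# Back-of-envelope flops-per-byte (feature (a)) for two kernels.
n = 4096
matmul_flops = 2 * n**3                     # dense C = A @ B
matmul_bytes = 3 * n * n * 8                # read A and B, write C
print("dense matmul flops/byte:", matmul_flops / matmul_bytes)   # n/12, compute-bound

nnz = 10 * n                                # assume ~10 nonzeros per row
spmv_flops = 2 * nnz                        # sparse y = A @ x
spmv_bytes = nnz * (8 + 4) + 2 * n * 8      # values + indices, plus x and y
print("sparse matvec flops/byte:", spmv_flops / spmv_bytes)      # << 1, bandwidth-bound
```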
  • 41. Application Class Facet of Ogres • (a) Search and query • (b) Maximum Likelihood, • (c) χ2 minimizations, • (d) Expectation Maximization (often Steepest descent) • (e) Global Optimization (Variational Bayes) • (f) Agents, as in epidemiology (swarm approaches) • (g) GIS (Geographical Information Systems). • Not as essential
  • 42. Data Source Facet of Ogres • (i) SQL, • (ii) NoSQL based, • (iii) Other enterprise data systems (10 examples from Bob Marcus), • (iv) Set of files (as managed in iRODS), • (v) Internet of Things, • (vi) Streaming, and • (vii) HPC simulations. • Before data gets to the compute system, there is often an initial data-gathering phase which is characterized by a block size and timing. Block size varies from a month (remote sensing, seismic) to a day (genomic) to seconds or lower (real-time control, streaming) • There are storage/compute system styles: Shared, Dedicated, Permanent, Transient • Other characteristics are the need for permanent auxiliary/comparison datasets, and these could be interdisciplinary, implying nontrivial data movement/replication
  • 43. Lessons / Insights • Ogres classify Big Data applications by multiple facets – each with several exemplars and features – a guide to the breadth and depth of Big Data – Does your architecture/software support all the ogres? • Add database exemplars • In parallel computing, the simple analytic kernels dominate mindshare even though they are agreed to be limited
  • 44. HPC-ABDS Integrating High Performance Computing with Apache Big Data Stack
  • 45. Enhanced Apache Big Data Stack ABDS • ~120 Capabilities • >40 Apache • Green layers have strong HPC integration opportunities • Goal: the functionality of ABDS with the performance of HPC
  • 46. Broad Layers in HPC-ABDS • Workflow-Orchestration • Application and Analytics • High level Programming • Basic Programming model and runtime – SPMD, Streaming, MapReduce, MPI • Inter process communication – Collectives, point-to-point, publish-subscribe • In-memory databases/caches • Object-relational mapping • SQL and NoSQL, File management • Data Transport • Cluster Resource Management (Yarn, Slurm, SGE) • File systems (HDFS, Lustre …) • DevOps (Puppet, Chef …) • IaaS Management from HPC to hypervisors (OpenStack) • Cross Cutting – Message Protocols – Distributed Coordination – Security & Privacy – Monitoring
  • 49. Getting High Performance on Data Analytics (e.g. Mahout, R …) • On the systems side, we have two principles – The Apache Big Data Stack with ~120 projects has important broad functionality with a vital large support organization – HPC including MPI has striking success in delivering high performance with however a fragile sustainability model • There are key systems abstractions which are levels in HPC-ABDS software stack where Apache approach needs careful integration with HPC – Resource management – Storage – Programming model -- horizontal scaling parallelism – Collective and Point to Point communication – Support of iteration – Data interface (not just key-value) • In application areas, we define application abstractions to support – Graphs/network – Geospatial – Images etc.
  • 50. Clustering performance across paradigms, with identical computation and differences driven by communication handling: Mahout on Hadoop MapReduce – slow due to MapReduce; Python – slow as scripting; Spark – iterative MapReduce, non-optimal communication; Harp – Hadoop plug-in with ~MPI collectives; MPI – fastest, as C not Java.
  • 51. 4 Forms of MapReduce: (a) Map Only – pleasingly parallel, as in BLAST analysis and parametric sweeps; (b) Classic MapReduce – input, map, reduce, as in High Energy Physics (HEP) histograms and distributed search; (c) Iterative MapReduce – input, iterated map and reduce, as in expectation maximization, clustering (e.g. Kmeans), linear algebra and PageRank; (d) Loosely Synchronous – as in classic MPI, PDE solvers and particle dynamics. The slide brackets (a)–(c) as the domain of MapReduce and iterative extensions, annotated with Science Clouds, MPI and Giraph. MPI is Map followed by Point to Point or Collective Communication – as in style (c) plus (d).
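Style (c) is easiest to see with K-means itself: each iteration is a map (assign every point to its nearest center) followed by a reduce (average each group), with the new centers broadcast into the next iteration. A single-process Python sketch of that structure, not any particular framework's API:

```python
# K-means written in the Iterative MapReduce shape: map keys points by
# nearest center, reduce averages each group, repeat. Illustrative only.
import numpy as np
from collections import defaultdict

def kmeans_iterative_mapreduce(points, centers, iters=10):
    for _ in range(iters):
        # map phase: key each point by its nearest center
        groups = defaultdict(list)
        for p in points:
            k = int(np.argmin(np.linalg.norm(centers - p, axis=1)))
            groups[k].append(p)
        # reduce phase: new center = mean of its group; "broadcast" is the
        # reassignment of centers used by the next iteration
        centers = np.array([np.mean(groups[k], axis=0) if groups[k] else centers[k]
                            for k in range(len(centers))])
    return centers

rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
init = np.array([pts[0], pts[-1]])            # one seed from each blob
print(kmeans_iterative_mapreduce(pts, init))  # centers near (0,0) and (6,6)
```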
• 52. Map-Collective Model (Judy Qiu)
• Generalizes Iterative MapReduce; combines MPI and MapReduce ideas
• Implements collectives optimally on InfiniBand, Azure, Amazon …
• (figure: input → map → generalized reduce, with an initial collective step, a final collective step and an iterate loop)
• (A sketch of the pattern is below.)
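The K-means loop in map-collective form might look like the sketch below, written against the Java bindings that ship with Open MPI (package mpi). loadLocalPartition, initialCenters and nearest are hypothetical helpers; this illustrates the pattern itself, not code from Twister4Azure or Harp.

    // Map-collective K-means: each worker "maps" over its local points, then a
    // single allReduce replaces the whole shuffle/reduce/broadcast chain.
    import mpi.MPI;

    public class MapCollectiveKMeans {
      public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int k = 10, d = 3;                        // illustrative sizes
        double[][] points = loadLocalPartition(); // hypothetical loader
        double[] centers = initialCenters(k, d);  // hypothetical initializer, length k*d
        double[] sums = new double[k * (d + 1)];  // per-center coordinate sums + count
        for (int iter = 0; iter < 50; iter++) {
          java.util.Arrays.fill(sums, 0.0);
          for (double[] p : points) {             // local "map" step
            int c = nearest(p, centers, k, d);    // hypothetical distance search
            for (int j = 0; j < d; j++) sums[c * (d + 1) + j] += p[j];
            sums[c * (d + 1) + d] += 1.0;
          }
          // Collective step: every worker receives the global sums in one operation
          MPI.COMM_WORLD.allReduce(sums, k * (d + 1), MPI.DOUBLE, MPI.SUM);
          for (int c = 0; c < k; c++) {           // update the model identically everywhere
            double n = Math.max(sums[c * (d + 1) + d], 1.0);
            for (int j = 0; j < d; j++) centers[c * d + j] = sums[c * (d + 1) + j] / n;
          }
        }
        MPI.Finalize();
      }
    }

One allReduce replaces the shuffle, the reducer and the broadcast of the next iteration's centroids, leaving every worker with an identical model.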
• 53. Major Analytics Architectures in Use Cases
• Pleasingly Parallel, including local machine learning, as in running in parallel over images and applying image processing to each one – Hadoop
• Search, including collaborative filtering and motif finding, implemented with classic MapReduce (Hadoop) or non-iterative Giraph
• Iterative MapReduce using collective communication (e.g. clustering) – Hadoop with Harp, Spark …
• Iterative Giraph (MapReduce) with point-to-point communication – most graph algorithms, such as maximum clique, connected components, finding the diameter, community detection
– These vary in the difficulty of finding a partitioning (classic parallel load balancing)
• Shared-memory, thread-based (event-driven) graph algorithms – shortest path, betweenness centrality
• (A vertex-centric Giraph sketch follows below.)
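To make the Giraph category concrete, here is the standard vertex-centric formulation of connected components (min-label propagation), sketched against the Giraph 1.x BasicComputation API; treat it as illustrative rather than tuned production code.

    // Connected components as min-label propagation in Giraph's vertex-centric model.
    import org.apache.giraph.graph.BasicComputation;
    import org.apache.giraph.graph.Vertex;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;

    public class ConnectedComponents extends
        BasicComputation<LongWritable, LongWritable, NullWritable, LongWritable> {
      @Override
      public void compute(Vertex<LongWritable, LongWritable, NullWritable> vertex,
                          Iterable<LongWritable> messages) {
        long min = vertex.getValue().get();
        if (getSuperstep() == 0) min = vertex.getId().get(); // start from own id
        for (LongWritable m : messages) min = Math.min(min, m.get());
        if (getSuperstep() == 0 || min < vertex.getValue().get()) {
          vertex.setValue(new LongWritable(min));
          sendMessageToAllEdges(vertex, new LongWritable(min)); // point-to-point to neighbors
        }
        vertex.voteToHalt(); // reactivated only if a smaller label arrives
      }
    }

Each superstep is point-to-point communication along graph edges, which is why partition quality – the load-balancing issue above – dominates performance.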
• 54. HPC-ABDS Hourglass
• High-performance applications at the top, HPC at the base, the HPC-ABDS system (middleware; 120 software projects) in between
• System abstractions/standards at the neck: HPC YARN for resource management; a horizontally scalable parallel programming model; collective and point-to-point communication; support of iteration; data format; storage
• Application abstractions/standards: graphs, networks, images, geospatial …
• SPIDAL (Scalable Parallel Interoperable Data Analytics Library), i.e. a high-performance Mahout, R, Matlab …
• 56. Using Optimal “Collective” Operations
• Twister4Azure Iterative MapReduce with enhanced collectives – a Map-AllReduce primitive and MapReduce-MergeBroadcast
• Strong scaling of K-means on up to 256 Azure cores
• (A rough communication-cost comparison is given below.)
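Why the collective wins, in round numbers (an illustrative back-of-the-envelope model, not measured data): with P map tasks each holding a partial centroid array of M bytes, the reduce-then-broadcast pattern funnels P·M bytes through one reducer before sending M bytes back to every task, so the critical path grows with P. A bandwidth-optimal (ring) AllReduce moves only about 2M(P−1)/P ≈ 2M bytes per worker, independent of P. For K-means the arrays are small (M = 8·K·d bytes for K centers in d dimensions), so removing the serialization point and the extra round trips matters more than raw bandwidth.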
• 57. Collectives improve traditional MapReduce
• This is K-means running within basic Hadoop, but with optimal AllReduce collective operations
• Running on an InfiniBand Linux cluster
• 58. K-means and (Iterative) MapReduce
• (figure: time (s) vs. cores × data points, from 32 × 32M to 256 × 256M, for Hadoop AllReduce, Hadoop MapReduce, Twister4Azure AllReduce, Twister4Azure Broadcast, Twister4Azure, and HDInsight (Azure Hadoop))
• Shaded areas are compute only, where Hadoop on the HPC cluster is fastest
• Areas above the shading are overheads, where Twister4Azure (T4A) is smallest and T4A with the AllReduce collective has the lowest overhead
• Note that even on Azure, Java (orange in the figure) is faster than the T4A C# for compute
• 59. Harp Architecture
• (diagram, bottom to top) Resource manager: YARN; framework: MapReduce V2 with the Harp plug-in; applications: both classic MapReduce applications and Map-Collective applications
• 60. Features of the Harp Hadoop Plug-in
• A Hadoop plug-in (on Hadoop 1.2.1 and Hadoop 2.2.0)
• Hierarchical data abstractions over arrays, key-values and graphs for expressive, easy programming
• A collective communication model supporting various communication operations on those data abstractions
• Caching, with buffer management for the memory allocation required by computation and communication
• BSP-style parallelism
• Fault tolerance with checkpointing
• (An illustrative mapper sketch follows below.)
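A flavor of the programming model, as a sketch only: the class and method names below follow the published Harp tutorials (CollectiveMapper, a Table of DoubleArray partitions, an allreduce call), but they should be read as illustrative of the plug-in style rather than as its exact API; DoubleArrPlus is the tutorial's partition combiner, and the K-means bookkeeping is elided.

    // Harp-style map-collective task: mapCollective() replaces map(), and an
    // allreduce on a Harp table replaces the shuffle/reduce/broadcast chain.
    // Names follow the Harp tutorials but are illustrative here.
    import edu.iu.harp.partition.Table;
    import edu.iu.harp.resource.DoubleArray;
    import org.apache.hadoop.mapreduce.CollectiveMapper;

    public class KMeansCollectiveMapper
        extends CollectiveMapper<String, String, Object, Object> {
      @Override
      protected void mapCollective(KeyValReader reader, Context context)
          throws java.io.IOException, InterruptedException {
        // Table of per-center coordinate sums; DoubleArrPlus merges partitions by addition
        Table<DoubleArray> cenTable = new Table<>(0, new DoubleArrPlus());
        // ... load this worker's points via reader; initialize cenTable ...
        for (int iter = 0; iter < 50; iter++) {
          // ... local map step: accumulate partial centroid sums into cenTable ...
          allreduce("kmeans", "allreduce-" + iter, cenTable); // MPI-quality collective
          // ... recompute centroids from the now-global cenTable ...
        }
      }
    }

The point of the design is that this stays a normal Hadoop job – scheduled by YARN, reading HDFS – while the iteration communicates at collective speed.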
• 61. Performance on the Madrid Cluster (8 nodes)
• (figure: K-means clustering, Harp vs. Hadoop on Madrid – execution time (s) vs. problem size, for 100m points × 500 centers, 10m × 5k and 1m × 50k, each with Hadoop and Harp at 24, 48 and 96 cores)
• Computation is identical in each case, since the product of centers and points is the same; communication increases from left to right
• 62. (repeat of slide 50's figure: K-means performance across frameworks, with identical computation and increasing communication)
• Mahout on Hadoop MapReduce is slow due to MapReduce overheads; Python is slow as a scripting language; Spark provides iterative MapReduce but with non-optimal communication; Harp, a Hadoop plug-in, has ~MPI-quality collectives; MPI is fastest, as C rather than Java
• 63. Performance of MPI Kernel Operations
• (figures: average time (µs) vs. message size, 0 B to 4 MB)
• Send/receive and allreduce: MPI.NET C# on Tempest vs. FastMPJ Java, nightly and trunk Open MPI Java, and trunk Open MPI C on FutureGrid (FG)
• Send/receive and allreduce on InfiniBand and Ethernet: trunk Open MPI C and Java on Madrid and FG
• Pure Java, as in FastMPJ, is slower than Java interfacing to the C version of MPI
• (A minimal timing loop of the kind behind these plots is sketched below.)
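For reference, measurements like these are typically taken with a loop of the following shape; this is a minimal sketch assuming the Java bindings shipped with Open MPI (package mpi), timing one message size rather than sweeping the full range shown in the plots.

    // Time repeated allReduce calls at one message size (8 KB of doubles).
    import mpi.MPI;

    public class AllReduceTiming {
      public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int count = 1024;                 // 1024 doubles = 8 KB, one sample point
        double[] buf = new double[count];
        int reps = 1000;
        for (int i = 0; i < 10; i++) {    // warm-up, excluded from timing
          MPI.COMM_WORLD.allReduce(buf, count, MPI.DOUBLE, MPI.SUM);
        }
        MPI.COMM_WORLD.barrier();         // align ranks before timing
        long t0 = System.nanoTime();
        for (int i = 0; i < reps; i++) {
          MPI.COMM_WORLD.allReduce(buf, count, MPI.DOUBLE, MPI.SUM);
        }
        long t1 = System.nanoTime();
        if (MPI.COMM_WORLD.getRank() == 0) {
          System.out.printf("avg allReduce time: %.2f us%n", (t1 - t0) / 1e3 / reps);
        }
        MPI.Finalize();
      }
    }

Swapping the same loop onto the C bindings is what exposes the Java-vs-C gap summarized in the bullet above.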
• 64. Lessons / Insights
• Integrate (don't compete) HPC with “commodity Big Data” (Google to Amazon to enterprise data analytics)
– i.e. improve Mahout; don't compete with it
– Use Hadoop plug-ins rather than replacing Hadoop
– The enhanced Apache Big Data Stack HPC-ABDS has ~120 members – please improve the list!
• HPC-ABDS integration areas include file systems, cluster resource management, file and object data management, inter-process and thread communication, analytics libraries, workflow and monitoring