Big Data HPC Convergence and a bunch of
other things
JSU/CSET’s BIG DATA | SPRING 2016
Thought Leaders Colloquium
Geoffrey Fox
February 4, 2016
gcf@indiana.edu
http://www.dsc.soic.indiana.edu/, http://spidal.org/ http://hpc-abds.org/kaleidoscope/
Department of Intelligent Systems Engineering
School of Informatics and Computing, Digital Science Center
Indiana University Bloomington
Abstract
• Two major trends in computing systems are the growth in high performance
computing (HPC) with an international exascale initiative, and the big data
phenomenon with an accompanying cloud infrastructure of well publicized dramatic
and increasing size and sophistication. We survey these trends focusing on Big Data
due to its pervasive importance. Then we look at linking these trends together, where
one needs to consider multiple aspects: hardware, software, applications/algorithms
and even broader issues like business model and education. We study in detail a
convergence (of big data and HPC/big simulations) approach for software and
applications/algorithms and show what hardware architectures it suggests. We start
by dividing applications into data plus model components and classifying each
component (whether from Big Data or Big Simulations) in the same way. This leads
to 64 properties divided into 4 views, which are Problem Architecture (Macro
pattern); Execution Features (Micro patterns); Data Source and Style; and finally the
Processing (runtime) View. We discuss convergence software built around HPC-ABDS
(High Performance Computing enhanced Apache Big Data Stack,
http://hpc-abds.org/kaleidoscope/) and show how one can merge Big Data and HPC (Big
Simulation) concepts into a single stack. We give examples of data analytics running
on HPC systems including details on persuading Java to run fast. Some details can
be found at http://dsc.soic.indiana.edu/publications/HPCBigDataConvergence.pdf
Education
Background of the School of Informatics
and Computing SOIC
• The School of Informatics was established in 2000 as the first of
its kind in the United States.
• Computer Science was established in 1971 and became part
of the school in 2005.
• Library and Information Science
was established in 1951 and
became part of the school
in 2013.
• Now named the School of
Informatics and Computing.
• Data Science added January 2014
– Masters now
• Engineering to be added Fall 2016
Data Science Definition from NIST Public Working Group
• Data Science is the extraction of actionable knowledge directly from data
through a process of discovery, or of hypothesis formulation and hypothesis
testing.
• A Data Scientist is a
practitioner who has sufficient
knowledge of the overlapping
regimes of expertise in
business needs, domain
knowledge, analytical skills
and programming expertise to
manage the end-to-end
scientific method process
through each stage in the big
data lifecycle.
See Big Data Definitions in
http://bigdatawg.nist.gov/V1_output_docs.php
Note: this definition misses the library
science part, such as
curation.
Data Science Summary
• We have a strong curriculum
– Online 4-course certificate
– Online/Residential hybrid Masters started Spring 2015
– Adding PhD
• Fall 2015 Data Science total enrollment 178
– 34 Online Certificate
– 82 Online Masters
– 62 Residential Masters
• Spring 2016
– total applicants: 175
– Residential 74 (58) – these figures are admits (accepts)
– Online 60(51)
– Certificate 5(5)
• Note high acceptance rate
• This is “program” not a department
Computational Science
• Computational science has important similarities to data
science but with a simulation rather than data analysis flavor.
• Although a great deal of effort went into it, with meetings and
several academic curricula/programs, it didn't take off
– In my experience not a lot of students were interested and
– The academic job opportunities were not great
• Data science has more jobs; maybe it will do better?
• Can we usefully link these concepts?
• PS both use parallel computing!
• In days gone by, I did research in particle physics
phenomenology which in retrospect was an early form of data
science using models extensively
Some Online Data Science Classes by
Fox
• BDAA: Big Data Applications & Analytics
– Used to be called X-Informatics
– ~40 hours of video mainly discussing applications (the X in
X-Informatics or X-Analytics) in the context of big data and
clouds: https://bigdatacourse.appspot.com/course
• BDOSSP: Big Data Open Source Software and Projects
http://bigdataopensourceprojects.soic.indiana.edu/
– ~27 Hours of video discussing HPC-ABDS and use on
FutureSystems for Big Data software
• Both divided into sections (coherent topics), units (~lectures)
and lessons (5-20 minutes), during which the student is meant to stay
awake
Intelligent Systems
Engineering ISE Structure
The focus is on engineering of
systems of small scale, often mobile
devices that draw upon modern
information technology techniques
including intelligent systems, big
data and user interface design. The
foundation of these devices includes
sensor and detector technologies,
signal processing, and information
and control theory.
End-to-end engineering in 6 areas
(starting Fall 2016).
IU Bloomington is the only university among AAU’s 62 member
institutions that does not have any type of engineering program.
Introduction
What is Big Data
What is Big Simulation
Big Simulations
Computational Fluid Dynamics
Flow in an aircraft engine
Complete model of the Kv1.2 channel.
The atomic model comprises 1,560 amino
acids, 645 lipid molecules, 80,850 water
molecules and ~300 K+ and Cl- ion pairs.
In total, there are more than 350,000 atoms
in the system
The LHC produces some 15 petabytes of data per year of all varieties, with the exact
value depending on the duty factor of the accelerator (which is reduced simply to cut electricity
cost but also by malfunctions of one or more of the many complex systems) and on the
experiments. The raw data produced by experiments is processed on the LHC
Computing Grid, which has some 350,000 Cores arranged in a three level structure.
Tier-0 is CERN itself, Tier 1 are national facilities and Tier 2 are regional systems. For
example one LHC experiment (CMS) has 7 Tier-1 and 50 Tier-2 facilities.
This analysis (raw data → reconstructed data → AOD
and TAGS → Physics) is performed on the multi-tier
LHC Computing Grid. Note that every event can be
analyzed independently, so that many events can be
processed in parallel, with some concentration
operations such as those to gather entries in a
histogram. This implies that both Grid and Cloud
solutions work with this type of data, with Grids
being the only implementation today. Higgs Event
http://grids.ucs.indiana.edu/ptliupages/publications/Where%20does%20all%20the%20data%20come%20from%20v7.pdf
Note the LHC lies in a tunnel 27 kilometres (17 mi) in circumference.
ATLAS Experiment Model:
http://www.quantumdiaries.org/2012/09/07/why-particle-detectors-need-a-trigger/atlasmgg/
http://www.kpcb.com/internet-trends
http://www.genome.gov/images/content/cost_per_genome_oct2015.jpg
Ruh VP Software GE http://fisheritcenter.haas.berkeley.edu/Big_Data/index.html
Online!
We Are Here
Introduction
Infrastructure
http://www.kpcb.com/internet-trends
Note that this NOW translates into smaller
devices; in the PAST it translated into faster devices of
the same form factor
http://www.kpcb.com/internet-trends
My research focus is science Big Data, but the
largest science data (~100 petabytes) is only 0.000025 of the total.
Science should take notice of commodity;
the converse is not clearly true?
Note 7 ZB (7×10^21 bytes) is about a
terabyte (10^12 bytes) for each of the ~7×10^9 people in the world
Amazon Web Services
• Apple accounts for ~10% of AWS usage and will spend $1B on AWS in 2016 while
building its own cloud; Netflix is another major user
• AWS had 30%, Microsoft 12%, IBM 7%, and Google 6% of the global public
cloud market
Top 500 Supercomputers
• The exponential increase is tailing off, but such glitches have been seen
before and "corrected"
• The fastest machine is ~100x the #500 machine and ~0.1 of the sum over all 500
Clouds v Supercomputers
• Clouds and Supercomputers are both collections of computers networked
together in a data center
• Top supercomputers use Intel MIC chips, NVIDIA and AMD accelerators, or IBM Blue Gene
– #3 Sequoia Blue Gene Q at LLNL 16.32 Petaflop/s on the Linpack
benchmark using 98,304 CPU compute chips with 1.6 million processor
cores and 1.6 Petabyte of memory in 96 racks covering an area of about
3,000 square feet
– 7.9 Megawatts power
• Largest (cloud) computing data centers up to 100,000 servers at ~200
watts per CPU chip
• Each of 3 major cloud vendors has ~2 million servers
• Total clouds 100 times performance of largest supercomputer
– Clouds have different networking, I/O and CPU trade-offs than
supercomputers
– Cloud workloads are data oriented and less closely coupled than
supercomputer workloads, but the principles of parallel computing are the same on both
http://www.kpcb.com/internet-trends
IoT: ~100B devices by ~2030
Introduction
Jobs
Job Trends
Big Data is much larger
than data science.
19 May 2015 jobs:
3,475 for "data science"
2,277 for "data scientist"
19,488 for "big data"
7 Dec 2015 jobs:
5,014 for "data science"
2,830 for "data scientist"
22,388 for "big data"
http://www.indeed.com/jobtrends?q=%22Data+science%22%2C+%22data+scientist%22%2C+%22big+data%22%2C&l=
Charts from Jan 6, 2016.
The 25 Hottest Skills of 2015 on LinkedIn (Global):
#1 Cloud Computing
#2 Data Science
http://www.slideshare.net/linkedin/the-25-skills-that-could-get-you-hired-in-2016
Introduction
HPC-ABDS
Data Platforms
Big Data and (Exascale) Simulation Convergence II
Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies
Cross-Cutting Functions:
1) Message and Data Protocols: Avro, Thrift, Protobuf
2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups
3) Security & Privacy: InCommon, Eduroam, OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML, OAuth
4) Monitoring: Ambari, Ganglia, Nagios, Inca
17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad,
Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA),
Jitterbit, Talend, Pentaho, Apatar, Docker Compose, KeystoneML
16) Application and Analytics: Mahout , MLlib , MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, PLASMA MAGMA,
Azure Machine Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL(Intel), Caffe, Torch, Theano, DL4j,
H2O, IBM Watson, Oracle PGX, GraphLab, GraphX, IBM System G, GraphBuilder(Intel), TinkerPop, Parasol, Dream:Lab, Google Fusion Tables,
CINET, NWB, Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree, DC.js, TensorFlow, CNTK
15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud
Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero, OODT,
Agave, Atmosphere
15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq,
Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
14B) Streams: Storm, S4, Samza, Granules, Neptune, Google MillWheel, Amazon Kinesis, LinkedIn, Twitter Heron, Databus, Facebook
Puma/Ptail/Scribe/ODS, Azure Stream Analytics, Floe, Spark Streaming, Flink Streaming, DataTurbine
14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, MR-MPI, Stratosphere (Apache Flink), Reef, Disco,
Hama, Giraph, Pregel, Pegasus, Ligra, GraphChi, Galois, Medusa-GPU, MapGraph, Totem
13) Inter process communication Collectives, point-to-point, publish-subscribe: MPI, HPX-5, Argo BEAST HPX-5 BEAST PULSAR, Harp, Netty,
ZeroMQ, ActiveMQ, RabbitMQ, NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Marionette Collective, Public Cloud: Amazon
SNS, Lambda, Google Pub Sub, Azure Queues, Event Hubs
12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan, VoltDB,
H-Store
12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
12) Extraction Tools: UIMA, Tika
11C) SQL(NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal
Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB, Spark SQL
11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, ZHT, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB,
Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J,
graphdb, Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame
Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm,
Torque, Globus Tools, Pilot Jobs
8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS
Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
6) DevOps: Docker (Machine, Swarm), Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat,
Sahara, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes,
Buildstep, Gitreceive, OpenTOSCA, Winery, CloudML, Blueprints, Terraform, DevOpSlang, Any2Api
5) IaaS Management from HPC to hypervisors: Xen, KVM, QEMU, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula,
Eucalyptus, Nimbus, CloudStack, CoreOS, rkt, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds
Networking: Google Cloud DNS, Amazon Route 53
21 layers, over 350 software packages (January 29, 2016)
Functionality of 21 HPC-ABDS Layers
1) Message Protocols:
2) Distributed Coordination:
3) Security & Privacy:
4) Monitoring:
5) IaaS Management from HPC to hypervisors:
6) DevOps:
7) Interoperability:
8) File systems:
9) Cluster Resource Management:
10) Data Transport:
11) A) File management
B) NoSQL
C) SQL
12) In-memory databases&caches / Object-relational mapping / Extraction Tools
13) Inter process communication Collectives, point-to-point, publish-subscribe, MPI:
14) A) Basic Programming model and runtime, SPMD, MapReduce:
B) Streaming:
15) A) High level Programming:
B) Frameworks
16) Application and Analytics:
17) Workflow-Orchestration:
Here are 21 functionalities (including 11, 14, 15 subparts):
4 cross-cutting at the top,
17 in order of the layered diagram,
starting at the bottom
HPC-ABDS Integrated Software

Layer                       | Big Data ABDS                                     | HPC, Cluster
17. Orchestration           | Crunch, Tez, Cloud Dataflow                       | Kepler, Pegasus, Taverna
16. Libraries               | MLlib/Mahout, R, Python                           | ScaLAPACK, PETSc, Matlab
15A. High Level Programming | Pig, Hive, Drill                                  | Domain-specific Languages
15B. Platform as a Service  | App Engine, BlueMix, Elastic Beanstalk            | XSEDE Software Stack
Languages                   | Java, Erlang, Scala, Clojure, SQL, SPARQL, Python | Fortran, C/C++, Python
14B. Streaming              | Storm, Kafka, Kinesis                             |
13, 14A. Parallel Runtime   | Hadoop, MapReduce                                 | MPI/OpenMP/OpenCL, CUDA, Exascale Runtime
2. Coordination             | Zookeeper                                         |
12. Caching                 | Memcached                                         |
11. Data Management         | Hbase, Accumulo, Neo4J, MySQL                     | iRODS
10. Data Transfer           | Sqoop                                             | GridFTP
9. Scheduling               | Yarn                                              | Slurm
8. File Systems             | HDFS, Object Stores                               | Lustre
1, 11A. Formats             | Thrift, Protobuf                                  | FITS, HDF
5. IaaS                     | OpenStack, Docker                                 | Linux, Bare-metal, SR-IOV
Infrastructure              | CLOUDS                                            | SUPERCOMPUTERS
Java Grande
Revisited on 3 data analytics codes:
Clustering,
Multidimensional Scaling,
Latent Dirichlet Allocation –
all sophisticated algorithms
446K sequences
~100 clusters
Protein Universe Browser for COG Sequences with a
few illustrative biologically identified clusters
Heatmap of Original distances vs 3D
Euclidean Distances
Proteomics (Needleman-Wunsch)
Stock market: Annual Change 2004
y=x is perfection
3D Phylogenetic Tree from WDA SMACOF
July 21 2007 Positions
End 2008 Positions
10 year US Stock daily price time series mapped to 3D (work
in progress)
3400 stocks
Sector Groupings
Java MPI performs better than Threads I
128 24-core Haswell nodes
Default MPI is much worse than threads
Optimized MPI using shared-memory node-based messaging is much better
than threads
Java MPI performs better than Threads II
128 24-core Haswell nodes
200K Dataset Speedup
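As a concrete reference for the optimized Java MPI results above, here is a minimal sketch of a Java MPI allreduce using the OpenMPI Java bindings mentioned later in this talk. The class and method names follow the mpi.* API, but exact signatures can vary with the OpenMPI version, so treat this as illustrative rather than the benchmark code itself.

import mpi.*;

public class AllReduceSketch {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);                                   // start the MPI runtime
        int rank = MPI.COMM_WORLD.getRank();
        int size = MPI.COMM_WORLD.getSize();

        double[] partial = { rank + 1.0 };                // each rank's local contribution (stand-in value)
        double[] total = new double[1];

        // Collective sum across all ranks -- the communication pattern that
        // dominates Map-Collective data analytics such as clustering and MDS.
        MPI.COMM_WORLD.allReduce(partial, total, 1, MPI.DOUBLE, MPI.SUM);

        if (rank == 0)
            System.out.println("Sum over " + size + " ranks = " + total[0]);
        MPI.Finalize();
    }
}

Launched with, e.g., mpirun -n 24 java AllReduceSketch; the node-based shared-memory messaging in the plots is a transport-level OpenMPI optimization and should need no change to this source.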
NIST Big Data Initiative
Led by Chaitan Baru, Bob Marcus, Wo Chang
And
Big Data Application Analysis
NBD-PWG (NIST Big Data Public Working Group)
Subgroups & Co-Chairs
• There were 5 subgroups
– Note the membership was mainly from industry
• Requirements and Use Cases Sub Group
– Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco
• Definitions and Taxonomies SG
– Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD
• Reference Architecture Sub Group
– Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented
Intelligence
• Security and Privacy Sub Group
– Arnab Roy, CSA/Fujitsu; Nancy Landreville, U. MD; Akhil Manchanda, GE
• Technology Roadmap Sub Group
– Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data
Tactics
• See http://bigdatawg.nist.gov/usecases.php
• and http://bigdatawg.nist.gov/V1_output_docs.php
Use Case Template
• 26 fields completed for 51 apps
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
• Now an online form
Online Use Case Form: http://hpc-abds.org/kaleidoscope/survey/
51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as 3 V’s, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation(4): National Archives and Records Administration, Census Bureau
• Commercial(8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search,
Digital Materials, Cargo shipping (as in UPS)
• Defense(3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences(10): Medical records, Graph and Probabilistic analysis,
Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media(6): Driving Car, Geolocate images/cameras, Twitter, Crowd
Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research(4): Metadata, Collaboration, Language Translation, Light source
experiments
• Astronomy and Physics(5): Sky Surveys including comparison to simulation, Large Hadron
Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science(10): Radar Scattering in Atmosphere, Earthquake,
Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation
datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to
watersheds), AmeriFlux and FLUXNET gas sensors
• Energy(1): Smart grid
26 features are recorded for each use case (biased to science)
Features and Examples
51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or subjects of application and often both
• Decision makers like researchers or doctors (users of application)
• Items such as Images, EMR, Sequences below; observations or contents of online
store
– Images or “Electronic Information nuggets”
– EMR: Electronic Medical Records (often similar to people parallelism)
– Protein or Gene Sequences;
– Material properties, Manufactured Object specifications, etc., in custom dataset
– Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope or credit card data or atmosphere
• (Complex) Nodes in RDF Graph
• Simple nodes as in a learning network
• Tweets, Blogs, Documents, Web Pages, etc.
– And characters/words in them
• Files or data to be backed up, moved or assigned metadata
• Particles/cells/mesh points as in parallel simulations
Features of 51 Use Cases I
• PP (26) “All” Pleasingly Parallel or Map Only
• MR (18) Classic MapReduce MR (add MRStat below for full count)
• MRStat (7) Simple version of MR where the key computation is a simple
reduction, as found in statistical averages such as histograms and
means
• MRIter (23) Iterative MapReduce or MPI (Spark, Twister)
• Graph (9) Complex graph data structure needed in analysis
• Fusion (11) Integrate diverse data to aid discovery/decision making;
could involve sophisticated algorithms or could just be a portal
• Streaming (41) Some data comes in incrementally and is processed
this way
• Classify (30) Classification: divide data into categories
• S/Q (12) Index, Search and Query
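To make the MRStat category concrete, here is a minimal single-node sketch of the pattern using Java parallel streams: an independent map over data items followed by a simple statistical reduction (a histogram and a mean). The random data is a stand-in, not one of the 51 use cases; Hadoop would distribute the same two phases across a cluster.

import java.util.Map;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.DoubleStream;

public class MRStatSketch {
    public static void main(String[] args) {
        double[] data = new Random(42).doubles(1_000_000).toArray(); // stand-in data

        // Map phase: each value is processed independently (pleasingly parallel).
        // Reduce phase: a simple statistical reduction -- here a 10-bin histogram.
        Map<Integer, Long> histogram = DoubleStream.of(data).parallel()
                .mapToObj(x -> (int) (x * 10))   // map: value -> bin key
                .collect(Collectors.groupingBy(b -> b, Collectors.counting()));

        double mean = DoubleStream.of(data).parallel().average().orElse(0.0);
        System.out.println("bins = " + histogram + ", mean = " + mean);
    }
}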
Features of 51 Use Cases II
• CF (4) Collaborative Filtering for recommender engines
• LML (36) Local Machine Learning (Independent for each parallel entity) –
application could have GML as well
• GML (23) Global Machine Learning: Deep Learning, Clustering, LDA, PLSI,
MDS,
– Large-Scale Optimizations as in Variational Bayes, MCMC, Lifted Belief
Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can be
called EGO, or Exascale Global Optimization, with scalable parallel algorithms
• Workflow (51) Universal
• GIS (16) Geotagged data and often displayed in ESRI, Microsoft Virtual
Earth, Google Earth, GeoServer etc.
• HPC(5) Classic large-scale simulation of cosmos, materials, etc. generating
(visualization) data
• Agent (2) Simulations of models of data-defined macroscopic entities
represented as agents
Local and Global Machine Learning
• Many applications use LML or Local machine Learning where machine
learning (often from R) is run separately on every data item such as on every
image
• But others are GML Global Machine Learning where machine learning is a
single algorithm run over all data items (over all nodes in computer)
– maximum likelihood or χ² with a sum over the N data items –
documents, sequences, items to be sold, images etc. and often links
(point-pairs).
– Graph analytics is typically GML
• Covering clustering/community detection, mixture models, topic
determination, Multidimensional scaling, (Deep) Learning Networks
• PageRank is “just” parallel linear algebra
• Note many Mahout algorithms are sequential – partly because MapReduce is
limiting; partly because the parallelism is unclear
– MLlib (Spark based) is better
• SVM and Hidden Markov Models do not use large scale parallelization in
practice?
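A minimal sketch of the LML/GML distinction, with stand-in data and a toy model (both hypothetical): LML runs an independent computation per item, while GML evaluates one global objective – here a χ²-style sum of squared residuals over all items – which on a cluster becomes a global reduction every iteration.

import java.util.stream.IntStream;

public class LmlVsGml {
    // LML: an independent analysis runs on each data item (e.g. each image);
    // no communication between items is needed.
    static double localAnalysis(double[] item) {
        double s = 0;
        for (double v : item) s += v * v;
        return Math.sqrt(s);                               // stand-in per-item computation
    }

    public static void main(String[] args) {
        double[][] items = new double[10000][64];          // stand-in dataset
        double[] model = new double[64];                   // stand-in global model

        // LML: pleasingly parallel map over items.
        double[] perItem = IntStream.range(0, items.length).parallel()
                .mapToDouble(i -> localAnalysis(items[i])).toArray();

        // GML: one chi^2-like objective summed over ALL items; in a cluster
        // this sum becomes a global reduction (e.g. MPI allreduce) per iteration.
        double chi2 = IntStream.range(0, items.length).parallel()
                .mapToDouble(i -> {
                    double r = 0;
                    for (int j = 0; j < model.length; j++) {
                        double d = items[i][j] - model[j];
                        r += d * d;
                    }
                    return r;
                }).sum();
        System.out.println("items = " + perItem.length + ", chi^2 = " + chi2);
    }
}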
13 Image-based Use Cases
• 13-15 Military Sensor Data Analysis/ Intelligence PP, LML, GIS, MR
• 7:Pathology Imaging/ Digital Pathology: PP, LML, MR for search
becoming terabyte 3D images, Global Classification
• 18&35: Computational Bioimaging (Light Sources): PP, LML Also
materials
• 26: Large-scale Deep Learning: GML Stanford ran 10 million images and
11 billion parameters on a 64 GPU HPC; vision (drive car), speech, and
Natural Language Processing
• 27: Organizing large-scale, unstructured collections of photos: GML
Fit position and camera direction to assemble 3D photo ensemble
• 36: Catalina Real-Time Transient Synoptic Sky Survey (CRTS): PP,
LML followed by classification of events (GML)
• 43: Radar Data Analysis for CReSIS Remote Sensing of Ice Sheets: PP,
LML to identify glacier beds; GML for full ice-sheet
• 44: UAVSAR Data Processing, Data Product Delivery, and Data
Services: PP to find slippage from radar images
• 45, 46: Analysis of Simulation visualizations: PP LML ?GML find paths,
classify orbits, classify patterns that signal earthquakes, instabilities,
climate, turbulence
Internet of Things and Streaming Apps
• It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco)
billion devices on the Internet by 2020.
• The cloud is the natural controller of, and resource provider for, the Internet of
Things.
• Smart phones/watches, Wearable devices (Smart People), “Intelligent River”
“Smart Homes and Grid” and “Ubiquitous Cities”, Robotics.
• Majority of use cases are streaming – experimental science gathers data in
a stream – sometimes batched as in a field trip. Below is a sample:
• 10: Cargo Shipping Tracking as in UPS, Fedex PP GIS LML
• 13: Large Scale Geospatial Analysis and Visualization PP GIS LML
• 28: Truthy: Information diffusion research from Twitter Data PP MR for
Search, GML for community determination
• 39: Particle Physics: Analysis of LHC Large Hadron Collider Data:
Discovery of Higgs particle PP for event Processing, Global statistics
• 50: DOE-BER AmeriFlux and FLUXNET Networks PP GIS LML
• 51: Consumption forecasting in Smart Grids PP GIS LML
Big Data and Big Simulations
Patterns – the Convergence
Diamonds
Big Data - Big Simulation (Exascale) Convergence
• Let's distinguish Data and Model (e.g. machine learning
analytics) in Big data problems
• Then almost always Data is large but Model varies
– E.g. LDA with many topics or deep learning has large model
– Clustering or Dimension reduction can be quite small
• Simulations can also be considered as Data and Model
– Model is solving particle dynamics or partial differential
equations
– Data could be small when just boundary conditions or
– Data large with data assimilation (weather forecasting) or
when data visualizations produced by simulation
• Data often static between iterations (unless streaming), model
varies between iterations
Classifying Big Data and Big Simulation Applications
• “Benchmarks” “kernels” “algorithm” “mini-apps” can serve multiple
purposes
• Motivate hardware and software features
– e.g. collaborative filtering algorithm parallelizes well with MapReduce
and suggests using Hadoop on a cloud
– e.g. deep learning on images dominated by matrix operations; needs
CUDA&MPI and suggests HPC cluster
• Benchmark sets are designed to cover the key features and sizes of
"important" applications
• Take the 51 use cases → derive specific features; each use case has multiple
features
• Generalize and systematize with features termed “facets”
• 50 Facets (Big Data) or 64 Facets (Big Simulation and Data) divided
into 4 sets or views where each view has “similar” facets
– Allow one to study coverage of benchmark sets
• We discuss Data and Model together, as applications are built around problems which combine
them, but we can gain insight by separating them, and this allows a better
understanding of the Big Data - Big Simulation "convergence"
7 Computational Giants of
NRC Massive Data Analysis Report
1) G1: Basic Statistics e.g. MRStat
2) G2: Generalized N-Body Problems
3) G3: Graph-Theoretic Computations
4) G4: Linear Algebraic Computations
5) G5: Optimizations e.g. Linear Programming
6) G6: Integration e.g. LDA and other GML
7) G7: Alignment Problems e.g. BLAST
http://www.nap.edu/catalog.php?record_id=18374 Big Data Models?
HPC (Simulation) Benchmark Classics
• Linpack or HPL: Parallel LU factorization
for solution of linear equations
• NPB version 1: Mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient
– FT: Fast Fourier Transform
– IS: Integer sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss Seidel
Simulation Models
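The Linpack/HPL entry above is LU factorization; as a reference point, here is a serial sketch of the unblocked Doolittle algorithm without pivoting. HPL itself adds partial pivoting, blocking and a 2-D block-cyclic distribution, so this shows only the numerical core.

public class LuSketch {
    // In-place LU factorization (Doolittle, NO pivoting -- will fail on a zero
    // pivot): afterwards A holds U on and above the diagonal and the
    // multipliers of L below it.
    static void lu(double[][] a) {
        int n = a.length;
        for (int k = 0; k < n; k++) {
            for (int i = k + 1; i < n; i++) {
                a[i][k] /= a[k][k];                  // L multiplier
                for (int j = k + 1; j < n; j++)
                    a[i][j] -= a[i][k] * a[k][j];    // update trailing submatrix
            }
        }
    }
}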
13 Berkeley Dwarfs
1) Dense Linear Algebra
2) Sparse Linear Algebra
3) Spectral Methods
4) N-Body Methods
5) Structured Grids
6) Unstructured Grids
7) MapReduce
8) Combinational Logic
9) Graph Traversal
10) Dynamic Programming
11) Backtrack and
Branch-and-Bound
12) Graphical Models
13) Finite State Machines
The first 6 of these correspond to Colella's
original list (classic simulations);
Monte Carlo was dropped.
N-body methods are a subset of
Particle in Colella.
Note the list is a little inconsistent in that
MapReduce is a programming model
and spectral method is a numerical
method.
We need multiple facets!
Largely Models for Data or Simulation

[Diagram: 4 Ogre Views and 50 Facets]
• Problem Architecture View (Macro patterns): Pleasingly Parallel; Classic MapReduce;
Map-Collective; Map Point-to-Point; Map Streaming; Shared Memory; Single Program
Multiple Data; Bulk Synchronous Parallel; Fusion; Dataflow; Agents; Workflow
• Execution View (Micro patterns): Performance Metrics; Flops per Byte / Memory I/O;
Execution Environment and Core Libraries; Volume; Velocity; Variety; Veracity;
Communication Structure; Data Abstraction; Metric=M/Non-Metric=N;
O(N²)=NN/O(N)=N; Regular=R/Irregular=I; Dynamic=D/Static=S; Iterative/Simple
• Data Source and Style View: SQL/NoSQL/NewSQL; Enterprise Data Model;
Files/Objects; HDFS/Lustre/GPFS; Archived/Batched/Streaming;
Shared/Dedicated/Transient/Permanent; Metadata/Provenance; Internet of Things;
HPC Simulations; Geospatial Information System
• Processing View: Micro-benchmarks; Local Analytics; Global Analytics; Base
Statistics; Recommendations; Search/Query/Index; Classification; Learning;
Optimization Methodology; Streaming; Alignment; Linear Algebra Kernels; Graph
Algorithms; Visualization
[Diagram: Convergence Diamonds – Views and Facets]
• Problem Architecture View (nearly all Data+Model): the same 12 macro-pattern facets
as the Ogres – Pleasingly Parallel; Classic MapReduce; Map-Collective; Map
Point-to-Point; Map Streaming; Shared Memory; Single Program Multiple Data; Bulk
Synchronous Parallel; Fusion; Dataflow; Agents; Workflow
• Data Source and Style View (nearly all Data): SQL/NoSQL/NewSQL; Enterprise Data
Model; Files/Objects; HDFS/Lustre/GPFS; Archived/Batched/Streaming (S1-S5);
Shared/Dedicated/Transient/Permanent; Metadata/Provenance; Internet of Things;
HPC Simulations; Geospatial Information System
• Execution View (mix of Data and Model): Performance Metrics; Flops per Byte /
Memory I/O / Flops per watt; Execution Environment and Core Libraries; Data Volume
(D) and Model Size (M); Data Velocity (D); Data Variety (D) and Model Variety (M);
Veracity; Model Communication Structure (M); Dynamic=D/Static=S for Data and for
Model; Regular=R/Irregular=I for Data and for Model; Iterative/Simple (M); Data
Abstraction (D) and Model Abstraction (M); Metric=M/Non-Metric=N for Data and for
Model; O(N²)=NN/O(N)=N (M)
• Big Data Processing View (all Model): Pr-1M Micro-benchmarks; Pr-2M Local
Analytics/Informatics/Simulations; Pr-3M Global Analytics/Informatics/Simulations;
Pr-4M Base Data Statistics; Pr-5M Recommender Engine; Pr-6M Data
Search/Query/Index; Pr-7M Data Classification; Pr-8M Learning; Pr-9M Optimization
Methodology; Pr-10M Streaming Data Algorithms; Pr-11M Data Alignment; Pr-12M
Linear Algebra Kernels (many subclasses); Pr-13M Graph Algorithms; Pr-14M
Visualization; Pr-15M Core Libraries
• Simulation (Exascale) Processing View (all Model): Pr-16M Iterative PDE Solvers;
Pr-17M Multiscale Method; Pr-18M Spectral Methods; Pr-19M N-body Methods;
Pr-20M Particles and Fields; Pr-21M Evolution of Discrete Systems; Pr-22M Nature of
mesh if used
Dwarfs and Ogres give
Convergence Diamonds
• Macropatterns or Problem Architecture View:
Unchanged
• Execution View: Significant changes to separate Data
and Model and add characteristics of Simulation models
• Data Source and Style View: Same for Ogres and
Diamonds – present but less important for Simulations
compared to big data
• The Processing View is a mix of the Big Data Processing View
and a Big Simulation Processing View; it includes
some facets like "uses linear algebra" needed in both, plus
specifics of key simulation kernels – it covers the
NAS Parallel Benchmarks and Berkeley Dwarfs
Facets of the Convergence
Diamonds
Problem Architecture
Meta or Macro Aspects of Diamonds
Valid for Big Data or Big Simulations as describes Problem
which is Model-Data combination
Problem Architecture View (Meta or MacroPatterns)
i. Pleasingly Parallel – as in BLAST, protein docking, some (bio-)imagery; includes
Local Analytics or Machine Learning – ML or filtering that is pleasingly parallel, as in bio-
imagery and radar images (pleasingly parallel but with sophisticated local analytics)
ii. Classic MapReduce: Search, Index and Query and Classification algorithms like
collaborative filtering (G1 for MRStat in Features, G7)
iii. Map-Collective: Iterative maps + communication dominated by “collective” operations
as in reduction, broadcast, gather, scatter. Common datamining pattern
iv. Map-Point to Point: Iterative maps + communication dominated by many small point to
point messages as in graph algorithms
v. Map-Streaming: Describes streaming, steering and assimilation problems
vi. Shared Memory: Some problems are asynchronous and are easier to parallelize on
shared rather than distributed memory – see some graph algorithms
vii. SPMD: Single Program Multiple Data, common parallel programming feature
viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases
ix. Fusion: Knowledge discovery often involves fusion of multiple methods.
x. Dataflow: Important application features often occurring in composite Ogres
xi. Use Agents: as in epidemiology (swarm approaches) This is Model only
xii. Workflow: All applications often involve orchestration (workflow) of multiple
components
11 of 12 are properties of Data+Model
Relation of Problem and Machine Architecture
• Problem is Model plus Data
• In my old papers (especially book Parallel Computing Works!), I discussed
computing as multiple complex systems mapped into each other
Problem → Numerical formulation → Software → Hardware
• Each of these 4 systems has an architecture that can be described in
similar language
• One gets an easy programming model if architecture of problem matches
that of Software
• One gets good performance if architecture of hardware matches that of
software and problem
• So “MapReduce” can be used as architecture of software (programming
model) or “Numerical formulation of problem”
6 Forms of MapReduce cover "all" circumstances.
Describes the architecture of:
- Problem (Model reflecting data)
- Machine
- Software
Data Analysis Problem Architectures
• 1) Pleasingly Parallel PP or "map-only" in MapReduce
– BLAST Analysis; Local Machine Learning
• 2A) Classic MapReduce MR: Map followed by reduction
– High Energy Physics (HEP) Histograms; Web search; Recommender Engines
• 2B) Simple version of classic MapReduce MRStat
– Final reduction is just simple statistics
• 3) Iterative MapReduce MRIter
– Expectation Maximization, Clustering, Linear Algebra, PageRank
• 4A) Map Point-to-Point Communication
– Classic MPI; PDE Solvers and Particle Dynamics; Graph processing (Graph)
• 4B) GPU (Accelerator) enhanced 4A) – especially for deep learning
• 5) Map + Streaming + some sort of Communication
– Images from Synchrotron sources; Telescopes; Internet of Things IoT
– Apache Storm is (Map + Dataflow) + Streaming
– Data assimilation is (Map + Point-to-Point Communication) + Streaming
• 6) Shared memory allowing parallel threads, which are tricky to program but
lower latency
– Difficult to parallelize asynchronous parallel Graph Algorithms
Diamond Facets
Execution Features View
Many similar Features for Big Data and
Simulations
View for Micropatterns or Execution Features
i. Performance Metrics; property found by benchmarking Diamond
ii. Flops per byte; memory or I/O
iii. Execution Environment; Core libraries needed: matrix-matrix/vector algebra, conjugate
gradient, reduction, broadcast; Cloud, HPC etc.
iv. Volume: property of a Diamond instance: a) Data Volume and b) Model Size
v. Velocity: qualitative property of Diamond with value associated with instance. Only Data
vi. Variety: important property especially of composite Diamonds; Data and Model separately
vii. Veracity: important property of applications but not kernels;
viii. Model Communication Structure; Interconnect requirements; Is communication BSP,
Asynchronous, Pub-Sub, Collective, Point to Point?
ix. Is Data and/or Model (graph) static or dynamic?
x. Many Data and/or Models consist of a set of interconnected entities; is this regular, as a set
of pixels, or is it a complicated irregular graph?
xi. Are Models Iterative or not?
xii. Data Abstraction: key-value, pixel, graph(G3), vector, bags of words or items; Model can
have same or different abstractions e.g. mesh points, finite element, Convolutional Network
xiii. Are data points in metric or non-metric spaces? Data and Model separately?
xiv. Is the Model algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2)?
Comparison of Data Analytics with Simulation I
• Simulations produce big data as visualization of results – they are data
source
– Or consume often smallish data to define a simulation problem
– HPC simulation in weather data assimilation is data + model
• Pleasingly parallel often important in both
• Both are often SPMD and BSP
• Non-iterative MapReduce is major big data paradigm
– not a common simulation paradigm except where “Reduce” summarizes
pleasingly parallel execution as in Some Monte Carlos
• Big Data often has large collective communication
– Classic simulation has a lot of smallish point-to-point messages
• Simulations are often characterized by difference or differential operators
• Simulations dominantly use sparse (nearest-neighbor) data structures
– Some important data analytics involves full matrix algorithm but
– “Bag of words (users, rankings, images..)” algorithms are sparse, as is
PageRank
“Force Diagrams” for macromolecules and
Facebook
Comparison of Data Analytics with Simulation II
• There are similarities between some graph problems and particle
simulations with a strange cutoff force.
– Both Map-Communication
• Note many big data problems are “long range force” (as in gravitational
simulations) as all points are linked.
– Easiest to parallelize. Often full matrix algorithms
– e.g. in DNA sequence studies, distance (i, j) defined by BLAST, Smith-
Waterman, etc., between all sequences i, j.
– Opportunity for “fast multipole” ideas in big data. See NRC report
• In image-based deep learning, neural network weights are block sparse
(corresponding to links to pixel blocks) but can be formulated as full matrix
operations on GPUs and MPI in blocks.
• In HPC benchmarking, Linpack is being challenged by a new sparse conjugate
gradient benchmark HPCG, while I am diligently using non-sparse
conjugate gradient solvers in clustering and multi-dimensional scaling.
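Since the slide above contrasts HPCG's sparse conjugate gradient with the non-sparse CG used here in clustering and MDS, this is a serial sketch of the CG kernel on a dense matrix. The names and serial structure are mine, not the SPIDAL code; in the parallel codes the matrix-vector product is distributed and the dot products become global reductions.

public class DenseCG {
    // Solve A x = b for symmetric positive-definite A by conjugate gradient.
    static double[] solve(double[][] A, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();                   // residual r = b - A*0
        double[] p = r.clone();                   // search direction
        double rsOld = dot(r, r);
        for (int it = 0; it < maxIter && Math.sqrt(rsOld) > tol; it++) {
            double[] Ap = matVec(A, p);           // the full (non-sparse) mat-vec
            double alpha = rsOld / dot(p, Ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rsNew = dot(r, r);
            for (int i = 0; i < n; i++) p[i] = r[i] + (rsNew / rsOld) * p[i];
            rsOld = rsNew;
        }
        return x;
    }
    static double dot(double[] a, double[] b) {   // in parallel: a global reduction
        double s = 0; for (int i = 0; i < a.length; i++) s += a[i] * b[i]; return s;
    }
    static double[] matVec(double[][] A, double[] v) {
        double[] y = new double[v.length];
        for (int i = 0; i < A.length; i++) {
            double s = 0; for (int j = 0; j < v.length; j++) s += A[i][j] * v[j];
            y[i] = s;
        }
        return y;
    }
}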
Convergence Diamond Facets
Big Data and Big Simulation
Processing View
All Model Properties but differences
between Big Data and Big Simulation
Diamond Facets in Processing (runtime) View I
used in Big Data and Big Simulation
• Pr-1M Micro-benchmarks: Ogres that exercise simple features of hardware
such as communication, disk I/O, CPU, and memory performance
• Pr-2M Local Analytics executed on a single core or perhaps node
• Pr-3M Global Analytics requiring iterative programming models (G5,G6)
across multiple nodes of a parallel system
• Pr-12M Uses Linear Algebra common in Big Data and simulations
– Subclasses like Full Matrix
– Conjugate Gradient, Krylov, Arnoldi iterative subspace methods
– Structured and unstructured sparse matrix methods
• Pr-13M Graph Algorithms (G3) Clear important class of algorithms -- as
opposed to vector, grid, bag of words etc. – often hard especially in parallel
• Pr-14M Visualization is key application capability for big data and
simulations
• Pr-15M Core Libraries Functions of general value such as Sorting, Math
functions, Hashing
Diamond Facets in Processing (runtime) View II
used in Big Data
• Pr-4M Basic Statistics (G1): MRStat in NIST problem features
• Pr-5M Recommender Engine: core to many e-commerce, media businesses;
collaborative filtering key technology
• Pr-6M Search/Query/Index: Classic database which is well studied (Baru, Rabl
tutorial)
• Pr-7M Data Classification: assigning items to categories based on many methods
– MapReduce good in Alignment, Basic statistics, S/Q/I, Recommender, Classification
• Pr-8M Learning of growing importance due to Deep Learning success in speech
recognition etc..
• Pr-9M Optimization Methodology: overlapping categories including
– Machine Learning, Nonlinear Optimization (G6), Maximum Likelihood or χ² least-
squares minimizations, Expectation Maximization (often Steepest descent),
Combinatorial Optimization, Linear/Quadratic Programming (G5), Dynamic
Programming
• Pr-10M Streaming Data or online Algorithms. Related to DDDAS (Dynamic Data-
Driven Application Systems)
• Pr-11M Data Alignment (G7) as in BLAST compares samples with repository
Diamond Facets in Processing (runtime) View III
used in Big Simulation
• Pr-16M Iterative PDE Solvers: Jacobi, Gauss Seidel etc.
• Pr-17M Multiscale Method: Multigrid and other variable
resolution approaches
• Pr-18M Spectral Methods as in Fast Fourier Transform
• Pr-19M N-body Methods as in Fast multipole, Barnes-Hut
• Pr-20M Both Particles and Fields as in Particle in Cell method
• Pr-21M Evolution of Discrete Systems as in simulation of
Electrical Grids, Chips, Biological Systems, Epidemiology.
Needs Ordinary Differential Equation solvers
• Pr-22M Nature of Mesh if used: Structured, Unstructured,
Adaptive
Covers NAS Parallel Benchmarks and Berkeley Dwarfs
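As a minimal illustration of facet Pr-16M, here is a sketch of Jacobi iteration for the 2D Laplace equation on a structured grid (Pr-22M), with an arbitrary hot boundary as the stand-in problem. In an HPC run the grid is partitioned across nodes and each sweep exchanges halo rows with neighbours – the Map Point-to-Point pattern.

public class Jacobi2D {
    // One Jacobi sweep for the 2D Laplace equation with fixed (Dirichlet)
    // boundary values: each interior point becomes the average of its four
    // neighbours; returns the largest change for a convergence test.
    static double sweep(double[][] u, double[][] uNew) {
        double maxDiff = 0.0;
        for (int i = 1; i < u.length - 1; i++)
            for (int j = 1; j < u[0].length - 1; j++) {
                uNew[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1]);
                maxDiff = Math.max(maxDiff, Math.abs(uNew[i][j] - u[i][j]));
            }
        return maxDiff;
    }

    public static void main(String[] args) {
        int n = 64;
        double[][] u = new double[n][n], uNew = new double[n][n];
        for (int j = 0; j < n; j++) u[0][j] = uNew[0][j] = 1.0;  // hot top boundary (stand-in)
        double diff;
        do {
            diff = sweep(u, uNew);
            double[][] t = u; u = uNew; uNew = t;                // swap buffers
        } while (diff > 1e-5);
        System.out.println("converged, centre value = " + u[n / 2][n / 2]);
    }
}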
Facets of the Ogres
Data Source and Style Aspects
add streaming from Processing view here
Present but often less important for
Simulations (that use and produce data)
Data Source and Style Diamond View I
i. SQL NewSQL or NoSQL: NoSQL includes Document,
Column, Key-value, Graph, Triple store; NewSQL is SQL redone to exploit
NoSQL performance
ii. Other Enterprise data systems: 10 examples from NIST integrate
SQL/NoSQL
iii. Set of Files or Objects: as managed in iRODS and extremely common in
scientific research
iv. File systems, Object, Blob and Data-parallel (HDFS) raw storage:
Separated from computing or colocated? HDFS v Lustre v. Openstack
Swift v. GPFS
v. Archive/Batched/Streaming: Streaming is incremental update of datasets
with new algorithms to achieve real-time response (G7); Before data gets
to compute system, there is often an initial data gathering phase which is
characterized by a block size and timing. Block size varies from month
(Remote Sensing, Seismic) to day (genomic) to seconds or lower (Real
time control, streaming)
• Streaming divided into categories overleaf
Data Source and Style Diamond View II
• Streaming divided into 5 categories depending on event size and
synchronization and integration
• Set of independent events where precise time sequencing unimportant.
• Time series of connected small events where time ordering important.
• Set of independent large events where each event needs parallel processing with time sequencing not
critical
• Set of connected large events where each event needs parallel processing with time sequencing critical.
• Stream of connected small or large events to be integrated in a complex way.
vi. Shared/Dedicated/Transient/Permanent: qualitative property of data; Other
characteristics are needed for permanent auxiliary/comparison datasets and these
could be interdisciplinary, implying nontrivial data movement/replication
vii. Metadata/Provenance: Clear qualitative property but not for kernels as important
aspect of data collection process
viii. Internet of Things: 24 to 50 Billion devices on Internet by 2020
ix. HPC simulations: generate major (visualization) output that often needs to be
mined
x. Using GIS: Geographical Information Systems provide attractive access to
geospatial data
2. Perform real-time analytics on data source
streams and notify users when specified
events occur.
Technologies: Storm, Kafka, Hbase, Zookeeper.
[Diagram: streaming data passes through a filter identifying events against a
user-specified filter; identified events are posted to notify users, selected events
are archived to a repository, and streamed data can be fetched from the archive.]
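A plain-Java sketch of the pattern in this use case – filter a stream, notify on matches, archive selected events – using a blocking queue as a stand-in for Kafka topics and a consumer loop as a stand-in for a Storm bolt. All names and the threshold filter are hypothetical; a record type is used, so Java 16+ is assumed.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Predicate;

public class EventFilterSketch {
    record Event(long time, String source, double value) {}     // stand-in event type

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Event> stream = new LinkedBlockingQueue<>();
        Predicate<Event> filter = e -> e.value() > 0.9;          // user-specified filter

        // Producer: stand-in for Kafka topics feeding the system.
        new Thread(() -> {
            for (int i = 0; i < 1000; i++)
                stream.offer(new Event(i, "sensor-" + (i % 3), Math.random()));
        }).start();

        // Consumer: stand-in for a Storm bolt that identifies events,
        // notifies users, and posts selected events to a repository.
        List<Event> archive = new ArrayList<>();
        for (int n = 0; n < 1000; n++) {
            Event e = stream.take();
            if (filter.test(e)) {
                System.out.println("NOTIFY: " + e);              // notify user
                archive.add(e);                                  // post to repository
            }
        }
        System.out.println("archived " + archive.size() + " events");
    }
}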
5. Perform interactive analytics on data in an
analytics-optimized database.
Technologies: Hadoop, Spark, Giraph, Pig, Mahout, R;
Data Storage: HDFS, Hbase;
inputs are data, streaming, batch, etc.
5A. Perform interactive analytics on
observational scientific data.
Technologies: Grid or Many Task Software, Hadoop, Spark, Giraph, Pig;
Science Analysis Code, Mahout, R;
Data Storage: HDFS, Hbase, File Collection.
[Diagram: scientific data is recorded in the "field" with local accumulation and
initial computing, then transported – as batches or by direct transfer, e.g.
streaming Twitter data for social networking – to the primary analysis data
system. NIST examples include LHC, Remote Sensing, Astronomy and
Bioinformatics.]
Benchmarks and Ogres
Benchmarks/Mini-apps spanning Facets
• Look at NSF SPIDAL Project, NIST 51 use cases, Baru-Rabl review
• Catalog facets of benchmarks and choose entries to cover “all facets”
• Micro Benchmarks: SPEC, EnhancedDFSIO (HDFS), Terasort,
Wordcount, Grep, MPI, Basic Pub-Sub ….
• SQL and NoSQL Data systems, Search, Recommenders: TPC (-C to x–
HS for Hadoop), BigBench, Yahoo Cloud Serving, Berkeley Big Data,
HiBench, BigDataBench, Cloudsuite, Linkbench
– includes MapReduce cases Search, Bayes, Random Forests, Collaborative Filtering
• Spatial Query: select from image or earth data
• Alignment: Biology as in BLAST
• Streaming: Online classifiers, Cluster tweets, Robotics, Industrial Internet of
Things, Astronomy; BGBenchmark.
• Pleasingly parallel (Local Analytics): as in initial steps of LHC, Pathology,
Bioimaging (differ in type of data analysis)
• Global Analytics: Outlier, Clustering, LDA, SVM, Deep Learning, MDS,
PageRank, Levenberg-Marquardt, Graph 500 entries
• Workflow and Composite (analytics on xSQL) linking above
Big Data Exascale convergence
Big Data and (Exascale) Simulation Convergence I
• Our approach to Convergence is built around two ideas that avoid addressing
the hardware directly, as with modern DevOps technology it isn't hard to
retarget applications between different hardware systems.
• Rather, we approach Convergence through applications and software. This
talk has described the Convergence Diamonds that unify Big
Simulation and Big Data applications and so allow one to more easily identify
good approaches to implement Big Data and Exascale applications in a
uniform fashion.
• The software approach builds on the HPC-ABDS (High Performance
Computing enhanced Apache Big Data Software Stack) concept
(http://dsc.soic.indiana.edu/publications/HPC-ABDSDescribed_final.pdf,
http://hpc-abds.org/kaleidoscope/ ).
• This arranges key HPC and ABDS software together in 21 layers showing
where HPC and ABDS overlap. For example, it introduces a communication
layer to allow ABDS runtimes like Hadoop, Storm, Spark and Flink to use the
richest high-performance capabilities shared with MPI. Generally it proposes
how to use HPC and ABDS software together.
– The layered architecture offers some protection against rapid ABDS technology
change (for ABDS independent of HPC)
Dual Convergence Architecture
• Running the same HPC-ABDS across all platforms, but the data-management
machine has a different balance of I/O, network and compute from the "model" machine
[Diagram: two clusters running the same software stack – one whose nodes pair
compute (C) with data (D) for data management, and one with compute-only (C)
nodes for the model.]
Data Management Model for Big Data
and Big Simulation
Things to do for Big Data and (Exascale)
Simulation Convergence II
• Converge Applications: Separate data and model to classify
Applications and Benchmarks across Big Data and Big
Simulations to give Convergence Diamonds with many
facets
– Indicated how to extend Big Data Ogres to Big Simulations
by looking separately at model and data in Ogres
– Diamonds will have five views or collections of facets:
Problem Architecture; Execution; Data Source and Style;
Big Data Processing; Big Simulation Processing
– Facets cover data, model or their combination – the
problem or application
– Note Simulation Processing View has similarities to old
parallel computing benchmarks
Things to do for Big Data and (Exascale)
Simulation Convergence III
• Convergence Benchmarks: we will use benchmarks that cover the facets of the
convergence diamonds i.e. cover big data and simulations;
– As we separate data and model, compute intensive simulation benchmarks (e.g.
solve partial differential equation) will be linked with data analytics (the model in
big data)
– IU focus SPIDAL (Scalable Parallel Interoperable Data Analytics Library) with
high performance clustering, dimension reduction, graphs, image processing as
well as MLlib will be linked to core PDE solvers to explore the communication
layer of parallel middleware
– Maybe integrating data and simulation is an interesting idea in benchmark sets
• Convergence Programming Model
– Note parameter servers used in machine learning will be mimicked by collective
operators invoked on distributed parameter (model) storage
– E.g. Harp as Hadoop HPC Plug-in
– There should be interest in using Big Data software systems to support exascale
simulations
– Streaming solutions from IoT to analysis of astronomy and LHC data will drive
high performance versions of Apache streaming systems
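A toy sketch of the parameter-server-as-collectives idea above: each worker computes a local gradient and a collective – here a serial stand-in for an MPI or Harp allreduce – averages them to update a replicated model. The objective and all names are invented for illustration.

import java.util.Arrays;

public class CollectiveSgdSketch {
    // Stand-in for a collective over distributed model storage: with MPI or
    // Harp this would be an allreduce across workers; here we average the
    // per-worker gradients serially to show the structure.
    static double[] allReduceAverage(double[][] perWorkerGrads) {
        int d = perWorkerGrads[0].length;
        double[] avg = new double[d];
        for (double[] g : perWorkerGrads)
            for (int j = 0; j < d; j++) avg[j] += g[j] / perWorkerGrads.length;
        return avg;
    }

    public static void main(String[] args) {
        int workers = 4, dim = 8;
        double[] model = new double[dim];   // replicated model (the "parameter server" state)
        double lr = 0.1;
        for (int iter = 0; iter < 100; iter++) {
            double[][] grads = new double[workers][dim];
            for (int w = 0; w < workers; w++)        // each worker: local gradient
                for (int j = 0; j < dim; j++)
                    grads[w][j] = model[j] - 1.0;    // toy objective: pull model towards 1
            double[] g = allReduceAverage(grads);    // collective replaces PS pull/push
            for (int j = 0; j < dim; j++) model[j] -= lr * g[j];
        }
        System.out.println(Arrays.toString(model));  // converges towards all ones
    }
}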
Things to do for Big Data and (Exascale)
Simulation Convergence IV
• Converge Language: Make Java run as fast as C++ (Java
Grande) for computing and communication – see following
slide
– It is surprising that there is so much Big Data work in industry, yet basic
high-performance Java methodology and tools are missing
– Needs some work as no agreed OpenMP for Java parallel
threads
– OpenMPI supports Java but needs enhancements to get
best performance on needed collectives (For C++ and
Java)
– Convergence Language Grande should support Python,
Java (Scala), C/C++ (Fortran)
CC LECTURE NOTES (1).pdfHasanAfwaaz1
 
BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...
BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...
BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...Alex Liu
 
Cloud Programming Models: eScience, Big Data, etc.
Cloud Programming Models: eScience, Big Data, etc.Cloud Programming Models: eScience, Big Data, etc.
Cloud Programming Models: eScience, Big Data, etc.Alexandru Iosup
 

Semelhante a Big Data HPC Convergence and a bunch of other things (20)

AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
 
BSC and Integrating Persistent Data and Parallel Programming Models
BSC and Integrating Persistent Data and Parallel Programming ModelsBSC and Integrating Persistent Data and Parallel Programming Models
BSC and Integrating Persistent Data and Parallel Programming Models
 
big_data_casestudies_2.ppt
big_data_casestudies_2.pptbig_data_casestudies_2.ppt
big_data_casestudies_2.ppt
 
2013 DataCite Summer Meeting - DOIs and Supercomputing (Terry Jones - Oak Rid...
2013 DataCite Summer Meeting - DOIs and Supercomputing (Terry Jones - Oak Rid...2013 DataCite Summer Meeting - DOIs and Supercomputing (Terry Jones - Oak Rid...
2013 DataCite Summer Meeting - DOIs and Supercomputing (Terry Jones - Oak Rid...
 
Big Data and Computer Science Education
Big Data and Computer Science EducationBig Data and Computer Science Education
Big Data and Computer Science Education
 
Data-intensive bioinformatics on HPC and Cloud
Data-intensive bioinformatics on HPC and CloudData-intensive bioinformatics on HPC and Cloud
Data-intensive bioinformatics on HPC and Cloud
 
Introduction to Cloud computing and Big Data-Hadoop
Introduction to Cloud computing and  Big Data-HadoopIntroduction to Cloud computing and  Big Data-Hadoop
Introduction to Cloud computing and Big Data-Hadoop
 
High Performance Data Analytics and a Java Grande Run Time
High Performance Data Analytics and a Java Grande Run TimeHigh Performance Data Analytics and a Java Grande Run Time
High Performance Data Analytics and a Java Grande Run Time
 
Big Data in Action : Operations, Analytics and more
Big Data in Action : Operations, Analytics and moreBig Data in Action : Operations, Analytics and more
Big Data in Action : Operations, Analytics and more
 
Big data
Big dataBig data
Big data
 
Lecture 3.31 3.32.pptx
Lecture 3.31  3.32.pptxLecture 3.31  3.32.pptx
Lecture 3.31 3.32.pptx
 
Rpi talk foster september 2011
Rpi talk foster september 2011Rpi talk foster september 2011
Rpi talk foster september 2011
 
A Workflow-Driven Discovery and Training Ecosystem for Distributed Analysis o...
A Workflow-Driven Discovery and Training Ecosystem for Distributed Analysis o...A Workflow-Driven Discovery and Training Ecosystem for Distributed Analysis o...
A Workflow-Driven Discovery and Training Ecosystem for Distributed Analysis o...
 
Shared services - the future of HPC and big data facilities for UK research
Shared services - the future of HPC and big data facilities for UK researchShared services - the future of HPC and big data facilities for UK research
Shared services - the future of HPC and big data facilities for UK research
 
Bigdataissueschallengestoolsngoodpractices 141130054740-conversion-gate01
Bigdataissueschallengestoolsngoodpractices 141130054740-conversion-gate01Bigdataissueschallengestoolsngoodpractices 141130054740-conversion-gate01
Bigdataissueschallengestoolsngoodpractices 141130054740-conversion-gate01
 
Growth of relational model: Interdependence and complementary to big data
Growth of relational model: Interdependence and complementary to big data Growth of relational model: Interdependence and complementary to big data
Growth of relational model: Interdependence and complementary to big data
 
Software and Education at NSF/ACI
Software and Education at NSF/ACISoftware and Education at NSF/ACI
Software and Education at NSF/ACI
 
CC LECTURE NOTES (1).pdf
CC LECTURE NOTES (1).pdfCC LECTURE NOTES (1).pdf
CC LECTURE NOTES (1).pdf
 
BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...
BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...
BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE...
 
Cloud Programming Models: eScience, Big Data, etc.
Cloud Programming Models: eScience, Big Data, etc.Cloud Programming Models: eScience, Big Data, etc.
Cloud Programming Models: eScience, Big Data, etc.
 

Mais de Geoffrey Fox

Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Geoffrey Fox
 
Big Data HPC Convergence
Big Data HPC ConvergenceBig Data HPC Convergence
Big Data HPC ConvergenceGeoffrey Fox
 
Data Science and Online Education
Data Science and Online EducationData Science and Online Education
Data Science and Online EducationGeoffrey Fox
 
High Performance Processing of Streaming Data
High Performance Processing of Streaming DataHigh Performance Processing of Streaming Data
High Performance Processing of Streaming DataGeoffrey Fox
 
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...Geoffrey Fox
 
Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel Geoffrey Fox
 
Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Geoffrey Fox
 
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...Geoffrey Fox
 
Data Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityData Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityGeoffrey Fox
 
Experience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyExperience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyGeoffrey Fox
 
Matching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software ArchitecturesMatching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software ArchitecturesGeoffrey Fox
 
Big Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationBig Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationGeoffrey Fox
 
Classification of Big Data Use Cases by different Facets
Classification of Big Data Use Cases by different FacetsClassification of Big Data Use Cases by different Facets
Classification of Big Data Use Cases by different FacetsGeoffrey Fox
 
FutureGrid Computing Testbed as a Service
 FutureGrid Computing Testbed as a Service FutureGrid Computing Testbed as a Service
FutureGrid Computing Testbed as a ServiceGeoffrey Fox
 
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...Geoffrey Fox
 
NIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWGNIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWGGeoffrey Fox
 
Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore Geoffrey Fox
 
CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2Geoffrey Fox
 
CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1Geoffrey Fox
 

Mais de Geoffrey Fox (20)

Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
 
Big Data HPC Convergence
Big Data HPC ConvergenceBig Data HPC Convergence
Big Data HPC Convergence
 
Data Science and Online Education
Data Science and Online EducationData Science and Online Education
Data Science and Online Education
 
High Performance Processing of Streaming Data
High Performance Processing of Streaming DataHigh Performance Processing of Streaming Data
High Performance Processing of Streaming Data
 
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
 
Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel Visualizing and Clustering Life Science Applications in Parallel 
Visualizing and Clustering Life Science Applications in Parallel 
 
Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...
 
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a ...
 
Data Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityData Science Curriculum at Indiana University
Data Science Curriculum at Indiana University
 
Experience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyExperience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC Technology
 
Matching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software ArchitecturesMatching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software Architectures
 
Big Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationBig Data and Clouds: Research and Education
Big Data and Clouds: Research and Education
 
Classification of Big Data Use Cases by different Facets
Classification of Big Data Use Cases by different FacetsClassification of Big Data Use Cases by different Facets
Classification of Big Data Use Cases by different Facets
 
Remarks on MOOC's
Remarks on MOOC'sRemarks on MOOC's
Remarks on MOOC's
 
FutureGrid Computing Testbed as a Service
 FutureGrid Computing Testbed as a Service FutureGrid Computing Testbed as a Service
FutureGrid Computing Testbed as a Service
 
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
 
NIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWGNIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWG
 
Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore
 
CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2
 
CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1
 

Último

From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?Antenna Manufacturer Coco
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfhans926745
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 

Último (20)

From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 

Big Data HPC Convergence and a bunch of other things

Computational Science
• Computational science has important similarities to data science but with a simulation rather than a data analysis flavor.
• Although a great deal of effort went into it, with meetings and several academic curricula/programs, it didn't take off
– In my experience not a lot of students were interested and
– The academic job opportunities were not great
• Data science has more jobs; maybe it will do better?
• Can we usefully link these concepts?
• PS: both use parallel computing!
• In days gone by, I did research in particle physics phenomenology, which in retrospect was an early form of data science using models extensively
2/6/2016 7
Some Online Data Science Classes by Fox
• BDAA: Big Data Applications & Analytics
– Used to be called X-Informatics
– ~40 hours of video mainly discussing applications (the X in X-Informatics or X-Analytics) in the context of big data and clouds: https://bigdatacourse.appspot.com/course
• BDOSSP: Big Data Open Source Software and Projects http://bigdataopensourceprojects.soic.indiana.edu/
– ~27 hours of video discussing HPC-ABDS and its use on FutureSystems for Big Data software
• Both divided into sections (coherent topics), units (~lectures) and lessons (5-20 minutes) in which the student is meant to stay awake
2/6/2016 8
Intelligent Systems Engineering ISE Structure
• The focus is on engineering of systems of small scale, often mobile devices, that draw upon modern information technology techniques including intelligent systems, big data and user interface design.
• The foundation of these devices includes sensor and detector technologies, signal processing, and information and control theory.
• End-to-end engineering in 6 areas (starting Fall 2016).
• IU Bloomington is currently the only university among the AAU's 62 member institutions that does not have any type of engineering program.
9
Introduction
What is Big Data
What is Big Simulation
02/04/2016 10
Big Simulations
• Computational Fluid Dynamics: flow in an aircraft engine
• Complete model of the Kv1.2 channel. The atomic model comprises 1,560 amino acids, 645 lipid molecules, 80,850 water molecules and ~300 K+ and Cl− ion pairs. In total, there are more than 350,000 atoms in the system.
11 02/04/2016
• The LHC produces some 15 petabytes of data per year of all varieties, with the exact value depending on the duty factor of the accelerator (which is reduced simply to cut electricity cost but also due to malfunction of one or more of the many complex systems) and the experiments.
• The raw data produced by experiments is processed on the LHC Computing Grid, which has some 350,000 cores arranged in a three-level structure. Tier-0 is CERN itself, Tier-1 are national facilities and Tier-2 are regional systems. For example, one LHC experiment (CMS) has 7 Tier-1 and 50 Tier-2 facilities.
• The analysis chain raw data → reconstructed data → AOD and TAGS → Physics is performed on the multi-tier LHC Computing Grid.
• Note that every event can be analyzed independently, so many events can be processed in parallel, with some concentration operations such as those to gather entries in a histogram (see the sketch below). This implies that both Grid and Cloud solutions work with this type of data; currently Grids are the only implementation.
• Note the LHC lies in a tunnel 27 kilometres (17 mi) in circumference.
Higgs Event; ATLAS Expt
http://grids.ucs.indiana.edu/ptliupages/publications/Where%20does%20all%20the%20data%20come%20from%20v7.pdf
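Since each event is independent, this is a pleasingly parallel map over events followed by a small reduction that merges per-worker histograms. A minimal Java sketch of that pattern, assuming a fake invariantMass function as a stand-in for real reconstruction (actual analyses use frameworks such as ROOT, not this code):

```java
import java.util.stream.IntStream;

public class EventHistogram {
    static final int BINS = 100;
    static final double MAX_MASS = 200.0; // hypothetical mass range

    // Placeholder for analyzing one event; invented for illustration only.
    static double invariantMass(long eventId) {
        return (eventId * 137.0) % MAX_MASS; // fake deterministic "physics"
    }

    public static void main(String[] args) {
        int nEvents = 10_000_000;

        // Map phase: every event processed independently (pleasingly parallel).
        // Combine phase: per-thread partial histograms merged by bin-wise addition.
        long[] histogram = IntStream.range(0, nEvents).parallel()
                .collect(() -> new long[BINS],
                         (h, ev) -> {
                             double m = invariantMass(ev);
                             h[(int) (m / MAX_MASS * BINS)]++;
                         },
                         (h1, h2) -> { for (int i = 0; i < BINS; i++) h1[i] += h2[i]; });

        System.out.println("Entries in bin 0: " + histogram[0]);
    }
}
```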
Ruh, VP Software, GE
http://fisheritcenter.haas.berkeley.edu/Big_Data/index.html
http://www.kpcb.com/internet-trends
• Note that this NOW translates into smaller devices
• In the PAST it translated into faster devices of the same form factor
http://www.kpcb.com/internet-trends
• My research focus is Science Big Data, but the largest science dataset, ~100 petabytes, is only 0.000025 of the total
• Science should take notice of commodity; the converse is not clearly true?
• Note 7 ZB (7x10^21 bytes) is about a terabyte (10^12 bytes) for each person in the world
Amazon Web Services
• Apple usage is 10% of AWS; Apple will spend $1B on AWS in 2016 but is building its own cloud; Netflix is another major user
• AWS has 30%, Microsoft 12%, IBM 7%, and Google 6% of the global public cloud market
22 02/04/2016
Top 500 Supercomputers
• The exponential increase is tailing off, but such glitches have been seen before and "corrected"
• The fastest machine is ~100x the #500 machine and ~0.1x the sum of all 500
23 02/04/2016
Clouds v Supercomputers
• Clouds and supercomputers are both collections of computers networked together in a data center
• Top supercomputers: Intel MIC chip, NVIDIA+AMD, IBM Blue Gene
– #3 Sequoia, a Blue Gene Q at LLNL: 16.32 Petaflop/s on the Linpack benchmark using 98,304 CPU compute chips with 1.6 million processor cores and 1.6 petabytes of memory in 96 racks covering an area of about 3,000 square feet
– 7.9 megawatts power
• The largest (cloud) computing data centers have up to 100,000 servers at ~200 watts per CPU chip
• Each of the 3 major cloud vendors has ~2 million servers
• In total, clouds have ~100 times the performance of the largest supercomputer
– Clouds have different networking, I/O and CPU trade-offs than supercomputers
– Cloud workloads are data oriented and less closely coupled than on supercomputers, but the principles of parallel computing are the same on both
24
Job Trends
• Big Data is much larger than data science
• 19 May 2015 jobs: 3,475 for "data science"; 2,277 for "data scientist"; 19,488 for "big data"
• 7 Dec 2015 jobs: 5,014 for "data science"; 2,830 for "data scientist"; 22,388 for "big data"
http://www.indeed.com/jobtrends?q=%22Data+science%22%2C+%22data+scientist%22%2C+%22big+data%22%2C&l=
Charts Jan 6 2016
2/6/2016 28
The 25 Hottest Skills of 2015 on LinkedIn -- Global
• #1: Cloud Computing
• #2: Data Science
http://www.slideshare.net/linkedin/the-25-skills-that-could-get-you-hired-in-2016
29 02/04/2016
Big Data and (Exascale) Simulation Convergence II
Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies
21 layers, over 350 software packages (January 29, 2016)
Cross-Cutting Functions:
1) Message and Data Protocols: Avro, Thrift, Protobuf
2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups
3) Security & Privacy: InCommon, Eduroam, OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML, OAuth
4) Monitoring: Ambari, Ganglia, Nagios, Inca
Layers (top to bottom):
17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad, Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA), Jitterbit, Talend, Pentaho, Apatar, Docker Compose, KeystoneML
16) Application and Analytics: Mahout, MLlib, MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, PLASMA, MAGMA, Azure Machine Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL (Intel), Caffe, Torch, Theano, DL4j, H2O, IBM Watson, Oracle PGX, GraphLab, GraphX, IBM System G, GraphBuilder (Intel), TinkerPop, Parasol, Dream:Lab, Google Fusion Tables, CINET, NWB, Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree, DC.js, TensorFlow, CNTK
15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero, OODT, Agave, Atmosphere
15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq, Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
14B) Streams: Storm, S4, Samza, Granules, Neptune, Google MillWheel, Amazon Kinesis, LinkedIn Databus, Twitter Heron, Facebook Puma/Ptail/Scribe/ODS, Azure Stream Analytics, Floe, Spark Streaming, Flink Streaming, DataTurbine
14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, MR-MPI, Stratosphere (Apache Flink), Reef, Disco, Hama, Giraph, Pregel, Pegasus, Ligra, GraphChi, Galois, Medusa-GPU, MapGraph, Totem
13) Inter-process communication: collectives, point-to-point, publish-subscribe: MPI, HPX-5, Argo BEAST, HPX-5 BEAST, PULSAR, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ, NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Marionette Collective; Public Cloud: Amazon SNS, Lambda, Google Pub Sub, Azure Queues, Event Hubs
12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan, VoltDB, H-Store
12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
12) Extraction Tools: UIMA, Tika
11C) SQL (NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB, Spark SQL
11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, ZHT, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB, Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J, graphdb, Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame; Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm, Torque, Globus Tools, Pilot Jobs
8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS; Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
6) DevOps: Docker (Machine, Swarm), Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat, Sahara, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes, Buildstep, Gitreceive, OpenTOSCA, Winery, CloudML, Blueprints, Terraform, DevOpSlang, Any2Api
5) IaaS Management from HPC to hypervisors: Xen, KVM, QEMU, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula, Eucalyptus, Nimbus, CloudStack, CoreOS, rkt, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds
Networking: Google Cloud DNS, Amazon Route 53
33 02/04/2016
Functionality of 21 HPC-ABDS Layers
1) Message Protocols
2) Distributed Coordination
3) Security & Privacy
4) Monitoring
5) IaaS Management from HPC to hypervisors
6) DevOps
7) Interoperability
8) File systems
9) Cluster Resource Management
10) Data Transport
11) A) File management B) NoSQL C) SQL
12) In-memory databases & caches / Object-relational mapping / Extraction tools
13) Inter-process communication: collectives, point-to-point, publish-subscribe, MPI
14) A) Basic programming model and runtime, SPMD, MapReduce B) Streaming
15) A) High-level programming B) Frameworks
16) Application and Analytics
17) Workflow-Orchestration
Here are 21 functionalities (including the 11, 14, 15 subparts): 4 cross-cutting at the top, 17 in order of the layered diagram starting at the bottom.
34
HPC-ABDS Integrated Software
Layer | Big Data ABDS | HPC, Cluster
17. Orchestration | Crunch, Tez, Cloud Dataflow | Kepler, Pegasus, Taverna
16. Libraries | MLlib/Mahout, R, Python | ScaLAPACK, PETSc, Matlab
15A. High Level Programming | Pig, Hive, Drill | Domain-specific Languages
15B. Platform as a Service | App Engine, BlueMix, Elastic Beanstalk | XSEDE Software Stack
Languages | Java, Erlang, Scala, Clojure, SQL, SPARQL, Python | Fortran, C/C++, Python
14B. Streaming | Storm, Kafka, Kinesis |
13, 14A. Parallel Runtime | Hadoop, MapReduce | MPI/OpenMP/OpenCL
2. Coordination | Zookeeper |
12. Caching | Memcached |
11. Data Management | Hbase, Accumulo, Neo4J, MySQL | iRODS
10. Data Transfer | Sqoop | GridFTP
9. Scheduling | Yarn | Slurm
8. File Systems | HDFS, Object Stores | Lustre
1, 11A. Formats | Thrift, Protobuf | FITS, HDF
5. IaaS | OpenStack, Docker | Linux, Bare-metal, SR-IOV
Infrastructure | CLOUDS | SUPERCOMPUTERS
CUDA, Exascale Runtime
35
Java Grande Revisited on 3 Data Analytics Codes
• Clustering
• Multidimensional Scaling
• Latent Dirichlet Allocation
All sophisticated algorithms
36
Protein Universe Browser for COG Sequences with a few illustrative biologically identified clusters
38
Heatmap of Original Distances vs 3D Euclidean Distances
• Proteomics (Needleman-Wunsch)
• Stock market: Annual Change 2004
• y = x is perfection
39
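The heatmap compares each original dissimilarity with the corresponding 3D Euclidean distance; the quality of such an embedding is commonly summarized by the MDS stress, which SMACOF-style algorithms minimize. A minimal Java sketch of that measure, assuming uniform weights (an illustration, not the SPIDAL/WDA-SMACOF code):

```java
public class MdsStress {
    // Normalized stress of a 3D embedding x against original dissimilarities delta:
    // sum over pairs of (delta_ij - d_ij)^2, divided by sum of delta_ij^2.
    static double stress(double[][] delta, double[][] x) {
        double num = 0.0, den = 0.0;
        int n = delta.length;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double dx = x[i][0] - x[j][0];
                double dy = x[i][1] - x[j][1];
                double dz = x[i][2] - x[j][2];
                double d = Math.sqrt(dx * dx + dy * dy + dz * dz);
                double diff = delta[i][j] - d;
                num += diff * diff;               // points off the y = x line add stress
                den += delta[i][j] * delta[i][j];
            }
        }
        return num / den; // 0 means the heatmap collapses onto y = x
    }
}
```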
3D Phylogenetic Tree from WDA-SMACOF
40
10-year US Stock Daily Price Time Series Mapped to 3D (work in progress)
• July 21, 2007 positions; end-2008 positions
• 3400 stocks, sector groupings
41
Java MPI Performs Better than Threads I
• 128 24-core Haswell nodes
• Default MPI is much worse than threads
• Optimized MPI using shared-memory node-based messaging is much better than threads
42 02/04/2016
Java MPI Performs Better than Threads II
• 128 24-core Haswell nodes
• 200K dataset speedup
43 02/04/2016
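The pattern being benchmarked here is an iterative computation whose per-iteration collective (e.g., an allreduce of partial sums) dominates communication; a shared-memory-aware MPI can service intra-node messages without going through sockets. A minimal sketch using the Open MPI Java bindings, assuming method names such as allReduce/getRank from those bindings (this is not the actual benchmark code):

```java
import mpi.MPI;
import mpi.MPIException;

public class AllReduceSketch {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int size = MPI.COMM_WORLD.getSize();

        double[] partial = new double[]{rank + 1.0}; // each rank's local contribution
        double[] global = new double[1];

        // The collective that dominates each iteration of map-collective codes.
        MPI.COMM_WORLD.allReduce(partial, global, 1, MPI.DOUBLE, MPI.SUM);

        if (rank == 0) {
            System.out.printf("Sum over %d ranks = %.1f%n", size, global[0]);
        }
        MPI.Finalize();
    }
}
```

Run under mpirun with one rank per core; the slides' point is that with node-local shared-memory messaging this MPI pattern outperforms an equivalent threaded reduction.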
NIST Big Data Initiative
Led by Chaitan Baru, Bob Marcus, Wo Chang
And Big Data Application Analysis
02/04/2016 44
NBD-PWG (NIST Big Data Public Working Group) Subgroups & Co-Chairs
• There were 5 subgroups – note mainly industry
• Requirements and Use Cases Subgroup
– Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco
• Definitions and Taxonomies Subgroup
– Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD
• Reference Architecture Subgroup
– Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented Intelligence
• Security and Privacy Subgroup
– Arnab Roy, CSA/Fujitsu; Nancy Landreville, U. MD; Akhil Manchanda, GE
• Technology Roadmap Subgroup
– Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data Tactics
• See http://bigdatawg.nist.gov/usecases.php
• and http://bigdatawg.nist.gov/V1_output_docs.php
45 02/04/2016
Use Case Template
• 26 fields completed for 51 apps
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
• Now an online form
46 02/04/2016
51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as the 3 V's, software, hardware; 26 features for each use case; biased to science
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation (4): National Archives and Records Administration, Census Bureau
• Commercial (8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search, Digital Materials, Cargo shipping (as in UPS)
• Defense (3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences (10): Medical records, Graph and Probabilistic analysis, Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media (6): Driving Car, Geolocate images/cameras, Twitter, Crowd Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research (4): Metadata, Collaboration, Language Translation, Light source experiments
• Astronomy and Physics (5): Sky Surveys including comparison to simulation, Large Hadron Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science (10): Radar Scattering in Atmosphere, Earthquake, Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to watersheds), AmeriFlux and FLUXNET gas sensors
• Energy (1): Smart grid
49 02/04/2016
51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or the subjects of the application, and often both
• Decision makers like researchers or doctors (users of the application)
• Items such as images, EMR, sequences below; observations or contents of an online store
– Images or "Electronic Information nuggets"
– EMR: Electronic Medical Records (often similar to people parallelism)
– Protein or Gene Sequences
– Material properties, Manufactured Object specifications, etc., in custom datasets
– Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope or credit card data or the atmosphere
• (Complex) nodes in an RDF Graph
• Simple nodes as in a learning network
• Tweets, blogs, documents, web pages, etc.
– And characters/words in them
• Files or data to be backed up, moved or assigned metadata
• Particles/cells/mesh points as in parallel simulations
51 02/04/2016
Features of 51 Use Cases I
• PP (26) "All" Pleasingly Parallel or Map Only
• MR (18) Classic MapReduce (add MRStat below for the full count)
• MRStat (7) Simple version of MR where the key computations are simple reductions as found in statistical averages such as histograms and averages
• MRIter (23) Iterative MapReduce or MPI (Spark, Twister)
• Graph (9) Complex graph data structure needed in analysis
• Fusion (11) Integrate diverse data to aid discovery/decision making; could involve sophisticated algorithms or could just be a portal
• Streaming (41) Some data comes in incrementally and is processed this way
• Classify (30) Classification: divide data into categories
• S/Q (12) Index, Search and Query
52 02/04/2016
Features of 51 Use Cases II
• CF (4) Collaborative Filtering for recommender engines
• LML (36) Local Machine Learning (independent for each parallel entity) – application could have GML as well
• GML (23) Global Machine Learning: Deep Learning, Clustering, LDA, PLSI, MDS
– Large-scale optimizations as in Variational Bayes, MCMC, Lifted Belief Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can call this EGO or Exascale Global Optimization with scalable parallel algorithms
• Workflow (51) Universal
• GIS (16) Geotagged data, often displayed in ESRI, Microsoft Virtual Earth, Google Earth, GeoServer etc.
• HPC (5) Classic large-scale simulation of cosmos, materials, etc. generating (visualization) data
• Agent (2) Simulations of models of data-defined macroscopic entities represented as agents
53 02/04/2016
Local and Global Machine Learning
• Many applications use LML or Local Machine Learning, where machine learning (often from R) is run separately on every data item, such as on every image
• But others are GML or Global Machine Learning, where machine learning is a single algorithm run over all data items (over all nodes in the computer)
– maximum likelihood or χ2 with a sum over the N data items – documents, sequences, items to be sold, images etc., and often links (point-pairs); see the sketch below
– Graph analytics is typically GML
• Covering clustering/community detection, mixture models, topic determination, multidimensional scaling, (Deep) Learning Networks
• PageRank is "just" parallel linear algebra
• Note many Mahout algorithms are sequential – partly because MapReduce is limited, partly because the parallelism is unclear
– MLlib (Spark based) is better
• SVM and Hidden Markov Models do not use large-scale parallelization in practice?
54 02/04/2016
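To make the GML pattern concrete: the global objective is something like χ2 = Σ_i (y_i − f(x_i; θ))² summed over all N data items, so each parallel worker sums its shard and the partial sums are reduced globally. A minimal single-node Java sketch of that split, assuming an invented placeholder model f (purely illustrative):

```java
import java.util.stream.IntStream;

public class GlobalObjective {
    // Hypothetical model: f(x; theta) = theta * x. Invented for illustration.
    static double f(double x, double theta) { return theta * x; }

    // GML: one chi-squared objective summed over ALL data items.
    // Each parallel worker sums its shard; sum() performs the global reduction.
    static double chiSquared(double[] xs, double[] ys, double theta) {
        return IntStream.range(0, xs.length).parallel()
                .mapToDouble(i -> {
                    double r = ys[i] - f(xs[i], theta);
                    return r * r;
                })
                .sum();
    }

    public static void main(String[] args) {
        double[] xs = {1, 2, 3, 4};
        double[] ys = {2.1, 3.9, 6.2, 7.8};
        System.out.println("chi^2 at theta=2: " + chiSquared(xs, ys, 2.0));
        // LML, by contrast, fits an independent model per data item (e.g., per image),
        // so no cross-item reduction is needed.
    }
}
```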
13 Image-based Use Cases
• 13-15: Military Sensor Data Analysis/Intelligence: PP, LML, GIS, MR
• 7: Pathology Imaging/Digital Pathology: PP, LML, MR for search; becoming terabyte 3D images, Global Classification
• 18 & 35: Computational Bioimaging (Light Sources): PP, LML; also materials
• 26: Large-scale Deep Learning: GML; Stanford ran 10 million images and 11 billion parameters on a 64 GPU HPC; vision (drive car), speech, and Natural Language Processing
• 27: Organizing large-scale, unstructured collections of photos: GML; fit position and camera direction to assemble a 3D photo ensemble
• 36: Catalina Real-Time Transient Synoptic Sky Survey (CRTS): PP, LML followed by classification of events (GML)
• 43: Radar Data Analysis for CReSIS Remote Sensing of Ice Sheets: PP, LML to identify glacier beds; GML for the full ice sheet
• 44: UAVSAR Data Processing, Data Product Delivery, and Data Services: PP to find slippage from radar images
• 45, 46: Analysis of Simulation visualizations: PP, LML, ?GML; find paths, classify orbits, classify patterns that signal earthquakes, instabilities, climate, turbulence
55 02/04/2016
Internet of Things and Streaming Apps
• It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco) billion devices on the Internet by 2020
• The cloud is the natural controller of, and resource provider for, the Internet of Things
• Smart phones/watches, wearable devices (Smart People), "Intelligent River", "Smart Homes and Grid" and "Ubiquitous Cities", robotics
• The majority of use cases are streaming – experimental science gathers data in a stream, sometimes batched as in a field trip. Below is a sample:
• 10: Cargo Shipping: tracking as in UPS, FedEx: PP, GIS, LML
• 13: Large Scale Geospatial Analysis and Visualization: PP, GIS, LML
• 28: Truthy: Information diffusion research from Twitter data: PP, MR for search, GML for community determination
• 39: Particle Physics: Analysis of LHC Large Hadron Collider data, discovery of the Higgs particle: PP for event processing, global statistics
• 50: DOE-BER AmeriFlux and FLUXNET Networks: PP, GIS, LML
• 51: Consumption forecasting in Smart Grids: PP, GIS, LML
56 02/04/2016
Big Data and Big Simulations Patterns – the Convergence Diamonds
02/04/2016 57
Big Data - Big Simulation (Exascale) Convergence
• Let's distinguish Data and Model (e.g. machine learning analytics) in Big Data problems
• Then almost always the Data is large but the Model varies
– E.g. LDA with many topics or deep learning has a large model
– Clustering or dimension reduction can be quite small
• Simulations can also be considered as Data and Model
– The Model is solving particle dynamics or partial differential equations
– Data could be small, when just boundary conditions, or
– Data large, with data assimilation (weather forecasting) or when data visualizations are produced by the simulation
• Data is often static between iterations (unless streaming); the Model varies between iterations
58 02/04/2016
Classifying Big Data and Big Simulation Applications
• "Benchmarks", "kernels", "algorithms" and "mini-apps" can serve multiple purposes
• Motivate hardware and software features
– e.g. a collaborative filtering algorithm parallelizes well with MapReduce and suggests using Hadoop on a cloud
– e.g. deep learning on images is dominated by matrix operations; it needs CUDA & MPI and suggests an HPC cluster
• Benchmark sets are designed to cover key features of systems in terms of features and sizes of "important" applications
• Take the 51 use cases → derive specific features; each use case has multiple features
• Generalize and systematize, with features termed "facets"
• 50 facets (Big Data) or 64 facets (Big Simulation and Data) divided into 4 sets or views, where each view has "similar" facets
– This allows one to study coverage of benchmark sets
• Discuss Data and Model together, as problems are built around a combination of them, but we can gain insight by separating them, and this allows a better understanding of Big Data - Big Simulation "convergence"
59 02/04/2016
7 Computational Giants of NRC Massive Data Analysis Report
1) G1: Basic Statistics, e.g. MRStat
2) G2: Generalized N-Body Problems
3) G3: Graph-Theoretic Computations
4) G4: Linear Algebraic Computations
5) G5: Optimizations, e.g. Linear Programming
6) G6: Integration, e.g. LDA and other GML
7) G7: Alignment Problems, e.g. BLAST
Big Data Models?
http://www.nap.edu/catalog.php?record_id=18374
60 02/04/2016
HPC (Simulation) Benchmark Classics
• Linpack or HPL: Parallel LU factorization for the solution of linear equations
• NPB version 1: mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient (sketched below)
– FT: Fast Fourier Transform
– IS: Integer Sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss-Seidel
Simulation Models
61 02/04/2016
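As one concrete kernel from this list, conjugate gradient iterates matrix-vector products and dot products to solve Ax = b for a symmetric positive-definite A. A compact dense Java sketch of the algorithm (NPB's CG uses a large sparse random matrix; this toy version only shows the method):

```java
public class ConjugateGradient {
    // Solve A x = b for symmetric positive-definite A (dense toy version).
    static double[] solve(double[][] a, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];   // start from x = 0, so residual r = b
        double[] r = b.clone();
        double[] p = b.clone();
        double rsOld = dot(r, r);
        for (int it = 0; it < maxIter && Math.sqrt(rsOld) > tol; it++) {
            double[] ap = matVec(a, p);
            double alpha = rsOld / dot(p, ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
            double rsNew = dot(r, r);
            for (int i = 0; i < n; i++) p[i] = r[i] + (rsNew / rsOld) * p[i];
            rsOld = rsNew;
        }
        return x;
    }

    static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }

    static double[] matVec(double[][] a, double[] v) {
        double[] y = new double[v.length];
        for (int i = 0; i < a.length; i++) y[i] = dot(a[i], v);
        return y;
    }

    public static void main(String[] args) {
        double[][] a = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x = solve(a, b, 100, 1e-10);
        System.out.printf("x = (%.4f, %.4f)%n", x[0], x[1]); // expect ~(0.0909, 0.6364)
    }
}
```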
13 Berkeley Dwarfs
1) Dense Linear Algebra
2) Sparse Linear Algebra
3) Spectral Methods
4) N-Body Methods
5) Structured Grids
6) Unstructured Grids
7) MapReduce
8) Combinational Logic
9) Graph Traversal
10) Dynamic Programming
11) Backtrack and Branch-and-Bound
12) Graphical Models
13) Finite State Machines
• The first 6 of these correspond to Colella's original (classic simulations); Monte Carlo was dropped, and N-body methods are a subset of Particle in Colella
• Note it is a little inconsistent in that MapReduce is a programming model while spectral method is a numerical method. Need multiple facets!
• Largely Models for Data or Simulation
62 02/04/2016
Ogre Views and 50 Facets (diagram)
• Problem Architecture View (12 facets): Pleasingly Parallel; Classic MapReduce; Map-Collective; Map Point-to-Point; Map Streaming; Shared Memory; Single Program Multiple Data; Bulk Synchronous Parallel; Fusion; Dataflow; Agents; Workflow
• Data Source and Style View (10 facets): SQL/NoSQL/NewSQL; Enterprise Data Model; Files/Objects; HDFS/Lustre/GPFS; Archived/Batched/Streaming; Shared/Dedicated/Transient/Permanent; Metadata/Provenance; Internet of Things; HPC Simulations; Geospatial Information System
• Execution View (14 facets): Performance Metrics; Flops per Byte / Memory I/O; Execution Environment & Core Libraries; Volume; Velocity; Variety; Veracity; Communication Structure; Data Abstraction; Metric=M/Non-Metric=N; =NN/=N; Regular=R/Irregular=I; Dynamic=D/Static=S; Iterative/Simple
• Processing View (14 facets): Micro-benchmarks; Local Analytics; Global Analytics; Base Statistics; Recommendations; Search/Query/Index; Classification; Learning; Optimization Methodology; Streaming; Alignment; Linear Algebra Kernels; Graph Algorithms; Visualization
63 02/04/2016
• 64. [Figure: “Convergence Diamonds Views and Facets” — the 64-facet version of the previous diagram, covering both Big Data and Big Simulation. The Problem Architecture and Data Source and Style Views are unchanged; the Execution View splits facets into Data (D) and Model (M) components (e.g. Data Volume D4 vs. Model Size M4, Data Abstraction D12 vs. Model Abstraction M12, Data vs. Model Dynamic/Static, Metric/Non-Metric and Regular/Irregular); and the Processing View separates Big Data Processing Diamonds (Pr-1M Micro-benchmarks through Pr-15M Core Libraries, covering local/global analytics, statistics, recommenders, search, classification, learning, optimization, streaming, alignment, linear algebra, graphs and visualization) from Simulation (Exascale) Processing Diamonds (Pr-16M Iterative PDE Solvers, Pr-17M Multiscale Method, Pr-18M Spectral Methods, Pr-19M N-body Methods, Pr-20M Particles and Fields, Pr-21M Evolution of Discrete Systems, Pr-22M Nature of Mesh if used).]
• 65. Dwarfs and Ogres give Convergence Diamonds
• Macropatterns or Problem Architecture View: unchanged from the Ogres
• Execution View: significant changes to separate Data and Model and to add characteristics of simulation models
• Data Source and Style View: the same for Ogres and Diamonds — present, but less important for simulations than for big data
• Processing View: a mix of the Big Data Processing View and a Big Simulation Processing View; includes facets like “uses linear algebra” needed in both, plus the specifics of key simulation kernels
– includes the NAS Parallel Benchmarks and Berkeley Dwarfs
• 66. Facets of the Convergence Diamonds: Problem Architecture
Meta or macro aspects of Diamonds. Valid for Big Data or Big Simulations, since they describe the Problem, which is the Model-Data combination.
• 67. Problem Architecture View (Meta or MacroPatterns)
i. Pleasingly Parallel: as in BLAST, protein docking, and some (bio-)imagery; the local analytics or machine learning — ML or filtering — is pleasingly parallel, as in bio-imagery and radar images (pleasingly parallel but with sophisticated local analytics)
ii. Classic MapReduce: Search, Index and Query, and classification algorithms like collaborative filtering (G1 for MRStat in Features, G7) — see the word-count sketch below
iii. Map-Collective: iterative maps + communication dominated by “collective” operations such as reduction, broadcast, gather, scatter; a common data mining pattern
iv. Map Point-to-Point: iterative maps + communication dominated by many small point-to-point messages, as in graph algorithms
v. Map-Streaming: describes streaming, steering and assimilation problems
vi. Shared Memory: some problems are asynchronous and are easier to parallelize on shared rather than distributed memory — see some graph algorithms
vii. SPMD: Single Program Multiple Data, a common parallel programming feature
viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases
ix. Fusion: knowledge discovery often involves fusion of multiple methods
x. Dataflow: an important application feature, often occurring in composite Ogres
xi. Use Agents: as in epidemiology (swarm approaches); this is Model only
xii. Workflow: all applications often involve orchestration (workflow) of multiple components
11 of the 12 are properties of Data+Model
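To make pattern (ii) concrete, here is a minimal word-count sketch in plain Java: a map phase emits word records, a shuffle groups identical keys, and a reduce phase counts each group. This is an illustrative local emulation of the pattern, not the actual Hadoop API; all names are our own.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of the Classic MapReduce pattern (slide 67, item ii):
// map emits (word, 1) pairs, shuffle groups by key, reduce sums counts.
// Runs locally; a real deployment would distribute the same logic
// across a cluster with Hadoop or Spark.
public class WordCountSketch {
    public static void main(String[] args) {
        List<String> documents = Arrays.asList(
                "big data meets hpc", "hpc meets big simulation");

        Map<String, Long> counts = documents.stream()
                // Map: split each document into word records
                .flatMap(doc -> Arrays.stream(doc.split("\\s+")))
                // Shuffle + Reduce: group identical words and count them
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

        counts.forEach((word, n) -> System.out.println(word + " -> " + n));
    }
}
```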
• 68. Relation of Problem and Machine Architecture
• Problem is Model plus Data
• In my old papers (especially the book Parallel Computing Works!), I discussed computing as multiple complex systems mapped into each other: Problem → Numerical formulation → Software → Hardware
• Each of these 4 systems has an architecture that can be described in similar language
• One gets an easy programming model if the architecture of the problem matches that of the software
• One gets good performance if the architecture of the hardware matches that of the software and problem
• So “MapReduce” can describe the architecture of the software (programming model) or the “numerical formulation of the problem”
• 69. [Figure] 6 Forms of MapReduce cover “all” circumstances
Describes:
– Problem (Model reflecting data)
– Machine
– Software Architecture
• 70. Data Analysis Problem Architectures
 1) Pleasingly Parallel PP or “map-only” in MapReduce: BLAST analysis; local machine learning
 2A) Classic MapReduce MR: map followed by reduction — High Energy Physics (HEP) histograms; web search; recommender engines
 2B) Simple version of classic MapReduce MRStat: the final reduction is just simple statistics
 3) Iterative MapReduce MRIter: Expectation Maximization, clustering, linear algebra, PageRank — see the K-means sketch below
 4A) Map Point-to-Point Communication: classic MPI; PDE solvers and particle dynamics; graph processing (Graph)
 4B) GPU (accelerator) enhanced 4A) — especially for deep learning
 5) Map + Streaming + some form of communication: images from synchrotron sources; telescopes; Internet of Things (IoT)
   Apache Storm is (Map + Dataflow) + Streaming
   Data assimilation is (Map + Point-to-Point Communication) + Streaming
 6) Shared memory allowing parallel threads, which are tricky to program but lower latency: difficult-to-parallelize asynchronous graph algorithms
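A minimal sketch of the MRIter pattern (3), using 1-D K-means for brevity: each iteration is a map (assign points to the nearest center) followed by a reduce (recompute centers), looped to convergence. The data and structure here are illustrative assumptions; in Hadoop this loop would be a chain of jobs, in Harp or Spark a cached iterative job.

```java
import java.util.Arrays;

// Sketch of Iterative MapReduce (MRIter, slide 70, item 3): each pass
// maps points to their nearest center, then reduces per-center sums
// into new center positions.
public class KMeansIterSketch {
    public static void main(String[] args) {
        double[] points = {1.0, 1.5, 2.0, 9.0, 9.5, 10.0};
        double[] centers = {0.0, 5.0};               // initial guesses

        for (int iter = 0; iter < 10; iter++) {
            double[] sum = new double[centers.length];
            int[] count = new int[centers.length];
            // Map phase: assign each point to the nearest center
            for (double p : points) {
                int best = 0;
                for (int c = 1; c < centers.length; c++)
                    if (Math.abs(p - centers[c]) < Math.abs(p - centers[best]))
                        best = c;
                sum[best] += p;                       // emit (best, p)
                count[best]++;
            }
            // Reduce phase: new center = mean of its assigned points
            for (int c = 0; c < centers.length; c++)
                if (count[c] > 0) centers[c] = sum[c] / count[c];
        }
        System.out.println(Arrays.toString(centers)); // ~[1.5, 9.5]
    }
}
```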
• 71. Diamond Facets: Execution Features View
Many similar features for Big Data and Simulations
• 72. View for Micropatterns or Execution Features
i. Performance Metrics: properties found by benchmarking the Diamond
ii. Flops per byte (memory or I/O)
iii. Execution Environment: core libraries needed (matrix-matrix/vector algebra, conjugate gradient, reduction, broadcast); Cloud, HPC, etc.
iv. Volume: property of a Diamond instance: a) Data Volume and b) Model Size
v. Velocity: qualitative property of a Diamond, with a value associated with each instance; Data only
vi. Variety: important property, especially of composite Diamonds; Data and Model separately
vii. Veracity: important property of applications but not of kernels
viii. Model Communication Structure: interconnect requirements; is communication BSP, asynchronous, pub-sub, collective, or point-to-point?
ix. Is the Data and/or Model (graph) static or dynamic?
x. Much Data and many Models consist of a set of interconnected entities; is this regular, as in a set of pixels, or a complicated irregular graph?
xi. Is the Model iterative or not?
xii. Data Abstraction: key-value, pixel, graph (G3), vector, bags of words or items; the Model can have the same or a different abstraction, e.g. mesh points, finite elements, Convolutional Network
xiii. Are data points in metric or non-metric spaces? Data and Model separately?
xiv. Is the Model algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2)?
A sketch of how such facet values might be catalogued follows.
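One possible, entirely hypothetical way to make these Execution View facets machine-readable is a small record per Diamond instance; every field name below is an illustrative assumption, not a standard schema.

```java
// Illustrative (non-standard) encoding of a few Execution View facets
// for one Diamond instance, so that benchmark coverage of facets can
// be catalogued and queried. Field names are hypothetical.
public class DiamondInstance {
    enum Space { METRIC, NON_METRIC }        // facet xiii
    enum Structure { REGULAR, IRREGULAR }    // facet x

    String name;              // e.g. "DNA sequence clustering"
    double dataVolumeTB;      // facet iv(a): data volume
    double modelSizeGB;       // facet iv(b): model size
    boolean iterativeModel;   // facet xi
    Space dataSpace;          // facet xiii, for the data
    Structure modelStructure; // facet x, for the model
    String complexityPerIter; // facet xiv: "O(N)" or "O(N^2)"

    DiamondInstance(String name, double dataVolumeTB, double modelSizeGB,
                    boolean iterativeModel, Space dataSpace,
                    Structure modelStructure, String complexityPerIter) {
        this.name = name;
        this.dataVolumeTB = dataVolumeTB;
        this.modelSizeGB = modelSizeGB;
        this.iterativeModel = iterativeModel;
        this.dataSpace = dataSpace;
        this.modelStructure = modelStructure;
        this.complexityPerIter = complexityPerIter;
    }
}
```

A catalog of such records would let one check which facets a candidate benchmark set leaves uncovered (see slide 87).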
• 73. Comparison of Data Analytics with Simulation I
• Simulations produce big data as visualization of results — they are a data source
– Or they consume often smallish data to define a simulation problem
– HPC simulation with weather data assimilation is data + model
• Pleasingly parallel processing is often important in both
• Both are often SPMD and BSP
• Non-iterative MapReduce is a major big data paradigm
– not a common simulation paradigm, except where “Reduce” summarizes a pleasingly parallel execution, as in some Monte Carlo methods
• Big Data often has large collective communication
– Classic simulation has a lot of smallish point-to-point messages
• Simulations are often characterized by difference or differential operators
• Simulations dominantly use sparse (nearest-neighbor) data structures
– Some important data analytics involve full matrix algorithms, but
– “Bag of words (users, rankings, images...)” algorithms are sparse, as is PageRank
• 74. [Figure: “Force Diagrams” for macromolecules and Facebook]
• 75. Comparison of Data Analytics with Simulation II
• There are similarities between some graph problems and particle simulations with a strange cutoff force
– Both are Map-Communication
• Note many big data problems involve a “long range force” (as in gravitational simulations), as all points are linked
– These are the easiest to parallelize, and often full matrix algorithms
– e.g. in DNA sequence studies, the distance (i, j), defined by BLAST, Smith-Waterman, etc., is computed between all sequences i, j
– There is an opportunity for “fast multipole” ideas in big data — see the NRC report
• In image-based deep learning, the neural network weights are block sparse (corresponding to links to pixel blocks) but can be formulated as full matrix operations on GPUs, with MPI between blocks
• In HPC benchmarking, Linpack is being challenged by a new sparse conjugate gradient benchmark, HPCG, while I am diligently using non-sparse conjugate gradient solvers in clustering and multi-dimensional scaling
• 76. Convergence Diamond Facets: Big Data and Big Simulation Processing View
All are Model properties, but with differences between Big Data and Big Simulation
• 77. Diamond Facets in Processing (runtime) View I — used in Big Data and Big Simulation
• Pr-1M Micro-benchmarks: kernels that exercise simple features of hardware such as communication, disk I/O, CPU and memory performance
• Pr-2M Local Analytics: executed on a single core or perhaps node
• Pr-3M Global Analytics: requiring iterative programming models (G5, G6) across multiple nodes of a parallel system
• Pr-12M Uses Linear Algebra: common in Big Data and simulations — see the conjugate gradient sketch below
– Subclasses like full matrix
– Conjugate Gradient, Krylov, Arnoldi iterative subspace methods
– Structured and unstructured sparse matrix methods
• Pr-13M Graph Algorithms (G3): a clearly important class of algorithms — as opposed to vector, grid, bag of words, etc. — often hard, especially in parallel
• Pr-14M Visualization: a key application capability for big data and simulations
• Pr-15M Core Libraries: functions of general value such as sorting, math functions, hashing
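A minimal dense conjugate gradient sketch for Pr-12M, solving Ax = b for a tiny symmetric positive definite A. Production kernels (HPCG, the clustering and MDS solvers mentioned above) use sparse or blocked matrices and distribute the matrix-vector product and dot products with MPI; this local version only illustrates the iteration itself.

```java
// Minimal conjugate gradient sketch (facet Pr-12M): solves A x = b
// for a small dense symmetric positive definite A.
public class ConjugateGradientSketch {
    static double[] matVec(double[][] a, double[] x) {
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x.length; j++)
                y[i] += a[i][j] * x[j];
        return y;
    }

    static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }

    public static void main(String[] args) {
        double[][] a = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x = new double[2];          // start from x = 0
        double[] r = b.clone();              // residual r = b - A x
        double[] p = r.clone();              // search direction
        double rsOld = dot(r, r);

        for (int k = 0; k < 100 && Math.sqrt(rsOld) > 1e-10; k++) {
            double[] ap = matVec(a, p);
            double alpha = rsOld / dot(p, ap);
            for (int i = 0; i < x.length; i++) {
                x[i] += alpha * p[i];
                r[i] -= alpha * ap[i];
            }
            double rsNew = dot(r, r);
            for (int i = 0; i < p.length; i++)
                p[i] = r[i] + (rsNew / rsOld) * p[i];
            rsOld = rsNew;
        }
        System.out.printf("x = [%f, %f]%n", x[0], x[1]); // ~[0.0909, 0.6364]
    }
}
```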
• 78. Diamond Facets in Processing (runtime) View II — used in Big Data
• Pr-4M Basic Statistics (G1): MRStat in the NIST problem features
• Pr-5M Recommender Engine: core to many e-commerce and media businesses; collaborative filtering is the key technology
• Pr-6M Search/Query/Index: classic databases, which are well studied (Baru, Rabl tutorial)
• Pr-7M Data Classification: assigning items to categories, based on many methods
– MapReduce is good for Alignment, Basic Statistics, Search/Query/Index, Recommenders, Classification
• Pr-8M Learning: of growing importance due to Deep Learning's success in speech recognition etc.
• Pr-9M Optimization Methodology: overlapping categories including
– Machine Learning, Nonlinear Optimization (G6), Maximum Likelihood or χ² least squares minimizations, Expectation Maximization (often steepest descent), Combinatorial Optimization, Linear/Quadratic Programming (G5), Dynamic Programming
• Pr-10M Streaming Data or online algorithms; related to DDDAS (Dynamic Data-Driven Application Systems)
• Pr-11M Data Alignment (G7): as in BLAST, which compares samples with a repository
• 79. Diamond Facets in Processing (runtime) View III — used in Big Simulation
• Pr-16M Iterative PDE Solvers: Jacobi, Gauss-Seidel etc. — see the Jacobi sketch below
• Pr-17M Multiscale Method: Multigrid and other variable resolution approaches
• Pr-18M Spectral Methods: as in the Fast Fourier Transform
• Pr-19M N-body Methods: as in Fast Multipole, Barnes-Hut
• Pr-20M Both Particles and Fields: as in the Particle-in-Cell method
• Pr-21M Evolution of Discrete Systems: as in simulation of electrical grids, chips, biological systems, epidemiology; needs Ordinary Differential Equation solvers
• Pr-22M Nature of Mesh if used: structured, unstructured, adaptive
Covers NAS Parallel Benchmarks and Berkeley Dwarfs
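A minimal sketch of Pr-16M: Jacobi relaxation for the 2-D Laplace equation on a small grid with fixed boundary values. The grid size and boundary condition are illustrative; production codes parallelize this with halo exchange over MPI (the Map Point-to-Point pattern of slide 70).

```java
// Sketch of an iterative PDE solver (facet Pr-16M): Jacobi relaxation
// for the 2-D Laplace equation with one "hot" boundary edge.
public class JacobiSketch {
    public static void main(String[] args) {
        int n = 16;
        double[][] u = new double[n][n];
        for (int j = 0; j < n; j++) u[0][j] = 1.0;   // hot top edge

        for (int iter = 0; iter < 1000; iter++) {
            double[][] next = new double[n][n];
            for (int j = 0; j < n; j++) next[0][j] = u[0][j]; // keep boundary
            // Interior points: average of the four nearest neighbors
            for (int i = 1; i < n - 1; i++)
                for (int j = 1; j < n - 1; j++)
                    next[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                       + u[i][j - 1] + u[i][j + 1]);
            u = next;
        }
        System.out.printf("center value ~ %f%n", u[n / 2][n / 2]);
    }
}
```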
• 80. Facets of the Ogres: Data Source and Style Aspects
Streaming is added here from the Processing view. Present, but often less important for simulations (which use and produce data).
• 81. Data Source and Style Diamond View I
i. SQL, NewSQL or NoSQL: NoSQL includes Document, Column, Key-value, Graph and Triple stores; NewSQL is SQL redone to exploit NoSQL performance
ii. Other enterprise data systems: 10 examples from NIST integrate SQL/NoSQL
iii. Set of Files or Objects: as managed in iRODS, and extremely common in scientific research
iv. File systems, Object, Blob and data-parallel (HDFS) raw storage: separated from computing or colocated? HDFS vs. Lustre vs. OpenStack Swift vs. GPFS
v. Archived/Batched/Streaming: streaming is incremental update of datasets, with new algorithms to achieve real-time response (G7); before data gets to the compute system there is often an initial data-gathering phase characterized by a block size and timing; block size varies from a month (remote sensing, seismic) to a day (genomic) to seconds or lower (real-time control, streaming)
• Streaming is divided into categories overleaf
• 82. Data Source and Style Diamond View II
• Streaming is divided into 5 categories depending on event size, synchronization and integration:
– A set of independent events where precise time sequencing is unimportant
– A time series of connected small events where time ordering is important
– A set of independent large events where each event needs parallel processing and time sequencing is not critical
– A set of connected large events where each event needs parallel processing and time sequencing is critical
– A stream of connected small or large events to be integrated in a complex way
vi. Shared/Dedicated/Transient/Permanent: qualitative property of data; other characteristics are needed for permanent auxiliary/comparison datasets, and these could be interdisciplinary, implying nontrivial data movement/replication
vii. Metadata/Provenance: a clear qualitative property, but not for kernels, as it concerns the data collection process
viii. Internet of Things: 24 to 50 billion devices on the Internet by 2020
ix. HPC simulations: generate major (visualization) output that often needs to be mined
x. Using GIS: Geographical Information Systems provide attractive access to geospatial data
• 83. [Architecture figure] 2. Perform real-time analytics on data source streams and notify users when specified events occur
Technologies: Storm, Kafka, HBase, Zookeeper
Flow: streaming data passes a filter identifying events against a user-specified filter; identified events are posted to users, selected events are archived, and the full stream is fetched from and recorded in a repository. A toy version of this filter-and-notify loop is sketched below.
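A toy, single-process rendition of the filter step above: a plain Java queue stands in for Kafka, a threshold predicate stands in for the user-specified filter, and printing stands in for notification. All names and the threshold are illustrative assumptions, not any Storm/Kafka API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy version of the slide's pipeline: a producer pushes events onto a
// queue (standing in for Kafka) and a consumer applies a user-specified
// filter, notifying on matches. A real system would use Storm/Kafka
// topologies and archive events to HBase/HDFS.
public class StreamFilterSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Double> stream = new LinkedBlockingQueue<>();
        double threshold = 0.9;   // the user-specified filter

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 1000; i++)
                stream.offer(Math.random());   // simulated sensor readings
        });
        producer.start();
        producer.join();

        while (!stream.isEmpty()) {
            double event = stream.take();
            // a real pipeline would archive every event here
            if (event > threshold)
                System.out.println("ALERT: event " + event);
        }
    }
}
```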
• 84. [Architecture figure] 5. Perform interactive analytics on data in an analytics-optimized database
Technologies: Hadoop, Spark, Giraph, Pig, ..., with Mahout and R for analytics
Data storage: HDFS, HBase — fed by data, streaming, batch, ...
• 85. [Architecture figure] 5A. Perform interactive analytics on observational scientific data
Technologies: Grid or Many Task software, Hadoop, Spark, Giraph, Pig, ..., with analysis code, Mahout and R
Data storage: HDFS, HBase, file collections; streaming Twitter data for social networking science
Flow: record scientific data in the “field”, accumulate locally with initial computing, then transport batches of data (or transfer directly) to the primary analysis data system
NIST examples include the LHC, remote sensing, astronomy and bioinformatics
• 87. Benchmarks/Mini-apps spanning Facets
• Look at the NSF SPIDAL project, the NIST 51 use cases, and the Baru-Rabl review
• Catalog the facets of benchmarks and choose entries to cover “all facets” — a greedy coverage sketch follows this list
• Micro benchmarks: SPEC, EnhancedDFSIO (HDFS), Terasort, Wordcount, Grep, MPI, basic pub-sub, ...
• SQL and NoSQL data systems, search, recommenders: TPC (from TPC-C to TPCx-HS for Hadoop), BigBench, Yahoo Cloud Serving, Berkeley Big Data, HiBench, BigDataBench, CloudSuite, LinkBench
– includes MapReduce cases: search, Bayes, Random Forests, collaborative filtering
• Spatial query: select from image or earth data
• Alignment: biology, as in BLAST
• Streaming: online classifiers, clustering tweets, robotics, industrial Internet of Things, astronomy; BGBenchmark
• Pleasingly parallel (local analytics): as in the initial steps of the LHC, pathology, bioimaging (differing in the type of data analysis)
• Global analytics: outlier detection, clustering, LDA, SVM, deep learning, MDS, PageRank, Levenberg-Marquardt, Graph 500 entries
• Workflow and composite (analytics on xSQL) linking the above
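Choosing benchmarks so that “all facets” are covered is a set-cover problem; one simple approach is a greedy heuristic that repeatedly picks the benchmark covering the most still-uncovered facets. The benchmark-to-facet assignments below are illustrative placeholders, not the real catalog.

```java
import java.util.*;

// Greedy set-cover sketch for assembling a benchmark set that covers
// all facets. Benchmark and facet names here are illustrative only.
public class FacetCoverSketch {
    public static void main(String[] args) {
        Map<String, Set<String>> facets = new LinkedHashMap<>();
        facets.put("Terasort", Set.of("micro", "data-source"));
        facets.put("BLAST",    Set.of("pleasingly-parallel", "alignment"));
        facets.put("K-means",  Set.of("map-collective", "global-analytics"));
        facets.put("PageRank", Set.of("graph", "global-analytics"));

        Set<String> uncovered = new HashSet<>();
        facets.values().forEach(uncovered::addAll);

        List<String> chosen = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            // Pick the benchmark covering the most uncovered facets
            String best = facets.keySet().stream()
                    .max(Comparator.comparingLong(b -> facets.get(b).stream()
                            .filter(uncovered::contains).count()))
                    .orElseThrow();
            chosen.add(best);
            uncovered.removeAll(facets.get(best));
        }
        System.out.println("benchmark set: " + chosen);
    }
}
```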
• 88. Big Data Exascale Convergence
• 89. Big Data and (Exascale) Simulation Convergence I
• Our approach to convergence is built around two ideas that avoid addressing the hardware directly, as with modern DevOps technology it isn’t hard to retarget applications between different hardware systems.
• Rather, we approach convergence through applications and software. This talk has described the Convergence Diamonds, which unify Big Simulation and Big Data applications and so allow one to more easily identify good approaches to implementing Big Data and Exascale applications in a uniform fashion.
• The software approach builds on the HPC-ABDS (High Performance Computing enhanced Apache Big Data Stack) concept (http://dsc.soic.indiana.edu/publications/HPC-ABDSDescribed_final.pdf, http://hpc-abds.org/kaleidoscope/)
• This arranges key HPC and ABDS software together in 21 layers, showing where HPC and ABDS overlap. For example, it introduces a communication layer to allow ABDS runtimes like Hadoop, Storm, Spark and Flink to use the richest high performance capabilities, shared with MPI. Generally it proposes how to use HPC and ABDS software together.
– The layered architecture offers some protection against rapid ABDS technology change (for ABDS independent of HPC)
• 90. Dual Convergence Architecture
• Run the same HPC-ABDS across all platforms, but the data management machine has a different balance of I/O, network and compute from the “model” machine
[Figure: two clusters — a “Data Management” cluster whose nodes pair compute (C) with data/storage (D), and a “Model for Big Data and Big Simulation” cluster of compute-only (C) nodes.]
• 91. Things to do for Big Data and (Exascale) Simulation Convergence II
• Converge Applications: separate data and model to classify applications and benchmarks across Big Data and Big Simulations, giving Convergence Diamonds with many facets
– We indicated how to extend the Big Data Ogres to Big Simulations by looking separately at model and data in the Ogres
– Diamonds have five views or collections of facets: Problem Architecture; Execution; Data Source and Style; Big Data Processing; Big Simulation Processing
– Facets cover data, model or their combination — the problem or application
– Note the Simulation Processing View has similarities to old parallel computing benchmarks
• 92. Things to do for Big Data and (Exascale) Simulation Convergence III
• Convergence Benchmarks: we will use benchmarks that cover the facets of the Convergence Diamonds, i.e. that cover big data and simulations
– As we separate data and model, compute-intensive simulation benchmarks (e.g. solving a partial differential equation) will be linked with data analytics (the model in big data)
– The IU focus, SPIDAL (Scalable Parallel Interoperable Data Analytics Library), with high performance clustering, dimension reduction, graphs and image processing, as well as MLlib, will be linked to core PDE solvers to explore the communication layer of parallel middleware
– Maybe integrating data and simulation is an interesting idea for benchmark sets
• Convergence Programming Model
– Note the parameter servers used in machine learning can be mimicked by collective operators invoked on distributed parameter (model) storage — a toy sketch follows
– e.g. Harp as a Hadoop HPC plug-in
– There should be interest in using Big Data software systems to support exascale simulations
– Streaming solutions, from IoT to analysis of astronomy and LHC data, will drive high performance versions of Apache streaming systems
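A toy sketch of the collective-based alternative to a parameter server: workers compute local "gradients" and an allreduce-style average updates the replicated model. Plain Java threads and a barrier stand in for MPI/Harp collectives; the single-parameter model, learning rate, and objective are illustrative assumptions.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Sketch of replacing a parameter server with a collective: each worker
// computes a local gradient, then the barrier action performs the
// "allreduce" (average gradients and update the shared model).
public class AllreduceSketch {
    static final int WORKERS = 4;
    static final double[] localGrad = new double[WORKERS];
    static double model = 0.0;

    public static void main(String[] args) throws InterruptedException {
        // Barrier action = the reduce step, run once per iteration
        CyclicBarrier allreduce = new CyclicBarrier(WORKERS, () -> {
            double sum = 0;
            for (double g : localGrad) sum += g;
            model -= 0.1 * (sum / WORKERS);     // gradient step, lr = 0.1
        });

        Thread[] workers = new Thread[WORKERS];
        for (int w = 0; w < WORKERS; w++) {
            final int id = w;
            workers[w] = new Thread(() -> {
                try {
                    for (int iter = 0; iter < 100; iter++) {
                        // Fake local gradient of f(x) = (x - 3)^2 / 2
                        localGrad[id] = model - 3.0;
                        allreduce.await();      // synchronize + reduce
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[w].start();
        }
        for (Thread t : workers) t.join();
        System.out.println("model ~ " + model);  // converges toward 3.0
    }
}
```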
• 93. Things to do for Big Data and (Exascale) Simulation Convergence IV
• Converge Language: make Java run as fast as C++ (Java Grande) for computing and communication — see the following slide
– It is surprising that, despite so much Big Data work in industry, basic high performance Java methodology and tools are missing
– This needs some work, as there is no agreed OpenMP equivalent for Java parallel threads — see the parallel-loop sketch below
– OpenMPI supports Java but needs enhancements to get the best performance on the needed collectives (for C++ and Java)
– A converged “Language Grande” should support Python, Java (Scala), C/C++ (Fortran)
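Absent an agreed OpenMP for Java, one common stand-in for an "omp parallel for" is a parallel stream (or a ForkJoinPool) over primitive arrays, which avoids autoboxing and keeps data contiguous — two of the ingredients for C++-like performance. A minimal sketch; the loop body and sizes are illustrative.

```java
import java.util.stream.IntStream;

// OpenMP-style parallel loop in plain Java: a daxpy-like update over
// primitive double arrays using a parallel IntStream. Each index is
// independent, so concurrent writes to distinct slots are safe.
public class ParallelForSketch {
    public static void main(String[] args) {
        int n = 10_000_000;
        double[] a = new double[n];
        double[] b = new double[n];
        for (int i = 0; i < n; i++) b[i] = i * 0.5;

        long start = System.nanoTime();
        IntStream.range(0, n).parallel()
                 .forEach(i -> a[i] = 2.0 * b[i] + 1.0);
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println("parallel loop took " + micros + " us");
    }
}
```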