Empower Data-Driven Organizations
with HPE and Hadoop
Gilles Noisette – HPE EMEA Big Data CoE
04/13/2016
Agenda
• A Data-driven world
• HPE Contribution to Spark
• HPE Innovations for Hadoop
• Enterprise Grade SQL Analytics for Hadoop
• Data-centric Security for Hadoop
• HPE Data Discovery service to help you pull together these innovations
Transform to a hybrid infrastructure
Enable workplace productivity
Protect your digital enterprise
Empower the data-driven organization
Harness 100% of your relevant data to empower people with actionable insights that drive superior business outcomes.
Enterprise Spark at scale
HP Labs is helping make Apache Spark better
HPE and Hortonworks joint announcement
Hortonworks announcement event on March 1st
HPE CTO Martin Fink on stage
HPE Contribution to Apache Spark
Hortonworks and Hewlett Packard Labs join forces to boost Spark
Hewlett Packard Labs is working with Hortonworks to enhance the efficiency and scale of memory for the enterprise and to dramatically improve memory utilization:
– Enhanced shuffle engine technologies: faster sorting and in-memory computations, which have the potential to dramatically improve Spark performance
– Better memory utilization: improved performance and usage for broader scalability, which will help enable new large-scale use cases
“We're hoping to enable the Spark community to derive insight more rapidly,
from much larger data sets, without having to change a single line of code”
Martin Fink, CTO & Director, Hewlett Packard Labs
Tested with customers in the financial services industry, providing 3x to 15x performance increases.
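The shuffle and memory-utilization work above targets the classic trade-off between spilling sorted runs and sorting in memory. The sketch below is purely illustrative (plain Python, not Spark or HPE code, with made-up sizes): with a tighter memory budget, a sort must produce many spilled runs that are merged afterwards; better memory utilization means fewer runs and less merge overhead.

```python
import heapq
import random

def shuffle_sort(records, memory_limit):
    """Sort more records than fit in 'memory': build sorted runs no larger
    than memory_limit (the spill step), then merge the runs (the
    shuffle-read step). Fewer, larger in-memory runs mean less merge
    overhead, which is the kind of win a better shuffle engine targets."""
    runs = [sorted(records[i:i + memory_limit])
            for i in range(0, len(records), memory_limit)]
    return list(heapq.merge(*runs)), len(runs)

random.seed(0)
data = [random.randrange(10**6) for _ in range(10_000)]

small_mem, n_small = shuffle_sort(data, 500)    # 20 spilled runs to merge
large_mem, n_large = shuffle_sort(data, 5_000)  # only 2 runs: better memory utilization
assert small_mem == large_mem == sorted(data)
print(n_small, n_large)  # 20 2
```

Both budgets produce the same sorted output; the difference is how many runs the merge step must interleave.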
HPE Innovations for Hadoop
Optimized Infrastructure and Architecture
HPE Servers and Architectures for Hadoop
Traditional
• Tried-and-True Platform
• Corp standard: “I buy DL380’s”
• Small to large deployments
(very often ~20 nodes)
• Linear growth of balanced
workloads
Optimized
• Purpose-Built for Big Data
• Mid-size to large deployments
• Single, resource-intensive
workload
• Workload optimized
• Multi-temperature storage
• “Optimized traditional”
• Higher density, lower TCO
Converged
• MPP DBMS approach + open source
• Mid-size to large deployments
• Non-linear storage and
compute/memory growth
• Multiple workloads, latency demands
• Isolate workload hot spots
• Scale compute and storage
separately, elastically
• Innovative, TCO-driven approach
[Rack diagrams: symmetric architectures (conventional wisdom) — ProLiant DL380 Gen9 servers with 3.0 TB SATA drives — versus the asymmetric architecture (forward-thinking) — Apollo 4200/4500 Gen9 and Apollo 2000 systems with 900 GB SAS drives, plus Moonshot 1500.]
HPE Reference Architecture(s) for Hadoop
• Scaling from 4 to thousands of HPE Servers
• Sized to customer’s workload and storage needs
• Impressive Processor and Storage density
A set of pre-tested hardware components
• Processors, drives, network, 1 TB to 8 TB disk sizes, etc.
Breakthrough economics, density, simplicity
Flexible, pre-approved & optimized configurations
HPE Apollo 4000 example (full rack)
• 24 x HPE ProLiant Apollo 4530 worker nodes
• 3 x DL360 Gen9 head nodes
• Network switches: HPE 5900 10GbE and 2 x HPE 5930 10GbE

Full-rack density comparison
• Apollo 4510: 3.5 PB raw storage, 900 TB Hadoop usable, 960 Xeon E5 cores
• DL380: 2.46 PB raw storage, 630 TB Hadoop usable, 756 Xeon E5 cores
• Apollo 4200: 4.6 PB raw storage, 1 PB Hadoop usable, 756 Xeon E5 cores
• SL4540: 5.3 PB raw storage, 1.3 PB Hadoop usable, 320 Xeon E3 cores
HPE Apollo 4200 – Bringing Big Data storage server density to the enterprise
Used as a standard Hadoop worker node and as a BDRA asymmetric storage node
• Storage density: 28 LFF data drives – the highest storage density in a traditional 2U rack server (224 TB per server, up to 4.6 PB per rack)
• Performance and efficiency: halves the number of servers, network ports, and square meters needed; lowers the number of required licenses/subscriptions; core/spindle ratio of 1, with 28 cores (2 x 14) and 28 drive spindles
• Enterprise bridge: fits traditional enterprise/SME rack-server data centers; lowers electric power needs; data center plug-and-play
• Configuration flexibility: balanced capacity, performance, and throughput with flexible options for disks, CPUs, I/O, and interconnects
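The rack figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sizing assumptions here (8 TB drives, ~20 servers per rack, HDFS replication factor 3, ~25% reserved for scratch and overhead) are illustrative guesses, not HPE-published sizing rules:

```python
# Back-of-the-envelope check of the Apollo 4200 density figures.
DRIVES_PER_SERVER = 28   # 28 LFF data drives per 2U server (from the slide)
DRIVE_TB = 8             # assumed drive size
SERVERS_PER_RACK = 20    # assumed ~20 x 2U servers per rack
REPLICATION = 3          # HDFS default replication factor
OVERHEAD = 0.25          # assumed reserve for temp/shuffle/non-HDFS space

raw_per_server_tb = DRIVES_PER_SERVER * DRIVE_TB            # 224 TB, matching the slide
raw_rack_pb = raw_per_server_tb * SERVERS_PER_RACK / 1000   # ~4.5 PB raw per rack
usable_pb = raw_rack_pb / REPLICATION * (1 - OVERHEAD)      # ~1.1 PB Hadoop-usable

print(raw_per_server_tb, round(raw_rack_pb, 2), round(usable_pb, 2))  # 224 4.48 1.12
```

Under these assumptions the result lands close to the slide's "4.6 PB raw / 1 PB Hadoop usable" per rack, with the raw-to-usable gap explained almost entirely by 3x replication plus working-space reserve.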
Hadoop on HPE Moonshot
What would a good server cartridge for Hadoop look like?
• Processing: 8 Xeon cores; very efficient I/O
• Memory: 128 GB
• Storage: 2 TB M.2 SSD data storage
• Network: fast network (2 x 10GbE); low-latency chassis interconnect
45 servers per enclosure: 45 x 128 GB = 5.6 TB RAM and 45 x 2 TB = 90 TB of fast data storage in 4U, suited to workloads such as Impala SQL on Hadoop
HPE Asymmetric Architecture for Hadoop
HPE Vertica SQL on Hadoop
Enterprise-Grade Hadoop
HPE Big Data Reference Architecture
HPE Brings Enterprise Data Center Architecture to Hadoop
Traditional Hadoop Cluster Architecture
– Compute and storage are always co-located
– All servers are identical
– Data is partitioned across servers on direct attached storage
HPE Big Data Reference Architecture
– Separate, optimized compute and storage tiers
connected by high speed networking
– Standard Hadoop installed with storage components on the
storage servers and applications on the compute servers
– Enabled and optimized by purpose-selected HPE Moonshot and
Apollo servers and HPE/Hortonworks workload management
software (contributed to the community)
[Diagram: symmetric architecture — identical servers hosting applications and data files — versus asymmetric architecture — compute servers (applications, intermediate data) linked to storage servers (data files).]
Benefits of the HPE Big Data Reference Architecture for Hadoop
Delivering value to the business:
• Data consolidation
• Hosting multiple workloads
• Maximum elasticity and workload isolation
• Balance and scale compute and storage independently
• Breakthrough density and TCO
[Diagram: a compute tier (HPE Moonshot or HPE Apollo) connected to a storage tier (HPE Apollo 4xx0) by a high-speed network.]
Advantages* of the HPE Big Data Reference Architecture
Room to grow – the same performance in half the space

HPE Big Data Reference Architecture versus traditional big data architecture:
• Hadoop performance: equivalent
• Density: more than 2x denser
• Network bandwidth: 40 Gbit versus 10 Gbit
• HDFS storage performance: 2x greater
• Power (watts): half the power

* Normalized on performance, based on TeraSort testing
Independent scaling of compute and storage
Grow to match your workload and data sources – HPE Big Data Reference Architecture versus traditional architecture:
• Hot (compute) configuration: 2.8x the compute, 97% of the storage capacity, 4x the memory
• Intermediate configuration: 1.6x the compute, 1.5x the storage capacity, 2.5x the memory
• Cold (storage) configuration: 90% of the compute, 2.1x the storage capacity, 1.5x the memory
HPE Big Data Reference Architecture
Hadoop and its ecosystem take advantage of the BDRA
[Diagram: a compute tier running ecosystem workloads such as Impala, connected over a high-speed east–west network to SSD-based, hard-disk-based, and archive storage tiers.]
Enterprise Grade SQL Analytics for Hadoop
HPE Vertica SQL on Hadoop
• Develop your own analytical applications with full-functionality ANSI SQL
• Vertica inside – a powerful and proven SQL query engine
• Installs in the Hadoop cluster; supports Ambari; YARN-ready
• Enterprise-ready and stable, with full ANSI SQL capabilities and predictive analytics
[Diagram: YARN apps and Vertica over HDFS, ORC, and Parquet, spanning compute-optimized and storage-optimized servers.]
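"Full-functionality ANSI SQL" means standard analytical statements run unchanged. The snippet below is illustrative only: it runs a portable GROUP BY / HAVING / ORDER BY aggregation against SQLite (used purely for self-containment — not Vertica), the same statement shape a Vertica-style SQL-on-Hadoop engine would execute directly over ORC/Parquet data; the table and values are invented:

```python
import sqlite3

# Build a tiny in-memory table of hypothetical clickstream records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (user_id INTEGER, page TEXT, ms INTEGER)")
conn.executemany("INSERT INTO clicks VALUES (?, ?, ?)", [
    (1, "home", 120), (1, "cart", 340), (2, "home", 90),
    (2, "cart", 410), (2, "pay", 150), (3, "home", 60),
])

# A standard ANSI SQL aggregation: visits and average latency per page,
# keeping only pages seen at least twice.
rows = conn.execute("""
    SELECT page, COUNT(*) AS visits, AVG(ms) AS avg_ms
    FROM clicks
    GROUP BY page
    HAVING COUNT(*) >= 2
    ORDER BY visits DESC, page
""").fetchall()
print(rows)  # [('home', 3, 90.0), ('cart', 2, 375.0)]
```

The portability of this SQL surface is the point: the same query text can move between the warehouse and the Hadoop cluster without rewriting the application.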
HPE Vertica Advanced Analytics Family – with enterprise-grade reliability and scalability
Build a data-centric foundation.

Core is key – the same core Vertica engine delivers advanced analytics wherever your enterprise needs demand, today and tomorrow:
• First commercially available columnar database
• Native advanced analytics to deliver insight at the speed of business
• Native Hadoop integration
• SaaS and AMI cloud options
• Support for new open source architectures, including Kafka and Spark
• Open ANSI SQL standards, plus R, Python, Java, and Scala

Editions:
• HPE Vertica for SQL on Hadoop: native support for ORC and Parquet; supports all distributions; no helper node or single point of failure
• HPE Vertica Enterprise Edition: columnar storage and advanced compression; industry-leading performance and scalability
• Vertica Community Edition: free up to 1 TB
• HPE Vertica OnDemand: get up and running in under an hour; pay by the TB or query
• HPE Vertica AMI: hundreds of TB deployed; bring your own license to Amazon Web Services
HPE Big Data Architecture long-term view
Evolve to support multiple compute and storage blocks:
• Multi-temperature storage using HDFS tiering and object stores – SSD nodes, disk nodes, archive nodes, low-cost nodes
• Workload-optimized compute nodes to accelerate various big data software – GPU nodes, FPGA nodes, big-memory nodes
Data-centric security for Hadoop
HPE SecureData provides the missing data protection
[Diagram: traditional IT infrastructure security — disk encryption, database encryption, SSL/TLS/firewalls, and authentication management — leaves security gaps between layers (data & applications, databases, file systems, storage, middleware/network) against threats such as malware, insiders, SQL injection, traffic interceptors, and credential compromise. HPE SecureData's data-centric security provides end-to-end protection across the data ecosystem.]
HPE SecureData
Protecting sensitive and regulated data in Hadoop:
• Stateless key management – no key database to store or manage; high performance, unlimited scalability
• Both encryption and tokenization technologies – customize the solution to meet exact requirements
• Broad platform support – on-premise / cloud / big data; structured / unstructured; Hadoop, HPE Vertica, Linux, Windows, AWS, HPE NonStop, Teradata, IBM z/OS, etc.
• Quick time-to-value – complete end-to-end protection within a common platform; format preservation dramatically reduces implementation effort
Components: HPE SecureData Management Console, Web Services API, native APIs (C, Java, C#/.NET), command lines, key servers, and file processor.
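The "stateless key management" idea — no key database at all — is worth a concrete sketch. The toy below is *not* HPE SecureData's actual algorithm; it only illustrates the general concept, using a hypothetical master secret and HMAC-based derivation: every key is re-derived on demand from the master secret plus the requesting identity and context, so there is nothing to store, replicate, or back up:

```python
import hmac
import hashlib

# Hypothetical placeholder: in practice the master secret lives in an HSM
# or hardened key server, never in application code.
MASTER_SECRET = b"kept-in-an-hsm-in-practice"

def derive_key(identity: str, context: str) -> bytes:
    """Deterministically derive a per-identity, per-context key.
    Same inputs always yield the same key, so no key database is needed."""
    info = f"{identity}|{context}".encode()
    return hmac.new(MASTER_SECRET, info, hashlib.sha256).digest()

k1 = derive_key("etl-job-42", "credit-card")
k2 = derive_key("etl-job-42", "credit-card")   # re-derived, not looked up
k3 = derive_key("analytics", "credit-card")    # different identity, different key

assert k1 == k2   # stateless: derivation replaces storage
assert k1 != k3   # isolation between identities
print(k1.hex()[:16])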
Field-level, format-preserving, reversible data de-identification
Customizable to granular requirements, addressed by Format-Preserving Encryption (FPE) and Secure Stateless Tokenization (SST)

Mode      Credit card           SSN/ID        Email             DOB
(clear)   1234 5678 8765 4321   934-72-2356   bob@voltage.com   31-07-1966
Full      8736 5533 4678 9453   347-98-8309   hry@ghohawd.jiw   20-05-1972
Partial   1234 5681 5310 4321   634-34-2356   hry@ghohawd.jiw   20-05-1972
Obvious   1234 56AZ UYTZ 4321   AZS-UD-2356   hry@ghohawd.jiw   20-05-1972
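The "Partial" row above can be sketched in code. The toy below is a one-way illustrative mask, not real FPE or SST (real FPE is reversible under a key, typically a NIST-style construction); it only demonstrates the format-preserving property: digits map to digits, separators survive, and the leading and trailing four digits stay in the clear so analytics joins and BIN-based routing still work:

```python
import hashlib

def partial_mask(card: str, secret: bytes = b"demo-secret") -> str:
    """Replace the middle digits of a card-like string with deterministic
    fake digits, preserving length, separators, and the first/last 4."""
    digits = [c for c in card if c.isdigit()]
    head, middle, tail = digits[:4], digits[4:-4], digits[-4:]
    # Derive replacement digits deterministically from the hidden middle.
    h = hashlib.sha256(secret + "".join(middle).encode()).hexdigest()
    fake = [str(int(c, 16) % 10) for c in h][:len(middle)]
    # Re-thread digits through the original layout, keeping separators.
    out, it = [], iter(head + fake + tail)
    return "".join(next(it) if c.isdigit() else c for c in card)

masked = partial_mask("1234 5678 8765 4321")
assert masked[:4] == "1234" and masked[-4:] == "4321"
assert len(masked) == len("1234 5678 8765 4321")
print(masked)
```

Because the output keeps the original format, downstream schemas, validation rules, and ETL jobs need no changes — the implementation-effort reduction the slide claims for format preservation.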
Data Discovery service
Discover the value of your Data
How to discover the value of your data
1. Align business goals and challenges with the relevant data
2. Evaluate your data and quickly test, learn, and iterate on ideas to discover value
3. Create a strategic roadmap based on learnings
Key HPE solutions: Data Discovery; Data-Driven Transformation Planning
Business benefits: agile execution of impactful projects; maximized alignment to value
• To help you with your journey, the HPE Data Discovery Solution provides an end-to-end approach to realizing the value of your data
• Includes experienced consultants, proven processes, modern big data analytics platforms and infrastructure, and convenient delivery options
• Empowers you to realize: a clear path to business insights and value; rapid exploration and real-time access; lower risk; lower costs
Business value metrics:
• Improve business processes
• Enable better operations performance
• Understand customers better
• Increase market share, margin, and/or revenue
HPE Data Discovery Solution Framework
[Diagram: Discovery Workshop → Discovery Experience → Discovery Production Implementation, built on a Discovery Lab of HPE servers and storage running HPE Vertica, HPE IDOL, Hadoop, and SAP HANA, on premises or in the cloud.]
A rapid, low-risk, securely designed path to big data value, delivered as-a-service in the HPE Cloud or on client premises.
HPE Data Discovery Service
• Discovery Workshop: a one- to two-day workshop to align business and IT, discuss opportunities, and determine priorities
• Discovery Experience: a private, secure, and low-risk big data "test-drive" functional and technical environment
• Discovery Production Implementation: operationalize and monetize the new insights by implementing them into your business processes
Backed by:
• Expertise: HPE data scientists, technology experts, and industry SMEs
• Big data platforms: HPE Haven, Hadoop, SAP HANA, etc.
• Big data infrastructure: HPE Moonshot, HPE Apollo, HPE 3PAR, HPE ProLiant
• Platform flexibility: on-premise or cloud-based delivery models
• Guided process: proven processes to accelerate time-to-value
• Use case library: industry and business function examples
• Data discovery lab: rapid deployment of data discovery labs
Summary
HPE Solution for Hadoop
• Flexible, purpose-built infrastructure: HPE Apollo + Moonshot + ProLiant; Hadoop Reference Architectures for MapR, Hortonworks & Cloudera; Big Data Analytics RA
• High-performing analytics engines: Hadoop, HPE Vertica SQL for Hadoop, SAP HANA, HPE IDOL, HPE Information Governance
• Consulting & implementation services: HPE Analytics Consulting Services for Hadoop; HPE Integration Services
• On-premise and hybrid cloud deployment options
• High performance computing: 2x Hadoop performance or 50% less space – HPE Infrastructure Big Data Reference Architecture
• Analyze at scale and speed: 100% of your data, 10x to 1,000x faster – HPE Big Data platform, powered by Vertica & IDOL
• Secure and govern: protect and manage your data and reputation – HPE Security and Governance Solutions for Hadoop; data management, data discovery, and governance services
Build a data-centric foundation – Hadoop for the enterprise.
Why Hewlett Packard Enterprise?
Enterprise scale with Hadoop
• Experience and expertise: 3000+ global analytics and data management professionals; hundreds of data scientists
• Solution leadership: proven analytics and compute platforms for all data, environments, and analytics; services to deliver value from discovery to achieving business outcomes
• Market leadership: Gartner Magic Quadrant leader for Enterprise Data Warehouse and Data Management Solutions for Analytics (2015) and eDiscovery (2015)
• Flexible and open: solutions built on open standards, offering choice and flexibility; strong strategic alliances complementing HPE solutions
THANK YOU
 
Delivering Apache Hadoop for the Modern Data Architecture
Delivering Apache Hadoop for the Modern Data Architecture Delivering Apache Hadoop for the Modern Data Architecture
Delivering Apache Hadoop for the Modern Data Architecture Hortonworks
 
Eng systems oracle_overview
Eng systems oracle_overviewEng systems oracle_overview
Eng systems oracle_overviewFran Navarro
 
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmarkThe Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmarkLenovo Data Center
 
Hp Converged Systems and Hortonworks - Webinar Slides
Hp Converged Systems and Hortonworks - Webinar SlidesHp Converged Systems and Hortonworks - Webinar Slides
Hp Converged Systems and Hortonworks - Webinar SlidesHortonworks
 
Meta scale kognitio hadoop webinar
Meta scale kognitio hadoop webinarMeta scale kognitio hadoop webinar
Meta scale kognitio hadoop webinarKognitio
 
HPE Solutions for Challenges in AI and Big Data
HPE Solutions for Challenges in AI and Big DataHPE Solutions for Challenges in AI and Big Data
HPE Solutions for Challenges in AI and Big DataLviv Startup Club
 
Saviak lviv ai-2019-e-mail (1)
Saviak lviv ai-2019-e-mail (1)Saviak lviv ai-2019-e-mail (1)
Saviak lviv ai-2019-e-mail (1)Lviv Startup Club
 
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Community
 
clusterstor-hadoop-data-sheet
clusterstor-hadoop-data-sheetclusterstor-hadoop-data-sheet
clusterstor-hadoop-data-sheetAndrei Khurshudov
 
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Ontico
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFSUSE Italy
 
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataHitoshi Sato
 
Accelerating analytics in the cloud with the Starburst Presto + Alluxio stack
Accelerating analytics in the cloud with the Starburst Presto + Alluxio stackAccelerating analytics in the cloud with the Starburst Presto + Alluxio stack
Accelerating analytics in the cloud with the Starburst Presto + Alluxio stackAlluxio, Inc.
 
OpenDrives_-_Product_Sheet_v13D (2) (1)
OpenDrives_-_Product_Sheet_v13D (2) (1)OpenDrives_-_Product_Sheet_v13D (2) (1)
OpenDrives_-_Product_Sheet_v13D (2) (1)Scott Eiser
 
Exadata architecture and internals presentation
Exadata architecture and internals presentationExadata architecture and internals presentation
Exadata architecture and internals presentationSanjoy Dasgupta
 
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed_Hat_Storage
 
Powering Real-Time Big Data Analytics with a Next-Gen GPU Database
Powering Real-Time Big Data Analytics with a Next-Gen GPU DatabasePowering Real-Time Big Data Analytics with a Next-Gen GPU Database
Powering Real-Time Big Data Analytics with a Next-Gen GPU DatabaseKinetica
 
Big Data and its emergence
Big Data and its emergenceBig Data and its emergence
Big Data and its emergencekoolkalpz
 

Semelhante a HPE Empower Data-Driven Organizations with Hadoop (20)

HPC DAY 2017 | HPE Storage and Data Management for Big Data
HPC DAY 2017 | HPE Storage and Data Management for Big DataHPC DAY 2017 | HPE Storage and Data Management for Big Data
HPC DAY 2017 | HPE Storage and Data Management for Big Data
 
Delivering Apache Hadoop for the Modern Data Architecture
Delivering Apache Hadoop for the Modern Data Architecture Delivering Apache Hadoop for the Modern Data Architecture
Delivering Apache Hadoop for the Modern Data Architecture
 
Eng systems oracle_overview
Eng systems oracle_overviewEng systems oracle_overview
Eng systems oracle_overview
 
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmarkThe Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmark
 
Hp Converged Systems and Hortonworks - Webinar Slides
Hp Converged Systems and Hortonworks - Webinar SlidesHp Converged Systems and Hortonworks - Webinar Slides
Hp Converged Systems and Hortonworks - Webinar Slides
 
Empower Data-Driven Organizations with HPE and Hadoop
Empower Data-Driven Organizations with HPE and HadoopEmpower Data-Driven Organizations with HPE and Hadoop
Empower Data-Driven Organizations with HPE and Hadoop
 
Meta scale kognitio hadoop webinar
Meta scale kognitio hadoop webinarMeta scale kognitio hadoop webinar
Meta scale kognitio hadoop webinar
 
HPE Solutions for Challenges in AI and Big Data
HPE Solutions for Challenges in AI and Big DataHPE Solutions for Challenges in AI and Big Data
HPE Solutions for Challenges in AI and Big Data
 
Saviak lviv ai-2019-e-mail (1)
Saviak lviv ai-2019-e-mail (1)Saviak lviv ai-2019-e-mail (1)
Saviak lviv ai-2019-e-mail (1)
 
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
 
clusterstor-hadoop-data-sheet
clusterstor-hadoop-data-sheetclusterstor-hadoop-data-sheet
clusterstor-hadoop-data-sheet
 
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
 
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
 
Accelerating analytics in the cloud with the Starburst Presto + Alluxio stack
Accelerating analytics in the cloud with the Starburst Presto + Alluxio stackAccelerating analytics in the cloud with the Starburst Presto + Alluxio stack
Accelerating analytics in the cloud with the Starburst Presto + Alluxio stack
 
OpenDrives_-_Product_Sheet_v13D (2) (1)
OpenDrives_-_Product_Sheet_v13D (2) (1)OpenDrives_-_Product_Sheet_v13D (2) (1)
OpenDrives_-_Product_Sheet_v13D (2) (1)
 
Exadata architecture and internals presentation
Exadata architecture and internals presentationExadata architecture and internals presentation
Exadata architecture and internals presentation
 
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
 
Powering Real-Time Big Data Analytics with a Next-Gen GPU Database
Powering Real-Time Big Data Analytics with a Next-Gen GPU DatabasePowering Real-Time Big Data Analytics with a Next-Gen GPU Database
Powering Real-Time Big Data Analytics with a Next-Gen GPU Database
 
Big Data and its emergence
Big Data and its emergenceBig Data and its emergence
Big Data and its emergence
 

Mais de DataWorks Summit/Hadoop Summit

Unleashing the Power of Apache Atlas with Apache Ranger
Unleashing the Power of Apache Atlas with Apache RangerUnleashing the Power of Apache Atlas with Apache Ranger
Unleashing the Power of Apache Atlas with Apache RangerDataWorks Summit/Hadoop Summit
 
Enabling Digital Diagnostics with a Data Science Platform
Enabling Digital Diagnostics with a Data Science PlatformEnabling Digital Diagnostics with a Data Science Platform
Enabling Digital Diagnostics with a Data Science PlatformDataWorks Summit/Hadoop Summit
 
Double Your Hadoop Performance with Hortonworks SmartSense
Double Your Hadoop Performance with Hortonworks SmartSenseDouble Your Hadoop Performance with Hortonworks SmartSense
Double Your Hadoop Performance with Hortonworks SmartSenseDataWorks Summit/Hadoop Summit
 
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...DataWorks Summit/Hadoop Summit
 
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...DataWorks Summit/Hadoop Summit
 
Mool - Automated Log Analysis using Data Science and ML
Mool - Automated Log Analysis using Data Science and MLMool - Automated Log Analysis using Data Science and ML
Mool - Automated Log Analysis using Data Science and MLDataWorks Summit/Hadoop Summit
 
The Challenge of Driving Business Value from the Analytics of Things (AOT)
The Challenge of Driving Business Value from the Analytics of Things (AOT)The Challenge of Driving Business Value from the Analytics of Things (AOT)
The Challenge of Driving Business Value from the Analytics of Things (AOT)DataWorks Summit/Hadoop Summit
 
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...DataWorks Summit/Hadoop Summit
 

Mais de DataWorks Summit/Hadoop Summit (20)

Running Apache Spark & Apache Zeppelin in Production
Running Apache Spark & Apache Zeppelin in ProductionRunning Apache Spark & Apache Zeppelin in Production
Running Apache Spark & Apache Zeppelin in Production
 
State of Security: Apache Spark & Apache Zeppelin
State of Security: Apache Spark & Apache ZeppelinState of Security: Apache Spark & Apache Zeppelin
State of Security: Apache Spark & Apache Zeppelin
 
Unleashing the Power of Apache Atlas with Apache Ranger
Unleashing the Power of Apache Atlas with Apache RangerUnleashing the Power of Apache Atlas with Apache Ranger
Unleashing the Power of Apache Atlas with Apache Ranger
 
Enabling Digital Diagnostics with a Data Science Platform
Enabling Digital Diagnostics with a Data Science PlatformEnabling Digital Diagnostics with a Data Science Platform
Enabling Digital Diagnostics with a Data Science Platform
 
Revolutionize Text Mining with Spark and Zeppelin
Revolutionize Text Mining with Spark and ZeppelinRevolutionize Text Mining with Spark and Zeppelin
Revolutionize Text Mining with Spark and Zeppelin
 
Double Your Hadoop Performance with Hortonworks SmartSense
Double Your Hadoop Performance with Hortonworks SmartSenseDouble Your Hadoop Performance with Hortonworks SmartSense
Double Your Hadoop Performance with Hortonworks SmartSense
 
Hadoop Crash Course
Hadoop Crash CourseHadoop Crash Course
Hadoop Crash Course
 
Data Science Crash Course
Data Science Crash CourseData Science Crash Course
Data Science Crash Course
 
Apache Spark Crash Course
Apache Spark Crash CourseApache Spark Crash Course
Apache Spark Crash Course
 
Dataflow with Apache NiFi
Dataflow with Apache NiFiDataflow with Apache NiFi
Dataflow with Apache NiFi
 
Schema Registry - Set you Data Free
Schema Registry - Set you Data FreeSchema Registry - Set you Data Free
Schema Registry - Set you Data Free
 
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
Building a Large-Scale, Adaptive Recommendation Engine with Apache Flink and ...
 
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
Real-Time Anomaly Detection using LSTM Auto-Encoders with Deep Learning4J on ...
 
Mool - Automated Log Analysis using Data Science and ML
Mool - Automated Log Analysis using Data Science and MLMool - Automated Log Analysis using Data Science and ML
Mool - Automated Log Analysis using Data Science and ML
 
How Hadoop Makes the Natixis Pack More Efficient
How Hadoop Makes the Natixis Pack More Efficient How Hadoop Makes the Natixis Pack More Efficient
How Hadoop Makes the Natixis Pack More Efficient
 
HBase in Practice
HBase in Practice HBase in Practice
HBase in Practice
 
The Challenge of Driving Business Value from the Analytics of Things (AOT)
The Challenge of Driving Business Value from the Analytics of Things (AOT)The Challenge of Driving Business Value from the Analytics of Things (AOT)
The Challenge of Driving Business Value from the Analytics of Things (AOT)
 
Breaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
Breaking the 1 Million OPS/SEC Barrier in HOPS HadoopBreaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
Breaking the 1 Million OPS/SEC Barrier in HOPS Hadoop
 
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
From Regulatory Process Verification to Predictive Maintenance and Beyond wit...
 
Backup and Disaster Recovery in Hadoop
Backup and Disaster Recovery in Hadoop Backup and Disaster Recovery in Hadoop
Backup and Disaster Recovery in Hadoop
 

Último

GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 

Último (20)

GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 

HPE Empower Data-Driven Organizations with Hadoop

  • 1. Empower Data-Driven Organizations with HPE and Hadoop Gilles Noisette – HPE EMEA Big Data CoE 04/13/2016
  • 2. Agenda • A Data-driven world • HPE Contribution to Spark • HPE Innovations for Hadoop • Enterprise Grade SQL Analytics for Hadoop • Data-centric Security for Hadoop • HPE Data Discovery service to help you pull together these innovations
  • 3. Transform to a hybrid infrastructure Enable workplace productivity Protect your digital enterprise Empower the data-driven organization
  • 4. Transform to a hybrid infrastructure Enable workplace productivity Protect your digital enterprise Empower the data- driven organization Harness 100% of your relevant data to empower people with actionable insights that drive superior business outcomes.
  • 5. Enterprise Spark at scale HP Labs is helping make Apache Spark better
  • 6. HPE and Hortonworks joint announcement – Hortonworks announcement event on March 1st, with HPE CTO Martin Fink on stage
  • 7. HPE Contribution to Apache Spark – Martin Fink announcement. Hortonworks and HP Labs join forces to boost Spark: Hewlett Packard Labs is working with Hortonworks to enhance the efficiency and scale of memory for the enterprise and to dramatically improve memory utilization. – Enhanced shuffle engine technologies: faster sorting and in-memory computation, with the potential to dramatically improve Spark performance. – Better memory utilization: improved performance and usage for broader scalability, which will help enable new large-scale use cases. “We're hoping to enable the Spark community to derive insight more rapidly, from much larger data sets, without having to change a single line of code” – Martin Fink, CTO & Director, HP Labs. Tested with customers from the financial services industry; delivers 3x to 15x performance increases.
  • 8. HPE Innovations for Hadoop – Optimized Infrastructure and Architecture
  • 9. HPE Servers and Architectures for Hadoop
Traditional – Tried-and-true platform • Corp standard: “I buy DL380’s” • Small to large deployments (very often ~20 nodes) • Linear growth of balanced workloads
Optimized – Purpose-built for Big Data • Mid-size to large deployments • Single, resource-intensive workload • Workload optimized • Multi-temperature storage • “Optimized traditional” • Higher density, lower TCO
Converged – MPP DBMS approach + open source • Mid-size to large deployments • Non-linear storage and compute/memory growth • Multiple workloads, latency demands • Isolate workload hot spots • Scale compute and storage separately, elastically • Innovative, TCO-driven approach
Symmetric architectures (conventional wisdom: ProLiant DL380 Gen9, Apollo 4xxx) vs. asymmetric architecture (forward-thinking: Moonshot 1500 & Apollo)
  • 10. HPE Reference Architecture(s) for Hadoop
• Scaling from 4 to thousands of HPE servers
• Sized to the customer’s workload and storage needs
• Impressive processor and storage density
A set of pre-tested hardware components (processors, drives, network, 1TB/8TB disk sizes, etc.) delivering breakthrough economics, density, and simplicity through flexible, pre-approved, optimized configurations.
HPE Apollo 4000 example: 24 x HPE Apollo 4530 worker nodes, 3 x DL360 Gen9 head nodes, HPE 5900 10GbE and 2 x HPE 5930 10GbE network switches.
Full-rack comparison:
• Apollo 4510: 3.5 PB raw storage, 900 TB Hadoop usable, 960 Xeon E5 cores
• DL380: 2.46 PB raw storage, 630 TB Hadoop usable, 756 Xeon E5 cores
• Apollo 4200: 4.6 PB raw storage, 1 PB Hadoop usable, 756 Xeon E5 cores
• SL4540: 5.3 PB raw storage, 1.3 PB Hadoop usable, 320 Xeon E3 cores
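As a rough sanity check on the per-rack figures above, the “Hadoop usable” numbers correspond to raw capacity divided by four. A minimal Python sketch, assuming the common sizing rule of 3x HDFS replication plus ~25% reserved for temporary/shuffle data (an assumption; the slide does not state its formula):

```python
# Rough Hadoop capacity sizing, mirroring the per-rack figures above.
# Assumption (not stated on the slide): usable = raw / 3 replicas * 0.75,
# i.e. 3x HDFS replication plus ~25% kept free for temporary/shuffle data.

REPLICATION = 3
SCRATCH_FACTOR = 0.75  # fraction of post-replication space left for HDFS data

def usable_tb(drives_per_server: int, drive_tb: int, servers: int) -> float:
    """Approximate Hadoop-usable capacity in TB for one rack."""
    raw = drives_per_server * drive_tb * servers
    return raw / REPLICATION * SCRATCH_FACTOR

# Apollo 4530 rack: 30 nodes x 15 drives x 8 TB = 3600 TB raw
print(usable_tb(15, 8, 30))  # -> 900.0, matching "900 TB Hadoop usable"
# DL380 rack: 21 nodes x 15 drives x 8 TB = 2520 TB raw
print(usable_tb(15, 8, 21))  # -> 630.0
```

The same divide-by-four rule reproduces each of the rack options listed above, which is why halving raw capacity requirements (via density or tiering) has an outsized effect on rack count.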
  • 11. HPE Apollo 4200 - Bringing Big Data storage-server density to the enterprise
Used as a standard Hadoop worker node and as the BDRA asymmetric storage node.
• Storage density: 28 LFF data drives; the highest storage density in a traditional 2U rack server (224 TB, up to 4.6 PB per rack); an ideal core/spindle ratio of 1, with 28 cores (2 x 14) and 28 drive spindles
• Performance and efficiency: halves the number of servers, network ports, and square meters needed; lowers the number of required licenses/subscriptions and the electric power needs
• Enterprise bridge: data-center plug and play; fits traditional enterprise/SME rack-server data centers
• Configuration flexibility: balanced capacity, performance, and throughput with flexible options for disks, CPUs, I/O, and interconnects
  • 12. Hadoop on HPE Moonshot
What would a good server cartridge for Hadoop look like?
• Processing: 8 Xeon cores with very efficient I/O
• Memory: 128GB
• Storage: 2TB M.2 (SSD) data storage
• Network: fast network (2 x 10GbE) and low-latency chassis interconnect
With 45 servers per enclosure: 45 x 128GB = 5.6TB RAM and 45 x 2TB = 90TB of fast data storage in 4U. Well suited to Impala SQL on Hadoop.
  • 13. HPE Asymmetric Architecture for Hadoop: HPE Vertica SQL on Hadoop. Enterprise-Grade Hadoop
  • 14. HPE Big Data Reference Architecture
HPE Brings Enterprise Data Center Architecture to Hadoop
Traditional Hadoop cluster architecture (symmetric):
– Compute and storage are always co-located
– All servers are identical
– Data is partitioned across servers on direct-attached storage
HPE Big Data Reference Architecture (asymmetric):
– Separate, optimized compute and storage tiers connected by high-speed networking
– Standard Hadoop installed with storage components on the storage servers and applications on the compute servers
– Enabled and optimized by purpose-selected HPE Moonshot and Apollo servers and HPE/Hortonworks workload management software (contributed to the community)
[Slide diagram: symmetric architecture (identical servers holding applications and data files) versus asymmetric architecture (compute servers running applications and holding intermediate data; storage servers holding data files).]
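The asymmetric placement described above relies on labeling nodes and steering each workload to the right pool (the YARN node-labels feature discussed in the speaker notes). A toy Python sketch of the idea (not the real YARN API; the hostnames and labels are made up for illustration):

```python
# Toy illustration of YARN node-label style placement: nodes carry labels,
# and an application requests the label it needs, so compute-heavy jobs land
# on compute servers while storage daemons stay on storage servers.
# Hostnames and labels below are hypothetical.

nodes = {
    "moonshot-01": "compute", "moonshot-02": "compute",
    "apollo-01": "storage", "apollo-02": "storage",
}

def schedule(app_label: str) -> list:
    """Return the hostnames whose label matches the application's request."""
    return sorted(h for h, label in nodes.items() if label == app_label)

print(schedule("compute"))  # -> ['moonshot-01', 'moonshot-02']
print(schedule("storage"))  # -> ['apollo-01', 'apollo-02']
```

Because placement is driven by labels rather than by where data happens to live, compute pools can be resized or repurposed without repartitioning the data on the storage tier.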
  • 15. Benefits of HPE Big Data Reference Architecture for Hadoop
Delivering value to the business:
• High-speed network
• Data consolidation
• Hosting multiple workloads
• Maximum elasticity and workload isolation
• Balance and scale compute and storage independently
• Breakthrough density and TCO
[Slide diagram: HPE Moonshot or HPE Apollo compute tier connected to an HPE Apollo 4xx0 storage tier built from Apollo 2000 systems.]
  • 16. Advantages* of HPE Big Data Reference Architecture
Room to grow: the same performance in half the space (*normalized on performance, based on Terasort testing)
HPE Big Data Reference Architecture versus traditional Big Data architecture:
• Hadoop performance: equivalent
• Density: >2x more dense
• Network bandwidth: 40Gbit versus 10Gbit
• HDFS storage performance: 2x greater
• Power (watts): half the power
  • 17. Independent scaling of compute and storage
Grow to match your workload and data sources. HPE Big Data Reference Architecture relative to a traditional architecture:
• Hot (compute) configuration: 2.8x the compute, 97% of the storage capacity, 4x the memory
• Intermediate configuration: 1.6x the compute, 1.5x the storage capacity, 2.5x the memory
• Cold (storage) configuration: 90% of the compute, 2.1x the storage capacity, 1.5x the memory
  • 18. HPE Big Data Reference Architecture
Hadoop and its ecosystem take advantage of the BDRA.
[Slide diagram: compute tier (including Impala) connected through the network switches, with east-west networking over a high-speed network, to SSD-based, hard-disk-based, and archive storage tiers.]
  • 19. Enterprise Grade SQL Analytics for Hadoop
HPE Vertica SQL on Hadoop:
• Develop your own analytical applications with full-functionality ANSI SQL
• Vertica inside: a powerful and proven SQL query engine
• Installs in the Hadoop cluster, supports Ambari, YARN-ready
• Enterprise-ready and stable, with full ANSI SQL capabilities and predictive analytics
[Slide diagram: SQL on Hadoop spanning YARN apps and HDFS/ORC/Parquet across compute-optimized and storage-optimized servers.]
  • 20. HPE Vertica Advanced Analytics family – with enterprise-grade reliability and scalability
Core is key: the same core Vertica engine delivers advanced analytics wherever your enterprise needs demand, today and tomorrow.
Core Vertica SQL engine:
• First commercially available columnar database
• Native advanced analytics to deliver insight at the speed of business
• Native Hadoop integration; SaaS and AMI cloud options
• Support for new open-source architectures, including Kafka and Spark
• Open ANSI SQL standards, plus R, Python, Java, and Scala
Editions:
• Vertica Community Edition: free up to 1 TB; build a data-centric foundation
• HP Vertica for SQL on Hadoop: native support for ORC and Parquet; supports all distributions; no helper node or single point of failure
• HP Vertica Enterprise Edition: columnar storage and advanced compression; industry-leading performance and scalability
• HP Vertica OnDemand: get up and running in < 1 hr; pay by the TB or query
• HP Vertica AMI: hundreds of TB deployed; bring your own license to Amazon Web Services
  • 21. HPE Big Data Architecture long-term view
Evolve to support multiple compute and storage blocks:
• Workload-optimized compute nodes to accelerate various big data software: low-cost nodes, GPU nodes, FPGA nodes, big-memory nodes
• Multi-temperature storage using HDFS tiering and object stores: SSD nodes, disk nodes, archive nodes
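Multi-temperature storage boils down to a placement policy that maps the "temperature" of data to a storage tier. A minimal illustrative sketch; the age thresholds are hypothetical, and a real HDFS deployment would express this with storage policies (e.g. ALL_SSD, HOT, COLD) rather than application code:

```python
# Illustrative multi-temperature placement policy: recent (hot) data on SSD
# nodes, bulk (warm) data on disk nodes, rarely read (cold) data on archive
# nodes. The 7-day and 90-day thresholds are made-up examples.

def storage_tier(age_days: int) -> str:
    """Pick a storage tier for a dataset based on its age in days."""
    if age_days <= 7:
        return "SSD"      # hot: serve low-latency queries
    if age_days <= 90:
        return "DISK"     # warm: bulk batch processing
    return "ARCHIVE"      # cold: dense, low-cost retention

print(storage_tier(1))    # -> SSD
print(storage_tier(30))   # -> DISK
print(storage_tier(365))  # -> ARCHIVE
```

In the BDRA this policy maps directly onto distinct node pools, so adding archive capacity never forces the purchase of more compute.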
  • 22. Data-centric security for Hadoop. Enterprise-Grade Hadoop
  • 23. HPE SecureData provides the missing data protection
Traditional IT infrastructure security protects individual layers: SSL/TLS/firewalls for middleware and the network, disk encryption for storage, database encryption for databases, and authentication management for data and applications. Threats such as malware and insiders, SQL injection, traffic interceptors, and credential compromise slip through the security gaps between those layers. HPE SecureData data-centric security closes the gaps with end-to-end protection across the whole data ecosystem, from data and applications down through file systems, databases, storage, and the network.
  • 24. HPE SecureData
Protecting sensitive and regulated data in Hadoop:
– Stateless key management: no key database to store or manage; high performance, unlimited scalability
– Both encryption and tokenization technologies: customize the solution to meet exact requirements
– Broad platform support: on-premise / cloud / big data; structured / unstructured; Hadoop, HPE Vertica, Linux, Windows, AWS, HPE NonStop, Teradata, IBM z/OS, etc.
– Quick time-to-value: complete end-to-end protection within a common platform; format preservation dramatically reduces implementation effort
Components: HPE SecureData Management Console, Web Services API, Native APIs (C, Java, C#/.NET), Command Lines, Key Servers, and File Processor.
  • 25. Field-level, format-preserving, reversible data de-identification
Customizable to granular requirements, addressed by encryption (FPE**) and tokenization (SST*):
• Original: Credit card 1234 5678 8765 4321 | SSN/ID 934-72-2356 | Email bob@voltage.com | DOB 31-07-1966
• Full: 8736 5533 4678 9453 | 347-98-8309 | hry@ghohawd.jiw | 20-05-1972
• Partial: 1234 5681 5310 4321 | 634-34-2356 | hry@ghohawd.jiw | 20-05-1972
• Obvious: 1234 56AZ UYTZ 4321 | AZS-UD-2356 | hry@ghohawd.jiw | 20-05-1972
*Secure Stateless Tokenization (SST) **Format-Preserving Encryption (FPE)
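To make the format-preserving idea concrete, here is a toy, reversible digit-level transform in Python: digits map to digits and everything else is untouched, so the protected value keeps the original layout. This is an illustrative keyed stream construction only, not HPE's FPE (which builds on standardized AES-based modes); the key and data are made up:

```python
import hmac
import hashlib

# Toy format-preserving transform: each digit is shifted by a keyed,
# deterministic digit stream (mod 10); non-digits pass through unchanged,
# so "1234 5678 8765 4321" keeps its credit-card shape. Illustration only,
# NOT HPE SecureData's algorithm.

def _digit_stream(key: bytes, n: int) -> list:
    out, counter = [], 0
    while len(out) < n:
        block = hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        out.extend(b % 10 for b in block)
        counter += 1
    return out[:n]

def fpe_digits(value: str, key: bytes, decrypt: bool = False) -> str:
    positions = [i for i, c in enumerate(value) if c.isdigit()]
    stream = _digit_stream(key, len(positions))
    chars = list(value)
    sign = -1 if decrypt else 1
    for i, k in zip(positions, stream):
        chars[i] = str((int(chars[i]) + sign * k) % 10)
    return "".join(chars)

key = b"demo-key"  # hypothetical key for the example
token = fpe_digits("1234 5678 8765 4321", key)
assert len(token) == 19 and token[4] == " "                       # format preserved
assert fpe_digits(token, key, decrypt=True) == "1234 5678 8765 4321"  # reversible
```

Because the shape survives, downstream systems that validate formats (16-digit card fields, dashed IDs) keep working on the protected data, which is the practical appeal of FPE and SST over opaque ciphertext.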
  • 26. Data Discovery service: discover the value of your data
  • 27. How to discover the value of your data
• Align business goals and challenges with the relevant data
• Evaluate your data and quickly test, learn, and iterate ideas to discover value
• Create a strategic roadmap based on learnings
Key HPE solutions: Data Discovery; Data-Driven Transformation Planning. Business benefits: agile execution of impactful projects; maximum alignment to value.
  • 28. HPE Data Discovery Solution
• To help you with your journey, the HPE Data Discovery Solution provides an end-to-end approach to realizing the value of your data
• Includes experienced consultants, proven processes, modern big data analytics platforms and infrastructure, and convenient delivery options
• Empowers you to realize: a clear path to business insights and value; rapid exploration and real-time access; lower risk; lower costs
Business value metrics: improve business processes; enable better operations performance; understand customers better; increase market share, margin, and/or revenue.
Framework: Discovery Workshop, Discovery Experience, and Discovery Production Implementation, backed by a Discovery Lab running HPE Vertica, HPE IDOL, Hadoop, and SAP HANA on HPE servers and storage, on premises or in the cloud.
  • 29. HPE Data Discovery Service
A rapid, low-risk, securely designed path to big data value, delivered as-a-service in the HPE Cloud or on client premises:
• Discovery Workshop: a one-to-two-day workshop to align business and IT, discuss opportunities, and determine priorities
• Discovery Experience: a private, secure, and low-risk big data “test-drive” functional and technical environment
• Discovery Production Implementation: operationalize and monetize the new insights by implementing them into your business processes
Backed by: expertise (HPE data scientists, technology experts, industry SMEs); big data platforms (HPE Haven, Hadoop, SAP HANA, etc.); platform flexibility (on-premise or cloud-based delivery models); a guided process (proven processes to accelerate time-to-value); a use case library (industry and business function examples); big data infrastructure (HPE Moonshot, HPE Apollo, HPE 3PAR, HPE ProLiant); and rapid deployment of data discovery labs.
  • 31. HPE Solution for Hadoop
• Flexible, purpose-built infrastructure: HPE Apollo + Moonshot + ProLiant; Big Data Analytics RA; Hadoop Reference Architectures for MapR, Hortonworks & Cloudera
• High-performing analytics engines: HPE Vertica SQL for Hadoop, SAP HANA, HPE IDOL, HPE Information Governance, Hadoop
• Consulting & implementation services: HPE Analytics Consulting Services for Hadoop, HPE Integration Services
• On-premise and hybrid cloud deployment options
  • 32. Build a data-centric foundation: Hadoop for the enterprise
• High-performance computing: 2x Hadoop performance or 50% less space (HPE infrastructure, Big Data Reference Architecture)
• Analyze at scale and speed: 100% of your data, 10x to 1,000x faster (HPE Big Data platform, powered by Vertica & IDOL)
• Secure and govern: protect and manage your data and reputation (HPE security and governance solutions for Hadoop; data management, data discovery, and governance services)
  • 33. Why Hewlett Packard Enterprise? Enterprise scale with Hadoop
• Experience and expertise: 3000+ global analytics and data management professionals; hundreds of data scientists
• Solution leadership: proven analytics and compute platforms for all data, environments, and analytics; services to deliver value from discovery to achieving business outcomes
• Market leadership: Gartner Magic Quadrant leader for Enterprise Data Warehouse and Data Management Solutions for Analytics (2015) and for eDiscovery (2015)
• Flexible and open: solutions built on open standards, offering choice and flexibility; strong strategic alliances complementing HPE solutions

Editor's Notes

  1. Session duration: 40 min. 22 slides; 30 to 35 min presentation, 5 to 10 min Q&A.
  2. Empower Data-Driven Organizations with HPE and Hadoop. Data is the fuel for the idea economy, and being data-driven is essential for businesses to be competitive. HPE works with our partner Hortonworks to deliver a total solution for all your big data initiatives, accelerating the value of Hadoop. Join us in this session and you'll hear about: – HPE Spark Optimizer: a 15x performance improvement for Spark? Yes please. A Hortonworks/HPE Labs collaboration on enhancing Spark for workloads with large shared pools of memory – Data Discovery: quickly discover the value of your data with the help of analytics experts, starting with a data lab on your premises or delivered through the cloud – Enterprise-Grade Hadoop: an innovative asymmetrical compute and storage architecture with better performance per square foot and power utilization, for unprecedented elasticity and scalability – Security for Hadoop: HPE SecureData is a data-centric framework that protects sensitive data at rest, in motion, and in use in Hadoop and other big data systems – SQL on Hadoop: analytics made easier; bridge your EDW legacy systems through tight integrations to Kafka, R, Python, and Apache Spark
  3. We are living in a digital world where everyone is connected, everywhere. We’re living in an Idea Economy, where the ability to turn an idea into a new product or service has never been easier. Anyone with an idea can actually change the world.   Of course, ideas have always been the root of progress and business success. They’ve launched companies, created markets and built industries. But there’s a difference today. In this hyper-connected, technology-driven world, it takes more than good ideas to be successful. Today, the tools that enable disruption – things like cloud computing, mobile technology, big data analytics – are so easily accessible and affordable, they have given rise to a new class of entrepreneurs. And, these challengers of the status quo are revolutionizing entire industries at a pace and scale never seen before. In the Idea Economy, no industry is immune to disruption. Whether in energy, healthcare, manufacturing or telecommunications, companies – be they start-ups or large enterprises – can only survive if they have both the vision and technological agility to respond to market opportunities and threats and quickly turn ideas into reality. Today, an entrepreneur with a good idea has access to all of the infrastructure and resources that a traditional Fortune 1000 company would have…and they can pay for it all with a credit card. They can rent compute on demand, get a SAAS ERP system, use PayPal or Square for transactions, they can market using Facebook or Google, and have FedEx run their supply chain.   The days of needing millions of dollars to launch a new company or bring a new idea to market are fading fast.   You don’t have to look any further than more recent companies such as Vimeo, One Kings Lane or Dock to Dish – all HPE customers and partners. . . or with more common names like Salesforce, Airbnb, Netflix and Pandora to see how the Idea Economy is exploding.   And how about Uber? 
Uber’s impact has been dramatic since it launched its application to connect riders and drivers in 2009. Without owning a single car, it now serves more than 250 cities in 55 countries and has completely disrupted the taxi industry.   San Francisco Municipal Transportation Agency says that cab use has dropped 65 percent in San Francisco in two years. Ideas have always fueled business success but it’s how fast you can turn an idea into reality. Ask yourself, how quickly can I capitalize on a new idea, seize a new business opportunity or respond to a competitor that threatens my business?
  4. 4
  5. 5
  6. Early results of the collaboration include the following: Enhanced shuffle engine technologies: Faster sorting and in-memory computations, which has the potential to dramatically improve Spark performance.    Better memory utilization: Improved performance and usage for broader scalability, which will help enable new large-scale use cases.
  7. https://www.youtube.com/watch?v=40U3l1IeDcE&feature=player_embedded#t=0
  8. Apollo 4200 storage calculation per rack, if keeping two drives for the OS: 26 LFF data drives x 8TB = 208TB per server; 208TB x 21 = 4368TB = 4.26PB raw; 4368TB / 4 = 1092TB = 1.066PB Hadoop usable; 21 x 18 cores x 2 = 756 cores. Apollo 4530: 10 enclosures per rack = 40U = 10 x 3 = 30 server nodes; 30 x 15 drives = 450 drives x 8TB = 3600TB = 3.5PB raw; 3600TB / 4 = 900TB Hadoop usable; 30 x 16 cores x 2 = 960 cores. DL380: 18 cores x 2 x 21 = 756 cores; 15 drives x 8TB x 21 = 2520TB = 2.46PB raw; 2520TB / 4 = 630TB Hadoop usable.
  9. 13
  10. The last trend about where Hadoop is going, and this is very important to the Minotaur solution, is that Hadoop is becoming more asymmetric. Two things went into the core Hadoop trunk in the 2.6 release (December 2014). One is the concept of tiering within the file system, so that I can define disks as standard disks, SSDs or archival tier. Very basic functionality – anyone that’s been in the storage business probably looks at this and says, “hey, they’ve got a long way to go”. Open source software isn’t better, it’s just open source. The enemy of great is good enough, and the enemy of good enough is open source.  The interesting aspect of this is that Hadoop will now allow you to configure servers which are full of large disk drives with very little compute power to use for archival purposes. So, no longer is every node in the cluster the same; we start to have nodes which are more skewed towards certain types of functions and workloads. At the same time, there’s a feature that went into the YARN container environment called labels. We actually helped to do this – we contributed code and contributed to the spec through our relationship with Hortonworks. Labels lets you take the nodes in the cluster and assign them a label. Then, when you run an application under YARN, you can tell it which labels you want to run on. So, maybe I have a pool of nodes with a lot of memory…I give them a label called “lots of memory”, and I can now run all of my Spark (in-memory) jobs on these nodes. Hadoop used to be this completely symmetric environment, and was all about taking the work to the data. No shoveling data around, moving it from place to place; I’ll move the work to the node where the data resides and run it against the data (sitting in internal storage) on that node. Now, we’re starting to see more of this asymmetry, perhaps reaching out to the node next to us to grab data from here that I’m running on a different node. 
This asymmetry is important to our architecture – our architecture embraces the asymmetry and optimizes for it.
  11. Hadoop management software (YARN node labels)
  12. Key Takeaway: HP Big Data Reference Architecture is another innovation from HP that leverages the strength of HP’s portfolio to deliver value for our customers via a differentiated solution that combines HP Moonshot server and HP Apollo storage servers. The Value Proposition and the HOW Traditional scale-up infrastructures separate compute and storage for flexibility of scaling them independently, but at the cost of management complexity and cost Scale-out architectures, and new technologies that use DAS storage within a server, lose this ability to scale independently by combining compute and storage in one box – a tradeoff for achieving hyper-scalability and simple management HP Big Data Reference architecture deploys standard Hadoop distribution in an asymmetric fashion running the storage related components such as Hadoop Distributed File System (HDFS) and Hbase (open source non-relational distributed database) on Apollo Density Optimized servers and compute related components running under Yarn on Moonshot Hyperscale servers. This essentially provides the best-of-both worlds; ability to scale compute and storage independently without losing the benefits of scale-out infrastructure. In order to make this more flexible HP worked with Hortonworks to create a new feature in Hadoop called Yarn Labels – innovation that we contributed to Open Source! Yarn Labels allows us to create pools of compute nodes where applications run so it is possible to dynamically provision clusters without repartitioning data (since data can be shared across compute nodes) We can scale compute and storage independently by simply adding compute nodes or storage nodes to scale performance linearly This fundamentally changes the economics of the solution across scale, performance and cost efficiency to meet specific use case and workload needs!
  13. Extremely elastic: nodes can be allocated by time of day, or even for a single job, without redistributing data. Vertica, Autonomy, and our partners have high-performance access to HDFS, so Hadoop data can be efficiently shared. You can use the best platform for each task: low-power Moonshot, compute-intense DL380, big-memory Superdome, etc. No longer committed to CPU/storage ratios: compute cores can be allocated as needed. Better capacity management: compute nodes can be provisioned on the fly, and storage nodes are a smaller subset of the cluster, thus less costly to overprovision.
  14. Same core engine as HP Vertica, with Hadoop as the data storage layer. Perform analytics regardless of the format of data or the Hadoop distribution used. A robust, enterprise-ready solution with world-class enterprise support and services. Open APIs and developer tools, with a vibrant ecosystem of partners to support your big data project. Eases management of big data: the solution is part of a greater HP Enterprise software platform, Haven.
  15. Unique types of hardware coming to play : we think that in the near future we are going to see FPGA and GPUs and other types of silicon accelerations become very common in Hadoop We can take those kinds of new hardware into this architecture and let them run alongside of the existing hardware. And then just designate to workload that can take advantage of those platforms to those systems. Ability to consolidate clusters – workloads isolation Application will run unchanged. Being very community driven is important for us. CI : Converged Infrastructure
  16. Format-Preserving Encryption (FPE) Secure Stateless Tokenization (SST) HPE Format-Preserving Encryption: Encryption and Masking
  17. So how do you discover the value of your data? Aligning business goals and challenges with the right impact levers. If your business goal is increasing customer loyalty, what are the impact levers that influence the outcome (customer sentiment, product performance, customer service productivity), and how can you use data and insights to positively affect those levers? Evaluating your data to quickly test, learn, and iterate ideas to discover value. Speed and agility are key, and you don't have the time or resources to make large bets without proving value first. So how can you test ideas with the least upfront investment? Creating a prioritized roadmap of projects. Project execution has to be agile, but you have to have the goal in mind and align to it; of course this is not about hard-setting a 10-yr plan, so you have to have flexibility built in to pivot when necessary. Through a consultative approach, HPE helps you align people, processes, and technologies through our Data Discovery workshops, and quickly evaluate your data through on-demand offerings of our analytics platforms (Vertica + IDOL). The outcome is agile execution of projects and maximum alignment to value.
  18. Emmi Business Need – Emmi is a Switzerland top dairy processor. The measurability of the effectiveness and efficiency of marketing campaigns in the age of “digital cross-media” represented a challenge and they wanted to increase market presence, identify customer needs to increase potential revenue, and build brand awareness. They knew that there is customer interaction data available on the web from, but didn’t know how to leverage it. HPE Solution - HPE provided Big Data Discovery Experience Services to collect data on Emmi's customers from a wide range of sources and analyze it via a secure HPE cloud analytics environment based on IDOL. Business Outcome - Emmi now has a 360-degree view of its customers, consumers and influencers and can address them in a more targeted manner, Marketing activities are now targeted and used in real time enabling Emmi to effectively reach customers and ultimately reduce costs, and Emmi realized that the correct communication, marketing activity and customer experience would help improve their business Global Mining Service Provider Business Need – This company wanted to gain insight into sensor data to help improve equipment maintenance, operational losses, and safety practices. They wanted to also see how they can leverage these insights to innovate their business model. HPE Solution – providing the analytics capability through its Big Data Discovery Experience services and environment Business Outcome – able to detect equipment failures early as part of predictive analytics analysis and Identify operational losses Blabla Car Business Need – This is an Idea Economy company that is revolutionizing transportation and car sharing. To support its growth, BlaBlaCar needed a way measure the effectiveness of its website design and understand customer usage patterns in order make it easier for customers to complete a transaction. HPE Solution – HPE Vertica. 
This is a “data discovery” customer because they first utilized the Community Edition, then once they were ready to scale, they switched to Enterprise Edition. Community Edition gave them a platform to discover the value of their data. Business Outcome - HPE helped to build BlaBlaCar’s data-centric foundation, allowing them to improve their customers’ online experience by analyzing massive amounts of both structured and unstructured data at up to 1,000 times faster than traditional data warehouse solutions. By rapidly collecting and processing clickstream data, BlaBlaCar was able to measure and improve the effect of changes made to their websites and find new ways to engage their customers
  19. Core across these is to break down the existing silos between your various EDWs, ECMs, and BI tools, accumulated through game-of-throne fiefdoms and acquisitions. Many companies are trying to extract the data from these systems and dump it into an open standard repository that’s capable of running analytics and can scale into Petabyte ranges: Hadoop. But as good as Hadoop is, you really need a partner that can make it great, scale it, secure it, and enable it to hit top marks in every use case. HP Haven can allow you to unleash the power of Hadoop and realize its full potential. It starts with high performance computing. We have helped customers see 2x improvements in Hadoop performance while using 50% less space by using HP big data reference architectures running on the entire range of our state-of-the-art servers, from the DL380 all the way up to Apollo and Moonshot. Analyze at speed and scale. It is not unusual to see performance improvements of one thousand fold by using HP Haven alongside Hadoop. With Vertica we can approach the SQL query speeds of SAP Hana, but over Petabytes of data instead of sub-100TB ranges. Don’t get me wrong, there are clearly times when you need that extra speed of in-memory, and here we have both the services and hardware to support it; likewise for Microsoft PDW. Equally as important, you must govern and protect the data. Here we have HP Control Point, Records Manager, and Data Integrator for governance, with products like Data Protector and HP Voltage for at-rest encryption of all the data you put into your Hadoop Smart Data lake. In essence, we’ve innovated on Hadoop with Haven so that you can better innovate. Claim: 2x Hadoop performance, or 50% less space. Source: http://www8.hp.com/de/de/hp-news/press-release.html?id=1964038#.VWT5lc9Viko Claim: 100% of your data, 10x-1000x faster. Sources: 100% of your data – we can analyze machine data, human data & business data; 3 out of 3 = 100%. Get answers up to thousands of times faster, e.g.
the Game Show Network delivers “A/B test” results 2700x faster than on MySQL - 12 seconds vs 9 hours, watch video here: http://h30614.www3.hp.com/Discover/OnDemand/LasVegas2013/SessionDetail/55cdb60b-ef50-42eb-9008-80f358da2a11;
  20. Why HPE for Empowering the Data-Driven Organization? HPE can help you become a data-driven organization and maximize your outcomes, no matter where you are in your transformation to a data-driven organization. HPE will help you to discover the value of your data, help you to build your data-centric foundation, enable you to achieve superior business outcomes, and build a sustainable, integrated approach that empowers a data-driven organization. Proven experience & expertise: 10,000 customer engagements; hundreds of data scientists; 3000+ dedicated global analytics and data management professionals. Technology leadership: best-in-class analytics and compute platforms for all use cases. Market leadership: HPE is recognized as a leader in Gartner’s Magic Quadrant for Enterprise Data Warehouse and Data Management Solutions for Analytics (2015) and for eDiscovery (2015); Forrester Wave report forthcoming. Flexible & open: HPE solutions built on open standards, offering choice and flexibility; integrated, rich partner ecosystem.