Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardware
Gary Howard, Senior Solution Architect, Data Centers
DCG Storage Group
Driven by:
- Mobile
- Social Media
- Internet of Things
- Big Data and Cloud

Source: IDC, "The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things," April 2014
Storage Cost Structure Needs a Fundamental Shift

[Chart: storage capacity in terabytes grows from 1 in 2013 to 125 in 2023 (1, 2, 3, 5, 7, 12, 19, 30, 48, 77, 125), a 62% CAGR, against a 2% IT budget CAGR.]

As data volumes continue to grow, IT needs storage solutions that can reduce costs while still meeting other storage requirements.
Key Drivers Transforming Storage
1. Cloud services
2. Next-generation architectures
3. Storage media transitions: 3D XPoint™ technology
Why Storage Modernization Is Essential

Storage modernization: seamless data access anywhere, at any time, on any device, at the required performance; agile, automated, and secure infrastructures and business models.

[Diagram: data flows between cloud and enterprise, driven by the Internet of Things, media transition, the shift from scale-up to scale-out, and orchestration.]
Move from Scale-Up to Scale-Out Architectures

Scale-up: a single system with an internal network
- Scales by adding disks for capacity
- Separate, dedicated networks
- Data stored in proprietary storage hardware
- Optimized to run only a specific workload

Scale-out: compute nodes working together over an external network
- Scales performance and capacity; supports storage capacity growth cost-effectively
- Standard Ethernet network
- Data distributed across multiple nodes or clusters
- Flexible design to support multiple workloads
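The contrast above can be sketched numerically: adding disks to one server grows capacity only, while adding nodes grows capacity and aggregate throughput together. A minimal illustration (disk sizes and per-node throughput are hypothetical numbers, not from the slides):

```python
# Illustrative sketch: scale-up adds disks to one server, so throughput
# stays bounded by that single node; scale-out adds whole nodes, so
# capacity and aggregate throughput both grow roughly linearly.

def scale_up(base_disks, added_disks, tb_per_disk=8, node_throughput_mbps=1000):
    """Capacity grows with disk count; throughput stays at one node's ceiling."""
    capacity_tb = (base_disks + added_disks) * tb_per_disk
    return capacity_tb, node_throughput_mbps

def scale_out(nodes, disks_per_node=12, tb_per_disk=8, node_throughput_mbps=1000):
    """Capacity and aggregate throughput both scale with node count."""
    capacity_tb = nodes * disks_per_node * tb_per_disk
    return capacity_tb, nodes * node_throughput_mbps

print(scale_up(12, 24))   # (288, 1000): capacity triples, throughput does not
print(scale_out(3))       # (288, 3000): same capacity, 3x the throughput
```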
Open-Source Scale-Out Ceph with Optimal Commercial Support

[Diagram: clients access a Ceph storage cluster of four standard servers, each running Ceph on a commodity OS with its own CPU, memory, disks, and network.]

One platform provides object storage (with an S3 interface), block storage, and file storage.
Intel’sroleinstorage
AdvancetheIndustry
OpenSource&Standards
BuildanOpenEcosystem
Intel®StorageBuilders
Endusersolutions
Cloud,Enterprise
IntelTechnologyLeadership
Storage Optimized Platforms
Intel® Xeon® E5-2600 v4 Platform
Intel® Xeon® Processor D-1500 Platform
Intel® Converged Network Adapters 10/40GbE
Intel® SSDs for DC & Cloud
Storage Optimized Software
Intel® Intelligent Storage Acceleration Library
Storage Performance Development Kit
Intel® Cache Acceleration Software
SSD & Non-Volatile Memory
Interfaces: SATA , NVMe PCIe,
Form Factors: 2.5”, M.2, U.2, PCIe AIC
New Technologies: 3D NAND, Intel® Optane™
Ceph community contributions on
workload profiling, latency analysis
and performance optimizations
90+ partners
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific
computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you
in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Storage solution architectures: Intel solution architects have deep expertise on Ceph for low-cost and high-performance usage, helping customers enable a modern storage infrastructure.
Key questions:
- How well can Ceph perform?
- Which of my workloads can it handle?
- Which server hardware is required for it to perform well?
Optimization criteria, properties, and example uses:

IOPS-optimized
- Properties: lowest cost per IOPS; highest IOPS; meets the minimum fault-domain recommendation (a single server holds no more than 10% of the cluster)
- Example uses: typically block storage; 3x replication on hard disk drives (HDDs) or 2x replication on Intel® SSD DC Series; MySQL on OpenStack clouds

Throughput-optimized
- Properties: lowest cost per given unit of throughput; highest throughput; highest throughput per BTU; highest throughput per watt; meets the minimum fault-domain recommendation (a single server holds no more than 10% of the cluster)
- Example uses: block or object storage; 3x replication; active performance storage for video, audio, and images; streaming media

Capacity-optimized
- Properties: lowest cost per TB; lowest BTU per TB; lowest watt per TB; meets the minimum fault-domain recommendation (a single server holds no more than 15% of the cluster)
- Example uses: typically object storage; erasure coding common for maximizing usable capacity; object archive; video, audio, and image object archive repositories
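The fault-domain recommendation above reduces to a quick sanity check. A minimal sketch (the 10% and 15% thresholds come from the table; the helper name is ours):

```python
def meets_fault_domain(server_capacity_tb, cluster_capacity_tb, max_fraction=0.10):
    """True if a single server holds no more than the recommended fraction
    of total cluster capacity: 10% for IOPS- and throughput-optimized
    profiles, 15% for capacity-optimized profiles."""
    return server_capacity_tb <= max_fraction * cluster_capacity_tb

# A 10-node cluster of 64 TB servers: each server is exactly 10% of 640 TB.
print(meets_fault_domain(64, 640))                      # True
# The same server in a 5-node, 320 TB cluster holds 20% -- too much.
print(meets_fault_domain(64, 320))                      # False
# Capacity-optimized clusters allow up to 15% per server.
print(meets_fault_domain(64, 512, max_fraction=0.15))   # 12.5% -> True
```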
Reference cluster configurations by size: OpenStack starter (64 TB), S (256 TB+), M (1 PB+), L (2 PB+).

IOPS-optimized
- Ceph block (RBD)
- Intel® P3700s with co-located write journals, OR Intel® S3610s with P3700 write journals in a 4:1 ratio
- Multiple OSDs per flash drive
- 10 Intel® Xeon® E5 cores per P3700; 4 per S3610
- 2x or 3x replication (with backup)

Throughput-optimized
- Ceph block or object (RBD or RGW)
- HDDs with P3700 or S3710 write journals
- 1 Xeon core per 2 HDDs (e.g., with 24 HDDs, a 12-core Intel® Xeon® E5-2650 v4)
- Single OSD per HDD
- 10GbE, moving to 40GbE with more than 12 HDDs per chassis
- 3x replication

Cost/capacity-optimized
- Ceph object (RGW)
- HDDs with no SSD journal
- 1 Xeon core per 2 HDDs
- Single OSD per HDD
- Erasure-coded
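The per-drive ratios above translate into simple sizing arithmetic. A hedged sketch (function names are ours; the ratios are the ones stated in the configurations):

```python
import math

def size_throughput_node(hdd_count):
    """Throughput-optimized node: 1 Xeon core per 2 HDDs, one OSD per HDD."""
    return {"osds": hdd_count, "cores": math.ceil(hdd_count / 2)}

def size_iops_node(s3610_count):
    """IOPS-optimized option using S3610 data drives with P3700 write
    journals in a 4:1 ratio, at 4 Xeon cores per S3610."""
    journals = math.ceil(s3610_count / 4)   # one P3700 journal per 4 S3610s
    cores = s3610_count * 4
    return {"data_ssds": s3610_count, "journal_ssds": journals, "cores": cores}

print(size_throughput_node(24))  # 24 HDDs -> 12 cores (e.g. a 12-core E5-2650 v4)
print(size_iops_node(8))         # 8 S3610s -> 2 P3700 journals, 32 cores
```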
Ceph for Database Workloads

Recent Intel testing shows it is possible to reach 1.4M IOPS. Using a MySQL database workload, we measured 400K OLTP QPS with a 70/30% select/update mix.
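For context, the 70/30 select/update mix at 400K QPS decomposes as follows (simple arithmetic on the stated figures, not an additional measurement):

```python
total_qps = 400_000
select_pct = 70  # 70/30 select/update mix from the MySQL OLTP test

selects = total_qps * select_pct // 100   # 280,000 selects per second
updates = total_qps - selects             # 120,000 updates per second
print(selects, updates)
```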
Ceph Performance Range

[Chart: perceived performance range vs. actual (measured) performance range.]
Available Resources

- https://www.redhat.com/en/resources/red-hat-ceph-storage-servers-intel®-processors-and-ssds
- https://www.percona.com/resources/videos/mysql-cloud-head-head-performance-lab
- https://rh2016.smarteventscloud.com/connect/sessionDetail.ww?SESSION_ID=88853&tclass=popup (watch for it posted to www.intel.com by Friday!)
Summary and Next Steps

- Ceph can help deliver on the promise of the cloud, using next-generation storage architectures
- Flash technology enables new capabilities in small footprints
- Ceph and MySQL provide a compelling case for converged storage that can support latency-sensitive analytics workloads
- Next steps:
  - Access the available resources to learn more about how Ceph can be deployed for your workloads
  - Consider a Ceph pilot with Red Hat and Intel

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
Storage Modernization

[Diagram: applications ("Web", "Analytics", ...) run against storage through Intel® CAS and Intel® Differentiated Storage Services, delivering ½ the latency, 2x the throughput, and a 50% CapEx reduction.]
1. IDF 2015 Class SDS002: http://intelstudios.edgesuite.net//idf/2015/sf/aep/SSDS002/SSDS002.html
IDF 2015 technology demo: https://www.youtube.com/watch?v=vtIIbxO4Zlk
Results of Yahoo* internal benchmark testing (Ruiping Sun, Principal Architect, Yahoo*). Hardware/software config: 8 OSD nodes, each: HP ProLiant DL180 G6 (ySPEC 39.5), 2x Xeon X5650 2.67GHz (HT enabled, 12 cores/24 threads total), Intel 5520 IOH-36D B3 (Tylersburg), 48GB 1333MHz DDR3 (12x4GB PC3-10600 Samsung DDR3-1333 ECC Registered CL9 2Rx4), 10x 8TB 7200 RPM SATA HDDs, 1x 1.6TB Intel P3600 SSD (10GB journal per OSD, 1.5TB cache; CAS config only), 2x HP NC362i/Intel 82576 Gigabit NICs, 2x Intel 82599EB 10GbE NICs, RHEL 6.5 w/kernel 3.10.0-123.4.4.el7
Pushing Buying Behaviors Away from the Norm

[Chart: buying behaviors are shifting away from the norm along seven dimensions: cost efficiency, speed of provisioning, vendor lock-in, storage scaling, admin, governance, and IOPS per GB. The options compared are public cloud storage, NAS/SAN storage appliances, and software-defined storage on commodity servers; one direction rates better, faster, less lock-in, elastic pool, more expertise, more control, and broad options, while the other rates better, faster, more lock-in, silos*, less expertise, less control, and narrow options.]
Illustrating the 10x Range of Ceph Performance

[Scale of media sizes: e-book, MP3 song, audio CD, DVD movie, HD movie, Blu-ray movie, UHD movie.]

- 1 DVD movie per second with a 3-node cluster (standard 2U Ceph servers, hybrid)
- 1 Blu-ray movie per second with a 3-node cluster (standard 2U Ceph servers, all-flash)
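To put those rates in bandwidth terms, "one movie per second" implies roughly the medium's capacity in bytes per second. A rough sketch (the 4.7 GB DVD and 25 GB single-layer Blu-ray capacities are standard nominal figures, not taken from the deck):

```python
# Rough conversion of "one movie per second" into sustained throughput.
# Media capacities are nominal single-layer figures (assumptions):
DVD_GB = 4.7
BLURAY_GB = 25.0

def implied_throughput_gbps(media_size_gb, per_second=1):
    """GB/s needed to move `per_second` items of `media_size_gb` each second."""
    return media_size_gb * per_second

hybrid = implied_throughput_gbps(DVD_GB)        # ~4.7 GB/s for the hybrid cluster
all_flash = implied_throughput_gbps(BLURAY_GB)  # ~25 GB/s for the all-flash cluster
print(hybrid, all_flash, round(all_flash / hybrid, 1))  # ~5x between these two points
```

The full scale on the slide, from e-books up to UHD movies, is what illustrates the wider 10x range.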
Range of Ceph Use-Case Examples

IOPS
- Workload examples: MySQL, MariaDB, PostgreSQL
- Searchable examples: Medallia
- Workload IO characteristics: high IOPS/GB; smaller random IO; read/write mix
- Hardware characteristics: sled-based chassis; Intel® DC P3700; 10 Xeon® cores per P3700; 10GbE

Balanced/Throughput
- Workload examples: digital media distribution; server virtualization (OpenStack Cinder)
- Searchable examples: Bloomberg; Target; Walmart; Yahoo!; Comcast
- Workload IO characteristics: high MBps/TB; larger sequential IO; read/write mix
- Hardware characteristics: standard to dense chassis; HDD to Intel® DC P3700; balanced core-to-drive ratio; 10GbE to 40GbE

Capacity-Archive
- Workload examples: digital media archive; object archive; big data archive
- Searchable examples: Yahoo!; CERN
- Workload IO characteristics: low cost/GB; sequential IO; write mostly
- Hardware characteristics: dense to ultra-dense chassis; HDD; low core-to-drive ratio; 10GbE
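The capacity-archive profile pairs naturally with erasure coding, as the sizing guidance earlier notes. A quick sketch of why, comparing usable capacity under 3x replication vs. a k+m erasure code (the 8+3 example profile is our illustrative assumption, not a deck recommendation):

```python
def usable_capacity_tb(raw_tb, *, replicas=None, ec_k=None, ec_m=None):
    """Usable pool capacity: raw/replicas for a replicated pool,
    raw * k/(k+m) for a k+m erasure-coded pool."""
    if replicas is not None:
        return raw_tb / replicas
    return raw_tb * ec_k / (ec_k + ec_m)

raw = 1000  # 1 PB of raw HDD capacity
print(usable_capacity_tb(raw, replicas=3))      # ~333 TB under 3x replication
print(usable_capacity_tb(raw, ec_k=8, ec_m=3))  # ~727 TB under 8+3 erasure coding
```

For write-mostly archive data, the roughly 2x gain in usable capacity per raw TB is usually worth the extra CPU cost of encoding.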
 

Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardware

  • 1. Gary Howard, Senior Solution Architect, Data Centers
  • 2. DCG Storage Group: Storage Cost Structure Needs a Fundamental Shift. Driven by:  Mobile  Social Media  Internet of Things  Big Data and Cloud. Storage capacity in terabytes, 2013 through 2023: 1, 2, 3, 5, 7, 12, 19, 30, 48, 77, 125 (a 62% CAGR, against a 2% IT budget CAGR). As data volumes continue to grow, IT needs storage solutions that can reduce costs while still meeting other storage requirements. Source: IDC, "The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things," April 2014.
  • 3. Key drivers transforming storage: (1) cloud and cloud services; (2) next-generation architectures; (3) storage media transitions and 3D XPoint™ technology.
  • 4. DCG Storage Group: Why Storage Modernization Is Essential. Storage modernization means seamless data access anywhere, at any time, on any device, at the required performance. Drivers: Internet of Things, media transition, the move from scale-up to scale-out, cloud and enterprise orchestration, and agile, automated, and secure infrastructures and business models.
  • 5. DCG Storage Group: Move from Scale-Up to Scale-Out Architectures. Scale-up: a single system on an internal network that scales by adding disks for capacity; data stored in proprietary storage hardware on separate, dedicated networks; optimized to run only a specific workload. Scale-out: compute nodes working together over an external network, scaling both performance and capacity; a standard Ethernet network; data distributed across multiple nodes or clusters; a flexible design that supports multiple workloads. Scale-out supports storage capacity growth cost-effectively.
  • 6. DCG Storage Group: Open Source Scale-Out Ceph with Optimal Commercial Support. Clients connect to a Ceph storage cluster of commodity servers (Server 1 through Server 4), each running Ceph on its own OS, CPU, memory, disks, and network. One platform provides object storage, block storage, an S3 interface, and file storage.
  • 7. Intel's role in storage: helping customers enable a modern storage infrastructure.
 Advance the industry through open source and standards: Ceph community contributions on workload profiling, latency analysis, and performance optimizations.
 Build an open ecosystem: Intel® Storage Builders, with 90+ partners.
 End-user solutions for cloud and enterprise: Intel solution architects have deep expertise on Ceph for low-cost and high-performance usage.
 Intel technology leadership: storage-optimized platforms (Intel® Xeon® E5-2600 v4 platform, Intel® Xeon® processor D-1500 platform, Intel® Converged Network Adapters 10/40GbE, Intel® SSDs for DC and cloud); storage-optimized software (Intel® Intelligent Storage Acceleration Library, Storage Performance Development Kit, Intel® Cache Acceleration Software); SSD and non-volatile memory (interfaces: SATA, NVMe PCIe; form factors: 2.5", M.2, U.2, PCIe AIC; new technologies: 3D NAND, Intel® Optane™).
Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
  • 8. How well can Ceph perform? Which of my workloads can it handle? Which server hardware is required for it to perform well?
  • 9. Optimization criteria, properties, and example uses:
 IOPS-optimized: lowest cost per IOPS; highest IOPS; meets the minimum fault-domain recommendation (a single server is less than or equal to 10% of the cluster); typically block storage; 3x replication on hard disk drives (HDDs) or 2x replication on Intel® SSD DC Series. Example use: MySQL on OpenStack clouds.
 Throughput-optimized: lowest cost per given unit of throughput; highest throughput, throughput per BTU, and throughput per watt; meets the minimum fault-domain recommendation (a single server is less than or equal to 10% of the cluster); block or object storage; 3x replication. Example uses: active performance storage for video, audio, and images; streaming media.
 Capacity-optimized: lowest cost per TB; lowest BTU per TB; lowest watt per TB; meets the minimum fault-domain recommendation (a single server is less than or equal to 15% of the cluster); typically object storage; erasure coding common for maximizing usable capacity. Example uses: object archive; video, audio, and image object archive repositories.
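The replication and erasure-coding choices above translate directly into usable capacity. As a rough sketch (my own arithmetic, not figures from the deck, and the 8+3 erasure-coding profile is an illustrative assumption), assuming 3x replication versus 8+3 erasure coding on the same raw pool:

```python
def usable_tb(raw_tb, scheme="replicated", size=3, k=8, m=3):
    """Rough usable capacity for a Ceph pool, ignoring filesystem overhead.

    scheme "replicated": raw capacity divided by the replica count.
    scheme "erasure": k data chunks kept out of k+m total chunks.
    """
    if scheme == "replicated":
        return raw_tb / size
    return raw_tb * k / (k + m)

# 1 PB raw under 3x replication leaves roughly a third usable...
print(round(usable_tb(1000, "replicated", size=3)))  # -> 333
# ...while an 8+3 erasure-coded pool keeps k/(k+m) of it usable.
print(round(usable_tb(1000, "erasure", k=8, m=3)))   # -> 727
```

This is why the capacity-optimized column pairs object storage with erasure coding: the same raw hardware yields roughly twice the usable space.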
  • 10. Cluster sizing, from an OpenStack starter (64 TB) through S (256 TB+), M (1 PB+), and L (2 PB+):
 IOPS-optimized: Ceph block (RBD); Intel® P3700s with co-located write journals, or Intel® S3610s with P3700 write journals in a 4:1 ratio; multiple OSDs per flash drive; 10 Intel® Xeon® E5 cores per P3700, 4 per S3610; 2x or 3x replication (with backup).
 Throughput-optimized: Ceph block or object (RBD or RGW); HDDs with P3700 or S3710 write journals; 1 Xeon core per 2 HDDs (e.g., with 24 HDDs, a 12-core Intel® Xeon® E5-2650 v4); a single OSD per HDD; 10GbE, moving to 40GbE with more than 12 HDDs per chassis; 3x replication.
 Cost/capacity-optimized: Ceph object (RGW); HDDs with no SSD journal; 1 Xeon core per 2 HDDs; a single OSD per HDD; erasure-coded.
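The core-to-drive ratios in this sizing guidance can be captured in a small helper. A minimal sketch using the slide's rules of thumb (10 cores per P3700, 4 per S3610, 1 core per 2 HDDs); the function and dictionary names are my own:

```python
import math

# Cores-per-drive rules of thumb taken from the sizing guidance:
# 10 Xeon E5 cores per P3700, 4 per S3610, 1 core per 2 HDDs.
CORES_PER_DRIVE = {"p3700": 10, "s3610": 4, "hdd": 0.5}

def xeon_cores_needed(drives):
    """Estimate Xeon cores for a Ceph OSD node given {drive_type: count}."""
    cores = sum(CORES_PER_DRIVE[d] * n for d, n in drives.items())
    return math.ceil(cores)

# 24 HDDs -> 12 cores, matching the E5-2650 v4 example on the slide.
print(xeon_cores_needed({"hdd": 24}))  # -> 12
```

A hybrid IOPS node mixing drive types is estimated the same way, e.g. `xeon_cores_needed({"s3610": 4, "p3700": 1})` for the 4:1 journal ratio.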
  • 11. Ceph for database workloads: recent Intel testing shows it is possible to reach 1.4M IOPS. Using a MySQL database workload, we measured 400K OLTP QPS with a 70/30% select/update mix.
  • 12. DCG Storage Group: Ceph performance range. [Chart contrasting the perceived performance range with the actual, measured performance range.]
  • 14. Summary and Next Steps:
 Ceph can help deliver on the promise of the cloud, using next-generation storage architectures.
 Flash technology enables new capabilities in small footprints.
 Ceph and MySQL provide a compelling case for converged storage that can support latency-sensitive analytics workloads.
 Next steps: access the available resources to learn more about how Ceph can be deployed for your workloads; consider a Ceph pilot with Red Hat and Intel.
Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.
  • 15.
  • 16. DCG Storage Group: Storage modernization. Results: half the latency, 2x the throughput, a 50% CapEx reduction. Apps ("Web," "Analytics") use Intel® Differentiated Storage Services and Intel® CAS in front of storage. Sources: IDF 2015 class SDS002 (http://intelstudios.edgesuite.net//idf/2015/sf/aep/SSDS002/SSDS002.html); IDF 2015 technology demo (https://www.youtube.com/watch?v=vtIIbxO4Zlk). Results of Yahoo* internal benchmark testing, per Ruiping Sun, Principal Architect, Yahoo*. Hardware/software configuration: 8 OSD nodes, each an HP ProLiant DL180 G6 (ySPEC 39.5) with 2x Xeon X5650 2.67 GHz (HT enabled; 12 cores, 24 threads total), Intel 5520 IOH-36D B3 (Tylersburg), 48 GB 1333 MHz DDR3 (12x 4 GB PC3-10600 Samsung DDR3-1333 ECC registered CL9 2Rx4), 10x 8 TB 7200 RPM SATA HDDs, 1x 1.6 TB Intel P3600 SSD (10 GB journal per OSD, 1.5 TB cache; CAS configuration only), 2x HP NC362i/Intel 82576 Gigabit NICs, 2x Intel 82599EB 10GbE NICs, RHEL 6.5 with kernel 3.10.0-123.4.4.el7.
  • 17. Pushing buying behaviors away from the norm. The options range from public cloud storage and NAS/SAN storage appliances to software-defined storage on commodity servers, each dimension spanning a range: cost efficiency (better to worse), speed of provisioning (faster to slower), vendor lock-in (less to more lock-in), storage scaling (elastic pool to silos*), admin (more to less expertise), governance (more to less control), and IOPS per GB (broad to narrow options).
  • 18. Illustrating the 10x range of Ceph performance: a 3-node cluster of standard 2U Ceph servers streams one DVD movie per second in a hybrid configuration, and one Blu-ray movie per second in an all-flash configuration. (Scale for comparison: e-book, MP3 song, audio CD, DVD movie, HD movie, Blu-ray movie, UHD movie.)
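The 10x framing is consistent with typical media sizes. A back-of-the-envelope check (the disc capacities are my assumptions, not figures from the deck): taking a single-layer DVD at about 4.7 GB and a dual-layer Blu-ray at about 50 GB, one disc per second in each configuration implies roughly a tenfold throughput gap:

```python
# Assumed media sizes in GB (not stated on the slide).
DVD_GB = 4.7      # single-layer DVD
BLURAY_GB = 50.0  # dual-layer Blu-ray

# One disc per second in each configuration implies these throughputs,
# so the all-flash cluster runs at roughly 10x the hybrid one.
ratio = BLURAY_GB / DVD_GB
print(f"{ratio:.1f}x")  # -> 10.6x
```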
  • 19. Range of Ceph use-case examples, by workload IO profile:
 IOPS: workload examples include MySQL, MariaDB, and PostgreSQL (searchable example: Medallia). IO characteristics: high IOPS/GB, smaller random IO, read/write mix. Hardware: sled-based chassis, Intel® DC P3700, 10 Xeon® cores per P3700, 10GbE.
 Balanced/throughput: digital media distribution, server virtualization (OpenStack Cinder); searchable examples: Bloomberg, Target, Walmart, Yahoo!, Comcast. IO characteristics: high MBps/TB, larger sequential IO, read/write mix. Hardware: standard to dense chassis, HDD to Intel® DC P3700, balanced core-to-drive ratio, 10GbE to 40GbE.
 Capacity/archive: digital media archive, object archive, big data archive; searchable examples: Yahoo!, CERN. IO characteristics: low cost/GB, sequential IO, write-mostly. Hardware: dense to ultra-dense chassis, HDD, low core-to-drive ratio, 10GbE.
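The IO characteristics in these rows map naturally onto fio job definitions for benchmarking a candidate cluster. A hedged sketch of a fio job file contrasting the IOPS and throughput rows (the RBD device path, block sizes, and queue depths are illustrative assumptions, not values from the deck):

```ini
# Hypothetical fio job file contrasting the IOPS and throughput profiles.
# /dev/rbd0, block sizes, and queue depths are illustrative assumptions.
[global]
ioengine=libaio
direct=1
time_based=1
runtime=60
filename=/dev/rbd0

# Smaller random IO with a 70/30 read/write mix (IOPS row)
[iops-profile]
rw=randrw
rwmixread=70
bs=4k
iodepth=32

# Larger sequential IO (balanced/throughput row); stonewall runs it
# after the IOPS job rather than concurrently.
[throughput-profile]
stonewall
rw=rw
bs=1m
iodepth=8
```

Comparing the IOPS of the first job and the MB/s of the second against the row's targets (IOPS/GB, MBps/TB) gives a quick fit check for a given hardware profile.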