OUR VISION FOR OPEN UNIFIED CLOUD STORAGE
SAGE WEIL – RED HAT
OPENSTACK SUMMIT BARCELONA – 2016.10.27
2
● Now
– why we develop Ceph
– why it's open
– why it's unified
– hardware
– OpenStack
– why not Ceph
● Later
– roadmap highlights
– the road ahead
– community
OUTLINE
3
JEWEL
4
RADOS
A reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
LIBRADOS
A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP
RBD
A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver
RADOSGW
A bucket-based REST gateway, compatible with S3 and Swift
CEPH FS
A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE
[Diagram: apps use LIBRADOS and RADOSGW, hosts/VMs use RBD, clients use CEPH FS, all on top of RADOS; every layer rated AWESOME except CEPH FS, which is NEARLY AWESOME]
5
2016 =
RGW
S3 and Swift compatible
object storage with object
versioning, multi-site
federation, and replication
LIBRADOS
A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
RADOS
A software-based, reliable, autonomic, distributed object store comprised of
self-healing, self-managing, intelligent storage nodes (OSDs) and lightweight monitors (Mons)
RBD
A virtual block device with
snapshots, copy-on-write
clones, and multi-site
replication
CEPHFS
A distributed POSIX file
system with coherent
caches and snapshots on
any directory
OBJECT BLOCK FILE
FULLY AWESOME
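As a small illustration of the LIBRADOS layer described above, the sketch below stores one object and reads it back through the Python bindings; the pool name 'rbd' and the config-file path are assumptions for the example, not required values.

```python
# Minimal librados sketch (Python bindings); assumes a reachable cluster,
# /etc/ceph/ceph.conf, a client keyring, and an existing pool named 'rbd'.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')                 # I/O context for one pool
    ioctx.write_full('hello-object', b'hello ceph')   # write an object
    print(ioctx.read('hello-object'))                 # read it back
    ioctx.close()
finally:
    cluster.shutdown()
```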
6
WHAT IS CEPH ALL ABOUT
● Distributed storage
● All components scale horizontally
● No single point of failure
● Software
● Hardware agnostic, commodity hardware
● Object, block, and file in a single cluster
● Self-manage whenever possible
● Open source (LGPL)
7
WHY OPEN SOURCE
● Avoid software vendor lock-in
● Avoid hardware lock-in
– and enable hybrid deployments
● Lower total cost of ownership
● Transparency
● Self-support
● Add, improve, or extend
8
WHY UNIFIED STORAGE
● Simplicity of deployment
– single cluster serving all APIs
● Efficient utilization of storage
● Simplicity of management
– single set of management skills, staff, tools
...is that really true?
9
WHY UNIFIED STORAGE
● Simplicity of deployment
– single cluster serving all APIs
● Efficient utilization of space
● Only really true for small clusters
– large deployments optimize for workload
● Simplicity of management
– single set of skills, staff, tools
● Functionality in RADOS exploited by all
– erasure coding, tiering, performance, monitoring
10
SPECIALIZED CLUSTERS
– Multiple Ceph clusters, one per use case
[Diagram: several independent RADOS clusters, each with its own monitors, on HDD, SSD, and NVMe hardware respectively]
11
HYBRID CLUSTER
– Single Ceph cluster
– Separate RADOS pools for different storage hardware types
[Diagram: one RADOS cluster with its monitors spanning HDD, SSD, and NVMe devices]
12
CEPH ON FLASH
● Samsung
– up to 153 TB in 2u
– 700K IOPS, 30 GB/s
● SanDisk
– up to 512 TB in 3u
– 780K IOPS, 7 GB/s
13
CEPH ON SSD+HDD
● SuperMicro
● Dell
● HP
● Quanta / QCT
● Fujitsu (Eternus)
● Penguin OCP
14
● Microserver on an 8 TB WD He HDD
● 1 GB RAM
● 2-core 1.3 GHz Cortex-A9
● 2x 1 GbE SGMII instead of SATA
● 504-OSD, 4 PB cluster
– SuperMicro 1048-RT chassis
● Next gen will be aarch64
CEPH ON HDD (LITERALLY)
15
FOSS → FAST AND OPEN INNOVATION
● Free and open source software (FOSS)
– enables innovation in software
– ideal testbed for new hardware architectures
● Proprietary software platforms are full of friction
– harder to modify, experiment with closed-source code
– require business partnerships and NDAs to be negotiated
– dual investment (of time and/or $$$) from both parties
16
PERSISTENT MEMORY
● Persistent (non-volatile) RAM is coming
– e.g., 3D XPoint from Intel/Micron any year now
– (and NV-DIMMs are a bit $$$ but already here)
● These are going to be
– fast (~1000x), dense (~10x), high-endurance (~1000x)
– expensive (~1/2 the cost of DRAM)
– disruptive
● Intel
– PMStore – OSD prototype backend targeting 3D XPoint
– NVM-L (pmem.io/nvml)
17
CREATE THE ECOSYSTEM
TO BECOME THE LINUX
OF DISTRIBUTED STORAGE
OPEN COLLABORATIVE GENERAL PURPOSE
18
CEPH + OPENSTACK
[Diagram: Keystone and Swift front RADOSGW; Cinder, Glance, and Nova use LIBRBD via QEMU/KVM; Manila uses CEPH FS; all backed by a single RADOS cluster]
19
CINDER DRIVER USAGE
20
WHY NOT CEPH?
● inertia
● performance
● functionality
● stability
● ease of use
21
WHAT ARE WE DOING ABOUT IT?
22
RELEASE CADENCE
Hammer (LTS) – Spring 2015 – 0.94.z
Infernalis – Fall 2015
Jewel (LTS) – Spring 2016 – 10.2.z ← WE ARE HERE
Kraken – Fall 2016
Luminous (LTS) – Spring 2017 – 12.2.z
23 KRAKEN
24
● BlueStore = Block + NewStore
– key/value database (RocksDB) for metadata
– all data written directly to raw block device(s)
– can combine HDD, SSD, NVMe, NVRAM
● Full data checksums (crc32c, xxhash)
● Inline compression (zlib, snappy, zstd)
● ~2x faster than FileStore
– better parallelism, efficiency on fast devices
– no double writes for data
– performs well with very small SSD journals
BLUESTORE
[Diagram: OSD → ObjectStore → BlueStore; data goes directly to the BlockDevices, metadata goes through RocksDB via BlueRocksEnv/BlueFS, with Allocator and Compressor as helper components]
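To make the "no double writes for data" point concrete, here is a toy sketch (not BlueStore code) of the layering shown above: object data is written once, straight to a raw device, and only small extent metadata goes through a key/value store. The flat file standing in for the raw device and the dict standing in for RocksDB are assumptions for illustration.

```python
# Toy sketch of the BlueStore idea: data once to the "device", metadata via KV.
import json

class ToyBlueStore:
    def __init__(self, device_path):
        self.dev = open(device_path, 'w+b')   # flat file standing in for a raw block device
        self.kv = {}                          # in-memory dict standing in for RocksDB
        self.next_offset = 0                  # trivial bump allocator

    def write(self, name, data):
        offset = self.next_offset
        self.dev.seek(offset)
        self.dev.write(data)                  # data: written once, directly to the "device"
        self.dev.flush()
        self.next_offset += len(data)
        # metadata: only a tiny record goes through the key/value store
        self.kv[name] = json.dumps({'offset': offset, 'length': len(data)})

    def read(self, name):
        meta = json.loads(self.kv[name])
        self.dev.seek(meta['offset'])
        return self.dev.read(meta['length'])

# store = ToyBlueStore('/tmp/toy.img'); store.write('obj', b'hello'); store.read('obj')
```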
25
● Current master
– nearly finalized disk format,
good performance
● Kraken
– stable code, stable disk format
– probably still flagged
'experimental'
● Luminous
– fully stable and ready for broad
usage
– maybe the default... we will see
how it goes
BLUESTORE – WHEN CAN HAZ?
● Migrating from FileStore
– evacuate and fail old OSDs
– reprovision with BlueStore
– let Ceph recover normally
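A hedged sketch of that evacuate-and-reprovision loop, driving the standard ceph CLI from Python; the polling on `ceph health`, the 30-second interval, and the placeholder reprovisioning step are assumptions, not official migration tooling.

```python
# Rough sketch: migrate one OSD off FileStore by draining it, removing it,
# and reprovisioning it with BlueStore. Assumes admin credentials and that
# the `ceph` CLI is in PATH.
import subprocess, time

def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(['ceph', *args], check=True,
                          capture_output=True, text=True).stdout.strip()

def migrate_osd(osd_id):
    ceph('osd', 'out', str(osd_id))                    # start draining PGs off the OSD
    while not ceph('health').startswith('HEALTH_OK'):  # wait for recovery to finish
        time.sleep(30)
    # stop the osd daemon on its host, then retire the old FileStore OSD:
    ceph('osd', 'crush', 'remove', f'osd.{osd_id}')
    ceph('auth', 'del', f'osd.{osd_id}')
    ceph('osd', 'rm', str(osd_id))
    # ...then reprovision the device as a BlueStore OSD with your usual
    # deployment tooling and let Ceph backfill onto it normally.
```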
26
● New implementation of network layer
– replaces aging SimpleMessenger
– fixed-size thread pool (vs. two threads per socket)
– scales better to larger clusters
– healthier relationship with tcmalloc
– now the default!
● Pluggable backends
– PosixStack – Linux sockets, TCP (default, supported)
– Two experimental backends: DPDK and RDMA!
ASYNCMESSENGER
27
TCP OVERHEAD IS SIGNIFICANT
[Charts: write and read measurements showing the TCP stack's share of overhead]
28
DPDK STACK
29
RDMA STACK
● RDMA for data path (ibverbs)
● Keep TCP for control path
● Working prototype in ~1 month
30 LUMINOUS
31
MULTI-MDS CEPHFS
32
ERASURE CODE OVERWRITES
● Current EC RADOS pools are append only
– simple and stable; suitable for RGW or behind a cache tier
● EC overwrites will allow RBD and CephFS to consume EC pools directly
● It's hard
– implementation requires two-phase commit
– complex to avoid full-stripe update for small writes
– relies on efficient implementation of “move ranges” operation by BlueStore
● It will be huge
– tremendous impact on TCO
– make RBD great again
33
CEPH-MGR
● ceph-mon monitor daemons currently do a lot
– more than they need to (PG stats to support things like 'df')
– this limits cluster scalability
● ceph-mgr moves non-critical metrics into a separate daemon
– that is more efficient
– that can stream to Graphite, InfluxDB
– that can efficiently integrate with external modules (even Python!)
● Good host for
– integrations, like Calamari REST API endpoint
– coming features like 'ceph top' or 'rbd top'
– high-level management functions and policy
[Diagram: two monitors plus the new ceph-mgr daemon – time for new iconography]
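To illustrate the "stream to Graphite/InfluxDB" idea, here is a conceptual sketch of pushing one cluster metric over Graphite's plaintext protocol; the metric name, the get_num_pgs() placeholder, and the host/port are assumptions and this is not the actual ceph-mgr module API.

```python
# Conceptual sketch only: how a mgr-style daemon could stream a metric to
# Graphite's plaintext protocol ("path value timestamp\n" on TCP port 2003).
import socket, time

def get_num_pgs():
    return 2048   # hypothetical stand-in for a real cluster query

def stream_to_graphite(host='graphite.example.com', port=2003, interval=10):
    while True:
        line = 'ceph.cluster.num_pgs %d %d\n' % (get_num_pgs(), int(time.time()))
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(line.encode())
        time.sleep(interval)
```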
34
QUALITY OF SERVICE
● Set policy for both
– reserved/minimum IOPS
– proportional sharing of excess capacity
● by
– type of IO (client, scrub, recovery)
– pool
– client (e.g., VM)
● Based on mClock paper from OSDI'10
– IO scheduler
– distributed enforcement with cooperating clients
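A minimal, self-contained sketch of mClock-style tag assignment from the OSDI'10 paper the work above is based on: each client carries reservation, limit, and proportional tags, reservations are served first, and spare capacity is shared by weight up to the limit. The client parameters and the simplified two-phase selection are illustrative assumptions, not Ceph's scheduler code.

```python
# Simplified mClock sketch: per-client reservation (r), weight (w), limit (l).
import time

class Client:
    def __init__(self, reservation, weight, limit):
        self.r, self.w, self.l = reservation, weight, limit
        self.R = self.P = self.L = 0.0   # reservation / proportional / limit tags

    def tag_request(self, now):
        # Each new request advances the client's tags; max() lets an idle
        # client restart from "now" instead of accumulating credit.
        self.R = max(self.R + 1.0 / self.r, now)
        self.L = max(self.L + 1.0 / self.l, now)
        self.P = max(self.P + 1.0 / self.w, now)

def pick_next(clients, now):
    # Phase 1 (constraint-based): serve reservations that have come due.
    due = [c for c in clients if c.R <= now]
    if due:
        return min(due, key=lambda c: c.R)
    # Phase 2 (weight-based): share spare capacity by weight, capped by limits.
    # (Real mClock also adjusts reservation tags here; omitted for brevity.)
    eligible = [c for c in clients if c.L <= now]
    return min(eligible, key=lambda c: c.P) if eligible else None

# Example: a VM guaranteed 100 IOPS, weighted 2x, limited to 500 IOPS.
vm = Client(reservation=100, weight=2, limit=500)
vm.tag_request(time.time())
```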
35
SHARED STORAGE → LATENCY
[Diagram: a VM sends writes A B C D E F A' B' G H over Ethernet to the shared cluster]
36
LOCAL SSD → LOW LATENCY
[Diagram: the same writes land first on a local SSD over SATA, then reach the cluster over Ethernet]
37
LOCAL SSD → FAILURE → SADNESS
[Diagram: if the local SSD or its host fails, the writes cached only on that SSD are lost]
38
WRITEBACK IS UNORDERED
[Diagram: the SSD writes A', B', C, D back to the cluster out of submission order]
39
WRITEBACK IS UNORDERED
[Diagram: after a failure the cluster is left holding A' B' C D – a state that never happened on the VM]
40
RBD ORDERED WRITEBACK CACHE
[Diagram: with ordered writeback ("cleverness"), the cluster always holds an ordered prefix such as A B C D → CRASH CONSISTENT!]
41
● Low latency writes to local SSD
● Persistent cache (across reboots etc)
● Fast reads from cache
● Ordered writeback to the cluster, with batching, etc.
● RBD image is always point-in-time consistent
RBD ORDERED WRITEBACK CACHE
Local SSD: low latency; SPoF; no data after a failure
Ceph RBD: higher latency; no SPoF; perfect durability
RBD ordered writeback cache: low latency; SPoF only for recent writes; stale but crash consistent
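As a conceptual sketch (not RBD code) of why the ordered cache above stays crash consistent: writes are journaled locally and flushed strictly in submission order, so the cluster always holds a point-in-time prefix of the write stream. The dict standing in for the image and the batch size are assumptions.

```python
# Conceptual sketch of an ordered writeback cache.
from collections import deque

class OrderedWritebackCache:
    def __init__(self, backing):
        self.backing = backing     # dict standing in for the RBD image
        self.local = {}            # fast local SSD copy (acks at SSD latency)
        self.journal = deque()     # writes kept in submission order

    def write(self, offset, data):
        self.local[offset] = data
        self.journal.append((offset, data))   # remember the order

    def writeback(self, batch=4):
        # Flush the oldest writes first. If we crash part-way through, the
        # backing image equals the state at some earlier point in time:
        # stale, but crash consistent.
        for _ in range(min(batch, len(self.journal))):
            offset, data = self.journal.popleft()
            self.backing[offset] = data
```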
42
IN THE FUTURE
MOST BYTES WILL BE STORED
IN OBJECTS
43
RADOSGW
S3, Swift, erasure coding, multisite federation, multisite replication, WORM, encryption, tiering, deduplication, compression
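Because RGW speaks the S3 API listed above, standard S3 tooling works against it unchanged; here is a quick sketch with boto3, where the endpoint URL, bucket name, and credentials are placeholders.

```python
# Talking to RADOSGW through its S3-compatible API with boto3.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',   # default civetweb port; placeholder host
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='demo')
s3.put_object(Bucket='demo', Key='hello.txt', Body=b'stored in RADOS')
print(s3.get_object(Bucket='demo', Key='hello.txt')['Body'].read())
```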
44
RADOSGW INDEXING
[Diagram: three Ceph clusters (A, B, C), each fronted by RADOSGW instances over LIBRADOS and exposed as zones A, B, and C, synchronized via REST]
45
GROWING DEVELOPMENT COMMUNITY
46
● Red Hat
● Mirantis
● SUSE
● SanDisk
● XSKY
● ZTE
● LETV
● Quantum
● EasyStack
● H3C
● UnitedStack
● Digiware
● Mellanox
● Intel
● Walmart Labs
● DreamHost
● Tencent
● Deutsche Telekom
● Igalia
● Fujitsu
● DigitalOcean
● University of Toronto
GROWING DEVELOPMENT COMMUNITY
52
● Nobody should have to use proprietary storage systems out of necessity
● The best storage solution should be an open source solution
● Ceph is growing up, but we're not done yet
– performance, scalability, features, ease of use
● We are highly motivated
TAKEAWAYS
53
Operator
● File bugs
http://tracker.ceph.com/
● Document
http://github.com/ceph/ceph
● Blog
● Build a relationship with the core
team
Developer
● Fix bugs
● Help design missing functionality
● Implement missing functionality
● Integrate
● Help make it easier to use
● Participate in monthly Ceph
Developer Meetings
– video chat, EMEA- and APAC-
friendly times
HOW TO HELP
THANK YOU!
Sage Weil
CEPH PRINCIPAL ARCHITECT
sage@redhat.com
@liewegas