Basic and Advanced
Analysis of
Ceph Volume Backend Driver
in Cinder
Top 3 Global Publishers (2017)
How Many?
How Much?
Details?
Plan?
CEPH in Netmarble
8 clusters, around 4 PB
• KRBD block storage: 3 clusters
• Block storage as a Cinder backend: 4 clusters
• Archiving storage via S3 interface: 1 cluster
• Origin CDN storage
• Data processing storage
Today’s Journey
Basic: Default Cinder Features
Advanced: Complex Cinder Features
John Haan (Seungjin Han)
2011 – 2016 (Samsung SDS)
Cloud Solution Engineer
OpenStack Development
2016 – present (Netmarble)
Cloud Platform Engineer
Responsible for Ceph Storage
Who’s the Speaker
Basic
Default Cinder Features
• Concept of Cinder Volume
– RBD Copy on Write
• Cinder Snapshot
– RBD Snapshot / Flatten
• Cinder Backup
– RBD export-diff / import-diff
Basic
OpenStack Cinder & CEPH
OpenStack Cinder
• Block storage service for OpenStack
• Provides storage resources to end users
• Ceph is the dominant Cinder backend among OpenStack users
OpenStack Cinder & CEPH
• http://docs.ceph.com/docs/master/rbd/rbd-openstack/
How to Integrate Ceph as a Cinder Backend
ceph osd pool create volumes …
ceph osd pool create images …
ceph osd pool create backups …
ceph osd pool create vms …
### glance
[glance_store]
rbd_store_pool = images
### cinder
[rbd]
rbd_pool = volumes
### nova
[libvirt]
images_rbd_pool = vms
Create Ceph Pools / OpenStack Configurations
OpenStack Cinder & CEPH
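As a rough sketch (values are illustrative, not from the talk), a fuller [rbd] backend section in cinder.conf typically also names the driver, the Ceph user and the libvirt secret:
### cinder (illustrative)
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = {libvirt secret uuid}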
OpenStack Cinder & CEPH
(diagram: the images, volumes and vms pools are sets of PGs distributed across OSD.0, OSD.1 and OSD.2)
Anatomy of a Cinder Volume
• 4 MB chunks of an rbd volume
• 8 MB chunks of an rbd volume
### cinder
[rbd]
rbd_store_chunk_size = {chunk size in MB}
# default value is 4 MB
• a 20 GB cinder volume
• the RADOS default object size can be changed
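A quick way to check the chunk (object) size of an existing volume, as a hedged sketch; the exact rbd info output varies by Ceph release:
### CEPH
$ rbd info volumes/volume-xxxxxx
# look for the "order" line, e.g. "order 22 (4 MiB objects)";
# order 23 would correspond to 8 MiB objects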
Where is my data stored in Filestore
(diagram build-up: a cinder volume is made up of objects → the objects map to PGs (PG 1 … PG 77) of the volumes pool → each PG is replicated across OSDs, e.g. PG 77 on OSD.345, OSD.323 and OSD.264)
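To trace a single object of the volume down to its PG and OSDs, a hedged sketch (the rbd_data prefix is a placeholder):
### CEPH
$ rados -p volumes ls | grep rbd_data | head -1      # pick one object of the volume
$ ceph osd map volumes rbd_data.{prefix}.0000000000000000
# output shows pool -> PG -> acting OSD set, e.g. PG 77 on OSD.345, OSD.323, OSD.264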
Where is my data stored in Bluestore
(same mapping: cinder volume → objects → PGs of the volumes pool → OSDs; on Bluestore the object data is examined via ceph-objectstore-tool)
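On Bluestore there is no filesystem to browse, so objects are inspected with ceph-objectstore-tool while the OSD is stopped; a minimal sketch with an illustrative OSD id:
### CEPH (run on the OSD host)
$ systemctl stop ceph-osd@345
$ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-345 --op list | grep rbd_data
$ systemctl start ceph-osd@345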
Copy on Write
• original image (parent) → protected snapshot → cloned (child) image
• faster volume provisioning than legacy storage
Copy on Write
• must set show_image_direct_url to True in glance configuration
• child volumes related to the snapshot
<protected snapshot from the image>
<child volume from the snapshot>
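A hedged sketch of the pieces involved: the glance option plus the rbd commands that expose the protected snapshot and its child volumes (image id is a placeholder; the glance rbd store names the snapshot "snap"):
### glance
[DEFAULT]
show_image_direct_url = True
### CEPH
$ rbd snap ls images/{image-id}            # the protected snapshot of the image
$ rbd children images/{image-id}@snap      # child volumes cloned from it, e.g. volumes/volume-xxxx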
Copy on Write
• libvirt XML of the instance
• the volume attached to the compute node
Cinder Snapshot Analysis
• a capture of what a volume looks like at a particular moment
• cinder snapshot can be restored into a cinder volume
• data of cinder snapshot can be transferred into
a cinder backup volume
Snapshot of Cinder
cinder volume snapshot
volume from snapshot
backup volume from snapshot
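A hedged CLI sketch of the three paths above (ids and names are illustrative):
### cinder
$ cinder snapshot-create --name snap-01 {volume-id}
$ cinder create --snapshot-id {snapshot-id} --name vol-from-snap 20
$ cinder backup-create --snapshot-id {snapshot-id} --name backup-from-snap {volume-id}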
Ceph Snapshot Analysis
• snapshot concept is the same as cinder
• a volume can be flattened or cloned when created from a snapshot
Snapshot of Ceph
### cinder
[rbd]
rbd_flatten_volume_from_snapshot = {false or true}
### CEPH
rbd -p volumes flatten volume-xxxxxx
Image flatten: 100% complete...done.
Snapshot of Cinder & Ceph
cinder snapshot from a
cinder volume
snapshot image from
the rbd volume
Non-flattened volume
Flattened volume
Volume from a Ceph Snapshot
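To tell the two cases apart on the Ceph side, a hedged sketch: a cloned (non-flattened) volume still reports a parent, a flattened one does not:
### CEPH
$ rbd info volumes/volume-xxxxxx | grep parent
# non-flattened: shows a parent such as volumes/volume-yyyy@snapshot-zzzz
# flattened: no parent line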
Cinder Backup with CEPH Backend
• Utilize rbd export-diff & import-diff
• Support full backup & incremental backup
Full Backup from a Cinder Volume
• full backup volumes
Full Backup from a Cinder Volume
• ceph volume image from backups pool
• snapshot volume image from the backup volume
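A hedged sketch of creating a full backup and checking the result in the backups pool (names abbreviated):
### cinder
$ cinder backup-create --name full-backup-01 {volume-id}
### CEPH
$ rbd ls backups                       # the backup image for the volume appears here
$ rbd snap ls backups/volume-xxxxxx    # backup.* snapshots used as diff reference points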
Incremental Backups
• Incremental backup volumes
Incremental Backups
• snapshot volumes from the backup volume
$ rbd export-diff --pool volumes --from-snap backup.4e volumes/volume-b7..@backup.4e7.. - | rbd import-diff --pool backups - backups/volume-4e7..
• Incremental Snapshot with RBD
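On the cinder side an incremental backup is simply requested with --incremental; cinder then drives the export-diff / import-diff pipeline shown above (a hedged sketch):
### cinder
$ cinder backup-create --incremental --name incr-backup-01 {volume-id}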
Advanced
• Image Cached Volume
• Cinder Replication
– RBD Mirroring
Advanced
Complex Cinder Features
Concern about CoW
Image-Volume Cache
Image-Volume Cache Defines
[DEFAULT]
cinder_internal_tenant_project_id = {project id}
cinder_internal_tenant_user_id = {user id}
[ceph]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 10000
image_volume_cache_max_count = 1000
Configuration
• improves the performance of creating a volume from an image
• when a volume is first created from an image
→ a new cached image-volume is created
Image-Volume Cache
Image-Volume Cache in Cinder
• cached volume in cinder
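A hedged way to spot the cached image-volume: it is owned by the cinder internal tenant and named after the image id (naming per cinder's image-volume cache convention; ids are placeholders):
### cinder
$ cinder list --all-tenants | grep image-     # cached volume named image-{glance image id}
### CEPH
$ rbd ls volumes                              # the cache is just another rbd image in the volumes pool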
How Ceph provides image-cache
the volume initially created for an instance
the cached image-volume
a snapshot from the volume
Concern about Disaster Recovery
Replication in Cinder
Replication in Cinder Analysis
• depends on the driver’s implementation
• there is no automatic failover: the use case is Disaster Recovery, so failover must be done manually when the primary backend goes down
Ceph Replication in Cinder
Steps:
1. prepare a different ceph cluster
2. configure the ceph clusters in mirrored mode and mirror the pool used by cinder
3. copy the cluster keys to the cinder volume node
4. configure the ceph driver in cinder to use replication (sketch below)
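Step 4 boils down to a replication_device entry in the [ceph] backend section; a hedged sketch assuming the secondary cluster's conf and keyring were copied as /etc/ceph/secondary.conf:
### cinder
[ceph]
replication_device = backend_id:secondary, conf:/etc/ceph/secondary.conf, user:cinder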
RBD Mirroring
RBD Mirroring Defines
• images are asynchronously mirrored between two ceph clusters
• uses the RBD journaling image feature to ensure crash-consistent replication
• two modes: per pool (images mirrored automatically) or a specific subset of images (see the sketch below)
• use cases: disaster recovery, global block device distribution
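A hedged sketch of enabling mirroring at the rbd level (run against both clusters; an rbd-mirror daemon must be running on the secondary; the image name is a placeholder):
### CEPH
$ rbd mirror pool enable volumes image        # per-image mode; use "pool" to mirror all journaled images
$ rbd feature enable volumes/volume-xxxx exclusive-lock journaling
$ rbd mirror image enable volumes/volume-xxxx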
Cinder Volume Replication
Peering
• primary
• secondary
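Peering is established per pool; a hedged sketch assuming the secondary cluster is reachable as "secondary" with a client.cinder key:
### CEPH (on the primary cluster)
$ rbd mirror pool peer add volumes client.cinder@secondary
$ rbd mirror pool info volumes                # lists the mode and the configured peers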
Cinder Volume Replication
Replication Status
• cluster 1
• cluster 2
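The per-cluster replication status can be checked with (a hedged sketch):
### CEPH
$ rbd mirror pool status volumes --verbose
# typical image states: up+stopped on the primary, up+replaying on the secondary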
Cinder Volume Replication
Process of Fail Over
• Before Fail-Over
• After Fail-Over
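Fail-over is triggered from cinder, not from ceph; a hedged sketch with an illustrative host name and the backend_id from the replication_device entry:
### cinder
$ cinder failover-host {cinder-volume-host}@ceph --backend_id secondary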
Cinder Volume Replication
Result of Fail Over
• cinder list
• cluster 1 & 2
"Straight roads do not make skillful drivers." - Paulo Coelho
Q & A
Thanks