Inktank
Delivering the Future of Storage


The End of RAID as you know it with Ceph Replication
March 28, 2013
Agenda
•    Inktank and Ceph Introduction

•    Ceph Technology

•    Challenges of RAID

•    Ceph Advantages

•    Q&A

•    Resources and Moving Forward
Inktank
•    Company that provides professional services and support for Ceph
•    Founded in 2011
•    Funded by DreamHost
•    Mark Shuttleworth invested $1M
•    Sage Weil, CTO and creator of Ceph

Ceph
•    Distributed unified object, block, and file storage platform
•    Created by storage experts
•    Open source
•    In the Linux kernel
•    Integrated into cloud platforms
Ceph Technology Overview
Ceph Technological Foundations

Ceph was built with the following goals:

•  Every component must scale

•  There can be no single points of failure

•  The solution must be software-based, not an appliance

•  Must be open source

•  Should run on readily-available, commodity hardware

•  Everything must self-manage wherever possible



Ceph Innovations
CRUSH data placement algorithm
Algorithm is infrastructure aware and quickly adjusts to failures
Data location is computed rather than looked up (illustrated in the sketch below)
Enables clients to communicate directly with the servers that store their data
Enables clients to perform parallel I/O for greatly enhanced throughput
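
A minimal sketch of the "computed, not looked up" idea. This is not the real CRUSH algorithm, just an illustration under assumed names: given an object name and a shared (hypothetical) map of OSD weights, every client derives the same replica set independently, with no lookup table or central directory.

    import hashlib

    # Hypothetical cluster map: OSD id -> weight (illustrative, not Ceph's real map format)
    CLUSTER_MAP = {0: 1.0, 1: 1.0, 2: 1.0, 3: 2.0, 4: 2.0}

    def _draw(object_name, osd, weight):
        # Deterministic pseudo-random draw for this (object, OSD) pair,
        # scaled by the OSD's weight (a rendezvous/straw-style scheme).
        h = hashlib.md5(("%s/%d" % (object_name, osd)).encode()).hexdigest()
        return (int(h, 16) % 10**6) * weight

    def place(object_name, replicas=3):
        # Every client computes the same replica set from the object name and
        # the shared map -- no lookup, no central directory. Real CRUSH also
        # honors failure-domain rules (host, rack, row, ...).
        ranked = sorted(CLUSTER_MAP,
                        key=lambda osd: _draw(object_name, osd, CLUSTER_MAP[osd]),
                        reverse=True)
        return ranked[:replicas]

    print(place("vm-image.chunk-42"))   # same answer on every client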


Reliable Autonomic Distributed Object Store
Storage devices assume complete responsibility for data integrity
They operate independently, in parallel, without central choreography
Very efficient. Very fast. Very scalable.


CephFS Distributed Metadata Server
Highly scalable to large numbers of active/active metadata servers and high throughput
Highly reliable and available, with full POSIX semantics and consistency guarantees
Has both a FUSE client and a client fully integrated into the Linux kernel


Advanced Virtual Block Device
Enterprise storage capabilities from utility server hardware
Thin provisioning, allocate-on-write snapshots, LUN cloning (sketched below)
In the Linux kernel and integrated with OpenStack components
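
A hedged sketch of these block-device capabilities through the python-rbd bindings. The pool name, image names, and sizes are assumptions for illustration, and a standard ceph.conf plus admin keyring is assumed.

    import rados
    import rbd

    # Connect using a standard ceph.conf and admin keyring (assumptions).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')                    # pool name is an assumption

    rbd_inst = rbd.RBD()
    # New-format image with layering enabled (needed for cloning); thin provisioned.
    rbd_inst.create(ioctx, 'vm-image', 10 * 1024**3, old_format=False, features=1)

    image = rbd.Image(ioctx, 'vm-image')
    image.create_snap('base')                            # allocate-on-write snapshot
    image.protect_snap('base')                           # required before cloning
    image.close()

    # Clone the protected snapshot into a new, writable image (LUN-clone style).
    rbd_inst.clone(ioctx, 'vm-image', 'base', ioctx, 'vm-clone')

    ioctx.close()
    cluster.shutdown()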
Unified Storage Platform
Object
   •     Archival and backup storage
   •     Primary data storage
   •     S3-like storage
   •     Web services and platforms
   •     Application development
Block
   •  SAN replacement
   •  Virtual block device, VM images
File
    •  HPC
    •  POSIX-compatible applications
Ceph Unified Storage Platform

CEPH GATEWAY (objects)
A powerful S3- and Swift-compatible gateway that brings the power of the
Ceph Object Store to modern applications (see the S3 sketch below)

CEPH BLOCK DEVICE (virtual disks)
A distributed virtual block device that delivers high-performance,
cost-effective storage for virtual machines and legacy applications

CEPH FILE SYSTEM (files & directories)
A distributed, scale-out filesystem with POSIX semantics that provides
storage for legacy and modern applications

CEPH OBJECT STORE
A reliable, easy to manage, next-generation distributed object store that
provides storage of unstructured data for applications
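
A hedged sketch of using the Ceph Gateway through its S3-compatible API with the boto library. The endpoint host and the access/secret keys are placeholders; you would obtain real values from your own gateway.

    import boto
    import boto.s3.connection

    # Endpoint and credentials are placeholders for your own RADOS Gateway.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='radosgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.create_bucket('demo-bucket')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('stored in the Ceph Object Store')
    print(key.get_contents_as_string())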
RADOS Cluster Makeup

[Diagram: each RADOS node hosts several OSDs; every OSD runs on a local
filesystem (btrfs, xfs, or ext4) on its own disk. The RADOS cluster is the
collection of these OSD nodes plus a small set of monitors (M).]
RADOS Object Storage Daemons
                     Intelligent Storage Servers

                •    Serve stored objects to clients

                •    OSD is primary for some objects
                      •  Responsible for replication
                      •  Responsible for coherency
                      •  Responsible for re-balancing
                      •  Responsible for recovery

                •    OSD is secondary for some objects
                      •  Under control of primary
                      •  Capable of becoming primary

                •    Supports extended object classes
                      •  Atomic transactions
                      •  Synchronization and notifications
                      •  Send computation to the data

CRUSH
 Pseudo-random placement
 algorithm
     •  Deterministic function of
        inputs
     •  Clients can compute data
        location

 Rule-based configuration
     •  Desired/required replica
        count
     •  Affinity/distribution rules
     •  Infrastructure topology
     •  Weighting

 Excellent data distribution
     •  Declustered placement
     •  Excellent data re-distribution
     •  Migration proportional to
        change
RADOS Monitors
                Stewards of the Cluster



            •  Distributed consensus (Paxos)

            •  Odd number required (quorum)

            •  Maintain/distribute cluster map

            •  Authentication/key servers

            •  Monitors are not in the data path

            •  Clients talk directly to OSDs (see the sketch below)
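
A hedged librados sketch of what "monitors are not in the data path" means for a client: the client contacts the monitors only to authenticate and fetch the cluster map, then the write and read below go straight to the responsible OSDs. The monitor addresses, pool name, and keyring path are assumptions.

    import rados

    # The client needs the monitors only to authenticate and obtain the
    # cluster map (monitor addresses and keyring path are assumptions).
    cluster = rados.Rados(
        rados_id='admin',
        conf={'mon_host': '10.0.0.1,10.0.0.2,10.0.0.3',
              'keyring': '/etc/ceph/ceph.client.admin.keyring'},
    )
    cluster.connect()

    # From here on, object I/O goes directly to the OSDs that hold the data.
    ioctx = cluster.open_ioctx('data')                   # pool name is an assumption
    ioctx.write_full('greeting', b'hello, RADOS')
    print(ioctx.read('greeting'))

    ioctx.close()
    cluster.shutdown()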




RAID and its Challenges




Redundant Array of Inexpensive Disks
              Enhanced Reliability
                 •  RAID-1 mirroring
                 •  RAID-5/6 parity (reduced overhead)
                 •  Automated recovery

              Enhanced Performance
                 •    RAID-0 striping
                 •    SAN interconnects
                 •    Enterprise SAS drives
                 •    Proprietary H/W RAID controllers

              Economical Storage Solutions
                 •  Software RAID implementations
                 •  iSCSI and JBODs

              Enhanced Capacity
                 •  Logical volume concatenation

RAID Challenges: Capacity/Speed
•  Storage economies in disks come from more GB per spindle

•  NRE (non-recoverable read error) rates are flat (typically estimated at 10^-15 per bit read)
       •  ~4% chance of an NRE while recovering a 4+1 RAID-5 set,
          and the odds grow with the number of volumes in the set
       •  many RAID controllers abandon the recovery entirely after an NRE

•  Access speed has not kept up with density increases
       •  27 hours to rebuild a 4+1 RAID-5 set at 20MB/s (see the worked
          numbers after this list), during which time a second drive can fail

•  Managing the risk of second failures requires hot-spares
       •  Defeating some of the savings from parity
          redundancy
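
The rebuild-time and NRE figures above follow from back-of-the-envelope arithmetic; the sketch below assumes 2 TB drives (that size reproduces the 27-hour rebuild, while the slide's 4% NRE figure corresponds to somewhat smaller drives).

    import math

    # Back-of-the-envelope rebuild numbers for a 4+1 RAID-5 set (assumes 2 TB drives).
    DRIVE_BYTES  = 2e12        # 2 TB per drive (assumption)
    REBUILD_RATE = 20e6        # 20 MB/s sustained rebuild rate (from the slide)
    BER          = 1e-15       # non-recoverable read errors per bit read

    rebuild_hours = DRIVE_BYTES / REBUILD_RATE / 3600
    print("rebuild time: %.1f hours" % rebuild_hours)          # ~27.8 hours

    # Probability of at least one NRE while reading the four surviving drives
    # (roughly bits_read * BER for small probabilities).
    bits_read = 4 * DRIVE_BYTES * 8
    p_nre = -math.expm1(bits_read * math.log1p(-BER))
    print("P(NRE during rebuild): %.1f%%" % (100 * p_nre))     # ~6% with 2 TB drives;
                                                                # the slide's 4% matches ~1.25 TB drives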
RAID Challenges: Expansion

•  The next generation of disks will be larger and cost less per
   GB. We would like to use these as we expand

•  Most RAID replication schemes require identical disks, meaning new
   disks cannot be added to an old set and failed disks must be replaced
   with identical units

•  Proprietary appliances may require replacements from
   manufacturer (at much higher than commodity prices)

•  Many storage systems reach a limit beyond which they cannot be further
   expanded (forcing a fork-lift upgrade)

•  Re-balancing existing data over new volumes is non-trivial
RAID Challenges: Reliability/Availability

•  RAID-5 can only survive a single disk failure
   •    The odds of an NRE during recovery are significant
   •    Odds of a second failure during recovery are non-negligible
   •    Annual petabyte durability for RAID-5 is only 3 nines

•  RAID-6 redundancy protects against two disk failures
   •    Odds of an NRE during recovery are still significant
   •    Client data access will be starved out during recovery
   •    Throttling recovery increases the risk of data loss

•  Even RAID-6 can't protect against:
   •    Server failures
   •    NIC failures
   •    Switch failures
   •    OS crashes
   •    Facility or regional disasters
RAID Challenges: Expense

Capital Expenses … good RAID costs
   •    Significant mark-up for enterprise hardware
   •    High performance RAID controllers can add $50-100/disk
   •    SANs further increase the cost
   •    Expensive equipment, much of which is often poorly used
   •    Software RAID is much less expensive, and much slower

Operating Expenses … RAID doesn't manage itself
   •    RAID group, LUN and pool management
   •    Lots of application-specific tunable parameters
   •    Difficult expansion and migration
   •    When a recovery goes bad, it goes very bad
   •    Don't even think about putting off replacing a failed drive
Ceph Advantages
Ceph VALUE PROPOSITION

                        •  Open source
                        •  Runs on commodity hardware
    SAVES MONEY         •  Runs in heterogeneous
                           environments


                        •  Self-managing
     SAVES TIME         •  OK to batch drive replacements
                        •  Emerging platform integration


                        •  Object, block, & filesystem storage
INCREASES FLEXIBILITY   •  Highly adaptable software solution
                        •  Easier deployment of new services


                        •  No vendor lock-in
    LOWERS RISK         •  Rule configurable failure-zones
                        •  Improved reliability and availability
Ceph Advantage: Declustered Placement
•    Consider a failed 2TB RAID mirror
      •  We must copy 2TB from the survivor to the successor
      •  Survivor and successor are likely in same failure zone

•    Consider two RADOS objects clustered on the same primary
      •  Surviving copies are declustered (on different secondaries)
      •  New copies will be declustered (on different successors)
      •  Copy 10GB from each of 200 survivors to 200 successors
      •  Survivors and successors are in different failure zones

•    Benefits
      •  Recovery is parallel and roughly 200x faster (worked out after this list)
      •  Service can continue during the recovery process
      •  Exposure to 2nd failures is reduced by 200x
      •  Zone aware placement protects against higher level failures
      •  Recovery is automatic and does not await new drives
      •  No idle hot-spares are required
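
A quick check of the 200x claim above, under the same assumed 2 TB of data and 20 MB/s per-device recovery rate used earlier; the 200-survivor count is the slide's example, not a fixed Ceph property.

    # Recovery time: one serial 2 TB mirror rebuild vs. declustered RADOS recovery.
    DRIVE_BYTES = 2e12       # 2 TB of data to re-replicate (assumption)
    RATE        = 20e6       # 20 MB/s per recovery stream (assumption)
    PEERS       = 200        # surviving OSDs, each re-replicating ~10 GB (slide's example)

    raid_hours   = DRIVE_BYTES / RATE / 3600                   # one serial copy
    ceph_minutes = (DRIVE_BYTES / PEERS) / RATE / 60            # 200 parallel copies

    print("RAID mirror rebuild: %.1f hours" % raid_hours)       # ~27.8 hours
    print("Declustered recovery: %.1f minutes" % ceph_minutes)  # ~8.3 minutes
    print("Speed-up: %.0fx" % (raid_hours * 60 / ceph_minutes)) # ~200x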
CLIENT

         ??




              23
Ceph Advantage: Object Granularity
•    Consider a failed 2TB RAID mirror
      •  To recover it we must read and write (at least) 2TB
      •  Successor must be same size as failed volume
      •  An error in recovery will probably lose the file system

•    Consider a failed RADOS OSD
      •  To recover it we must read and write thousands of objects
      •  Successor OSDs must each have some free space
      •  An error in recovery will probably lose one object

•    Benefits
      •  Heterogeneous commodity disks are easily supported
      •  Better and more uniform space utilization
      •  Per-object updates always preserve causality ordering
      •  Object updates are more easily replicated over WAN links
      •  Greatly reduced data loss if errors do occur
Ceph Advantage: Intelligent Storage

•  Intelligent OSDs automatically rebalance data
     •  When new nodes are added
     •  When old nodes fail or are decommissioned
     •  When placement policies are changed

•  The resulting rebalancing is very good:
    •  Even distribution of data across all OSDs
    •  Uniform mix of old and new data across all OSDs
    •  Moves only as much data as required

•  Intelligent OSDs continuously scrub their objects
     •  To detect and correct silent write errors before another failure

•  This architecture scales from petabytes to exabytes
    •  A single pool of thin provisioned, self-managing storage
    •  Serving a wide range of block, object, and file clients
Ceph Advantage: Price

•  Can leverage commodity hardware for lowest costs
•  Not locked in to single vendor; get best deal over time
•  RAID not required, leading to lower component costs (the table's arithmetic is reproduced below)



                          Enterprise RAID       Ceph Replication


      Raw $/GB            $3                    $0.50

      Protected $/GB      $4 (RAID6 6+2)        $1.50 (3 copies)

      Usable (90%)        $4.44                 $1.67

      Replicated          $8.88 (Main + Bkup)   $1.67 (3 copies)

      Relative Expense    533% storage cost     Baseline (100%)
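
The table rows follow from simple arithmetic on the assumed raw costs; the sketch below reproduces them (the $3 and $0.50 raw $/GB figures are the slide's assumptions from 2013, not current prices).

    # Reproduce the cost comparison table from its assumptions.
    raid_raw, ceph_raw = 3.00, 0.50          # assumed raw $/GB from the slide

    raid_protected = raid_raw * 8 / 6        # RAID-6 6+2: 8 disks store 6 disks of data
    ceph_protected = ceph_raw * 3            # 3-way replication

    raid_usable = raid_protected / 0.90      # assume 90% usable capacity
    ceph_usable = ceph_protected / 0.90

    raid_replicated = raid_usable * 2        # main site + backup copy
    ceph_replicated = ceph_usable            # the 3 copies already provide protection

    print("RAID: %.2f protected, %.2f usable, %.2f replicated $/GB"
          % (raid_protected, raid_usable, raid_replicated))     # 4.00, 4.44, 8.89
    print("Ceph: %.2f protected, %.2f usable, %.2f replicated $/GB"
          % (ceph_protected, ceph_usable, ceph_replicated))     # 1.50, 1.67, 1.67
    print("Relative expense: %.0f%%" % (100 * raid_replicated / ceph_replicated))  # ~533%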
Q&A
Leverage great online resources

Documentation on the Ceph web site:
   •  http://ceph.com/docs/master/
Blogs from Inktank and the Ceph community:
   •  http://www.inktank.com/news-events/blog/
   •  http://ceph.com/community/blog/
Developer resources:
   •  http://ceph.com/resources/development/
   •  http://ceph.com/resources/mailing-list-irc/
   •  http://dir.gmane.org/gmane.comp.file-systems.ceph.devel




     Leverage Ceph Expert Support
     Inktank will partner with you for complex deployments
         •  Solution design and Proof-of-Concept
         •  Solution customization
         •  Capacity planning
         •  Performance optimization

     Having access to expert support is a production best practice
        •  Troubleshooting
        •  Debugging

     A full description of our services can be found at the following:

     Consulting Services: http://www.inktank.com/consulting-services/

     Support Subscriptions: http://www.inktank.com/support-services/
Check out our upcoming webinars
Ceph Unified Storage for OpenStack
 •  April 4, 2013
 •  10:00AM PT, 12:00PM CT, 1:00PM ET

Technical Deep Dive Into Ceph Object Storage
 •  April 10, 2013
 •  10:00AM PT, 12:00PM CT, 1:00PM ET

Register today at:
http://www.inktank.com/news-events/webinars/
Contact Us
Info@inktank.com
1-855-INKTANK

Don’t forget to follow us on:

   Twitter: https://twitter.com/inktank

   Facebook: http://www.facebook.com/inktank

   YouTube: http://www.youtube.com/inktankstorage
Thank you for joining!
