Ceph is a massively scalable, open source, software-defined storage system that runs on commodity hardware. Get an update about the latest version of Red Hat Ceph Storage, including information about the newest features and use cases, with a particular focus on cloud storage and OpenStack. We’ll also explore the themes and directions for the roadmap for the next 12 months.
1. RED HAT CEPH STORAGE:
PAST, PRESENT AND FUTURE
Neil Levine
June 25, 2016
2. AGENDA
Red Hat Storage Overview
Past: Retrospective on the Inktank acquisition; Red Hat Ceph Storage 1.2
Present: Red Hat Ceph Storage 1.3; RHEL-OSP with 1.3
Future: Red Hat Ceph Storage 2.0; OpenStack and Containers
3. Open Software-Defined Storage is a fundamental reimagining of how storage
infrastructure works. It provides substantial economic and operational
advantages, and it is quickly proving to be the ideal fit for a growing
number of use cases.
TODAY: Cloud Infrastructure
EMERGING: Cloud Native Apps, Analytics, Hyper-Convergence, Containers
FUTURE: ???
OPEN, SOFTWARE-DEFINED STORAGE
4. A RISING TIDE
“By 2020, between 70-80% of unstructured data will be held on
lower-cost storage managed by SDS environments.”
“By 2019, 70% of existing storage array products
will also be available as software only versions”
“By 2016, server-based storage solutions will lower
storage hardware costs by 50% or more.”
Gartner: “IT Leaders Can Benefit From Disruptive Innovation in the Storage Industry”
Innovation Insight: Separating Hype From Hope for Software-Defined Storage
Market size is projected to increase approximately 20%
year-over-year between 2015 and 2019.
[Chart] SDS-P MARKET SIZE BY SEGMENT (Block Storage, File Storage, Object Storage, Hyperconverged). Source: IDC.
2013: $457B; 2014: $592B; 2015: $706B; 2016: $859B; 2017: $1,029B; 2018: $1,195B; 2019: $1,349B
Software-Defined Storage is leading a shift in the
global storage industry, with far-reaching effects.
5. THE RED HAT STORAGE PORTFOLIO
OPEN SOURCE SOFTWARE: Ceph management, Ceph data services, Gluster
management, and Gluster data services, all running on STANDARD HARDWARE.
● Shared-nothing, scale-out architecture provides durability and adapts
to changing demands
● Self-managing and self-healing features reduce operational overhead
● Standards-based interfaces and full APIs ease integration with
applications and systems
● Supported by the experts at Red Hat
6. RED HAT CEPH STORAGE
Powerful distributed storage for the cloud and beyond
Built from the ground up as a next-generation storage system, based on
years of research and suitable for powering infrastructure platforms.
Highly tunable, extensible, and configurable, with policy-based control
and no single point of failure. Offers mature interfaces for block and
object storage for the enterprise.
TARGET USE CASES
Cloud Infrastructure
● VM storage with OpenStack Cinder, Glance & Nova (configuration sketch below)
● Object storage for tenant apps
Rich Media and Archival
● S3-compatible object storage
Customer Highlight: Cisco
Cisco uses Red Hat Ceph Storage to deliver storage for next-generation
cloud services.
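For the Cinder use case above, a representative RBD back-end stanza in
cinder.conf looks roughly like the following sketch. The section name,
pool, and user are hypothetical placeholders; consult the release
documentation for the exact options supported.

    # cinder.conf: point the Cinder volume service at a Ceph pool
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <UUID of the libvirt secret holding the cinder key>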
7. FOCUSED SET OF USE CASES
CLOUD INFRASTRUCTURE
● Virtual machine storage with OpenStack
● Object storage for tenant applications
RICH MEDIA AND ARCHIVAL
● Cost-effective storage for rich media streaming
● Active archives
SYNC AND SHARE
● File sync and share with ownCloud
ENTERPRISE VIRTUALIZATION
● Storage for conventional virtualization with RHEV
ANALYTICS
● Big Data analytics with Hadoop
● Machine data analytics with Splunk
10. DETAIL: RED HAT CEPH STORAGE V1.2
These features were introduced in version 1.2 of Red Hat Ceph Storage and
have been supported by Red Hat since July 2014 (a command-line sketch
follows the list):
Off-line installer (MGMT): All required dependencies are now included
within a local package repository, allowing deployment to
non-Internet-connected storage nodes.
GUI management (MGMT): Administrators can now perform basic cluster
administration tasks through Calamari, the Ceph visual interface.
Erasure coding (CORE): Erasure-coded storage back ends are now available,
providing durability with lower capacity requirements than traditional,
replicated back ends.
Cache tiering (CORE): A cache tier pool can now be designated as a
writeback or read cache for an underlying storage pool in order to provide
cost-effective performance.
RADOS read-affinity (CORE): Clients can be configured to read objects from
the closest replica, increasing performance and reducing network strain.
User and bucket quotas (OBJECT): The Ceph Object Gateway now supports and
enforces quotas for users and buckets.
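A minimal command-line sketch of the erasure coding, cache tiering, and
quota features above. Pool names, profile parameters, and the user ID are
hypothetical, and exact syntax may vary between releases:

    # Define an erasure-code profile (4 data chunks + 2 coding chunks; values are illustrative)
    ceph osd erasure-code-profile set ecprofile k=4 m=2
    # Create an erasure-coded pool with 128 placement groups using that profile
    ceph osd pool create ecpool 128 128 erasure ecprofile

    # Attach a replicated pool as a writeback cache tier in front of the erasure-coded pool
    ceph osd pool create cachepool 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool

    # Set and enable a quota for an Object Gateway user (10,000 objects or 1 GB)
    radosgw-admin quota set --quota-scope=user --uid=exampleuser --max-objects=10000 --max-size=1073741824
    radosgw-admin quota enable --quota-scope=user --uid=exampleuser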
16. RED HAT CEPH STORAGE 1.3
GA Today
Based on Ceph Hammer (0.94)
Core Themes
Robustness at Scale
Operational Efficiency
Performance
17. Red Hat Ceph Storage 1.3 contains improved logic and
algorithms that allow it to do the “right thing” for users with
multi-petabyte clusters where hardware failure is normal:
ROBUSTNESS AT SCALE
Improved self-management for large clusters
● Improved automatic rebalancing logic, which prioritizes
degraded over misplaced objects
● Rebalancing operations can be temporarily disabled so they don’t impact
performance (see the sketch after this list)
● Time-scheduled scrubbing, to avoid disruption during peak
times
● Sharding of object buckets to avoid hot-spots
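A minimal sketch of these controls using the standard Ceph CLI and
ceph.conf options; the scrub window and shard count shown are illustrative:

    # Temporarily pause data movement while performing maintenance or absorbing peak load
    ceph osd set nobackfill
    ceph osd set norecover
    # ... and re-enable it afterwards
    ceph osd unset nobackfill
    ceph osd unset norecover

    # ceph.conf: confine scrubbing to a nightly window and pre-shard bucket indexes
    [osd]
    osd scrub begin hour = 1
    osd scrub end hour = 6

    [client.rgw]
    rgw override bucket index max shards = 16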
18. Ceph is a distributed system with lots of moving parts.
Red Hat Ceph Storage 1.3 introduces features to help
manage storage more efficiently.
OPERATIONAL EFFICIENCY
Making administration tasks easier
● Calamari now supports multiple users and clusters
● CRUSH management via Calamari API allows
programmatic adjustment of placement policies
● Lightweight, embedded Civetweb server eases deployment of the Ceph
Object Gateway (example below)
● Ceph Block Device resize, delete, and flatten operations are quicker,
while export parallelism makes backups faster
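For instance, the embedded Civetweb front end can be enabled with a single
line in ceph.conf; the gateway instance name and port here are illustrative:

    # ceph.conf: serve the Ceph Object Gateway directly from the embedded Civetweb server
    [client.rgw.gateway1]
    rgw frontends = "civetweb port=7480"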
20. A number of performance tweaks improve the speed of
Red Hat Ceph Storage 1.3 and increase I/O consistency:
PERFORMANCE
Speedier, more efficient distributed storage
● Optimizations for flash storage devices increase Ceph’s topline speed
● Read-ahead caching accelerates virtual machine booting in OpenStack
(see the sketch below)
● Allocation hinting reduces XFS fragmentation to avoid performance
degradation over time
● Cache hinting preserves the cache’s advantages and improves performance
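A sketch of the client-side read-ahead options behind the faster VM boots;
the values shown indicate the general shape, not tuned recommendations:

    # ceph.conf, client side: read ahead on sequential I/O until the guest's own cache warms up
    [client]
    rbd readahead trigger requests = 10            # sequential reads before read-ahead kicks in
    rbd readahead max bytes = 524288               # read ahead at most 512 KB at a time
    rbd readahead disable after bytes = 52428800   # stop after the first 50 MB per image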
26. DETAIL:
RED HAT CEPH STORAGE “TUFNELL”
Performance Consistency (CORE): More intelligent scrubbing policies and
improved peering logic reduce the impact of common operations on overall
cluster performance.
Guided Repair (CORE): More information about objects will be provided to
help administrators perform repair operations on corrupted data (sketch
below).
New Backing Store, Tech Preview (CORE): A new back end for OSDs to provide
performance benefits on existing and modern drives (SSD, K/V).
New UI (MGMT): A new user interface with improved sorting and visibility
of critical data.
Alerting (MGMT): Introduction of alerting features that notify
administrators of critical issues via email or SMS.
These projects are currently active in the Ceph development community.
They may be available and supported by Red Hat once they reach the
necessary level of maturity.
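As a sketch of the guided-repair direction, the upstream development work
exposes scrub findings through commands along these lines. The
placement-group ID is hypothetical, and list-inconsistent-obj is the
in-progress upstream interface, so details may change before release:

    # Identify placement groups that scrubbing has flagged as inconsistent
    ceph health detail
    # Inspect which objects and shards differ (upstream work in progress)
    rados list-inconsistent-obj 2.5 --format=json-pretty
    # Trigger a repair of the affected placement group
    ceph pg repair 2.5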
27. DETAIL: RED HAT CEPH STORAGE “TUFNELL”
iSCSI (BLOCK): Introduction of a highly available iSCSI interface for the
Ceph Block Device, allowing integration with legacy systems.
Mirroring (BLOCK): Capabilities for managing virtual block devices in
multiple regions, maintaining consistency through automated mirroring of
incremental changes (sketch below).
NFS (OBJECT): Access to objects stored in the Ceph Object Gateway via
standard Network File System (NFS) endpoints, providing storage for legacy
systems and applications.
Active/Active Multi-Site (OBJECT): Support for deployment of the Ceph
Object Gateway across multiple sites in an active/active configuration
(in addition to the currently available active/passive configuration).
These projects are currently active in the Ceph development community.
They may be available and supported by Red Hat once they reach the
necessary level of maturity.
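As a sketch of the mirroring workflow as it is taking shape upstream. Pool
and image names are hypothetical; a peer cluster must also be registered
and the rbd-mirror daemon running, and the commands may differ in the
supported product:

    # Enable journaling on an image so incremental changes can be replayed remotely
    rbd feature enable mypool/myimage exclusive-lock journaling
    # Mirror all journaled images in the pool between the peered clusters
    rbd mirror pool enable mypool pool
    # Check replication status from the remote site
    rbd mirror pool status mypool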