1 © Hortonworks Inc. 2011–2018. All rights reserved
HDFS Scalability and Evolution:
HDDS and Ozone
Sanjay Radia,
Founder, Chief Architect, Hortonworks
2 © Hortonworks Inc. 2011–2018. All rights reserved
About the Speakers
• Sanjay Radia
• Chief Architect, Founder, Hortonworks
• Apache Hadoop PMC and Committer
• Part of the original Hadoop team at Yahoo! since 2007
• Chief Architect of Hadoop Core at Yahoo!
• Prior
• Data center automation, virtualization, Java, HA, OSs, File Systems
• Startup, Sun Microsystems, INRIA…
• Ph.D., University of Waterloo
3 © Hortonworks Inc. 2011–2018. All rights reserved
HDFS – What It Does Well and Not So Well
HDFS does well
• Scaling – IO + PBs + clients
• Horizontal scaling – IO + PBs
• Fast IO – scans and writes
• Number of concurrent clients 60K++
• Low latency metadata operations
• Fault tolerant storage layer
• Locality
• Replicas/Reliability and parallelism
• Layering – Namespace layer and storage layer
• Security
But scaling the namespace is limited to ~500M files (192GB heap)
• Scaling Namespace – 500M files
• Scaling Block space
• Scaling Block reports
• Scaling DNs' block management
• Need further scaling of clients/RPC to 150K++
Ironically, the in-memory namespace is both a strength and a weakness
4 © Hortonworks Inc. 2011–2018. All rights reserved
Proof Points of Scaling Data, IO, Clients/RPC
• Proof points of large data and large clusters
• Single organizations have over 600PB in HDFS
• Single clusters with over 200PB using federation
• Large clusters of over 4K multi-core nodes bombarding a single NN
• Federation is the current scaling solution (both Namespace & Operations)
• In deployment at Twitter, Yahoo, FB, and elsewhere
Metadata in memory is the strength of the original GFS and HDFS design
But also its weakness in scaling the number of files and blocks
5 © Hortonworks Inc. 2011–2018. All rights reserved
Scaling HDFS—
with HDDS and Ozone
6 © Hortonworks Inc. 2011–2018. All rights reserved
HDFS Layering
[Diagram: namespace layer – NS1 … NSk served by NN-1 … NN-k – on top of a Block Management Layer; Block Pool 1 … Block Pool k form the common block storage spread across DataNodes DN 1, DN 2, … DN m]
7 © Hortonworks Inc. 2011–2018. All rights reserved
Solutions to Scaling Files, Blocks, Clients/RPC
Scale Namespace
Hierarchical file system
• Cache only the working set of the namespace in memory
• Partition:
  • Distributed namespace (transparent automatic partitioning)
  • Volumes (static partitioning)
Flat Key-Value store
• Cache only the working set of the namespace in memory
• Partition/shard the space (easy to hash)
Scale Metadata Clients/RPC
• Multi-thread the namespace manager
• Partitioning/sharding
Slow NN startup
• Cache only the working set in memory
• Shard/partition the namespace
Scale Block Management
• Containers of blocks (2GB-16GB+)
• Will significantly reduce the BlockMap (worked example below)
• Reduce the number of Block/Container reports
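A quick sanity check of that reduction, assuming (for illustration only) an average block size of about 50MB; real clusters vary:

2GB container / 50MB blocks ≈ 40 blocks per container → ~40x fewer block-map entries and report items
4GB container / 50MB blocks ≈ 80 blocks per container → ~80x fewer

This matches the 40-80x figure quoted later in the deck.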
8 © Hortonworks Inc. 2011–2018. All rights reserved
Scaling HDFS
Must Scale both the Namespace and the Block Layer
• Scaling one is not sufficient
Scalable Block layer: Hadoop Distributed Data Storage (HDDS)
• Containers of blocks
• Replicated as a group
• Reduces Block Map
Scale Namespace: Several approaches (not exclusive)
• Partial namespace in memory
• Shard namespace
• Use flat namespace (KV namespace) – easier to implement and scale – Ozone
9 © Hortonworks Inc. 2011–2018. All rights reserved
Evolution Towards New HDFS
[Diagram: HDDS, a scalable storage layer of block containers, at the base; built on it, a flat KV namespace (Ozone) exposed via OzoneFS (a Hadoop Compatible FS), and a hierarchical namespace (a new scalable NN) forming the new HDFS]
10 © Hortonworks Inc. 2011–2018. All rights reserved
HDFS, Ozone and Quadra on the Same Cluster/Storage
– Shared storage servers and shared physical storage
DataNodes: shared storage servers for HDFS blocks and Ozone/Quadra blocks, on top of shared physical storage
• HDFS – a scalable FS with a hierarchical namespace; Hadoop Compatible FS API (FileSystem or FileContext)
• Quadra – raw storage volumes; raw storage API (LUN/EBS-like, SCSI); consumed via a Linux FS
• Ozone – a highly scalable KV object store; flat namespace; S3 API
11 © Hortonworks Inc. 2011–2018. All rights reserved
How It All Fits Together
• Existing HDFS – the old HDFS NN keeps all namespace in memory; File = Bid[]; BlockMap (Bid -> IP address of DN); DNs send Block Reports; HDFS block storage on DataNodes (Bid -> Data)
• Ozone Master – K-V flat namespace; File (Object) = Bid[]; Bid = Cid + LocalId
• New HDFS NN (scalable) – hierarchical namespace; File = Bid[]; Bid = Cid + LocalId
• HDDS – a clean separation of the block layer: Container Management & Cluster Membership; ContainerMap (Cid -> IP address of DN); DNs send Container Reports; HDDS container storage on DataNodes (Bid -> Data, with blocks grouped in containers)
• Physical storage – DataNodes and physical storage are shared between old HDFS and HDDS
12 © Hortonworks Inc. 2011–2018. All rights reserved
Ozone FS
Ozone/HDDS Can Be Used Separately, or Together with HDFS
• Initially HDFS remains the default FS
  • It has many features, so it cannot be replaced by OzoneFS on day one
• OzoneFS sits alongside as an additional namespace, sharing the DNs
• For applications that work with a Hadoop Compatible FS on a K-V store – Hive, Spark …
• How is OzoneFS accessed?
  • Use direct URIs for either HDFS or OzoneFS
  • Mount in HDFS or in ViewFS (sketched below)
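For illustration, a hedged sketch of the ViewFS mount approach in client configuration. The mount-table name, hosts, and paths are invented, and the o3fs authority format is illustrative and release-dependent; only the ViewFS property-name pattern is standard Hadoop:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ViewFsMountSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side ViewFS mount table named "cluster1" (invented name).
        conf.set("fs.defaultFS", "viewfs://cluster1/");
        // Existing HDFS data stays where it is.
        conf.set("fs.viewfs.mounttable.cluster1.link./data",
                 "hdfs://nn1.example.com:8020/data");
        // A new subtree served by OzoneFS (bucket1 in volume1); the o3fs
        // authority here is an assumption, not a verified URI format.
        conf.set("fs.viewfs.mounttable.cluster1.link./warehouse",
                 "o3fs://bucket1.volume1.om.example.com/warehouse");
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.getUri()); // viewfs://cluster1/
    }
}
```

With such a mount table, applications keep using one namespace while /warehouse quietly resolves to OzoneFS.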
13 © Hortonworks Inc. 2011–2018. All rights reserved
Scalable Block Layer:
Hadoop Distributed Data Storage (HDDS)
Container: a container of blocks (2GB-16GB+)
• Replicated as a group
• Each container has a unique ContainerId
  – Every block within a container has a block id
    – BlockId = (ContainerId, LocalId) – sketched below
CM – Container Manager
• Cluster membership
• Receives container reports from DNs
• Manages container replication
• Maintains the Container Map (Cid -> IPAddr)
Data Nodes – HDFS and HDDS can share DNs
• DataNodes contain a set of containers (just as they used to contain blocks)
• DataNodes send container reports (like block reports) to the CM (Container Manager)
Block Pools
• Just as blocks were in block pools, containers are also in container pools
  – This allows independent namespaces to carve out their block space
HDDS: a separate layer from the namespace layer (strictly separate, not almost)
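A minimal sketch of this addressing scheme; the class and field names are hypothetical, not taken from the Ozone codebase:

```java
/** A block address in HDDS: the container it lives in plus a container-local id.
 *  (Hypothetical sketch; names do not come from Ozone's actual source.) */
final class BlockId {
    final long containerId; // the unit of replication, reporting, and the CM map
    final long localId;     // the block's id within that container

    BlockId(long containerId, long localId) {
        this.containerId = containerId;
        this.localId = localId;
    }

    @Override
    public String toString() {
        return containerId + ":" + localId; // e.g. "42:1001"
    }
}
```

The point of the split: the CM only tracks containerId -> DN locations; per-block state stays inside the container, which is what shrinks the central block map.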
14 © Hortonworks Inc. 2011–2018. All rights reserved
Key Ozone Characteristics—Compare with HDFS
• Scale Block Management
  • Containers of blocks (2GB to 16GB)
  • 2-4GB block containers initially => 40-80x reduction in BRs and the CM block map
  • Reduces BR load on DNs, masters, and the network
• Scale Namespace
  • Key Space Manager caches only the working set in memory
  • Future scaling:
    • A flat namespace is easy to shard (buckets are natural sharding points)
• Scale Number of Metadata Clients/RPC
  • No single global lock as in the NN
  • Metadata operations are simpler
  • Sharding will help further
• Fault Tolerance
  – Blocks – inherits HDFS's block-layer fault tolerance
  – Namespace – uses Raft rather than Journal Nodes
    • HA is easier
• Manageability
  – A GC'ing/overloaded master is no longer an issue
    • caches only the working set
  – Journal Nodes disappear – Raft is used
  – Faster and more predictable failover
  – Fast startup
    • Faster upgrades
    • Faster failover
• Retains HDFS Semantics & Performance
  – Strong consistency, locality, fast scans, …
• Other:
  – OM can run on DNs – beneficial for small clusters or embedded systems
15 © Hortonworks Inc. 2011–2018. All rights reserved
Will OzoneFS’s Key-Value Store Work with Hadoop Apps?
• Two years ago – NO!
• Today – Yes!
• Hive, Spark and others are making sure they work on Cloud K-V Object Stores via HCFS
• Even customers are ensuring that their apps work on Cloud K-V Object Stores via HCFS
• Lack of real directories and their ACLs: fake directories + bucket ACLs
• S3's eventual consistency is being worked around – S3Guard (Note: OzoneFS is consistent)
• Lack of rename in S3 is being worked around
  • Various direct output committers (early versions had issues)
    • Netflix direct committer; being replaced by Iceberg
  • Via the Metastore (Databricks has a proprietary version; Hive's approach)
16 © Hortonworks Inc. 2011–2018. All rights reserved
Details of HDDS
17 © Hortonworks Inc. 2011–2018. All rights reserved
Container Structure (Using RocksDB)
• An embedded LSM/KV store (RocksDB)
  • BlockId is the key
  • the filename of the local chunk file is the value
• Optimizations
  • Small blocks (< 1MB) can be stored directly in RocksDB
  • Compaction for block data, to avoid lots of small files
  • This can be evolved over time
[Diagram: container index – an LSM store (LevelDB/RocksDB) maps Key 1 … Key N to (chunk data file name, offset, length), pointing into the chunk data files]
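A hedged sketch of this per-container index using the RocksJava API; the key/value layout (BlockId string -> chunk-file name) follows the slide, while the paths and file names are invented:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class ContainerIndexSketch {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options opts = new Options().setCreateIfMissing(true);
             // one embedded LSM store per container (path is illustrative)
             RocksDB index = RocksDB.open(opts, "/data/hdds/container-42/index")) {

            // BlockId (containerId:localId) -> local chunk file holding the bytes
            index.put("42:1001".getBytes(), "chunk_0007.data".getBytes());

            // a small block (< 1MB) could instead be inlined as the value itself
            byte[] chunkFile = index.get("42:1001".getBytes());
            System.out.println(new String(chunkFile)); // chunk_0007.data
        }
    }
}
```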
18 © Hortonworks Inc. 2011–2018. All rights reserved
Replication of Container
• Use Raft replication instead of the data pipeline, for both data and metadata
  • Proven to be correct
  • Traditionally Raft is used for small updates and transactions, so it fits well for metadata
• Performance considerations
  • When writing the metadata into the Raft journal, put the data directly in container storage
  • Raft journal on a separate disk – fast contiguous writes without seeking
  • Data spread across the other disks
• Client uses the Raft protocol to write data to the DNs storing the container (sketch below)
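To make the data/metadata split concrete, a hypothetical sketch of the write path. `RaftJournal`, `ChunkStore`, and all method names are invented for illustration; this is not the Ratis or HDDS API:

```java
// Hypothetical interfaces, for illustration only.
interface RaftJournal { void replicate(byte[] smallRecord); }   // consensus on metadata
interface ChunkStore  { String writeChunk(byte[] data); }       // container storage on data disks

final class ContainerWriteSketch {
    private final RaftJournal journal;  // dedicated disk: sequential, contiguous appends
    private final ChunkStore chunks;    // spread across the remaining disks

    ContainerWriteSketch(RaftJournal journal, ChunkStore chunks) {
        this.journal = journal;
        this.chunks = chunks;
    }

    void writeBlock(String blockId, byte[] data) {
        // 1. Bulk data goes straight to container storage, not through the Raft log.
        String chunkFile = chunks.writeChunk(data);
        // 2. Only the small metadata record (blockId -> chunk file) is replicated via Raft.
        journal.replicate((blockId + "->" + chunkFile).getBytes());
    }
}
```

Keeping the bulk bytes out of the log is what lets the Raft journal stay on one fast, seek-free disk.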
19 © Hortonworks Inc. 2011–2018. All rights reserved
Open and Closed Containers
Open – active writers
• Need at least (NumSpindles × NumDataNodes) open, active containers (worked example below)
• Clients can get locality on writes
• Data is spread across all DataNodes
  • Improved IO and a better chance of getting locality
  • Keeps DNs and ALL spindles busy
Closed – typically when full, or after a past failure
• Why close a container on failure?
  • We originally considered keeping it open and bringing in a new DN
    • Wait for the data to copy?
  • Decided to close it and have it replicated
  • Can reopen later, or merge with another closed container – under design
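As a worked example of the sizing rule above (numbers invented for illustration): a cluster of 100 DataNodes with 12 spindles each needs at least 12 × 100 = 1,200 open, active containers so that every spindle on every DN can be kept busy with writes.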
20 © Hortonworks Inc. 2011–2018. All rights reserved
Details of Ozone
21 © Hortonworks Inc. 2011–2018. All rights reserved
Ozone Master
[Diagram: a client talks to the Ozone Master and the CM, then directly to DataNodes DN1, DN2, … DNn]
• Ozone Master – K-V namespace, backed by RocksDB; File (Object) = Bid[]; Bid = Cid + LocalId
• CM – ContainerMap (Cid -> IP address of DN)
• Client calls: bId[] = Open(Key, …) on the Ozone Master; GetBlockLocations(Bid) on the CM; then Read, Write, … directly to the DNs
• $$$ – container map cache (kept at the client and the Ozone Master)
(end-to-end read sketch below)
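A hypothetical end-to-end read sketch of the flow in this diagram; the interfaces and method names are invented and are not Ozone's client API (BlockId is the sketch class from the HDDS slide):

```java
import java.io.ByteArrayOutputStream;
import java.util.Map;
import java.util.function.Function;

// Hypothetical interfaces matching the diagram, for illustration only.
interface OzoneMaster  { BlockId[] open(String key); }             // bId[] = Open(Key, ...)
interface ContainerMgr { String getContainerLocation(long cid); }  // Cid -> DN address
interface DataNodeConn { byte[] readBlock(BlockId bid); }

final class OzoneReadSketch {
    static byte[] readObject(String key, OzoneMaster om, ContainerMgr cm,
                             Map<Long, String> containerMapCache,  // the "$$$" cache
                             Function<String, DataNodeConn> connect) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (BlockId bid : om.open(key)) {                         // 1. key -> Bid[]
            String dnAddr = containerMapCache.computeIfAbsent(     // 2. Cid -> DN, cached
                    bid.containerId, cm::getContainerLocation);
            out.writeBytes(connect.apply(dnAddr).readBlock(bid));  // 3. read from the DN
        }
        return out.toByteArray();
    }
}
```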
22 © Hortonworks Inc. 2011–2018. All rights reserved
Ozone APIs
• Key: /VolumeName/BucketId/ObjectKey (e.g., /Home/John/foo/bar/zoo)
• ACLs at the Volume and Bucket level (the other "directories" are fake)
• Future sharding at the bucket level
• => Ozone is consistent (unlike S3)
APIs:
• Ozone Object API (RPC)
• S3 Connector
• Hadoop FileSystem and Hadoop FileContext connectors
(key-parsing sketch below)
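A small sketch of how such a flat key splits into its ACL-bearing parts; the class and helper names are illustrative:

```java
/** Split an Ozone-style key /VolumeName/BucketId/ObjectKey; everything after
 *  the bucket is a single flat object key, even if it looks like a path. */
final class OzoneKeySketch {
    static String[] parse(String path) {
        // e.g. "/Home/John/foo/bar/zoo" -> ["Home", "John", "foo/bar/zoo"]
        String[] parts = path.substring(1).split("/", 3);
        if (parts.length < 3)
            throw new IllegalArgumentException("need /volume/bucket/key: " + path);
        // ACLs attach to parts[0] (volume) and parts[1] (bucket);
        // the "directories" inside parts[2] are fake.
        return parts;
    }
}
```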
23 © Hortonworks Inc. 2011–2018. All rights reserved
Where Does the Ozone Master Run?
Which Node?
• On a separate node with large enough memory for caching the working set
  • Caching the working set is important for a large number of concurrent clients
  • This option gives predictable performance for large clusters
• On the DataNodes
  • How much memory is available for caching?
  • Note: tasks and other services run on DNs, since they are typically also compute nodes
Where is the Storage for the Ozone KV Metadata?
• Local disk
  • If on a DN, is it a dedicated disk or shared with the DN?
• Use the container storage (it's using RocksDB anyway)
  • Spread Ozone volumes across containers to gain performance,
  • but this may limit volume size & force more Ozone volumes than the admin wants
24 © Hortonworks Inc. 2011–2018. All rights reserved
Quadra – LUN-like Raw-Block Storage
Used for creating a mountable disk FS volume
25 © Hortonworks Inc. 2011–2018. All rights reserved
Quadra: Raw-Block Storage Volume (Lun)
A LUN-like storage service where the blocks are stored on HDDS
• Volume: a raw-block device that can be used to create a mountable disk on Linux
• Raw blocks – those of the native FS that will use the LUN volume
  • Raw-block size is dictated by the native FS, e.g., ext4 (4K)
  • Raw blocks are the unit of IO operations by native file systems
  • A raw block is the unit of read/write/update to HDDS
• Ozone and Quadra share HDDS as a common storage backend
• Current prototype: 1 raw block = 1 HDDS block (but this will change later; sketch below)
Can be used in Kubernetes for container state
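A hypothetical sketch of the prototype's 1:1 mapping, reusing the BlockId sketch from the HDDS slide; the class, field, and method names are invented:

```java
/** Prototype mapping (per the slide): one native-FS raw block maps to one HDDS block. */
final class QuadraVolumeSketch {
    private final long volumeContainerId; // container backing this Quadra volume (invented)

    QuadraVolumeSketch(long volumeContainerId) {
        this.volumeContainerId = volumeContainerId;
    }

    /** The raw block number seen by the native FS (e.g. a 4K ext4 block on the
     *  LUN volume) becomes the container-local id; later versions may pack
     *  many raw blocks per HDDS block. */
    BlockId rawBlockToHdds(long rawBlockNumber) {
        return new BlockId(volumeContainerId, rawBlockNumber);
    }
}
```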
28 © Hortonworks Inc. 2011–2018. All rights reserved
Status
• HDDS: block containers
  • 2-4GB block containers initially
    – 40-80x reduction in BRs and the block map
    – Reduces BR pressure on the NN/Ozone Master
  • Initial version to scale to tens of billions of blocks
• Ozone Master
  • Implemented using RocksDB (just like HDDS on the DNs)
  • Initial version to scale to 10 billion objects
• Current status and steps to GA
  • Stabilize HDDS and Ozone
  • Measure and improve performance
  • Add HA for the Ozone Master and Container Manager
  • Add security – security design completed and published
• After GA
  • Further stabilization and performance improvements
  • Transparent encryption
  • Erasure coding
  • Snapshots (or their equivalent)
  • …
29 © Hortonworks Inc. 2011–2018. All rights reserved
Summary
• HDFS scale proven in real production systems
• Clusters of 4K+ nodes
• Raw storage of >200PB in a single federated NN cluster, and >30PB in non-federated clusters
• Scales to 60K+ concurrent clients bombarding the NN
• But a very large number of small files is a challenge (~500M file limit)
• HDDS + Ozone: Scalable Hadoop Storage
• Retains
  • HDFS block-storage fault tolerance
  • HDFS horizontal scaling for storage and IO
  • HDFS's move-computation-to-storage locality
• HDDS: block containers
  • Initially scales to 10B blocks, later to 100B+ blocks (HDFS-7240)
• Ozone – flat KV namespace + Hadoop Compatible FS (OzoneFS)
  • Initially scales to 10B files (HDFS-13074)
• Community working on a Hierarchical Namespace on HDDS (HDFS-10419)
30 © Hortonworks Inc. 2011–2018. All rights reserved
Thank You
Q&A