YARN Node Labels
Wangda Tan, Hortonworks (wangda@apache.org)
Mayank Bansal, ebay (mabansal@ebay.com)
About us
Wangda Tan
• Last 5+ years in big data field, Hadoop,
Open-MPI, etc.
• Now
• Apache Hadoop Committer
@Hortonworks, all in YARN.
• Now spending most of his time on resource scheduling enhancements.
• Past
• Pivotal (PHD team, brought OpenMPI/GraphLab to YARN)
• Alibaba (ODPS team, platform for distributed data-mining)
Mayank Bansal
• Hadoop Architect @ ebay
• Apache Hadoop Committer
• Apache Oozie PMC and Committer
• Current
• Leading Hadoop Core Development
for YARN and MapReduce @ ebay
• Past
• Working on Scheduler / Resource
Managers
Agenda
• Overview
• Problems
• What is node label
• Understand by example
• Architecture
• Case study
• Status
• Future
Overview – Background
• Resources are managed
by a hierarchy of queues.
• One queue can have
multiple applications
• Container is the result of resource scheduling: a bundle of resources that can run process(es)
Overview – How to manage your workload by queues
• By organization:
• Marketing/Finance queue
• By workload
• Interactive/Batch queue
• Hybrid
• Finance-
batch/Marketing-realtime
queue
Problems
• No way to specify specific resources on nodes
• E.g. nodes with GPU / SSD
• No way for application to request nodes with specific resources.
• Unable to partition a cluster based on organizations/workloads
What is Node Label?
• Group nodes with similar profile
• Hardware
• Software
• Organization
• Workloads
• A way for app to specify where to run in a cluster
Node Labels
• Types of node labels
• Node partition (Since 2.6)
• Node constraints (WIP)
• Node partition
• One node belongs to only one partition
• Related to resource planning (partitions are created by the admin; see the CLI sketch below)
• Node constraints
• One node can be assigned multiple constraints
• Not related to resource planning
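As a concrete illustration, here is a minimal sketch of the centralized-configuration CLI an admin could use to create a partition and attach nodes to it. The rmadmin/cluster subcommands are the standard ones; the "GPU" label and the host names are just examples, not values required by YARN:

yarn rmadmin -addToClusterNodeLabels "GPU(exclusive=true)"
yarn rmadmin -replaceLabelsOnNode "gpu-node-1=GPU gpu-node-2=GPU"
yarn cluster --list-node-labels

The first command registers the partition with the RM, the second maps specific NodeManager hosts to it, and the third verifies the cluster's label list.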
Understand by example (1)
• A real-world example of why node partition is needed:
• Company-X has a big cluster; each of the Engineering/Marketing/Sales teams has a 33% share of the cluster.
[Diagram: a single YARN RM manages all nodes; queues: Engineer 33%, Marketing 33%, Sales 33%]
Understand by example (2)
[Diagram: new GPU nodes are added; on them Engineer and Marketing each get 50%]
• The engineering and marketing teams need GPU-installed servers for some visualization work, so they spent equal amounts of money to buy machines with GPUs.
• They want to share these new nodes 50:50.
• The sales team spent $0 on these new nodes, so it cannot run anything on them at all.
Understand by example (3)
• Here the problem comes:
• If you create a separate YARN cluster, the ops team will be unhappy.
• If you add these new nodes to the original cluster, you cannot guarantee that the engineering/marketing teams get preference to use them.
Understand by example (4)
[Diagram: the same YARN RM now manages two partitions. "Default" partition: Engineer 33%, Marketing 33%, Sales 33%. "GPU" partition: Engineer 50%, Marketing 50%.]
• Node partition solves this problem:
• Add a GPU partition, managed by the same YARN RM. The admin can specify different percentages of shares in different partitions.
Understand by example (5)
• Understand non-exclusive node partition:
• In the previous example, the "GPU" partition can only be used by the engineering and marketing teams.
• This is bad for resource utilization.
• The admin can define that, if the "GPU" partition has idle resources, the sales queue can use them; but when engineering/marketing workloads come back, resources allocated to the sales queue will be preempted.
• (Available since Hadoop 2.8; a one-line CLI sketch follows the diagram below)
[Diagram: "Default" partition (guaranteed to use): Engineer 33%, Marketing 33%, Sales 33%. "GPU" partition: Engineer 50% and Marketing 50% guaranteed to use; Sales 0%, can use it only if idle.]
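A one-line sketch of how the non-exclusive behavior above could be enabled when the partition is created (same command as in the earlier CLI sketch; "GPU" is the example label from this deck):

yarn rmadmin -addToClusterNodeLabels "GPU(exclusive=false)"

With exclusive=false, requests that do not specify a partition may be placed on idle GPU nodes, and those containers are preempted when GPU-partition requests arrive (Hadoop 2.8+).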
Understand by example (6)
• Configuration for above example (Capacity Scheduler)
yarn.scheduler.capacity.root.queues=engineering,marketing,sales
yarn.scheduler.capacity.root.engineering.capacity=33
yarn.scheduler.capacity.root.marketing.capacity=33
yarn.scheduler.capacity.root.sales.capacity=33
---------
yarn.scheduler.capacity.root.engineering.accessible-node-labels=GPU
yarn.scheduler.capacity.root.marketing.accessible-node-labels=GPU
---------
yarn.scheduler.capacity.root.engineering.accessible-node-labels.GPU.capacity=50
yarn.scheduler.capacity.root.marketing.accessible-node-labels.GPU.capacity=50
---------
(optional)
yarn.scheduler.capacity.root.engineering.default-node-label-expression=GPU
Annotations for the configuration above:
• The first block (root.queues and the three capacity settings) is the original configuration, unchanged by node partition.
• The accessible-node-labels lines are the queue ACLs for the GPU node partition.
• The accessible-node-labels.GPU.capacity lines are the queues' capacities inside the GPU partition.
• The optional default-node-label-expression means applications submitted to the engineering queue will run in the GPU partition by default (a sketch of per-job targeting follows).
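Complementing the queue-level default above, an individual job can target the GPU partition itself. A hedged sketch for a MapReduce job, using the properties introduced by MAPREDUCE-6304 (GPU is the example label from this deck):

mapreduce.job.node-label-expression=GPU
mapreduce.job.am.node-label-expression=GPU

The first property controls where map/reduce tasks run, the second where the MR ApplicationMaster runs; if neither is set, the queue's default-node-label-expression (when configured) applies.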
Understand by example (7)
[Diagram: queue hierarchies]
Without node partition: Company (100%) → R&D (50%) → {QE (20%), Dev (80%)}; Sales (50%)
With node partition, "Default" partition: Company (100%) → R&D (50%) → {QE (20%), Dev (80%)}; Sales (50%)
With node partition, "GPU" partition: Company (100%) → R&D (100%) → {QE (50%), Dev (50%)}; Sales (0%)
Architecture
• Central piece: NodeLabelsManager
• Stores labels and their attributes
• Stores node-to-labels mappings
• It can be read/written via
• CLI and REST API (which we call centralized configuration)
• OR the NM can retrieve its own labels and send them to the RM (which we call distributed configuration)
• The scheduler uses the node labels manager to make allocation decisions: it receives resource requests from AMs and returns allocated containers to them
• (A yarn-site.xml sketch of the two configuration modes follows)
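To make the two modes concrete, a hedged sketch of the relevant yarn-site.xml properties (names as documented for Hadoop 2.6+; the HDFS path is a placeholder, not a required location):

Common to both modes:
yarn.node-labels.enabled=true
yarn.node-labels.fs-store.root-dir=hdfs:///yarn/node-labels
Centralized configuration (labels managed via the rmadmin CLI / REST API):
yarn.node-labels.configuration-type=centralized
Distributed configuration (each NM reports its own partition to the RM):
yarn.node-labels.configuration-type=distributed
yarn.nodemanager.node-labels.provider=config
yarn.nodemanager.node-labels.provider.configured-node-partition=GPU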
Case study (1) – uses node label
• Use node label to create isolated
environment for
batch/interactive/low-latency
workloads.
• Deploy YARN containers onto compute nodes that are optimized and accelerated for each workload:
• Using RDMA-enabled nodes to
accelerate shuffle.
• Using powerful CPU nodes to
accelerate compression.
• It is possible to DOUBLE THE
DENSITY of today’s traditional
Hadoop cluster with substantially
better price performance.
• Create a converged system that allows Hadoop / Vertica / Spark and other stacks to share a common pool of data.
Case study (2) – uses node label
Case study (3) – eBay clusters use node labels
• Separate machine learning workloads from regular workloads
• Use node labels to confine licensed software to specific machines
• Enabling GPU workloads
• Separation of organizational workloads
Case study (4) – Slider use cases
• HBase region servers run on nodes with SSD (non-exclusive partition).
• HBase master has exclusive use of its nodes.
• MapReduce jobs run on other nodes, and they can use idle resources of the region-server nodes.
[Diagram: Slider launches the HBase Master into an exclusive "HBase Master" partition and region servers (RS) into a non-exclusive "HBase Region Server" partition; a user-submitted MapReduce AM runs its tasks in the Default partition and on idle region-server nodes.]
Status – Done parts of Node Labels
• Exclusive / non-exclusive node partition support in Capacity Scheduler (√)
• User-limit
• Preemption
• Now all respect node partitions!
• Centralized configuration via CLI/REST API (√)
• Distributed configuration in Node Manager’s config/script (√)
Status - Node Labels Web UI
Status – Other Apache projects support node label
• The following projects already support node labels (a sketch of the user-facing settings follows):
• Spark (SPARK-6470)
• MapReduce (MAPREDUCE-6304)
• Slider (SLIDER-81)
• (via SLIDER)
• (via SLIDER)
• (via SLIDER)
• Ambari (AMBARI-10063)
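As a hedged example of the user-facing side of these integrations, Spark on YARN exposes node labels through two properties (names from the Spark running-on-YARN documentation; GPU is the example partition from this deck), while MapReduce uses the mapreduce.job.node-label-expression property shown earlier:

spark.yarn.am.nodeLabelExpression=GPU
spark.yarn.executor.nodeLabelExpression=GPU

Slider exposes the same capability per component through its resources.json placement settings (SLIDER-81).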
Future of Node Label
• Support constraints (YARN-3409)
• Orthogonal to partitions; they describe attributes of a node's hardware/software, used only for affinity.
• Some example of constraints:
• glibc version
• JDK version
• Type of CPU (x86_64/i686)
• Physical or virtualized
• With this, an application can ask for resources such as:
• glibc.version >= 2.20 && JDK.version >= 8u20 && x86_64
• Support node label in
FairScheduler (YARN-2497)
• Support in more projects
• Tez
• Oozie
• …
Q & A