HIGH PERFORMANCE DISTRIBUTED TENSORFLOW
IN PRODUCTION WITH GPUS (AND KUBERNETES)
NIPS CONFERENCE
LOS ANGELES BIG DATA MEETUP
SO-CAL PYDATA MEETUP
DECEMBER 2017
CHRIS FREGLY
FOUNDER @ PIPELINE.AI
INTRODUCTIONS: ME
§ Chris Fregly, Founder @ PipelineAI
§ Formerly Netflix, Databricks, IBM Spark Tech
§ Advanced Spark and TensorFlow Meetup
Please Join Our 60,000+ Global Members!!
Contact Me
chris@pipeline.ai
@cfregly
Global Locations
* San Francisco
* Chicago
* Austin
* Washington DC
* Dusseldorf
* London
INTRODUCTIONS: YOU
§ Software Engineer, Data Scientist, Data Engineer, Data Analyst
§ Interested in Optimizing and Deploying TF Models to Production
§ Nice to Have a Working Knowledge of TensorFlow (Not Required)
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
PIPELINE.AI IS 100% OPEN SOURCE
§ https://github.com/PipelineAI/pipeline/
§ Please Star 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
PIPELINE.AI OVERVIEW
450,000 Docker Downloads
60,000 Users Registered for GA
60,000 Meetup Members
50,000 LinkedIn Followers
2,200 GitHub Stars
12 Enterprise Beta Users
WHY HEAVY FOCUS ON MODEL SERVING?
Model Training
§ Batch & Boring
§ Offline in Research Lab
§ Pipeline Ends at Training
§ No Insight into Live Production
§ Small Number of Data Scientists
§ Optimizations Very Well-Known
§ 100’s Training Jobs per Day
Model Serving
§ Real-Time & Exciting!!
§ Online in Live Production
§ Pipeline Extends into Production
§ Continuous Insight into Live Production
§ Huuuuuuge Number of Application Users
§ **Many Optimizations Not Yet Utilized
§ 1,000,000’s Predictions per Sec
COMPARE MODELS OFFLINE & ONLINE
§ Offline, Batch Metrics
§ Validation + Training Accuracy
§ CPU + GPU Utilization
§ Live Prediction Values
§ Compare Relative Precision
§ Newly-Seen, Streaming Data
§ Online, Real-Time Metrics
§ Response Time, Throughput
§ Cost ($) Per Prediction
EVERYBODY GETS A GPU!
SETUP ENVIRONMENT
§ Step 1: Browse to the following:
http://allocator.community.pipeline.ai/allocate
§ Step 2: Browse to the following:
http://<ip-address>
§ Step 3: Browse around.
I will provide a Jupyter Username/Password soon.
Need Help?
Use the Chat!
VERIFY SETUP
http://<ip-address>
Any username,
Any password!
HANDS-ON EXERCISES
§ Combo of Jupyter Notebooks and Command Line
§ Command Line through Jupyter Terminal
§ Some Exercises Based on Experimental Features
You May See Errors. Stay Calm. It’s OK!!
LET’S EXPLORE OUR ENVIRONMENT
§ Navigate to the following notebook:
01_Explore_Environment
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
PULSE CHECK
BREAK
Need Help? Use the Chat!
§ Please Star 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
SETTING UP TENSORFLOW WITH GPUS
§ Very Painful!
§ Especially inside Docker
§ Use nvidia-docker
§ Especially on Kubernetes!
§ Use the Latest Kubernetes (with Init Script Support)
§ http://pipeline.ai for GitHub + DockerHub Links
TENSORFLOW + CUDA + NVIDIA GPU
GPU HALF-PRECISION SUPPORT
§ FP32 is “Full Precision”, FP16 is “Half Precision”
§ Two(2) FP16’s in Each FP32 GPU Core for 2x Throughput!
§ Lower Precision is OK for Approx. Deep Learning Use Cases
§ The Network Matters Most – Not Individual Neuron Accuracy
§ Supported by Pascal P100 (2016) and Volta V100 (2017)
You Can Set
TF_FP16_MATMUL_USE_FP32_COMPUTE=0
on GPU w/ Compute Capability(CC) 5.3+
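A minimal sketch (illustrative, not from the deck) of mixed precision in TensorFlow: cast activations and weights to FP16 for the matmul, keep FP32 master values:
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 1024])
w = tf.get_variable('w', shape=[1024, 1024], dtype=tf.float32)       # FP32 master weights
y_fp16 = tf.matmul(tf.cast(x, tf.float16), tf.cast(w, tf.float16))   # half-precision matmul
y = tf.cast(y_fp16, tf.float32)                                      # back to full precision for the loss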
VOLTA V100 (2017) VS. PASCAL P100 (2016)
§ 84 Streaming Multiprocessors (SM’s)
§ 5,376 GPU Cores
§ 672 Tensor Cores (ie. Google TPU)
§ Mixed FP16/FP32 Precision
§ Matrix Dims Should be Multiples of 8
§ More Shared Memory
§ New L0 Instruction Cache
§ Faster L1 Data Cache
§ V100 vs. P100 Performance
§ 12x Training, 6x Inference
FP32 VS. FP16 ON AWS GPU INSTANCES
FP16 Half Precision
87.2 T ops/second for p3 Volta V100
4.1 T ops/second for g3 Tesla M60
1.6 T ops/second for p2 Tesla K80
FP32 Full Precision
15.4 T ops/second for p3 Volta V100
4.0 T ops/second for g3 Tesla M60
3.3 T ops/second for p2 Tesla K80
WHAT ABOUT GOOGLE CLOUD?
§ Currently Supports the Following:
§ Tesla K80
§ Pascal P100
§ Volta V100 Coming Soon?
§ TPUs (Only in Google Cloud)
§ Attach GPUs to CPU Instances
§ Similar to AWS Elastic GPU, except less confusing
V100 AND CUDA 9
§ Independent Thread Scheduling - Finally!!
§ Similar to CPU fine-grained thread synchronization semantics
§ Allows GPU to yield execution of any thread
§ Still Optimized for SIMT (Same Instruction Multi-Thread)
§ SIMT units automatically scheduled together
§ Explicit Synchronization
GPU CUDA PROGRAMMING
§ Barbaric, But Fun Barbaric
§ Must Know Hardware Very Well
§ Hardware Changes are Painful
§ Use the Profilers & Debuggers
CUDA STREAMS
§ Asynchronous I/O Transfer
§ Overlap Compute and I/O
§ Keep GPUs Saturated!
§ Used Heavily by TensorFlow
CUDA SHARED AND UNIFIED MEMORY
LET’S SEE WHAT THIS THING CAN DO!
§ Navigate to the following notebook:
01a_Explore_GPU
01b_Explore_Numba
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays
§ ie. Scalar, Vector, Matrix
§ Operations: MatMul, Add, SummaryLog,…
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed Inputs into Placeholder
§ Fetches: Fetch Output from Operation
§ Variables: What We Learn Through Training
§ aka “Weights”, “Parameters”
§ Devices: Hardware Device (GPU, CPU, TPU, ...)
-TensorFlow- Trains Variables
-User- Fetches Outputs
-User- Feeds Inputs
-TensorFlow- Performs Operations
-TensorFlow- Flows Tensors
with tf.device("/cpu:0,/gpu:15"):
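A minimal sketch (names are illustrative) tying the terms together: a Placeholder that is fed, Variables that are trained, Operations in a Graph, and a Session that fetches outputs:
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None], name='x_observed')   # fed by the user
W = tf.Variable(0.0, name='weights')                               # learned during training
b = tf.Variable(0.0, name='bias')
y_pred = tf.add(tf.multiply(W, x), b, name='add')                  # operations flow tensors
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feed inputs into the placeholder, fetch the output of the 'add' operation
    print(sess.run(y_pred, feed_dict={x: [1.0, 2.0, 3.0]}))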
TENSORFLOW SESSION
Session
  graph: GraphDef
  Variables: "W": 0.328, "b": -1.407
Variables are Randomly Initialized, then Periodically Checkpointed
GraphDef is Created During Training, then Frozen for Inference
TENSORFLOW GRAPH EXECUTION
§ Lazy Execution by Default
§ Similar to Spark
§ Eager Execution Now Supported (TensorFlow 1.4+)
§ Similar to PyTorch
§ "Linearize” Execution to Minimize RAM Usage
§ Useful on Single GPU with Limited RAM
OPERATION PARALLELISM
§ Inter-Op (Between-Op) Parallelism
§ By default, TensorFlow runs multiple ops in parallel
§ Useful for low core and small memory/cache envs
§ Set to one (1)
§ Intra-Op (Within-Op) Parallelism
§ Different threads can use same set of data in RAM
§ Useful for compute-bound workloads (CNNs)
§ Set to # of cores (>=2)
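A hedged sketch of setting both knobs through tf.ConfigProto (values are illustrative; tune per workload and hardware):
import tensorflow as tf
config = tf.ConfigProto(
    inter_op_parallelism_threads=1,   # ops run in parallel with each other
    intra_op_parallelism_threads=8)   # threads inside a single op (e.g. MatMul)
sess = tf.Session(config=config)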
TENSORFLOW MODEL
§ MetaGraph
§ Combines GraphDef and Metadata
§ GraphDef
§ Architecture of your model (nodes, edges)
§ Metadata
§ Asset: Accompanying assets to your model
§ SignatureDef: Maps external to internal tensors
§ Variables
§ Stored separately during training (checkpoint)
§ Allows training to continue from any checkpoint
§ Variables are “frozen” into Constants when preparing for inference
(Diagram) MetaGraph = GraphDef (x, W, b -> mul -> add)
          + Metadata (Assets, SignatureDef, Tags, Version)
          + Variables ("W": 0.328, "b": -1.407)
SAVED MODEL FORMAT
§ Different Format than Traditional Exporter
§ Contains Checkpoints, 1..* MetaGraph’s, and Assets
§ Export Manually with SavedModelBuilder
§ Estimator.export_savedmodel()
§ Hooks to Generate SignatureDef
§ Use saved_model_cli to Verify
§ Used by TensorFlow Serving
§ New Standard Export Format? (Catching on Slowly…)
BATCH NORMALIZATION (2015)
§ Each Mini-Batch May Have Wildly Different Distributions
§ Normalize per Batch (and Layer)
§ Faster Training, Learns Quicker
§ Final Model is More Accurate
§ TensorFlow is already on 2nd Generation Batch Algorithm
§ First-Class Support for Fusing Batch Norm Layers
§ Final mean + variance Are Folded Into Graph Later
-- (Almost) Always Use Batch Normalization! --
z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth/channels]))
beta = tf.Variable(tf.zeros([depth/channels]))
bn = tf.nn.batch_normalization(a, a_mean, a_var,
                               beta, scale, 0.001)
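For the fused, first-class batch norm support mentioned above, a hedged sketch using the layers API (assumes a_prev, W, and is_training exist):
bn = tf.layers.batch_normalization(tf.matmul(a_prev, W),
                                   fused=True,            # fused batch-norm kernel
                                   training=is_training)
a = tf.nn.relu(bn)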
DROPOUT (2014)
§ Training Technique
§ Prevents Overfitting
§ Helps Avoid Local Minima
§ Inherent Ensembling Technique
§ Creates and Combines Different Neural Architectures
§ Expressed as Probability Percentage (ie. 50%)
§ Boost Other Weights During Validation & Prediction
Perform Dropout
(Training Phase)
Boost for Dropout
(Validation & Prediction Phase)
0% Dropout
50% Dropout
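A small sketch of dropout wired for training vs. inference; note tf.nn.dropout uses "inverted dropout" and scales the kept activations by 1/keep_prob at training time (assumes activation tensor a exists):
keep_prob = tf.placeholder(tf.float32)              # e.g. 0.5 for training, 1.0 for inference
a_dropped = tf.nn.dropout(a, keep_prob=keep_prob)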
EXTEND EXISTING DATA PIPELINES
§ Data Processing
§ HDFS/Hadoop
§ Spark
§ Containers
§ Docker
§ Schedulers
§ Kubernetes
§ Mesos
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow-hadoop</artifactId>
</dependency>
https://github.com/tensorflow/ecosystem
FEED TENSORFLOW TRAINING PIPELINE
§ Training is Limited by the Ingestion Pipeline
§ THE Number One Problem We See Today
§ Scaling GPUs Up / Out Doesn’t Help
§ GPUs are Heavily Under-Utilized
DON’T USE FEED_DICT!!
§ feed_dict Requires Python <-> C++ Serialization
§ Not Optimized for Production Ingestion Pipelines
§ Retrieves Next Batch After Current Batch is Done
§ Single-Threaded, Synchronous
§ CPUs/GPUs Not Fully Utilized!
§ Use Queue or Dataset APIs
§ Queues are old & complex
sess.run(train_step, feed_dict={…})
DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument training code to generate “timelines”
§ Analyze with Google Web Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
QUEUES
§ More than traditional Queue
§ Uses CUDA Streams
§ Perform I/O, pre-processing, cropping, shuffling, …
§ Pull from HDFS, S3, Google Storage, Kafka, ...
§ Combine many small files into large TFRecord files
§ Use CPUs to free GPUs for compute
§ Helps saturate CPUs and GPUs
QUEUE CAPACITY PLANNING
§ batch_size
§ # examples / batch (ie. 64 jpg)
§ Limited by GPU RAM
§ num_processing_threads
§ CPU threads pull and pre-process batches of data
§ Limited by CPU Cores
§ queue_capacity
§ Limited by CPU RAM (ie. 5 * batch_size)
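A hedged sketch of these knobs with the queue-based input pipeline (file names and sizes are illustrative):
filename_queue = tf.train.string_input_producer(['part-00000.tfrecord'])
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)

batch = tf.train.shuffle_batch([serialized_example],
                               batch_size=64,        # limited by GPU RAM
                               num_threads=4,        # num_processing_threads (CPU cores)
                               capacity=5 * 64,      # queue_capacity (CPU RAM)
                               min_after_dequeue=64)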
DATASET API
tf.Tensor => tf.data.Dataset
  Dataset.from_tensors((features, labels))
  Dataset.from_tensor_slices((features, labels))
  TextLineDataset(filenames)
Functional Transformations
  dataset.map(lambda x: tf.image.decode_jpeg(x))
  dataset.repeat(NUM_EPOCHS)
  dataset.batch(BATCH_SIZE)
Python Generator => tf.data.Dataset
  def generator():
    while True:
      yield ...
  dataset = tf.data.Dataset.from_generator(generator, tf.int32)
Dataset => One-Shot Iterator
  iter = dataset.make_one_shot_iterator()
  next_element = iter.get_next()
  while …:
    sess.run(next_element)
Dataset => Initializable Iterator
  iter = dataset.make_initializable_iterator()
  sess.run(iter.initializer, feed_dict=PARAMS)
  next_element = iter.get_next()
  while …:
    sess.run(next_element)
TIP: Use Dataset.prefetch() and parallel version of Dataset.map()
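A short sketch of the tip above (argument values are illustrative):
dataset = dataset.map(lambda x: tf.image.decode_jpeg(x),
                      num_parallel_calls=4)    # parallel pre-processing on CPU
dataset = dataset.prefetch(buffer_size=2)      # overlap ingestion with training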
FUTURE OF DATASET API
§ Replace Queue
§ More Functional Operators
§ Automatic GPU Data Staging
§ Under-utilized GPUs Assisting with Data Ingestion
§ Advanced, RL-based Device Placement Strategies
LET’S FEED DATA WITH A QUEUE
§ Navigate to the following notebook:
02_Feed_Queue_HDFS
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
PULSE CHECK
BREAK
Need Help? Use the Chat!
§ Please Star 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
LET’S TRAIN A MODEL (CPU)
§ Navigate to the following notebook:
03_Train_Model_CPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
LET’S TRAIN A MODEL (GPU)
§ Navigate to the following notebook:
03a_Train_Model_GPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
TENSORFLOW DEBUGGER
§ Step through Operations
§ Inspect Inputs and Outputs
§ Wrap Session in Debug Session
from tensorflow.python import debug as tf_debug

sess = tf.Session(config=config)
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
LET’S DEBUG A MODEL
§ Navigate to the following notebook:
04_Debug_Model
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
SINGLE NODE, MULTI-GPU TRAINING
§ cpu:0
§ By default, all CPUs
§ Requires extra config to target a CPU
§ gpu:0..n
§ Each GPU has a unique id
§ TF usually prefers a single GPU
§ xla_cpu:0, xla_gpu:0..n
§ “JIT Compiler Device”
§ Hints TensorFlow to attempt JIT Compile
with tf.device("/cpu:0"):
with tf.device("/gpu:0"):
with tf.device("/gpu:1"):
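A hedged sketch of in-graph data parallelism on one node: replicate the model as "towers" on each GPU and combine gradients on the CPU (build_tower(), next_batch(), and average_gradients() are hypothetical helpers):
opt = tf.train.GradientDescentOptimizer(0.01)
tower_grads = []
for i in range(2):
    with tf.device('/gpu:%d' % i):
        loss = build_tower(next_batch())               # hypothetical per-tower model + loss
        tower_grads.append(opt.compute_gradients(loss))

with tf.device('/cpu:0'):
    train_op = opt.apply_gradients(average_gradients(tower_grads))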
DISTRIBUTED, MULTI-NODE TRAINING
§ TensorFlow Automatically Inserts Send and Receive Ops into Graph
§ Parameter Server Synchronously Aggregates Updates to Variables
§ Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS
(Diagram) Single Node: Worker0 with one or more GPUs (gpu0..gpu3)
(Diagram) Multiple Nodes: Worker0, Worker1, Worker2, each with gpu0..gpu3
DATA PARALLEL VS. MODEL PARALLEL
§ Data Parallel (“Between-Graph Replication”)
§ Send exact same model to each device
§ Each device operates on partition of data
§ ie. Spark sends same function to many workers
§ Each worker operates on their partition of data
§ Model Parallel (“In-Graph Replication”)
§ Send different partition of model to each device
§ Each device operates on all data
§ Difficult, but required for larger models with lower-memory GPUs
SYNCHRONOUS VS. ASYNCHRONOUS
§ Synchronous
§ Nodes compute gradients
§ Nodes update Parameter Server (PS)
§ Nodes sync on PS for latest gradients
§ Asynchronous
§ Some nodes delay in computing gradients
§ Nodes don’t update PS
§ Nodes get stale gradients from PS
§ May not converge due to stale reads!
CHIEF WORKER
§ Chief Defaults to Worker Task 0
§ Task 0 is guaranteed to exist
§ Performs Maintenance Tasks
§ Writes log summaries
§ Instructs PS to checkpoint vars
§ Performs PS health checks
§ (Re-)Initialize variables at (re-)start of training
NODE AND PROCESS FAILURES
§ Checkpoint to Persistent Storage (HDFS, S3)
§ Use MonitoredTrainingSession and Hooks
§ Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos)
§ Understand Failure Modes and Recovery States
Stateless, Not Bad: Training Continues
Stateful, Bad: Training Must Stop
Dios Mio! Long Night Ahead…
ESTIMATOR + EXPERIMENT API
§ Supports Keras!
§ Unified API for Local + Distributed
§ Provide Clear Path to Production
§ Enable Rapid Model Experiments
§ Provide Flexible Parameter Tuning
§ Enable Downstream Optimizing & Serving Infra
§ Nudge Users to Best Practices Through Opinions
§ Provide Hooks/Callbacks to Override Opinions
ESTIMATOR API
§ “Train-to-Serve” Design
§ Create Custom Estimator or Re-Use Canned Estimator
§ Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict)
§ Hooks for All Phases of Model Training and Evaluation
§ Load Input: input_fn()
§ Train: model_fn() and train()
§ Evaluate: eval_fn() and evaluate()
§ Performance Metrics: Loss, Accuracy, …
§ Save and Export: export_savedmodel()
§ Predict: predict() Uses the slow sess.run()
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/
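A minimal, hedged sketch of a custom Estimator model_fn covering predict and train (the layers, loss, and optimizer are illustrative):
def model_fn(features, labels, mode, params):
    logits = tf.layers.dense(features['x'], units=10)           # illustrative model
    predictions = {'classes': tf.argmax(logits, axis=1)}
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn)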
EXPERIMENT API
§ Easier-to-Use Distributed TensorFlow
§ Same API for Local and Distributed (*Theoretically)
§ Combines Estimator with input_fn()
§ Used for Training, Evaluation, & Hyper-Parameter Tuning
§ Distributed Training Defaults to Data-Parallel & Async
§ Cluster Configuration is Fixed at Start of Training Job
§ No Auto-Scaling Allowed, but That’s OK for Training
ESTIMATOR & EXPERIMENT CONFIGS
§ TF_CONFIG
§ Special environment variable for config
§ Defines ClusterSpec in JSON incl. master, workers, PS’s
§ Distributed mode: '{"environment":"cloud"}'
§ Local: '{"environment":"local", "task":{"type":"worker"}}'
§ RunConfig: Defines checkpoint interval, output directory,
§ HParams: Hyper-parameter tuning parameters and ranges
§ learn_runner creates RunConfig before calling run() & tune()
§ schedule is set based on {”task”:{”type”:…}}
TF_CONFIG=
'{
"environment": "cloud",
"cluster":
{
"master":["worker0:2222”],
"worker":["worker1:2222"],
"ps": ["ps0:2222"]
},
"task": {"type": "ps",
"index": "0"}
}'
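Roughly the same cluster description expressed with the lower-level API, as a hedged sketch (this is what a parameter-server task might run):
cluster = tf.train.ClusterSpec({
    'master': ['worker0:2222'],
    'worker': ['worker1:2222'],
    'ps':     ['ps0:2222']})
server = tf.train.Server(cluster, job_name='ps', task_index=0)
server.join()   # PS tasks typically just serve variables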
ESTIMATOR + KERAS
§ Distributed TensorFlow (Estimator) + Easy to Use (Keras)
§ tf.keras.estimator.model_to_estimator()
# Instantiate a Keras inception v3 model.
keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None)
# Compile model with the optimizer, loss, and metrics you'd like to train with.
keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9),
                           loss='categorical_crossentropy',
                           metrics=['accuracy'])
# Create an Estimator from the compiled Keras model.
est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3)
# Treat the derived Estimator as you would any other Estimator. For example,
# the following derived Estimator calls the train method:
est_inception_v3.train(input_fn=my_training_set, steps=2000)
“CANNED” ESTIMATORS
§ Commonly-Used Estimators
§ Pre-Tested and Pre-Tuned
§ DNNClassifier, TensorForestEstimator
§ Always Use Canned Estimators If Possible
§ Reduce Lines of Code, Complexity, and Bugs
§ Use FeatureColumn to Define & Create Features
Custom vs. Canned
@ Google, August 2017
ESTIMATOR + DATASET
def input_fn():
  def generator():
    while True:
      yield ...
  my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)
  # A one-shot iterator automatically initializes itself on first use.
  iter = my_dataset.make_one_shot_iterator()
  # The return value of get_next() matches the dataset element type.
  images, labels = iter.get_next()
  return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)
estimator.train(input_fn=input_fn, …)
OPTIMIZER, ESTIMATOR API + TPU’S
run_config = tpu_config.RunConfig()

estimator = tpu_estimator.TpuEstimator(model_fn=model_fn,
                                       config=run_config)
estimator.train(input_fn=input_fn,
                num_epochs=10,
                …)

optimizer = tpu_optimizer.CrossShardOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=…))

train_op = optimizer.minimize(loss)
estimator_spec = tf.estimator.EstimatorSpec(train_op=train_op,
                                            loss=…)
MULTIPLE HEADS (OBJECTIVES)
§ Single-Objective Estimator
§ Single classification prediction
§ Multi-Objective Estimator
§ One (1) classification prediction
§ One(1) final layer to feed into next model
§ Multiple Heads Used to Ensemble Models
§ Treats neural network as a feature engineering step
§ Supported by TensorFlow Serving
LAYERS API
§ Standalone Layer or Entire Sub-Graphs
§ Functions of Tensor Inputs & Outputs
§ Mix and Match with Operations
§ Assumes 1st Dimension is Batch Size
§ Handles One (1) to Many (*) Inputs
§ Metrics are Layers
§ Loss Metric (Per Mini-Batch)
§ Accuracy and MSE (Across Mini-Batches)
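A small sketch with the Layers API; the first dimension is the batch size (shapes are illustrative):
inputs = tf.placeholder(tf.float32, shape=[None, 784])     # [batch_size, features]
hidden = tf.layers.dense(inputs, units=256, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, units=10)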
FEATURE_COLUMN API
§ Used by Canned Estimator
§ Declaratively Specify Training Inputs
§ Converts Sparse to Dense Tensors
§ Sparse Features: Query Keyword, ProductID
§ Dense Features: One-Hot, Multi-Hot
§ Wide/Linear: Use Feature-Crossing
§ Deep: Use Embeddings
FEATURE CROSSING
§ Create New Features by Combining Existing Features
§ Limitation: Combinations Must Exist in Training Dataset
base_columns = [
    education, marital_status, relationship, workclass, occupation, age_buckets
]
crossed_columns = [
    tf.feature_column.crossed_column(
        ['education', 'occupation'], hash_bucket_size=1000),
    tf.feature_column.crossed_column(
        ['age_buckets', 'education', 'occupation'], hash_bucket_size=1000)
]
FEATURE_COLUMN EXAMPLES
§ Continuous + One-Hot + Embedding
deep_columns = [
age,
education_num,
capital_gain,
capital_loss,
hours_per_week,
tf.feature_column.indicator_column(workclass),
tf.feature_column.indicator_column(education),
tf.feature_column.indicator_column(marital_status),
tf.feature_column.indicator_column(relationship),
# To show an example of embedding
tf.feature_column.embedding_column(occupation, dimension=8),
]
SEPARATE TRAINING + EVALUATION
§ Separate Training and Evaluation Clusters
§ Evaluate Upon Checkpoint
§ Avoid Resource Contention
§ Training Continues in Parallel with Evaluation
(Diagram) Training Cluster | Evaluation Cluster | Parameter Server Cluster
LET’S TRAIN DISTRIBUTED TENSORFLOW
§ Navigate to the following notebook:
05_Train_Model_Distributed_CPU
or 05a_Train_Model_Distributed_GPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
PULSE CHECK
BREAK
Need Help? Use the Chat!
§ Please Star 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
XLA FRAMEWORK
§ XLA: “Accelerated Linear Algebra”
§ Reduce Reliance on Custom Operators
§ Improve Execution Speed
§ Improve Memory Usage
§ Reduce Mobile Footprint
§ Improve Portability
Helps TensorFlow Stay Flexible, Yet Still Performant
XLA HIGH LEVEL OPTIMIZER (HLO)
§ HLO: “High Level Optimizer”
§ Compiler Intermediate Representation (IR)
§ Independent of source and target language
§ XLA Step 1 Emits Target-Independent HLO
§ XLA Step 2 Emits Target-Dependent LLVM
§ LLVM Emits Native Code Specific to Target
§ Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
JIT COMPILER
§ JIT: “Just-In-Time” Compiler
§ Built on XLA Framework
§ Reduce Memory Movement – Especially with GPUs
§ Reduce Overhead of Multiple Function Calls
§ Similar to Spark Operator Fusing in Spark 2.0
§ Unroll Loops, Fuse Operators, Fold Constants, …
§ Scopes: session, device, with jit_scope():
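Hedged sketches of two of the scopes above: session-wide JIT via ConfigProto, and a sub-graph jit_scope (which lives in tf.contrib in TF 1.x):
# Session-wide JIT
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# Scope-level JIT for a specific sub-graph (assumes x, w, b exist)
from tensorflow.contrib.compiler import jit
with jit.experimental_jit_scope():
    y = tf.matmul(x, w) + b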
VISUALIZING JIT COMPILER IN ACTION
Before JIT After JIT
Google Web Tracing Framework:
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess.run(…, options=run_options,
         run_metadata=run_metadata)

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
VISUALIZING FUSING OPERATORS
pip install graphviz

dot -Tpng \
  /tmp/hlo_graph_1.w5LcGs.dot \
  -o hlo_graph_1.png
GraphViz:
http://www.graphviz.org
hlo_*.dot files generated by XLA
LET’S TRAIN WITH XLA CPU
§ Navigate to the following notebook:
06_Train_Model_XLA_CPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
LET’S TRAIN WITH XLA GPU
§ Navigate to the following notebook:
06a_Train_Model_XLA_GPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
WE ARE NOW…
…OPTIMIZING Models
AFTER Model Training
TO IMPROVE Model Serving
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
AOT COMPILER
§ Standalone, Ahead-Of-Time (AOT) Compiler
§ Built on XLA framework
§ tfcompile
§ Creates executable with minimal TensorFlow Runtime needed
§ Includes only dependencies needed by subgraph computation
§ Creates functions with feeds (inputs) and fetches (outputs)
§ Packaged as cc_library header and object files to link into your app
§ Commonly used for mobile device inference graph
§ Currently, only CPU x86-64 and ARM are supported - no GPU
GRAPH TRANSFORM TOOL (GTT)
§ Post-Training Optimization to Prepare for Inference
§ Remove Training-only Ops (checkpoint, drop out, logs)
§ Remove Unreachable Nodes between Given feed -> fetch
§ Fuse Adjacent Operators to Improve Memory Bandwidth
§ Fold Final Batch Norm mean and variance into Variables
§ Round Weights/Variables to improve compression (ie. 70%)
§ Quantize (FP32 -> INT8) to Speed Up Math Operations
AFTER TRAINING, BEFORE OPTIMIZATION
-TensorFlow- Trains Variables
-User- Fetches Outputs
-User- Feeds Inputs
-TensorFlow- Performs Operations
-TensorFlow- Flows Tensors
?!
POST-TRAINING GRAPH TRANSFORMS
transform_graph \
  --in_graph=unoptimized_cpu_graph.pb \     <= Original Graph
  --out_graph=optimized_cpu_graph.pb \      <= Transformed Graph
  --inputs='x_observed:0' \                 <= Feed (Input)
  --outputs='Add:0' \                       <= Fetch (Output)
  --transforms='                            <= List of Transforms
  strip_unused_nodes
  remove_nodes(op=Identity, op=CheckNumerics)
  fold_constants(ignore_errors=true)
  fold_batch_norms
  fold_old_batch_norms
  quantize_weights
  quantize_nodes'
AFTER STRIPPING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ Results
§ Graph much simpler
§ File size much smaller
AFTER REMOVING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ Results
§ Pesky nodes removed
§ File size a bit smaller
AFTER FOLDING CONSTANTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ Results
§ Placeholders (feeds) -> Variables*
(*Why Variables and not Constants?)
AFTER FOLDING BATCH NORMS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ Results
§ Graph remains the same
§ File size approximately the same
AFTER QUANTIZING WEIGHTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ Results
§ Graph is same, file size is smaller, compute is faster
WEIGHT QUANTIZATION
§ FP16 and INT8 Are Smaller and Computationally Simpler
§ Weights/Variables are Constants
§ Easy to Linearly Quantize
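A hedged numpy sketch of the idea behind linear weight quantization (illustrates the math only, not the Graph Transform Tool implementation):
import numpy as np

def quantize_weights_int8(w):
    # Map the float range [min, max] linearly onto 256 8-bit levels
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / 255.0
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize(q, w_min, scale):
    return q.astype(np.float32) * scale + w_min   # approximate original weights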
LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:
07_Optimize_Model*
*Why just CPU version? Why not GPU?
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
BUT WAIT, THERE’S MORE!
ACTIVATION QUANTIZATION
§ Activations Not Known Ahead of Time
§ Depends on input, not easy to quantize
§ Requires Additional Calibration Step
§ Use a “representative” dataset
§ Per Neural Network Layer…
§ Collect histogram of activation values
§ Generate many quantized distributions with different saturation thresholds
§ Choose threshold to minimize…
KL_divergence(ref_distribution, quant_distribution)
§ Not Much Time or Data is Required (Minutes on Commodity Hardware)
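A hedged numpy sketch of the calibration idea: histogram the reference activations, clip at candidate thresholds, and keep the threshold with the smallest KL divergence (conceptual only):
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return np.sum(p * np.log((p + eps) / (q + eps)))

def choose_threshold(activations, candidate_thresholds):
    ref_hist, edges = np.histogram(activations, bins=2048)
    best_t, best_kl = None, np.inf
    for t in candidate_thresholds:
        clipped = np.clip(activations, -t, t)   # saturate at the candidate threshold
        quant_hist, _ = np.histogram(clipped, bins=2048, range=(edges[0], edges[-1]))
        kl = kl_divergence(ref_hist.astype(float), quant_hist.astype(float))
        if kl < best_kl:
            best_t, best_kl = t, kl
    return best_t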
AFTER ACTIVATION QUANTIZATION
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ quantize_nodes (activations)
§ Results
§ Larger graph, needs calibration!
Requires Additional
freeze_requantization_ranges
LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:
08_Optimize_Model_Activations
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
FREEZING MODEL FOR DEPLOYMENT
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ quantize_nodes
§ freeze_graph
§ Results
§ Variables -> Constants
Finally!
We’re Ready to Deploy!!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
MODEL SERVING TERMINOLOGY
§ Inference
§ Only Forward Propagation through Network
§ Predict, Classify, Regress, …
§ Bundle
§ GraphDef, Variables, Metadata, …
§ Assets
§ ie. Map of ClassificationID -> String
§ {9283: “penguin”, 9284: “bridge”}
§ Version
§ Every Model Has a Version Number (Integer)
§ Version Policy
§ ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
TENSORFLOW SERVING FEATURES
§ Supports Auto-Scaling
§ Custom Loaders beyond File-based
§ Tune for Low-latency or High-throughput
§ Serve Diff Models/Versions in Same Process
§ Customize Models Types beyond HashMap and TensorFlow
§ Customize Version Policies for A/B and Bandit Tests
§ Support Request Draining for Graceful Model Updates
§ Enable Request Batching for Diff Use Cases and HW
§ Supports Optimized Transport with GRPC and Protocol Buffers
PREDICTION SERVICE
§ Predict (Original, Generic)
§ Input: List of Tensor
§ Output: List of Tensor
§ Classify
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (class_label: String, score: float)
§ Regress
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (label: String, score: float)
PREDICTION INPUTS + OUTPUTS
§ SignatureDef
§ Defines inputs and outputs
§ Maps external (logical) to internal (physical) tensor names
§ Allows internal (physical) tensor names to change
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
graph = tf.get_default_graph()
x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')
inputs_map = {'inputs': x_observed}
outputs_map = {'outputs': y_pred}
predict_signature = signature_def_utils.predict_signature_def(
    inputs=inputs_map,
    outputs=outputs_map)
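The signature above is typically attached to a SavedModel with SavedModelBuilder; a hedged sketch (export path is illustrative, assumes sess holds the trained session):
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants

builder = saved_model_builder.SavedModelBuilder('./export/1')   # version directory
builder.add_meta_graph_and_variables(
    sess,
    tags=[tag_constants.SERVING],
    signature_def_map={'predict': predict_signature})
builder.save()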
MULTI-HEADED INFERENCE
§ Inputs Pass Through Model One Time
§ Model Returns Multiple Predictions:
1. Human-readable prediction (ie. “penguin”, “church”,…)
2. Final layer of scores (float vector)
§ Final Layer of floats Pass to the Next Model in Ensemble
§ Optimizes Bandwidth, CPU/GPU, Latency, Memory
§ Enables Complex Model Composing and Ensembling
BUILD YOUR OWN MODEL SERVER
§ Adapt GRPC(Google) <-> HTTP (REST of the World)
§ Perform Batch Inference vs. Request/Response
§ Handle Requests Asynchronously
§ Support Mobile, Embedded Inference
§ Customize Request Batching
§ Add Circuit Breakers, Fallbacks
§ Control Latency Requirements
§ Reduce Number of Moving Parts
#include "tensorflow_serving/model_servers/server_core.h"

class MyTensorFlowModelServer {
  ServerCore::Options options;
  // set options (model name, path, etc)
  std::unique_ptr<ServerCore> core;
  TF_CHECK_OK(
      ServerCore::Create(std::move(options), &core)
  );
};
Compile and Link with
libtensorflow.so
RUNTIME OPTION: NVIDIA TENSOR-RT
§ Post-Training Model Optimizations
§ Specific to Nvidia GPU
§ Similar to TF Graph Transform Tool
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
SAVED MODEL FORMAT
§ Navigate to the following notebook:
09_Deploy_Optimized_Model
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
REQUEST BATCH TUNING
§ max_batch_size
§ Enables throughput/latency tradeoff
§ Bounded by RAM
§ batch_timeout_micros
§ Defines batch time window, latency upper-bound
§ Bounded by RAM
§ num_batch_threads
§ Defines parallelism
§ Bounded by CPU cores
§ max_enqueued_batches
§ Defines queue upper bound, throttling
§ Bounded by RAM
Reaching either threshold
will trigger a batch
ADVANCED BATCHING & SERVING TIPS
§ Batch Just the GPU/TPU Portions of the Computation Graph
§ Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops
§ Distribute Large Models Into Shards Across TensorFlow Model Servers
§ Batch RNNs Used for Sequential and Time-Series Data
§ Find Best Batching Strategy For Your Data Through Experimentation
§ BasicBatchScheduler: Homogeneous requests (ie Regress or Classify)
§ SharedBatchScheduler: Mixed requests, multi-step, ensemble predict
§ StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads
§ Serve Only One (1) Model Inside One (1) TensorFlow Serving Process
§ Much Easier to Debug, Tune, Scale, and Manage Models in Production.
LET’S DEPLOY OPTIMIZED MODEL
§ Navigate to the following notebook:
10_Optimize_Model_Server
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
THANK YOU! QUESTIONS?
§ https://github.com/PipelineAI/pipeline/
§ Please Star 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
Contact Me
chris@pipeline.ai
@cfregly
Mais conteúdo relacionado

Mais procurados

Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...
Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...
Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...
Chris Fregly
 
London devops logging
London devops loggingLondon devops logging
London devops logging
Tomas Doran
 

Mais procurados (20)

Optimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on G...
Optimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on G...Optimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on G...
Optimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on G...
 
High Performance TensorFlow in Production - Big Data Spain - Madrid - Nov 15 ...
High Performance TensorFlow in Production - Big Data Spain - Madrid - Nov 15 ...High Performance TensorFlow in Production - Big Data Spain - Madrid - Nov 15 ...
High Performance TensorFlow in Production - Big Data Spain - Madrid - Nov 15 ...
 
PipelineAI Optimizes Your Enterprise AI Pipeline from Distributed Training to...
PipelineAI Optimizes Your Enterprise AI Pipeline from Distributed Training to...PipelineAI Optimizes Your Enterprise AI Pipeline from Distributed Training to...
PipelineAI Optimizes Your Enterprise AI Pipeline from Distributed Training to...
 
Building Google Cloud ML Engine From Scratch on AWS with PipelineAI - ODSC Lo...
Building Google Cloud ML Engine From Scratch on AWS with PipelineAI - ODSC Lo...Building Google Cloud ML Engine From Scratch on AWS with PipelineAI - ODSC Lo...
Building Google Cloud ML Engine From Scratch on AWS with PipelineAI - ODSC Lo...
 
Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...
Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...
Building Google's ML Engine from Scratch on AWS with GPUs, Kubernetes, Istio,...
 
Optimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on GPUs
Optimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on GPUsOptimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on GPUs
Optimize + Deploy Distributed Tensorflow, Spark, and Scikit-Learn Models on GPUs
 
PipelineAI + AWS SageMaker + Distributed TensorFlow + AI Model Training and S...
PipelineAI + AWS SageMaker + Distributed TensorFlow + AI Model Training and S...PipelineAI + AWS SageMaker + Distributed TensorFlow + AI Model Training and S...
PipelineAI + AWS SageMaker + Distributed TensorFlow + AI Model Training and S...
 
Go語言開發APM微服務在Kubernetes之經驗分享
Go語言開發APM微服務在Kubernetes之經驗分享Go語言開發APM微服務在Kubernetes之經驗分享
Go語言開發APM微服務在Kubernetes之經驗分享
 
High performance network programming on the jvm oscon 2012
High performance network programming on the jvm   oscon 2012 High performance network programming on the jvm   oscon 2012
High performance network programming on the jvm oscon 2012
 
Ndp Slides
Ndp SlidesNdp Slides
Ndp Slides
 
London devops logging
London devops loggingLondon devops logging
London devops logging
 
Introduction to Polyaxon
Introduction to PolyaxonIntroduction to Polyaxon
Introduction to Polyaxon
 
Mасштабирование микросервисов на Go, Matt Heath (Hailo)
Mасштабирование микросервисов на Go, Matt Heath (Hailo)Mасштабирование микросервисов на Go, Matt Heath (Hailo)
Mасштабирование микросервисов на Go, Matt Heath (Hailo)
 
Profiling & Testing with Spark
Profiling & Testing with SparkProfiling & Testing with Spark
Profiling & Testing with Spark
 
Hands on Docker - Launch your own LEMP or LAMP stack - SunshinePHP
Hands on Docker - Launch your own LEMP or LAMP stack - SunshinePHPHands on Docker - Launch your own LEMP or LAMP stack - SunshinePHP
Hands on Docker - Launch your own LEMP or LAMP stack - SunshinePHP
 
Cloud Foundry 百日行 振り返り
Cloud Foundry 百日行 振り返りCloud Foundry 百日行 振り返り
Cloud Foundry 百日行 振り返り
 
Optimizing Application Performance on Kubernetes
Optimizing Application Performance on KubernetesOptimizing Application Performance on Kubernetes
Optimizing Application Performance on Kubernetes
 
Adding replication protocol support for psycopg2
Adding replication protocol support for psycopg2Adding replication protocol support for psycopg2
Adding replication protocol support for psycopg2
 
Integrating multiple CDN providers at Etsy - Velocity Europe (London) 2013
Integrating multiple CDN providers at Etsy - Velocity Europe (London) 2013Integrating multiple CDN providers at Etsy - Velocity Europe (London) 2013
Integrating multiple CDN providers at Etsy - Velocity Europe (London) 2013
 
Varnish Configuration Step by Step
Varnish Configuration Step by StepVarnish Configuration Step by Step
Varnish Configuration Step by Step
 

Semelhante a High Performance Distributed TensorFlow in Production with GPUs - NIPS 2017 - LA Big Data Meetup - SoCal PyData Meetup - Dec 2017

Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...
Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...
Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...
Chris Fregly
 

Semelhante a High Performance Distributed TensorFlow in Production with GPUs - NIPS 2017 - LA Big Data Meetup - SoCal PyData Meetup - Dec 2017 (20)

Optimizing, profiling and deploying high performance Spark ML and TensorFlow ...
Optimizing, profiling and deploying high performance Spark ML and TensorFlow ...Optimizing, profiling and deploying high performance Spark ML and TensorFlow ...
Optimizing, profiling and deploying high performance Spark ML and TensorFlow ...
 
Intro - End to end ML with Kubeflow @ SignalConf 2018
Intro - End to end ML with Kubeflow @ SignalConf 2018Intro - End to end ML with Kubeflow @ SignalConf 2018
Intro - End to end ML with Kubeflow @ SignalConf 2018
 
TensorFlowOnSpark: Scalable TensorFlow Learning on Spark Clusters
TensorFlowOnSpark: Scalable TensorFlow Learning on Spark ClustersTensorFlowOnSpark: Scalable TensorFlow Learning on Spark Clusters
TensorFlowOnSpark: Scalable TensorFlow Learning on Spark Clusters
 
饿了么 TensorFlow 深度学习平台:elearn
饿了么 TensorFlow 深度学习平台:elearn饿了么 TensorFlow 深度学习平台:elearn
饿了么 TensorFlow 深度学习平台:elearn
 
Tallinn Estonia Advanced Java Meetup Spark + TensorFlow = TensorFrames Oct 24...
Tallinn Estonia Advanced Java Meetup Spark + TensorFlow = TensorFrames Oct 24...Tallinn Estonia Advanced Java Meetup Spark + TensorFlow = TensorFrames Oct 24...
Tallinn Estonia Advanced Java Meetup Spark + TensorFlow = TensorFrames Oct 24...
 
The Flow of TensorFlow
The Flow of TensorFlowThe Flow of TensorFlow
The Flow of TensorFlow
 
How to Run TensorFlow Cheaper in the Cloud Using Elastic GPUs
How to Run TensorFlow Cheaper in the Cloud Using Elastic GPUsHow to Run TensorFlow Cheaper in the Cloud Using Elastic GPUs
How to Run TensorFlow Cheaper in the Cloud Using Elastic GPUs
 
Apache Submarine: Unified Machine Learning Platform
Apache Submarine: Unified Machine Learning PlatformApache Submarine: Unified Machine Learning Platform
Apache Submarine: Unified Machine Learning Platform
 
Deep Learning with Apache Spark and GPUs with Pierce Spitler
Deep Learning with Apache Spark and GPUs with Pierce SpitlerDeep Learning with Apache Spark and GPUs with Pierce Spitler
Deep Learning with Apache Spark and GPUs with Pierce Spitler
 
Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...
Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...
Hyper-Parameter Tuning Across the Entire AI Pipeline GPU Tech Conference San ...
 
Clustering tensor flow con kubernetes y raspberry pi
Clustering tensor flow con kubernetes y raspberry piClustering tensor flow con kubernetes y raspberry pi
Clustering tensor flow con kubernetes y raspberry pi
 
How Many Slaves (Ukoug)
How Many Slaves (Ukoug)How Many Slaves (Ukoug)
How Many Slaves (Ukoug)
 
Quantifying Container Runtime Performance: OSCON 2017 Open Container Day
Quantifying Container Runtime Performance: OSCON 2017 Open Container DayQuantifying Container Runtime Performance: OSCON 2017 Open Container Day
Quantifying Container Runtime Performance: OSCON 2017 Open Container Day
 
Optimizing, Profiling, and Deploying High Performance Spark ML and TensorFlow AI
Optimizing, Profiling, and Deploying High Performance Spark ML and TensorFlow AIOptimizing, Profiling, and Deploying High Performance Spark ML and TensorFlow AI
Optimizing, Profiling, and Deploying High Performance Spark ML and TensorFlow AI
 
Computação Paralela: Benefícios e Desafios - Intel Software Conference 2013
Computação Paralela: Benefícios e Desafios - Intel Software Conference 2013Computação Paralela: Benefícios e Desafios - Intel Software Conference 2013
Computação Paralela: Benefícios e Desafios - Intel Software Conference 2013
 
GPU and Deep learning best practices
GPU and Deep learning best practicesGPU and Deep learning best practices
GPU and Deep learning best practices
 
TensorFlow meetup: Keras - Pytorch - TensorFlow.js
TensorFlow meetup: Keras - Pytorch - TensorFlow.jsTensorFlow meetup: Keras - Pytorch - TensorFlow.js
TensorFlow meetup: Keras - Pytorch - TensorFlow.js
 
Deep Learning with Spark and GPUs
Deep Learning with Spark and GPUsDeep Learning with Spark and GPUs
Deep Learning with Spark and GPUs
 
Powering Tensorflow with big data using Apache Beam, Flink, and Spark - OSCON...
Powering Tensorflow with big data using Apache Beam, Flink, and Spark - OSCON...Powering Tensorflow with big data using Apache Beam, Flink, and Spark - OSCON...
Powering Tensorflow with big data using Apache Beam, Flink, and Spark - OSCON...
 
Atlanta Hadoop Users Meetup 09 21 2016
Atlanta Hadoop Users Meetup 09 21 2016Atlanta Hadoop Users Meetup 09 21 2016
Atlanta Hadoop Users Meetup 09 21 2016
 

Mais de Chris Fregly

Amazon reInvent 2020 Recap: AI and Machine Learning
Amazon reInvent 2020 Recap:  AI and Machine LearningAmazon reInvent 2020 Recap:  AI and Machine Learning
Amazon reInvent 2020 Recap: AI and Machine Learning
Chris Fregly
 
KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...
KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...
KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...
Chris Fregly
 
Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...
Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...
Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...
Chris Fregly
 
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Chris Fregly
 
Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...
Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...
Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...
Chris Fregly
 

Mais de Chris Fregly (16)

AWS reInvent 2022 reCap AI/ML and Data
AWS reInvent 2022 reCap AI/ML and DataAWS reInvent 2022 reCap AI/ML and Data
AWS reInvent 2022 reCap AI/ML and Data
 
Pandas on AWS - Let me count the ways.pdf
Pandas on AWS - Let me count the ways.pdfPandas on AWS - Let me count the ways.pdf
Pandas on AWS - Let me count the ways.pdf
 
Ray AI Runtime (AIR) on AWS - Data Science On AWS Meetup
Ray AI Runtime (AIR) on AWS - Data Science On AWS MeetupRay AI Runtime (AIR) on AWS - Data Science On AWS Meetup
Ray AI Runtime (AIR) on AWS - Data Science On AWS Meetup
 
Smokey and the Multi-Armed Bandit Featuring BERT Reynolds Updated
Smokey and the Multi-Armed Bandit Featuring BERT Reynolds UpdatedSmokey and the Multi-Armed Bandit Featuring BERT Reynolds Updated
Smokey and the Multi-Armed Bandit Featuring BERT Reynolds Updated
 
Amazon reInvent 2020 Recap: AI and Machine Learning
Amazon reInvent 2020 Recap:  AI and Machine LearningAmazon reInvent 2020 Recap:  AI and Machine Learning
Amazon reInvent 2020 Recap: AI and Machine Learning
 
Waking the Data Scientist at 2am: Detect Model Degradation on Production Mod...
Waking the Data Scientist at 2am:  Detect Model Degradation on Production Mod...Waking the Data Scientist at 2am:  Detect Model Degradation on Production Mod...
Waking the Data Scientist at 2am: Detect Model Degradation on Production Mod...
 
Quantum Computing with Amazon Braket
Quantum Computing with Amazon BraketQuantum Computing with Amazon Braket
Quantum Computing with Amazon Braket
 
15 Tips to Scale a Large AI/ML Workshop - Both Online and In-Person
15 Tips to Scale a Large AI/ML Workshop - Both Online and In-Person15 Tips to Scale a Large AI/ML Workshop - Both Online and In-Person
15 Tips to Scale a Large AI/ML Workshop - Both Online and In-Person
 
AWS Re:Invent 2019 Re:Cap
AWS Re:Invent 2019 Re:CapAWS Re:Invent 2019 Re:Cap
AWS Re:Invent 2019 Re:Cap
 
KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...
KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...
KubeFlow + GPU + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTo...
 
Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...
Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...
Swift for TensorFlow - Tanmay Bakshi - Advanced Spark and TensorFlow Meetup -...
 
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
 
Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...
Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...
Spark SQL Catalyst Optimizer, Custom Expressions, UDFs - Advanced Spark and T...
 
PipelineAI Continuous Machine Learning and AI - Rework Deep Learning Summit -...
PipelineAI Continuous Machine Learning and AI - Rework Deep Learning Summit -...PipelineAI Continuous Machine Learning and AI - Rework Deep Learning Summit -...
PipelineAI Continuous Machine Learning and AI - Rework Deep Learning Summit -...
 
PipelineAI Real-Time Machine Learning - Global Artificial Intelligence Confer...
PipelineAI Real-Time Machine Learning - Global Artificial Intelligence Confer...PipelineAI Real-Time Machine Learning - Global Artificial Intelligence Confer...
PipelineAI Real-Time Machine Learning - Global Artificial Intelligence Confer...
 
Advanced Spark and TensorFlow Meetup - Dec 12 2017 - Dong Meng, MapR + Kubern...
Advanced Spark and TensorFlow Meetup - Dec 12 2017 - Dong Meng, MapR + Kubern...Advanced Spark and TensorFlow Meetup - Dec 12 2017 - Dong Meng, MapR + Kubern...
Advanced Spark and TensorFlow Meetup - Dec 12 2017 - Dong Meng, MapR + Kubern...
 

Último

CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
anilsa9823
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
mohitmore19
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
anilsa9823
 
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
Health
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 

Último (20)

Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Models
 
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AISyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptx
 
Software Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsSoftware Quality Assurance Interview Questions
Software Quality Assurance Interview Questions
 
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
 
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docx
 
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
 
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.js
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial Goals
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.com
 
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS LiveVip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 

High Performance Distributed TensorFlow in Production with GPUs - NIPS 2017 - LA Big Data Meetup - SoCal PyData Meetup - Dec 2017

  • 1. HIGH PERFORMANCE DISTRIBUTED TENSORFLOW IN PRODUCTION WITH GPUS (AND KUBERNETES) NIPS CONFERENCE LOS ANGELES BIG DATA MEETUP SO-CAL PYDATA MEETUP DECEMBER 2017 CHRIS FREGLY FOUNDER @ PIPELINE.AI
  • 2. INTRODUCTIONS: ME § Chris Fregly, Founder @ PipelineAI § Formerly Netflix, Databricks, IBM Spark Tech § Advanced Spark and TensorFlow Meetup Please Join Our 60,000+ Global Members!! Contact Me chris@pipeline.ai @cfregly Global Locations * San Francisco * Chicago * Austin * Washington DC * Dusseldorf * London
  • 3. INTRODUCTIONS: YOU § Software Engineer, Data Scientist, Data Engineer, Data Analyst § Interested in Optimizing and Deploying TF Models to Production § Nice to Have a Working Knowledge of TensorFlow (Not Required)
  • 4. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving
  • 5. PIPELINE.AI IS 100% OPEN SOURCE § https://github.com/PipelineAI/pipeline/ § Please Star 🌟 this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
  • 6. PIPELINE.AI OVERVIEW 450,000 Docker Downloads 60,000 Users Registered for GA 60,000 Meetup Members 50,000 LinkedIn Followers 2,200 GitHub Stars 12 Enterprise Beta Users
  • 7. WHY HEAVY FOCUS ON MODEL SERVING? Model Training Batch & Boring Offline in Research Lab Pipeline Ends at Training No Insight into Live Production Small Number of Data Scientists Optimizations Very Well-Known Real-Time & Exciting!! Online in Live Production Pipeline Extends into Production Continuous Insight into Live Production Huuuuuuge Number of Application Users **Many Optimizations Not Yet Utilized <<< Model Serving 100’s Training Jobs per Day 1,000,000’s Predictions per Sec
  • 8. COMPARE MODELS OFFLINE & ONLINE § Offline, Batch Metrics § Validation + Training Accuracy § CPU + GPU Utilization § Live Prediction Values § Compare Relative Precision § Newly-Seen, Streaming Data § Online, Real-Time Metrics § Response Time, Throughput § Cost ($) Per Prediction
  • 10. SETUP ENVIRONMENT § Step 1: Browse to the following: http://allocator.community.pipeline.ai/allocate § Step 2: Browse to the following: http://<ip-address> § Step 3: Browse around. I will provide a Jupyter Username/Password soon. Need Help? Use the Chat!
  • 12. HANDS-ON EXERCISES § Combo of Jupyter Notebooks and Command Line § Command Line through Jupyter Terminal § Some Exercises Based on Experimental Features You May See Errors. Stay Calm. It’s OK!!
  • 13. LET’S EXPLORE OUR ENVIRONMENT § Navigate to the following notebook: 01_Explore_Environment § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 15. BREAK Need Help? Use the Chat! § Please Star 🌟 this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
  • 16. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Feed, Train, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 17. SETTING UP TENSORFLOW WITH GPUS § Very Painful! § Especially inside Docker § Use nvidia-docker § Especially on Kubernetes! § Use the Latest Kubernetes (with Init Script Support) § http://pipeline.ai for GitHub + DockerHub Links
  • 18. TENSORFLOW + CUDA + NVIDIA GPU
  • 19. GPU HALF-PRECISION SUPPORT § FP32 is “Full Precision”, FP16 is “Half Precision” § Two(2) FP16’s in Each FP32 GPU Core for 2x Throughput! § Lower Precision is OK for Approx. Deep Learning Use Cases § The Network Matters Most – Not Individual Neuron Accuracy § Supported by Pascal P100 (2016) and Volta V100 (2017) You Can Set TF_FP16_MATMUL_USE_FP32_COMPUTE=0 on GPU w/ Compute Capability(CC) 5.3+
  • 20. VOLTA V100 (2017) VS. PASCAL P100 (2016) § 84 Streaming Multiprocessors (SM’s) § 5,376 GPU Cores § 672 Tensor Cores (ie. Google TPU) § Mixed FP16/FP32 Precision § Matrix Dims Should be Multiples of 8 § More Shared Memory § New L0 Instruction Cache § Faster L1 Data Cache § V100 vs. P100 Performance § 12x Training, 6x Inference
  • 21. FP32 VS. FP16 ON AWS GPU INSTANCES FP16 Half Precision 87.2 T ops/second for p3 Volta V100 4.1 T ops/second for g3 Tesla M60 1.6 T ops/second for p2 Tesla K80 FP32 Full Precision 15.4 T ops/second for p3 Volta V100 4.0 T ops/second for g3 Tesla M60 3.3 T ops/second for p2 Tesla K80
  • 22. § Currently Supports the Following: § Tesla K80 § Pascal P100 § Volta V100 Coming Soon? § TPUs (Only in Google Cloud) § Attach GPUs to CPU Instances § Similar to AWS Elastic GPU, except less confusing WHAT ABOUT GOOGLE CLOUD?
  • 23. V100 AND CUDA 9 § Independent Thread Scheduling - Finally!! § Similar to CPU fine-grained thread synchronization semantics § Allows GPU to yield execution of any thread § Still Optimized for SIMT (Single Instruction, Multiple Threads) § SIMT units automatically scheduled together § Explicit Synchronization
  • 24. GPU CUDA PROGRAMMING § Barbaric, But Fun § Must Know Hardware Very Well § Hardware Changes are Painful § Use the Profilers & Debuggers
  • 25. CUDA STREAMS § Asynchronous I/O Transfer § Overlap Compute and I/O § Keep GPUs Saturated! § Used Heavily by TensorFlow
  • 26. CUDA SHARED AND UNIFIED MEMORY
  • 27. LET’S SEE WHAT THIS THING CAN DO! § Navigate to the following notebook: 01a_Explore_GPU 01b_Explore_Numba § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 28. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Feed, Train, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 29. TRAINING TERMINOLOGY § Tensors: N-Dimensional Arrays § ie. Scalar, Vector, Matrix § Operations: MatMul, Add, SummaryLog,… § Graph: Graph of Operations (DAG) § Session: Contains Graph(s) § Feeds: Feed Inputs into Placeholder § Fetches: Fetch Output from Operation § Variables: What We Learn Through Training § aka “Weights”, “Parameters” § Devices: Hardware Device (GPU, CPU, TPU, ...) -TensorFlow- Trains Variables -User- Fetches Outputs -User- Feeds Inputs -TensorFlow- Performs Operations -TensorFlow- Flows Tensors with tf.device(“/cpu:0,/gpu:15”):
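A minimal TF 1.x sketch that ties these terms together - graph, session, feeds, fetches, variables, operations, and device placement. The values and tensor names are made up for illustration:

    import tensorflow as tf

    # Graph: a DAG of Operations, built lazily and executed inside a Session.
    with tf.device("/cpu:0"):                           # Device: where these ops run
        x = tf.placeholder(tf.float32, name="x")        # Feed: user supplies x at run time
        W = tf.Variable(0.5, name="W")                  # Variables: learned during training
        b = tf.Variable(-1.0, name="b")
        y = tf.add(tf.multiply(W, x), b, name="y")      # Operations: Mul, Add, ...

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Feed the input placeholder, fetch the output of the "y" operation.
        print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))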
  • 30. TENSORFLOW SESSION Session graph: GraphDef Variables: “W” : 0.328 “b” : -1.407 Variables are Randomly Initialized, then Periodically Checkpointed GraphDef is Created During Training, then Frozen for Inference
  • 31. TENSORFLOW GRAPH EXECUTION § Lazy Execution by Default § Similar to Spark § Eager Execution Now Supported (TensorFlow 1.4+) § Similar to PyTorch § "Linearize” Execution to Minimize RAM Usage § Useful on Single GPU with Limited RAM
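If you want to try eager execution, a minimal sketch follows. This assumes a 1.x build where the feature is available; the exact entry point moved between releases, so treat the call name as an assumption for your version:

    import tensorflow as tf

    # Depending on the release this is tf.enable_eager_execution() (1.7+)
    # or tf.contrib.eager.enable_eager_execution() (earlier previews).
    # It must be the first TensorFlow call in the process.
    tf.enable_eager_execution()

    # Ops now run immediately (PyTorch-style); no Session or lazy graph.
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(tf.matmul(x, x))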
  • 32. OPERATION PARALLELISM § Inter-Op (Between-Op) Parallelism § By default, TensorFlow runs multiple ops in parallel § Useful for low core and small memory/cache envs § Set to one (1) § Intra-Op (Within-Op) Parallelism § Different threads can use same set of data in RAM § Useful for compute-bound workloads (CNNs) § Set to # of cores (>=2)
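Both knobs are set on the Session config; a sketch (the thread counts are illustrative, not recommendations):

    import multiprocessing
    import tensorflow as tf

    config = tf.ConfigProto(
        inter_op_parallelism_threads=1,                             # run one op at a time
        intra_op_parallelism_threads=multiprocessing.cpu_count())   # threads inside a single op (e.g. a large MatMul)

    sess = tf.Session(config=config)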
  • 33. TENSORFLOW MODEL § MetaGraph § Combines GraphDef and Metadata § GraphDef § Architecture of your model (nodes, edges) § Metadata § Asset: Accompanying assets to your model § SignatureDef: Maps external to internal tensors § Variables § Stored separately during training (checkpoint) § Allows training to continue from any checkpoint § Variables are “frozen” into Constants when preparing for inference GraphDef x W mul add b MetaGraph Metadata Assets SignatureDef Tags Version Variables: “W” : 0.328 “b” : -1.407
  • 34. SAVED MODEL FORMAT § Different Format than Traditional Exporter § Contains Checkpoints, 1..* MetaGraph’s, and Assets § Export Manually with SavedModelBuilder § Estimator.export_savedmodel() § Hooks to Generate SignatureDef § Use saved_model_cli to Verify § Used by TensorFlow Serving § New Standard Export Format? (Catching on Slowly…)
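A hedged sketch of the Estimator export path; the feature name 'x', the directories, and the model_fn are placeholders for illustration:

    import tensorflow as tf

    # estimator = tf.estimator.Estimator(model_fn=my_model_fn, model_dir=...)  # assumed to exist

    # Describe the raw tensors that will be fed at serving time.
    serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
        {'x': tf.placeholder(tf.float32, shape=[None, 1], name='x')})

    # Writes saved_model.pb plus variables/ under a timestamped sub-directory:
    # estimator.export_savedmodel('./export/my_model', serving_input_fn)

    # Inspect the result:  saved_model_cli show --dir ./export/my_model/<timestamp> --all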
  • 35. BATCH NORMALIZATION (2015) § Each Mini-Batch May Have Wildly Different Distributions § Normalize per Batch (and Layer) § Faster Training, Learns Quicker § Final Model is More Accurate § TensorFlow is already on 2nd Generation Batch Algorithm § First-Class Support for Fusing Batch Norm Layers § Final mean + variance Are Folded Into Graph Later -- (Almost) Always Use Batch Normalization! -- z = tf.matmul(a_prev, W) a = tf.nn.relu(z) a_mean, a_var = tf.nn.moments(a, [0]) scale = tf.Variable(tf.ones([depth/channels])) beta = tf.Variable(tf.zeros([depth/channels])) bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
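As an alternative to wiring tf.nn.moments and tf.nn.batch_normalization by hand as on the slide, the layer API exposes the fused kernel directly; a sketch (layer sizes and the training-flag wiring are illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 128])
    is_training = tf.placeholder(tf.bool, name='is_training')

    z = tf.layers.dense(x, units=64)
    # fused=True selects the fused batch-norm kernel; moving mean/variance are
    # tracked automatically and can later be folded into the graph for inference.
    bn = tf.layers.batch_normalization(z, training=is_training, fused=True)
    a = tf.nn.relu(bn)

    # The moving-average update ops must be run together with the train op:
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)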
  • 36. DROPOUT (2014) § Training Technique § Prevents Overfitting § Helps Avoid Local Minima § Inherent Ensembling Technique § Creates and Combines Different Neural Architectures § Expressed as Probability Percentage (ie. 50%) § Boost Other Weights During Validation & Prediction Perform Dropout (Training Phase) Boost for Dropout (Validation & Prediction Phase) 0% Dropout 50% Dropout
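A common TF 1.x pattern is to feed the keep probability so one graph covers both phases. Note that tf.nn.dropout uses inverted dropout - it rescales the kept activations by 1/keep_prob during training - so no extra boost is needed at prediction time. A sketch with illustrative layer sizes:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 256])
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    hidden = tf.layers.dense(x, units=128, activation=tf.nn.relu)
    # Kept units are scaled by 1/keep_prob so expected activations match at inference.
    dropped = tf.nn.dropout(hidden, keep_prob=keep_prob)
    logits = tf.layers.dense(dropped, units=10)

    # Training step:        sess.run(train_op, feed_dict={x: batch, keep_prob: 0.5})
    # Validation / serving: sess.run(logits,  feed_dict={x: batch, keep_prob: 1.0})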
  • 37. EXTEND EXISTING DATA PIPELINES § Data Processing § HDFS/Hadoop § Spark § Containers § Docker § Schedulers § Kubernetes § Mesos <dependency> <groupId>org.tensorflow</groupId> <artifactId>tensorflow-hadoop</artifactId> </dependency> https://github.com/tensorflow/ecosystem
  • 38. FEED TENSORFLOW TRAINING PIPELINE § Training is Limited by the Ingestion Pipeline § THE Number One Problem We See Today § Scaling GPUs Up / Out Doesn’t Help § GPUs are Heavily Under-Utilized Tesla K80 Volta V100
  • 39. DON’T USE FEED_DICT!! § feed_dict Requires Python <-> C++ Serialization § Not Optimized for Production Ingestion Pipelines § Retrieves Next Batch After Current Batch is Done § Single-Threaded, Synchronous § CPUs/GPUs Not Fully Utilized! § Use Queue or Dataset APIs § Queues are old & complex sess.run(train_step, feed_dict={…})
  • 40. DETECT UNDERUTILIZED CPUS, GPUS § Instrument training code to generate “timelines” § Analyze with Google Web Tracing Framework (WTF) § Monitor CPU with top, GPU with nvidia-smi http://google.github.io/tracing-framework/ from tensorflow.python.client import timeline trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('timeline.json', 'w') as trace_file: trace_file.write( trace.generate_chrome_trace_format(show_memory=True))
  • 41. QUEUES § More than traditional Queue § Uses CUDA Streams § Perform I/O, pre-processing, cropping, shuffling, … § Pull from HDFS, S3, Google Storage, Kafka, ... § Combine many small files into large TFRecord files § Use CPUs to free GPUs for compute § Helps saturate CPUs and GPUs
  • 42. QUEUE CAPACITY PLANNING § batch_size § # examples / batch (ie. 64 jpg) § Limited by GPU RAM § num_processing_threads § CPU threads pull and pre-process batches of data § Limited by CPU Cores § queue_capacity § Limited by CPU RAM (ie. 5 * batch_size)
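A sketch of how these three knobs map onto the queue-based input API; the filename is a placeholder and the record parsing is elided:

    import tensorflow as tf

    BATCH_SIZE = 64
    NUM_THREADS = 8                      # CPU threads pulling and pre-processing
    QUEUE_CAPACITY = 5 * BATCH_SIZE      # bounded by CPU RAM

    filename_queue = tf.train.string_input_producer(['train-00000.tfrecord'])  # placeholder file
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)
    # ... parse `serialized` into (image, label) tensors here ...

    # images, labels = tf.train.shuffle_batch(
    #     [image, label],
    #     batch_size=BATCH_SIZE,
    #     num_threads=NUM_THREADS,
    #     capacity=QUEUE_CAPACITY,
    #     min_after_dequeue=BATCH_SIZE)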
  • 43. DATASET API tf.Tensor => tf.data.Dataset Functional Transformations Python Generator => tf.data.Dataset Dataset.from_tensors((features, labels)) Dataset.from_tensor_slices((features, labels)) TextLineDataset(filenames) dataset.map(lambda x: tf.decode_jpeg(x)) dataset.repeat(NUM_EPOCHS) dataset.batch(BATCH_SIZE) def generator(): while True: yield ... dataset.from_generator(generator, tf.int32) Dataset => One-Shot Iterator Dataset => Initializable Iter iter = dataset.make_one_shot_iterator() next_element = iter.get_next() while …: sess.run(next_element) iter = dataset.make_initializable_iterator() sess.run(iter.initializer, feed_dict=PARAMS) next_element = iter.get_next() while …: sess.run(next_element) TIP: Use Dataset.prefetch() and parallel version of Dataset.map()
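Putting the slide's pieces together with the prefetch/parallel-map tip applied, a typical TFRecord ingestion pipeline might look like this; the filename, the feature spec, and the tuning constants are placeholders:

    import tensorflow as tf

    BATCH_SIZE = 64
    NUM_EPOCHS = 10

    def parse_fn(serialized):
        # Placeholder feature spec -- adapt to your TFRecord schema.
        features = tf.parse_single_example(serialized, {
            'image': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
        image = tf.image.decode_jpeg(features['image'], channels=3)
        image = tf.image.resize_images(image, [224, 224])
        return image, features['label']

    dataset = (tf.data.TFRecordDataset(['train-00000.tfrecord'])
               .map(parse_fn, num_parallel_calls=8)   # parallel pre-processing on CPU
               .shuffle(buffer_size=10000)
               .repeat(NUM_EPOCHS)
               .batch(BATCH_SIZE)
               .prefetch(1))                          # keep the next batch staged

    iterator = dataset.make_one_shot_iterator()
    images, labels = iterator.get_next()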
  • 44. FUTURE OF DATASET API § Replace Queue § More Functional Operators § Automatic GPU Data Staging § Under-utilized GPUs Assisting with Data Ingestion § Advanced, RL-based Device Placement Strategies
  • 45. LET’S FEED DATA WITH A QUEUE § Navigate to the following notebook: 02_Feed_Queue_HDFS § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 47. BREAK Need Help? Use the Chat! § Please Star 🌟 this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
  • 48. LET’S TRAIN A MODEL (CPU) § Navigate to the following notebook: 03_Train_Model_CPU § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 49. LET’S TRAIN A MODEL (GPU) § Navigate to the following notebook: 03a_Train_Model_GPU § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 50. TENSORFLOW DEBUGGER § Step through Operations § Inspect Inputs and Outputs § Wrap Session in Debug Session sess = tf.Session(config=config) sess = tf_debug.LocalCLIDebugWrapperSession(sess)
  • 51. LET’S DEBUG A MODEL § Navigate to the following notebook: 04_Debug_Model § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 52. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 53. SINGLE NODE, MULTI-GPU TRAINING § cpu:0 § By default, all CPUs § Requires extra config to target a CPU § gpu:0..n § Each GPU has a unique id § TF usually prefers a single GPU § xla_cpu:0, xla_gpu:0..n § “JIT Compiler Device” § Hints TensorFlow to attempt JIT Compile with tf.device(“/cpu:0”): with tf.device(“/gpu:0”): with tf.device(“/gpu:1”): GPU 0 GPU 1
  • 54. DISTRIBUTED, MULTI-NODE TRAINING § TensorFlow Automatically Inserts Send and Receive Ops into Graph § Parameter Server Synchronously Aggregates Updates to Variables § Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS Worker0 Worker0 Worker1 Worker0 Worker1 Worker2 gpu0 gpu1 gpu2 gpu3 gpu0 gpu1 gpu2 gpu3 gpu0 gpu1 gpu2 gpu3 gpu0 gpu1 gpu0 gpu0 Single Node Multiple Nodes
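A minimal between-graph replication sketch of the cluster wiring (hostnames, ports, and task indices are placeholders); replica_device_setter pins variables onto the parameter-server job while compute ops stay on the local worker, and TensorFlow inserts the Send/Recv ops at the device boundaries:

    import tensorflow as tf

    cluster = tf.train.ClusterSpec({
        'ps':     ['ps0:2222'],
        'worker': ['worker0:2222', 'worker1:2222'],
    })

    # Each process starts one server with its own job_name / task_index.
    server = tf.train.Server(cluster, job_name='worker', task_index=0)

    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:0', cluster=cluster)):
        # Variables land on /job:ps, ops on this worker.
        global_step = tf.train.get_or_create_global_step()
        # ... build the model and train_op here ...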
  • 55. DATA PARALLEL VS. MODEL PARALLEL § Data Parallel (“Between-Graph Replication”) § Send exact same model to each device § Each device operates on partition of data § ie. Spark sends same function to many workers § Each worker operates on their partition of data § Model Parallel (“In-Graph Replication”) § Send different partition of model to each device § Each device operates on all data § Difficult, but required for larger models with lower-memory GPUs
  • 56. SYNCHRONOUS VS. ASYNCHRONOUS § Synchronous § Nodes compute gradients § Nodes update Parameter Server (PS) § Nodes sync on PS for latest gradients § Asynchronous § Some nodes delay in computing gradients § Nodes don’t update PS § Nodes get stale gradients from PS § May not converge due to stale reads!
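Synchronous aggregation can be expressed by wrapping the optimizer; a sketch in which the replica counts, the chief flag, and the loss are placeholders, and the returned hook must be handed to the training session:

    import tensorflow as tf

    NUM_WORKERS = 3
    is_chief = True   # placeholder: normally derived from task index 0

    opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    sync_opt = tf.train.SyncReplicasOptimizer(
        opt,
        replicas_to_aggregate=NUM_WORKERS,   # gradients gathered before each variable update
        total_num_replicas=NUM_WORKERS)

    # train_op = sync_opt.minimize(loss, global_step=global_step)
    sync_hook = sync_opt.make_session_run_hook(is_chief)
    # Pass sync_hook into MonitoredTrainingSession(hooks=[sync_hook], ...)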
  • 57. CHIEF WORKER § Chief Defaults to Worker Task 0 § Task 0 is guaranteed to exist § Performs Maintenance Tasks § Writes log summaries § Instructs PS to checkpoint vars § Performs PS health checks § (Re-)Initialize variables at (re-)start of training
  • 58. NODE AND PROCESS FAILURES § Checkpoint to Persistent Storage (HDFS, S3) § Use MonitoredTrainingSession and Hooks § Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos) § Understand Failure Modes and Recovery States Stateless, Not Bad: Training Continues Stateful, Bad: Training Must Stop Dios Mio! Long Night Ahead…
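A sketch of the recommended session wrapper: it periodically checkpoints to the shared directory and automatically restores from the latest checkpoint when a restarted worker rejoins. The checkpoint path and the trivial stand-in train_op are placeholders; in production point checkpoint_dir at HDFS or S3:

    import tensorflow as tf

    global_step = tf.train.get_or_create_global_step()
    train_op = tf.assign_add(global_step, 1)       # stand-in for a real training op

    hooks = [tf.train.StopAtStepHook(last_step=1000)]

    with tf.train.MonitoredTrainingSession(
            master='',                             # or server.target when running in a cluster
            is_chief=True,                         # the chief writes checkpoints and summaries
            checkpoint_dir='/tmp/my_model_ckpt',   # placeholder; use persistent storage in production
            save_checkpoint_secs=600,
            hooks=hooks) as mon_sess:
        while not mon_sess.should_stop():
            mon_sess.run(train_op)                 # resumes from the last checkpoint after a crash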
  • 59. ESTIMATOR + EXPERIMENT API § Supports Keras! § Unified API for Local + Distributed § Provide Clear Path to Production § Enable Rapid Model Experiments § Provide Flexible Parameter Tuning § Enable Downstream Optimizing & Serving Infrastructure § Nudge Users to Best Practices Through Opinions § Provide Hooks/Callbacks to Override Opinions
  • 60. ESTIMATOR API § “Train-to-Serve” Design § Create Custom Estimator or Re-Use Canned Estimator § Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict) § Hooks for All Phases of Model Training and Evaluation § Load Input: input_fn() § Train: model_fn() and train() § Evaluate: eval_fn() and evaluate() § Performance Metrics: Loss, Accuracy, … § Save and Export: export_savedmodel() § Predict: predict() Uses the slow sess.run() https://github.com/GoogleCloudPlatform/cloudml-samples /blob/master/census/customestimator/
  • 61. EXPERIMENT API § Easier-to-Use Distributed TensorFlow § Same API for Local and Distributed (*Theoretically) § Combines Estimator with input_fn() § Used for Training, Evaluation, & Hyper-Parameter Tuning § Distributed Training Defaults to Data-Parallel & Async § Cluster Configuration is Fixed at Start of Training Job § No Auto-Scaling Allowed, but That’s OK for Training
  • 62. ESTIMATOR & EXPERIMENT CONFIGS § TF_CONFIG § Special environment variable for config § Defines ClusterSpec in JSON incl. master, workers, PS’s § Distributed mode ‘{“environment”:“cloud”}’ § Local: ‘{“environment”:“local”, “task”:{“type”:“worker”}}’ § RunConfig: Defines checkpoint interval, output directory § HParams: Hyper-parameter tuning parameters and ranges § learn_runner creates RunConfig before calling run() & tune() § schedule is set based on {”task”:{”type”:…}} TF_CONFIG= '{ "environment": "cloud", "cluster": { "master":["worker0:2222"], "worker":["worker1:2222"], "ps": ["ps0:2222"] }, "task": {"type": "ps", "index": "0"} }'
  • 63. ESTIMATOR + KERAS § Distributed TensorFlow (Estimator) + Easy to Use (Keras) § tf.keras.estimator.model_to_estimator() # Instantiate a Keras inception v3 model. keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None) # Compile model with the optimizer, loss, and metrics you'd like to train with. keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metric='accuracy') # Create an Estimator from the compiled Keras model. est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3) # Treat the derived Estimator as you would any other Estimator. For example, # the following derived Estimator calls the train method: est_inception_v3.train(input_fn=my_training_set, steps=2000)
  • 64. “CANNED” ESTIMATORS § Commonly-Used Estimators § Pre-Tested and Pre-Tuned § DNNClassifier, TensorForestEstimator § Always Use Canned Estimators If Possible § Reduce Lines of Code, Complexity, and Bugs § Use FeatureColumn to Define & Create Features Custom vs. Canned @ Google, August 2017
  • 65. ESTIMATOR + DATASET def input_fn(): def generator(): while True: yield ... my_dataset = tf.data.Dataset.from_generator(generator, tf.int32) # A one-shot iterator automatically initializes itself on first use. iter = my_dataset.make_one_shot_iterator() # The return value of get_next() matches the dataset element type. images, labels = iter.get_next() return images, labels # The input_fn can be used as a regular Estimator input function. estimator = tf.estimator.Estimator(…) estimator.train(input_fn=input_fn, …)
  • 66. OPTIMIZER, ESTIMATOR API + TPU’S run_config = tpu_config.RunConfig() estimator = tpu_estimator.TPUEstimator(model_fn=model_fn, config=run_config) estimator.train(input_fn=input_fn, num_epochs=10, …) optimizer = tpu_optimizer.CrossShardOptimizer( tf.train.GradientDescentOptimizer(learning_rate=…) ) train_op = optimizer.minimize(loss) estimator_spec = tf.estimator.EstimatorSpec(train_op=train_op, loss=…)
  • 67. MULTIPLE HEADS (OBJECTIVES) § Single-Objective Estimator § Single classification prediction § Multi-Objective Estimator § One (1) classification prediction § One(1) final layer to feed into next model § Multiple Heads Used to Ensemble Models § Treats neural network as a feature engineering step § Supported by TensorFlow Serving
  • 68. LAYERS API § Standalone Layer or Entire Sub-Graphs § Functions of Tensor Inputs & Outputs § Mix and Match with Operations § Assumes 1st Dimension is Batch Size § Handles One (1) to Many (*) Inputs § Metrics are Layers § Loss Metric (Per Mini-Batch) § Accuracy and MSE (Across Mini-Batches)
  • 69. FEATURE_COLUMN API § Used by Canned Estimator § Declaratively Specify Training Inputs § Converts Sparse to Dense Tensors § Sparse Features: Query Keyword, ProductID § Dense Features: One-Hot, Multi-Hot § Wide/Linear: Use Feature-Crossing § Deep: Use Embeddings
  • 70. FEATURE CROSSING § Create New Features by Combining Existing Features § Limitation: Combinations Must Exist in Training Dataset base_columns = [ education, marital_status, relationship, workclass, occupation, age_buckets ] crossed_columns = [ tf.feature_column.crossed_column( ['education', 'occupation'], hash_bucket_size=1000), tf.feature_column.crossed_column( ['age_buckets', 'education', 'occupation'], hash_bucket_size=1000) ]
  • 71. FEATURE_COLUMN EXAMPLES § Continuous + One-Hot + Embedding deep_columns = [ age, education_num, capital_gain, capital_loss, hours_per_week, tf.feature_column.indicator_column(workclass), tf.feature_column.indicator_column(education), tf.feature_column.indicator_column(marital_status), tf.feature_column.indicator_column(relationship), # To show an example of embedding tf.feature_column.embedding_column(occupation, dimension=8), ]
  • 72. SEPARATE TRAINING + EVALUATION § Separate Training and Evaluation Clusters § Evaluate Upon Checkpoint § Avoid Resource Contention § Training Continues in Parallel with Evaluation Training Cluster Evaluation Cluster Parameter Server Cluster
  • 73. LET’S TRAIN DISTRIBUTED TENSORFLOW § Navigate to the following notebook: 05_Train_Model_Distributed_CPU or 05a_Train_Model_Distributed_GPU § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 75. BREAK Need Help? Use the Chat! § Please Star 🌟 this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
  • 76. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 77. XLA FRAMEWORK § XLA: “Accelerated Linear Algebra” § Reduce Reliance on Custom Operators § Improve Execution Speed § Improve Memory Usage § Reduce Mobile Footprint § Improve Portability Helps TensorFlow Stay Flexible, Yet Still Performant
  • 78. XLA HIGH LEVEL OPTIMIZER (HLO) § HLO: “High Level Optimizer” § Compiler Intermediate Representation (IR) § Independent of source and target language § XLA Step 1 Emits Target-Independent HLO § XLA Step 2 Emits Target-Dependent LLVM § LLVM Emits Native Code Specific to Target § Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
  • 79. JIT COMPILER § JIT: “Just-In-Time” Compiler § Built on XLA Framework § Reduce Memory Movement – Especially with GPUs § Reduce Overhead of Multiple Function Calls § Similar to Spark Operator Fusing in Spark 2.0 § Unroll Loops, Fuse Operators, Fold Constants, … § Scopes: session, device, with jit_scope():
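Two hedged ways to turn the JIT on in TF 1.x - globally per Session, or scoped to part of the graph via the experimental contrib jit_scope; the matrix size is illustrative:

    import tensorflow as tf
    from tensorflow.contrib.compiler import jit

    # Option 1: session-wide JIT via the graph optimizer options.
    config = tf.ConfigProto()
    config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
    sess = tf.Session(config=config)

    # Option 2: JIT only a sub-graph (experimental contrib API).
    with jit.experimental_jit_scope():
        x = tf.random_normal([1024, 1024])
        y = tf.matmul(x, x) + x      # candidate ops are clustered and XLA-compiled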
  • 80. VISUALIZING JIT COMPILER IN ACTION Before JIT After JIT Google Web Tracing Framework: http://google.github.io/tracing-framework/ from tensorflow.python.client import timeline trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('timeline.json', 'w') as trace_file: trace_file.write( trace.generate_chrome_trace_format(show_memory=True)) run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) run_metadata = tf.RunMetadata() sess.run(options=run_options, run_metadata=run_metadata)
  • 81. VISUALIZING FUSING OPERATORS pip install graphviz dot -Tpng /tmp/hlo_graph_1.w5LcGs.dot -o hlo_graph_1.png GraphViz: http://www.graphviz.org hlo_*.dot files generated by XLA
  • 82. LET’S TRAIN WITH XLA CPU § Navigate to the following notebook: 06_Train_Model_XLA_CPU § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 83. LET’S TRAIN WITH XLA GPU § Navigate to the following notebook: 06a_Train_Model_XLA_GPU § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 84. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving
  • 85. WE ARE NOW… …OPTIMIZING Models AFTER Model Training TO IMPROVE Model Serving
  • 86. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 87. AOT COMPILER § Standalone, Ahead-Of-Time (AOT) Compiler § Built on XLA framework § tfcompile § Creates executable with minimal TensorFlow Runtime needed § Includes only dependencies needed by subgraph computation § Creates functions with feeds (inputs) and fetches (outputs) § Packaged as cc_library header and object files to link into your app § Commonly used for mobile device inference graph § Currently, only CPU x86-64 and ARM are supported - no GPU
  • 88. GRAPH TRANSFORM TOOL (GTT) § Post-Training Optimization to Prepare for Inference § Remove Training-only Ops (checkpoint, drop out, logs) § Remove Unreachable Nodes between Given feed -> fetch § Fuse Adjacent Operators to Improve Memory Bandwidth § Fold Final Batch Norm mean and variance into Variables § Round Weights/Variables to improve compression (ie. 70%) § Quantize (FP32 -> INT8) to Speed Up Math Operations
  • 89. AFTER TRAINING, BEFORE OPTIMIZATION -TensorFlow- Trains Variables -User- Fetches Outputs -User- Feeds Inputs -TensorFlow- Performs Operations -TensorFlow- Flows Tensors ?!
  • 90. POST-TRAINING GRAPH TRANSFORMS transform_graph --in_graph=unoptimized_cpu_graph.pb ← Original Graph --out_graph=optimized_cpu_graph.pb ← Transformed Graph --inputs='x_observed:0' ← Feed (Input) --outputs='Add:0' ← Fetch (Output) --transforms=' ← List of Transforms strip_unused_nodes remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true) fold_batch_norms fold_old_batch_norms quantize_weights quantize_nodes'
  • 91. AFTER STRIPPING UNUSED NODES § Optimizations § strip_unused_nodes § Results § Graph much simpler § File size much smaller
  • 92. AFTER REMOVING UNUSED NODES § Optimizations § strip_unused_nodes § remove_nodes § Results § Pesky nodes removed § File size a bit smaller
  • 93. AFTER FOLDING CONSTANTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § Results § Placeholders (feeds) -> Variables* (*Why Variables and not Constants?)
  • 94. AFTER FOLDING BATCH NORMS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § Results § Graph remains the same § File size approximately the same
  • 95. AFTER QUANTIZING WEIGHTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § Results § Graph is same, file size is smaller, compute is faster
  • 96. WEIGHT QUANTIZATION § FP16 and INT8 Are Smaller and Computationally Simpler § Weights/Variables are Constants § Easy to Linearly Quantize
  • 97. LET’S OPTIMIZE FOR INFERENCE § Navigate to the following notebook: 07_Optimize_Model* *Why just CPU version? Why not GPU? § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 99. ACTIVATION QUANTIZATION § Activations Not Known Ahead of Time § Depends on input, not easy to quantize § Requires Additional Calibration Step § Use a “representative” dataset § Per Neural Network Layer… § Collect histogram of activation values § Generate many quantized distributions with different saturation thresholds § Choose threshold to minimize… KL_divergence(ref_distribution, quant_distribution) § Not Much Time or Data is Required (Minutes on Commodity Hardware)
  • 100. AFTER ACTIVATION QUANTIZATION § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes (activations) § Results § Larger graph, needs calibration! Requires Additional freeze_requantization_ranges
  • 101. LET’S OPTIMIZE FOR INFERENCE § Navigate to the following notebook: 08_Optimize_Model_Activations § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 102. FREEZING MODEL FOR DEPLOYMENT § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes § freeze_graph § Results § Variables -> Constants Finally! We’re Ready to Deploy!!
  • 103. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 104. MODEL SERVING TERMINOLOGY § Inference § Only Forward Propagation through Network § Predict, Classify, Regress, … § Bundle § GraphDef, Variables, Metadata, … § Assets § ie. Map of ClassificationID -> String § {9283: “penguin”, 9284: “bridge”} § Version § Every Model Has a Version Number (Integer) § Version Policy § ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
  • 105. TENSORFLOW SERVING FEATURES § Supports Auto-Scaling § Custom Loaders beyond File-based § Tune for Low-latency or High-throughput § Serve Diff Models/Versions in Same Process § Customize Models Types beyond HashMap and TensorFlow § Customize Version Policies for A/B and Bandit Tests § Support Request Draining for Graceful Model Updates § Enable Request Batching for Diff Use Cases and HW § Supports Optimized Transport with GRPC and Protocol Buffers
  • 106. PREDICTION SERVICE § Predict (Original, Generic) § Input: List of Tensor § Output: List of Tensor § Classify § Input: List of tf.Example (key, value) pairs § Output: List of (class_label: String, score: float) § Regress § Input: List of tf.Example (key, value) pairs § Output: List of (label: String, score: float)
  • 107. PREDICTION INPUTS + OUTPUTS § SignatureDef § Defines inputs and outputs § Maps external (logical) to internal (physical) tensor names § Allows internal (physical) tensor names to change from tensorflow.python.saved_model import utils from tensorflow.python.saved_model import signature_constants from tensorflow.python.saved_model import signature_def_utils graph = tf.get_default_graph() x_observed = graph.get_tensor_by_name('x_observed:0') y_pred = graph.get_tensor_by_name('add:0') inputs_map = {'inputs': x_observed} outputs_map = {'outputs': y_pred} predict_signature = signature_def_utils.predict_signature_def(inputs=inputs_map, outputs=outputs_map)
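Continuing the slide's snippet, the SignatureDef is attached when the SavedModel is written out; a sketch that assumes the predict_signature and a live sess holding the trained variables from above, with the export path and version directory as placeholders:

    from tensorflow.python.saved_model import builder as saved_model_builder
    from tensorflow.python.saved_model import signature_constants, tag_constants

    # sess holds the trained graph built above; "1" is the model version directory.
    b = saved_model_builder.SavedModelBuilder('./export/my_model/1')
    b.add_meta_graph_and_variables(
        sess,
        tags=[tag_constants.SERVING],
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: predict_signature
        })
    b.save()   # writes saved_model.pb + variables/ for TensorFlow Serving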
  • 108. MULTI-HEADED INFERENCE § Inputs Pass Through Model One Time § Model Returns Multiple Predictions: 1. Human-readable prediction (ie. “penguin”, “church”,…) 2. Final layer of scores (float vector) § Final Layer of floats Pass to the Next Model in Ensemble § Optimizes Bandwidth, CPU/GPU, Latency, Memory § Enables Complex Model Composing and Ensembling
  • 109. BUILD YOUR OWN MODEL SERVER § Adapt GRPC(Google) <-> HTTP (REST of the World) § Perform Batch Inference vs. Request/Response § Handle Requests Asynchronously § Support Mobile, Embedded Inference § Customize Request Batching § Add Circuit Breakers, Fallbacks § Control Latency Requirements § Reduce Number of Moving Parts #include “tensorflow_serving/model_servers/server_core.h” class MyTensorFlowModelServer { ServerCore::Options options; // set options (model name, path, etc) std::unique_ptr<ServerCore> core; TF_CHECK_OK( ServerCore::Create(std::move(options), &core) ); } Compile and Link with libtensorflow.so
  • 110. RUNTIME OPTION: NVIDIA TENSOR-RT § Post-Training Model Optimizations § Specific to Nvidia GPU § Similar to TF Graph Transform Tool § GPU-Optimized Prediction Runtime § Alternative to TensorFlow Serving § PipelineAI Supports TensorRT!
  • 111. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 112. SAVED MODEL FORMAT § Navigate to the following notebook: 09_Deploy_Optimized_Model § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 113. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 114. REQUEST BATCH TUNING § max_batch_size § Enables throughput/latency tradeoff § Bounded by RAM § batch_timeout_micros § Defines batch time window, latency upper-bound § Bounded by RAM § num_batch_threads § Defines parallelism § Bounded by CPU cores § max_enqueued_batches § Defines queue upper bound, throttling § Bounded by RAM Reaching either threshold will trigger a batch
  • 115. ADVANCED BATCHING & SERVING TIPS § Batch Just the GPU/TPU Portions of the Computation Graph § Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops § Distribute Large Models Into Shards Across TensorFlow Model Servers § Batch RNNs Used for Sequential and Time-Series Data § Find Best Batching Strategy For Your Data Through Experimentation § BasicBatchScheduler: Homogeneous requests (ie Regress or Classify) § SharedBatchScheduler: Mixed requests, multi-step, ensemble predict § StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads § Serve Only One (1) Model Inside One (1) TensorFlow Serving Process § Much Easier to Debug, Tune, Scale, and Manage Models in Production.
  • 116. LET’S DEPLOY OPTIMIZED MODEL § Navigate to the following notebook: 10_Optimize_Model_Server § https://github.com/PipelineAI/pipeline/tree/master/ gpu.ml/notebooks
  • 117. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving
  • 118. THANK YOU! QUESTIONS? § https://github.com/PipelineAI/pipeline/ § Please Star 🌟 this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu.ml Contact Me chris@pipeline.ai @cfregly