High Performance TensorFlow in Production - Big Data Spain - Madrid - Nov 15 2017
1. HIGH PERFORMANCE TENSORFLOW IN
PRODUCTION WITH GPUS
CHRIS FREGLY, FOUNDER @PIPELINE.AI
BIG DATA SPAIN, MADRID - NOV 15, 2017
I LOVE THIS CONFERENCE!!
2. INTRODUCTIONS: ME
§ Chris Fregly, Founder & Engineer @ PipelineAI
§ Formerly Netflix and Databricks
§ Advanced Spark and TensorFlow Meetup
Please Join Our 50,000+ Global Members!!
Contact Me
chris@pipeline.ai
@cfregly
Global Locations
* San Francisco
* Chicago
* Austin
* Washington DC
* Dusseldorf
* London
3. INTRODUCTIONS: YOU
§ Software Engineer, Data Scientist, Data Engineer, Data Analyst
§ Interested in Optimizing and Deploying TF Models to Production
§ Nice to Have a Working Knowledge of TensorFlow (Not Required)
4. CONTENT BREAKDOWN
50% Model Training Optimizations
(GPU, Ingestion Pipeline, JIT)
Boring & Batch
Offline in Research Lab
No Real-Time Serving Skills
Pipeline Stops at Training
No Feedback with Runtime
Small Number of Data Scientists
10’s Training Jobs / Day
50% Model Serving Optimizations
(Post-Processing, TF Serving, AOT)
Exciting & Real-Time!!
Online in Live Production
Unique Real-Time Serving Skills
Pipeline Extends into Production
Continuous Feedback with Training
Large Number of App Devs & Users
1,000,000’s Predictions / Sec
5. AGENDA
Part 0: Latest PipelineAI Research
Part 1: Optimize TensorFlow Model Training
Part 2: Optimize TensorFlow Model Serving
6. 100% OPEN SOURCE CODE
§ https://github.com/PipelineAI/pipeline/
§ Please Star 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
7. HANDS-ON EXERCISES
§ Combo of Jupyter Notebooks and Command Line
§ Command Line through Jupyter Terminal
§ Some Exercises Based on Experimental Features
You May See Errors. Stay Calm. You Will Be OK!!
9. AGENDA
Part 0: Latest PipelineAI Research
§ Package, Deploy, and Tune Both Model + Runtime
§ Deploy Models and Experiments Safely to Prod
§ Compare Models Both Offline and Online
§ Auto-Shift Traffic to Winning Model or Cloud
10. PACKAGE MODEL + RUNTIME AS ONE
§ Package Model + Runtime into Immutable Docker Image
§ Same Environment: Local, Dev, and Prod
§ No Dependency Surprises in Production
§ Deploy and Tune Model + Runtime Together
pipeline predict-server-build --model-type=tensorflow
--model-name=mnist
--model-tag="c"
--model-path=./models/tensorflow/mnist/
Package Model
Server C Locally
pipeline predict-server-push --model-type=tensorflow
--model-name=mnist
--model-tag="c"
Push Image C To
Docker Registry
11. TUNE MODEL + RUNTIME TOGETHER
§ Try Different Model Hyper-Parameters + Runtime Configs
§ Even Different Runtimes: TF Serving, TensorRT
§ Auto-Quantize Model Weights + Activations
§ Auto-Fuse Neural Network Layers Together
§ Generate Native CPU + GPU Code
pipeline predict-server-start --model-type=tensorflow
--model-name=mnist
--model-tag="c"
Start Model
Server C Locally
12. LOAD TEST MODEL + RUNTIME LOCALLY
§ Perform Mini-Load Test on Local Model Server
§ Provides Immediate Feedback on Prediction Performance
§ Relative Performance Compared to Other Variations
§ No Need to Deploy to Test or Prod for Prediction Metrics
§ See Where Time is Being Spent During Prediction
pipeline predict --model-server-url=http://localhost:6969
--model-type=tensorflow
--model-name=mnist
--model-tag="c"
--test-request-concurrency=1000
Load Test Model
Server C Locally
13. RUNTIME OPTION: NVIDIA TENSOR-RT
§ Post-Training Model Optimizations
§ Specific to Nvidia GPU
§ Similar to TF Graph Transform Tool
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
14. AGENDA
Part 0: Latest PipelineAI Research
§ Package, Deploy, and Tune Both Model + Runtime
§ Deploy Models and Experiments Safely to Prod
§ Compare Models Both Offline and Online
§ Auto-Shift Traffic to Winning Model or Cloud
15. DEPLOY MODELS SAFELY TO PROD
§ Deploy from Jupyter Notebook in 1-Click
§ Deploy to 1-2% Split or Shadowed Traffic
§ Tear-Down or Rollback Quickly
§ Use Command Line Interface (CLI)
pipeline predict-cluster-start --model-type=tensorflow
--model-name=mnist
--model-tag="b"
--traffic-split="0.02"
Start Model
Cluster B in Prod
pipeline predict-cluster-start --model-type=tensorflow
--model-name=mnist
--model-tag="c"
--traffic-split="0.01"
Start Model
Cluster C in Prod
pipeline predict-cluster-start --model-type=tensorflow
--model-name=mnist
--model-tag="a"
--traffic-split="0.97"
Start Model
Cluster A in Prod
Implementation Details…
16. DEPLOY EXPERIMENTS SAFELY TO PROD
§ Create Experiments Directly from Jupyter or Command Line
§ Deploy Experiment
pipeline experiment-add --experiment-name=my_experiment
--model-type=tensorflow
--model-name=mnist
--model-tag="a"
--traffic-split="97%"
CLI or Drag-n’-Drop
pipeline experiment-start --experiment-name=my_experiment
--traffic-shadow="20%"
pipeline experiment-add --experiment-name=my_experiment
--model-type=tensorflow
--model-name=mnist
--model-tag="b"
--traffic-split="2%"
pipeline experiment-add --experiment-name=my_experiment
--model-type=tensorflow
--model-name=mnist
--model-tag="c"
--traffic-split="1%"
1-Click: Start Experiment with 20% of Production Traffic Shadowed
17. AGENDA
Part 0: Latest PipelineAI Research
§ Package, Deploy, and Tune Both Model + Runtime
§ Deploy Models and Experiments Safely to Prod
§ Compare Models Both Offline and Online
§ Auto-Shift Traffic to Winning Model or Cloud
18. COMPARE MODELS OFFLINE & ONLINE
§ Offline, Batch Metrics
§ Validation Accuracy
§ Training Accuracy
§ CPU/GPU Utilization
§ Online, Real-Time Metrics
§ Live Prediction Values
§ Compare Model Precision
§ Response Time & Throughput
§ Cost Per Prediction
21. CONTINUOUS MODEL TRAINING
§ Identify and Fix Borderline Predictions (~50-50% Confidence)
§ Fix Along Class Boundaries
§ Retrain on New Labeled Data
§ Game-ify Labeling Process
§ Enables Crowd Sourcing
22. AGENDA
Part 0: Latest PipelineAI Research
§ Package, Deploy, and Tune Both Model + Runtime
§ Deploy Models and Experiments Safely to Prod
§ Compare Models Both Offline and Online
§ Auto-Shift Traffic to Winning Model or Cloud
23. SHIFT TRAFFIC TO MAX(REVENUE)
§ Shift Traffic to Winning Model using AI Bandit Algorithms
Implementation Details…
24. SHIFT TRAFFIC TO MIN(CLOUD CO$T)
§ Across Clouds & On-Premise
§ Real-Time Cost Per Prediction
§ Bandit-based Explore/Exploit
25. AGENDA
Part 0: Latest PipelineAI Research
Part 1: Optimize TensorFlow Model Training
Part 2: Optimize TensorFlow Model Serving
26. AGENDA
Part 1: Optimize TensorFlow Model Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Model Training on a Cluster
§ Optimize Training with JIT XLA Compiler
28. SETUP ENVIRONMENT
§ Step 1: Browse to the following:
http://allocator.community.pipeline.ai/allocate
§ Step 2: Browse to the following:
http://<ip-address>
§ Step 3: Browse around.
I will provide a Jupyter Username/Password soon.
Need Help?
Use the Chat!
30. LET’S EXPLORE OUR ENVIRONMENT
§ Navigate to the following notebook:
01_Explore_Environment
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
32. BREAK
§ Please 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
Need Help?
Use the Chat!
33. SETTING UP TENSORFLOW WITH GPUS
§ Very Painful!
§ Especially inside Docker
§ Use nvidia-docker
§ Especially on Kubernetes!
§ Use Kubernetes 1.8+
§ http://pipeline.ai for GitHub + DockerHub Links
35. GPU HALF-PRECISION SUPPORT
§ FP32 is “Full Precision”, FP16 is “Half Precision”
§ Supported by Pascal P100 (2016) and Volta V100 (2017)
§ Two (2) FP16’s in Each FP32 GPU Core for 2x Throughput!
§ Half-Precision is OK for Approximate Deep Learning Use Cases
You Can Set
TF_FP16_MATMUL_USE_FP32_COMPUTE=0
on GPU w/ Compute Capability (CC) 5.3+
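For illustration, a minimal sketch (an assumed example, not from the deck): in TF 1.x, half precision is just a matter of tensor dtype.

import tensorflow as tf

a = tf.random_normal([1024, 1024], dtype=tf.float32)
a16 = tf.cast(a, tf.float16)   # FP16 halves memory traffic and
b16 = tf.cast(a, tf.float16)   # doubles throughput on CC 5.3+ GPUs
c16 = tf.matmul(a16, b16)      # executes in half precision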
36. VOLTA V100 (2017) VS. PASCAL P100 (2016)
§ 84 Streaming Multiprocessors (SM’s)
§ 5,376 GPU Cores
§ 672 Tensor Cores (ie. Google TPU)
§ Mixed FP16/FP32 Precision
§ Matrix Dims Should be Multiples of 8
§ More Shared Memory
§ New L0 Instruction Cache
§ Faster L1 Data Cache
§ V100 vs. P100 Performance
§ 12x Training, 6x Inference
37. FP32 VS. FP16 ON AWS GPU INSTANCES
Instance (GPU)     FP16 Half Precision    FP32 Full Precision
p3 (Volta V100)    87.2 T ops/second      15.4 T ops/second
g3 (Tesla M60)      4.1 T ops/second       4.0 T ops/second
p2 (Tesla K80)      1.6 T ops/second       3.3 T ops/second
38. WHAT ABOUT GOOGLE CLOUD GPUS?
§ Currently Supports the Following:
§ Tesla K80
§ Pascal P100
§ TPUs
§ Attach GPUs to CPU Instances
§ Similar to AWS Elastic GPU, except less confusing
39. V100 AND CUDA 9
§ Independent Thread Scheduling - Finally!!
§ Similar to CPU fine-grained thread synchronization semantics
§ Allows GPU to yield execution of any thread
§ Still Optimized for SIMT (Single Instruction, Multiple Thread)
§ SIMT units automatically scheduled together
§ Explicit Synchronization
40. GPU CUDA PROGRAMMING
§ Barbaric, But Fun
§ Must Know Hardware Very Well
§ Hardware Changes are Painful
§ Use the Profilers & Debuggers
41. CUDA STREAMS
§ Asynchronous I/O Transfer
§ Overlap Compute and I/O
§ Keeps GPUs Saturated
§ Fundamental to Queue Framework in TensorFlow
42. LET’S SEE WHAT THIS THING CAN DO!
§ Navigate to the following notebook:
01a_Explore_GPU
01b_Explore_Numba
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
43. AGENDA
Part 1: Optimize TensorFlow Model Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Model Training on a Cluster
§ Optimize Training with JIT XLA Compiler
44. TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays
§ ie. Scalar, Vector, Matrix
§ Operations: MatMul, Add, SummaryLog,…
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed Inputs into Placeholder
§ Fetches: Fetch Output from Operation
§ Variables: What We Learn Through Training
§ aka “Weights”, “Parameters”
§ Devices: Hardware Device (GPU, CPU, TPU, ...)
-User- Feeds Inputs
-TensorFlow- Performs Operations
-TensorFlow- Flows Tensors
-TensorFlow- Trains Variables
-User- Fetches Outputs
with tf.device("/cpu:0"):  # pin ops to a device, e.g. "/gpu:15"
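A minimal sketch (an assumed example) tying the terminology together: the user feeds inputs into a Placeholder, TensorFlow performs the Operations in the Graph, and the user fetches the output.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])  # fed by the user
W = tf.Variable(tf.zeros([784, 10]))               # learned through training
y = tf.matmul(x, W)                                # an Operation in the Graph

with tf.Session() as sess:                         # Session contains the Graph
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(64, 784).astype(np.float32)
    output = sess.run(y, feed_dict={x: batch})     # fetch y, feed x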
46. TENSORFLOW GRAPH EXECUTION
§ Lazy Execution by Default
§ Similar to Spark
§ Eager Execution Now Supported (TensorFlow 1.4)
§ Similar to PyTorch
§ "Linearize” Execution to Minimize RAM Usage
§ Useful on Single GPU with Limited RAM
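For illustration, a minimal sketch of eager mode, assuming the experimental contrib API of the time (eager was still a preview feature):

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()     # must run at program startup

x = tf.constant([[2.0, 0.0], [0.0, 2.0]])
print(tf.matmul(x, x))           # executes immediately, PyTorch-style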
47. TENSORFLOW MODEL
§ MetaGraph
§ Combines GraphDef and Metadata
§ GraphDef
§ Architecture of your model (nodes, edges)
§ Metadata
§ Asset: Accompanying assets to your model
§ SignatureDef: Maps external to internal tensor names
§ Variables
§ Stored separately during training (checkpoint)
§ Allows training to continue from any checkpoint
§ Variables are “frozen” into Constants when preparing for inference
[Diagram: MetaGraph wraps a GraphDef (x, W -> mul -> add <- b) plus Metadata (Assets, SignatureDef, Tags, Version) and Variables ("W": 0.328, "b": -1.407)]
48. BATCH NORMALIZATION (2015)
§ Each Mini-Batch May Have Wildly Different Distributions
§ Normalize per Batch (and Layer)
§ Faster Training, Learns Quicker
§ Final Model is More Accurate
§ TensorFlow is Already on its 2nd-Generation Batch Normalization Implementation
§ First-Class Support for Fusing Batch Norm Layers
§ Final mean + variance Are Folded Into Our Graph Later
-- (Almost) Always Use Batch Normalization! --

z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth]))   # depth = number of channels
beta = tf.Variable(tf.zeros([depth]))
bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
49. DROPOUT (2014)
§ Training Technique
§ Prevents Overfitting
§ Helps Avoid Local Minima
§ Inherent Ensembling Technique
§ Creates and Combines Different Neural Architectures
§ Expressed as Probability Percentage (ie. 50%)
§ Boost Other Weights During Validation & Prediction
[Diagram: Perform Dropout (Training Phase) vs. Boost for Dropout (Validation & Prediction Phase); 0% Dropout vs. 50% Dropout]
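A minimal sketch (an assumed example; a, train_step, and outputs are hypothetical): tf.nn.dropout implements "inverted" dropout, scaling kept activations up by 1/keep_prob at training time, so validation and prediction simply run with keep_prob=1.0.

keep_prob = tf.placeholder(tf.float32)   # 0.5 for training, 1.0 to predict
a_drop = tf.nn.dropout(a, keep_prob)     # randomly zeroes, scales survivors by 1/keep_prob

# Training phase:               sess.run(train_step, feed_dict={keep_prob: 0.5})
# Validation/prediction phase:  sess.run(outputs, feed_dict={keep_prob: 1.0})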
52. FEED TENSORFLOW TRAINING PIPELINE
§ Training is Almost Always Limited by Ingestion Pipeline
§ THE Number One Problem We See Today
§ Scaling GPUs Up / Out Doesn’t Help
§ GPUs are Heavily Under-Utilized
[Charts: GPU utilization, Tesla K80 vs. Volta V100]
53. DON’T USE FEED_DICT!!
§ feed_dict Requires Python <-> C++ Serialization
§ Not Optimized for Production Ingestion Pipelines
§ Retrieves Next Batch After Current Batch is Done
§ Single-Threaded, Synchronous
§ CPUs/GPUs Not Fully Utilized!
§ Use Queue or Dataset APIs
§ Queues are old and complex
sess.run(train_step, feed_dict={…})
54. DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument training code to generate “timelines”
§ Analyze with Google Web Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
55. QUEUES
§ More than traditional Queue
§ Uses CUDA Streams
§ Perform I/O, pre-processing, cropping, shuffling, …
§ Pull from HDFS, S3, Google Storage, Kafka, ...
§ Combine many small files into large TFRecord files
§ Use CPUs to free GPUs for compute
§ Helps saturate CPUs and GPUs
56. QUEUE CAPACITY PLANNING
§ batch_size
§ # examples / batch (ie. 64 jpg)
§ Limited by GPU RAM
§ num_processing_threads
§ CPU threads pull and pre-process batches of data
§ Limited by CPU Cores
§ queue_capacity
§ Limited by CPU RAM (ie. 5 * batch_size)
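A minimal sketch (an assumed example; filenames is hypothetical) wiring those three knobs into the classic TF 1.x queue-based reader:

batch_size = 64
num_processing_threads = 4
queue_capacity = 5 * batch_size

filename_queue = tf.train.string_input_producer(filenames)
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)

batch = tf.train.shuffle_batch(
    [serialized_example],
    batch_size=batch_size,
    num_threads=num_processing_threads,  # CPU threads pre-process in parallel
    capacity=queue_capacity,
    min_after_dequeue=batch_size)
# (at runtime, also call tf.train.start_queue_runners(sess) to fill the queue)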
57. DATASET API
§ tf.Tensor => tf.data.Dataset
§ Functional Transformations
§ Python Generator => tf.data.Dataset

Dataset.from_tensors((features, labels))
Dataset.from_tensor_slices((features, labels))
TextLineDataset(filenames)

dataset.map(lambda x: tf.decode_jpeg(x))
dataset.repeat(NUM_EPOCHS)
dataset.batch(BATCH_SIZE)

def generator():
    while True:
        yield ...

dataset = tf.data.Dataset.from_generator(generator, tf.int32)

§ Dataset => One-Shot Iterator

iter = dataset.make_one_shot_iterator()
next_element = iter.get_next()
while …:
    sess.run(next_element)

§ Dataset => Initializable Iterator

iter = dataset.make_initializable_iterator()
sess.run(iter.initializer, feed_dict=PARAMS)
next_element = iter.get_next()
while …:
    sess.run(next_element)
58. FUTURE OF DATASET API
§ Advanced, RL-based Device Placement Strategies
§ Automatic GPU Data Staging
§ More Functional Operators
59. LET’S FEED DATA WITH A QUEUE
§ Navigate to the following notebook:
02_Feed_Queue_HDFS
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
61. BREAK
§ Please 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
Need Help?
Use the Chat!
62. LET’S TRAIN A MODEL (CPU)
§ Navigate to the following notebook:
03_Train_Model_CPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
63. LET’S TRAIN A MODEL (GPU)
§ Navigate to the following notebook:
03a_Train_Model_GPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
64. TENSORFLOW DEBUGGER
§ Step through Operations
§ Inspect Inputs and Outputs
§ Wrap Session in Debug Session
from tensorflow.python import debug as tf_debug

sess = tf.Session(config=config)
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
65. LET’S DEBUG A MODEL
§ Navigate to the following notebook:
04_Debug_Model
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
66. AGENDA
Part 1: Optimize TensorFlow Model Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Model Training on a Cluster
§ Optimize Training with JIT XLA Compiler
67. SINGLE NODE, MULTI-GPU TRAINING
§ cpu:0
§ By default, all CPUs
§ Requires extra config to target a CPU
§ gpu:0..n
§ Each GPU has a unique id
§ TF usually prefers a single GPU
§ xla_cpu:0, xla_gpu:0..n
§ “JIT Compiler Device”
§ Hints TensorFlow to attempt JIT Compile
with tf.device("/cpu:0"): …
with tf.device("/gpu:0"): …
with tf.device("/gpu:1"): …
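A minimal sketch (an assumed example) of single-node, multi-GPU "towers" sharing variables pinned to the CPU:

import tensorflow as tf

with tf.device("/cpu:0"):
    x = tf.random_normal([64, 1024])               # input batch on the CPU
    W = tf.Variable(tf.random_normal([1024, 10]))  # variables shared by towers

tower_outputs = []
for i in range(2):                                 # one tower per GPU
    with tf.device("/gpu:%d" % i):
        shard = x[i * 32:(i + 1) * 32]             # each GPU gets half the batch
        tower_outputs.append(tf.matmul(shard, W))

logits = tf.concat(tower_outputs, axis=0)          # recombine the shards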
68. DISTRIBUTED, MULTI-NODE TRAINING
§ TensorFlow Automatically Inserts Send and Receive Ops into Graph
§ Parameter Server Synchronously Aggregates Updates to Variables
§ Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS
[Diagram: scaling from a single node (Worker0 with gpu0, then gpu0/gpu1, then gpu0-gpu3) to multiple nodes (Worker0, Worker1, Worker2, each with gpu0-gpu3)]
69. DATA PARALLEL VS MODEL PARALLEL
§ Data Parallel (“Between-Graph Replication”)
§ Send exact same model to each device
§ Each device operates on partition of data
§ ie. Spark sends same function to many workers
§ Each worker operates on their partition of data
§ Model Parallel (“In-Graph Replication”)
§ Send different partition of model to each device
§ Each device operates on all data
§ Difficult, but required for larger models with lower-memory GPUs
70. SYNCHRONOUS VS. ASYNCHRONOUS
§ Synchronous
§ Nodes compute gradients
§ Nodes update Parameter Server (PS)
§ Nodes sync on PS for latest gradients
§ Asynchronous
§ Some nodes delay in computing gradients
§ Nodes don’t update PS
§ Nodes get stale gradients from PS
§ May not converge due to stale reads!
71. CHIEF WORKER
§ Chief Defaults to Worker Task 0
§ Task 0 is guaranteed to exist
§ Performs Maintenance Tasks
§ Writes log summaries
§ Instructs PS to checkpoint vars
§ Performs PS health checks
§ (Re-)Initialize variables at (re-)start of training
72. NODE AND PROCESS FAILURES
§ Checkpoint to Persistent Storage (HDFS, S3)
§ Use MonitoredTrainingSession and Hooks
§ Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos)
§ Understand Failure Modes and Recovery States
§ Stateless, Not Bad: Training Continues
§ Stateful, Bad: Training Must Stop
§ Dios Mio! Long Night Ahead…
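A minimal sketch (an assumed example; server, task_index, and train_op are hypothetical) of checkpointing to persistent storage with MonitoredTrainingSession and Hooks:

import tensorflow as tf

hooks = [tf.train.StopAtStepHook(last_step=100000)]

with tf.train.MonitoredTrainingSession(
        master=server.target,                         # from a tf.train.Server
        is_chief=(task_index == 0),                   # chief handles checkpoints
        checkpoint_dir="hdfs://namenode/ckpt/mnist",  # HDFS or S3 path
        hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)   # on restart, training resumes from the last checkpoint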
73. ESTIMATOR, EXPERIMENT API
§ Simplify Model Building
§ Provide Clear Path to Production
§ Enable Rapid Model Experiments
§ Provide Flexible Parameter Tuning
§ Enable Downstream Optimizing & Serving Infrastructure
§ Nudge Users to Best Practices Through Opinions
§ Provide Hooks/Callbacks to Override Opinions
§ Unified API for Local and Distributed TensorFlow
74. ESTIMATOR API
§ “Train-to-Serve” Design
§ Create Custom - or Use a Canned Estimator
§ Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict)
§ Hooks for All Phases of Model Training and Evaluation
§ Load Input: input_fn()
§ Train: model_fn() and train()
§ Evaluate: evaluate()
§ Save and Export: export_savedmodel()
§ Predict: predict() (uses sess.run(), so predictions are slow!)
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/
75. CANNED ESTIMATORS
§ Commonly-Used Estimators
§ Pre-Tested and Pre-Tuned
§ DNNClassifier, TensorForestEstimator
§ Always Use Canned Estimators If Possible
§ Reduce Lines of Code, Complexity, and Bugs
§ Use FeatureColumns to Define & Create Features
[Chart: Custom vs. Canned Estimator usage @ Google, August 2017]
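A minimal sketch (an assumed example; feature names and train_input_fn are hypothetical) of a canned DNNClassifier driven by FeatureColumns:

import tensorflow as tf

age = tf.feature_column.numeric_column("age")
city = tf.feature_column.categorical_column_with_hash_bucket("city", 1000)
city_embedding = tf.feature_column.embedding_column(city, dimension=8)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[age, city_embedding],  # sparse "city" becomes a dense embedding
    hidden_units=[128, 64],
    n_classes=2)

estimator.train(input_fn=train_input_fn, steps=1000)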
76. COMBINE ESTIMATOR + DATASET API
def input_fn():
    def generator():
        while True:
            yield ...

    my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)
    # A one-shot iterator automatically initializes itself on first use.
    iter = my_dataset.make_one_shot_iterator()
    # The return value of get_next() matches the dataset element type.
    images, labels = iter.get_next()
    return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)
estimator.train(input_fn=input_fn, …)
77. FEATURECOLUMN ABSTRACTION
§ Used by Canned Estimator
§ Simplifies Input Ingestion
§ Declarative Way to Specify Model Training Inputs
§ Converts Sparse Features to Dense Tensors
§ Sparse Features: Query Keyword, Url, ProductID,…
§ Wide/Linear Models Use Feature-Crossing
§ Deep Models Use Embeddings
78. SINGLE VS. MULTI-OBJECTIVES + HEADS
§ Single-Objective Estimator
§ Single classification prediction
§ Multi-Objective Estimator
§ Two (2) classification predictions
§ One (1) classification prediction + One (1) final layer
§ Multiple Heads Are Used to Ensemble Models
§ Treats neural network as a feature engineering step!
§ Supported by TensorFlow Serving
79. LAYERS API
§ Standalone Layer or Entire Sub-Graphs
§ Functions of Tensor Inputs & Outputs
§ Mix and Match with Operations
§ Assumes 1st Dimension is Batch Size
§ Handles One (1) to Many (*) Inputs
§ Metrics are Layers
§ Loss Metric (Per Mini-Batch)
§ Accuracy and MSE (Across Mini-Batches)
80. EXPERIMENT API
§ Easier-to-Use Distributed TensorFlow
§ Same API for Local and Distributed (*Theoretically)
§ Combines Estimator with input_fn()
§ Used for Training, Evaluation, & Hyper-Parameter Tuning
§ Distributed Training Defaults to Data-Parallel & Async
§ Cluster Configuration is Fixed at Start of Training Job
§ No Auto-Scaling Allowed!!
81. ESTIMATOR, EXPERIMENT CONFIGS
§ TF_CONFIG
§ Special environment variable for config
§ Defines ClusterSpec in JSON incl. master, workers, PS’s
§ Distributed mode: '{"environment": "cloud"}'
§ Local mode: '{"environment": "local", "task": {"type": "worker"}}'
§ RunConfig: Defines checkpoint interval, output directory, …
§ HParams: Hyper-parameter tuning parameters and ranges
§ learn_runner creates RunConfig before calling run() & tune()
§ schedule is set based on {"task": {"type"}}
TF_CONFIG='{
  "environment": "cloud",
  "cluster": {
    "master": ["worker0:2222"],
    "worker": ["worker1:2222"],
    "ps": ["ps0:2222"]
  },
  "task": {"type": "ps", "index": "0"}
}'
83. SEPARATE TRAINING + EVALUATION
§ Separate Training and Evaluation Clusters
§ Evaluate Upon Checkpoint
§ Avoid Resource Contention
§ Let Training Continue in Parallel with Evaluation
[Diagram: separate Training, Evaluation, and Parameter Server clusters]
84. LET’S TRAIN DISTRIBUTED TENSORFLOW
§ Navigate to the following notebook:
05_Train_Model_Distributed_CPU
or 05a_Train_Model_Distributed_GPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
86. BREAK
§ Please 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
Need Help?
Use the Chat!
87. AGENDA
Part 1: Optimize TensorFlow Model Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Model Training on a Cluster
§ Optimize Training with JIT XLA Compiler
88. XLA FRAMEWORK
§ XLA: “Accelerated Linear Algebra”
§ Reduce Reliance on Custom Operators
§ Improve Execution Speed
§ Improve Memory Usage
§ Reduce Mobile Footprint
§ Improve Portability
Helps TensorFlow Stay Flexible, Yet Still Performant
89. XLA HIGH LEVEL OPTIMIZER (HLO)
§ HLO: “High Level Optimizer”
§ Compiler Intermediate Representation (IR)
§ Independent of source and target language
§ XLA Step 1 Emits Target-Independent HLO
§ XLA Step 2 Emits Target-Dependent LLVM
§ LLVM Emits Native Code Specific to Target
§ Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
90. JIT COMPILER
§ JIT: “Just-In-Time” Compiler
§ Built on XLA Framework
§ Reduce Memory Movement – Especially with GPUs
§ Reduce Overhead of Multiple Function Calls
§ Similar to Operator Fusing in Spark 2.0
§ Unroll Loops, Fuse Operators, Fold Constants, …
§ Scopes: session, device, `with jit_scope():`
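A minimal sketch (an assumed example; x, W, b are hypothetical) of session-level and scope-level JIT, using the contrib jit_scope API of the time:

import tensorflow as tf
from tensorflow.contrib.compiler import jit

# Session-wide JIT: compile all compilable ops
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)
sess = tf.Session(config=config)

# Scope-level JIT: cluster just these ops for XLA
with jit.experimental_jit_scope():
    y = tf.nn.relu(tf.matmul(x, W) + b)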
91. VISUALIZING JIT COMPILER IN ACTION
[Screenshots: timeline before JIT vs. after JIT]
Google Web Tracing Framework:
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess.run(…, options=run_options,
         run_metadata=run_metadata)

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
93. LET’S TRAIN WITH XLA CPU
§ Navigate to the following notebook:
06_Train_Model_XLA_CPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
94. LET’S TRAIN WITH XLA GPU
§ Navigate to the following notebook:
06a_Train_Model_XLA_GPU
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
95. AGENDA
Part 0: Latest PipelineAI Research
Part 1: Optimize TensorFlow Model Training
Part 2: Optimize TensorFlow Model Serving
96. AGENDA
Part 2: Optimize TensorFlow Model Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
97. AOT COMPILER
§ Standalone, Ahead-Of-Time (AOT) Compiler
§ Built on XLA framework
§ tfcompile
§ Creates executable with minimal TensorFlow Runtime needed
§ Includes only dependencies needed by subgraph computation
§ Creates functions with feeds (inputs) and fetches (outputs)
§ Packaged as cc_library header and object files to link into your app
§ Commonly used for mobile device inference graph
§ Currently, only CPU x86-64 and ARM are supported - no GPU
98. GRAPH TRANSFORM TOOL (GTT)
§ Post-Training Optimization to Prepare for Inference
§ Remove Training-only Ops (checkpoint, drop out, logs)
§ Remove Unreachable Nodes between Given feed -> fetch
§ Fuse Adjacent Operators to Improve Memory Bandwidth
§ Fold Final Batch Norm mean and variance into Variables
§ Round Weights/Variables to improve compression (ie. 70%)
§ Quantize (FP32 -> INT8) to Speed Up Math Operations
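A minimal sketch (an assumed example; the graph and tensor names are hypothetical) of applying those transforms via the Python wrapper around the Graph Transform Tool:

from tensorflow.tools.graph_transforms import TransformGraph

optimized_graph_def = TransformGraph(
    input_graph_def,                 # a trained, frozen GraphDef (assumed)
    inputs=["x"],                    # feed
    outputs=["softmax"],             # fetch
    transforms=[
        "strip_unused_nodes",
        "remove_nodes(op=Identity)",
        "fold_constants(ignore_errors=true)",
        "fold_batch_norms",
        "quantize_weights",
    ])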
101. AFTER STRIPPING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ Results
§ Graph much simpler
§ File size much smaller
102. AFTER REMOVING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ Results
§ Pesky nodes removed
§ File size a bit smaller
103. AFTER FOLDING CONSTANTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ Results
§ Placeholders (feeds) -> Variables*
(*Why Variables and not Constants?)
104. AFTER FOLDING BATCH NORMS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ Results
§ Graph remains the same
§ File size approximately the same
105. AFTER QUANTIZING WEIGHTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ Results
§ Graph is same, file size is smaller, compute is faster
106. WEIGHT QUANTIZATION
§ FP16 and INT8 Are Smaller and Computationally Simpler
§ Weights/Variables are Constants
§ Easy to Linearly Quantize
107. LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:
07_Optimize_Model*
*Why just CPU version? Why not GPU?
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
109. ACTIVATION QUANTIZATION
§ Activations Not Known Ahead of Time
§ Depends on input, not easy to quantize
§ Requires Additional Calibration Step
§ Use a “representative” dataset
§ Per Neural Network Layer…
§ Collect histogram of activation values
§ Generate many quantized distributions with different saturation thresholds
§ Choose threshold to minimize…
KL_divergence(ref_distribution, quant_distribution)
§ Not Much Time or Data is Required (Minutes on Commodity Hardware)
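A simplified illustration (an assumed sketch in the spirit of this calibration step, not the exact production algorithm): pick the saturation threshold that minimizes the KL divergence between the reference and quantized activation distributions.

import numpy as np
from scipy.stats import entropy   # entropy(p, q) computes KL divergence

def kl_for_threshold(activations, t, bins=128):
    clipped = np.clip(activations, 0, t)
    ref_hist, _ = np.histogram(clipped, bins=bins, range=(0, t))
    # simulate INT8: snap clipped activations to 256 levels in [0, t]
    quantized = np.round(clipped / t * 255.0) / 255.0 * t
    quant_hist, _ = np.histogram(quantized, bins=bins, range=(0, t))
    return entropy(ref_hist + 1e-6, quant_hist + 1e-6)  # avoid zero bins

# collected per-layer activations (stand-in data for illustration)
activations = np.abs(np.random.randn(100000))
candidates = np.percentile(activations, [90.0, 95.0, 99.0, 99.9, 100.0])
best_threshold = min(candidates, key=lambda t: kl_for_threshold(activations, t))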
111. LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:
08_Optimize_Model_Activations
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
113. AGENDA
Part 2: Optimize TensorFlow Model Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
114. MODEL SERVING TERMINOLOGY
§ Inference
§ Only Forward Propagation through Network
§ Predict, Classify, Regress, …
§ Bundle
§ GraphDef, Variables, Metadata, …
§ Assets
§ ie. Map of ClassificationID -> String
§ {9283: "penguin", 9284: "bridge"}
§ Version
§ Every Model Has a Version Number (Integer)
§ Version Policy
§ ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
115. TENSORFLOW SERVING FEATURES
§ Supports Auto-Scaling
§ Custom Loaders beyond File-based
§ Tune for Low-latency or High-throughput
§ Serve Diff Models/Versions in Same Process
§ Customize Models Types beyond HashMap and TensorFlow
§ Customize Version Policies for A/B and Bandit Tests
§ Support Request Draining for Graceful Model Updates
§ Enable Request Batching for Diff Use Cases and HW
§ Supports Optimized Transport with GRPC and Protocol Buffers
116. PREDICTION SERVICE
§ Predict (Original, Generic)
§ Input: List of Tensor
§ Output: List of Tensor
§ Classify
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (class_label: String, score: float)
§ Regress
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (label: String, score: float)
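A minimal sketch (an assumed example; host, port, and tensor names are hypothetical) of calling Predict over gRPC with the beta stubs of the time:

import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

channel = implementations.insecure_channel("localhost", 8500)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "mnist"
request.inputs["images"].CopyFrom(
    tf.contrib.util.make_tensor_proto(image, shape=[1, 784]))  # image assumed

response = stub.Predict(request, 10.0)   # 10-second timeout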
118. MULTI-HEADED INFERENCE
§ Inputs Pass Through Model One Time
§ Model Returns Multiple Predictions:
1. Human-readable prediction (ie. “penguin”, “church”,…)
2. Final layer of scores (float vector)
§ Final Layer of floats Pass to the Next Model in Ensemble
§ Optimizes Bandwidth, CPU/GPU, Latency, Memory
§ Enables Complex Model Composing and Ensembling
119. BUILD YOUR OWN MODEL SERVER
§ Adapt gRPC (Google) <-> HTTP (REST of the World)
§ Perform Batch Inference vs. Request/Response
§ Handle Requests Asynchronously
§ Support Mobile, Embedded Inference
§ Customize Request Batching
§ Add Circuit Breakers, Fallbacks
§ Control Latency Requirements
§ Reduce Number of Moving Parts
#include "tensorflow_serving/model_servers/server_core.h"

using tensorflow::serving::ServerCore;

// minimal MyTensorFlowModelServer
int main() {
  ServerCore::Options options;
  // set options (model name, path, etc.)
  std::unique_ptr<ServerCore> core;
  TF_CHECK_OK(
      ServerCore::Create(std::move(options), &core));
  return 0;
}

Compile and Link with libtensorflow.so
120. RUNTIME OPTION: NVIDIA TENSOR-RT
§ Post-Training Model Optimizations
§ Specific to Nvidia GPU
§ Similar to TF Graph Transform Tool
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
121. AGENDA
Part 2: Optimize TensorFlow Model Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
122. SAVED MODEL FORMAT
§ Navigate to the following notebook:
09_Deploy_Optimized_Model
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
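For reference, a minimal sketch (an assumed example; paths and tensor names are hypothetical) of exporting a versioned SavedModel that TensorFlow Serving can load:

import tensorflow as tf

# the version number is the final path component, e.g. ./export/mnist/1
builder = tf.saved_model.builder.SavedModelBuilder("./export/mnist/1")

signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={"images": x},    # x, y: the model's input/output tensors (assumed)
    outputs={"scores": y})

builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={"predict": signature})
builder.save()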
123. AGENDA
Part 2: Optimize TensorFlow Model Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
124. REQUEST BATCH TUNING
§ max_batch_size
§ Enables throughput/latency tradeoff
§ Bounded by RAM
§ batch_timeout_micros
§ Defines batch time window, latency upper-bound
§ Bounded by RAM
§ num_batch_threads
§ Defines parallelism
§ Bounded by CPU cores
§ max_enqueued_batches
§ Defines queue upper bound, throttling
§ Bounded by RAM
Reaching either threshold will trigger a batch.
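For illustration (assumed values), these knobs map onto tensorflow_model_server's batching config, passed as a protobuf text file:

tensorflow_model_server --port=8500
  --model_name=mnist
  --model_base_path=/models/mnist
  --enable_batching=true
  --batching_parameters_file=batching.conf

# batching.conf
max_batch_size { value: 128 }
batch_timeout_micros { value: 5000 }
num_batch_threads { value: 8 }
max_enqueued_batches { value: 1000000 }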
125. ADVANCED BATCHING & SERVING TIPS
§ Batch Just the GPU/TPU Portions of the Computation Graph
§ Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops
§ Distribute Large Models Into Shards Across TensorFlow Model Servers
§ Batch RNNs Used for Sequential and Time-Series Data
§ Find Best Batching Strategy For Your Data Through Experimentation
§ BasicBatchScheduler: Homogeneous requests (ie. Regress or Classify)
§ SharedBatchScheduler: Mixed requests, multi-step, ensemble predict
§ StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads
§ Serve Only One (1) Model Inside One (1) TensorFlow Serving Process
§ Much Easier to Debug, Tune, Scale, and Manage Models in Production.
126. LET’S DEPLOY OPTIMIZED MODEL
§ Navigate to the following notebook:
10_Optimize_Model_Server
§ https://github.com/PipelineAI/pipeline/tree/master/
gpu.ml/notebooks
127. AGENDA
Part 0: Latest PipelineAI Research
Part 1: Optimize TensorFlow Model Training
Part 2: Optimize TensorFlow Model Serving
128. THANK YOU!! QUESTIONS?
§ https://github.com/PipelineAI/pipeline/
§ Please Star 🌟 this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline/tree/master/gpu.ml
Contact Me
chris@pipeline.ai
@cfregly