HIGH PERFORMANCE TENSORFLOW IN
PRODUCTION WITH KUBERNETES AND GPUS
STRATA CONFERENCE, SAN JOSE MARCH 2018
CHRIS FREGLY
FOUNDER @ PIPELINE.AI
KEY TAKE-AWAYS
With PipelineAI, You Can…
§ Generate Hardware-Specific Model Optimizations
§ Deploy and Compare Models in Live Production
§ Optimize Complete AI Pipeline Across Many Models
§ Hyper-Parameter Tune Both Training & Inference
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
INTRODUCTIONS: ME
§ Chris Fregly, Founder & Engineer @PipelineAI
§ Formerly Netflix, Databricks, IBM Spark Tech
§ Founder @ Advanced Spark TensorFlow Meetup
§ Please Join Our 60,000+ Global Members!!
Contact Me
chris@pipeline.ai
@cfregly
Global Locations
* San Francisco
* Chicago
* Austin
* Washington DC
* Dusseldorf
* London
INTRODUCTIONS: YOU
§ Data Scientist, Data Engineer, Data Analyst, Data Curious
§ Want to Deploy ML/AI Models Rapidly and Safely
§ Need to Trace or Explain Model Predictions
§ Have a Decent Grasp of Computer Science Fundamentals
PIPELINE.AI IS 100% OPEN SOURCE
§ https://github.com/PipelineAI/pipeline/
§ Please Star this GitHub Repo!
§ “Each Star is Worth $1,500 in Seed Money”
- A Prominent Venture Capitalist in Silicon Valley
http://jrvis.com/red-dwarf/
PIPELINE.AI SUPPORTS ALL MAJOR MODELS
PIPELINE.AI OVERVIEW
750,000 Docker Downloads
70,000 Registered Users
60,000 Meetup Members
30,000 LinkedIn Followers
2,500 GitHub Stars
20 Enterprise Beta Users
PIPELINE.AI ANNOUNCEMENTS
http://pipeline.ai
http://community.pipeline.ai
WHY HEAVY FOCUS ON MODEL SERVING?
Model Training (100's of Training Jobs per Day):
Batch & Boring
Offline in Research Lab
Pipeline Ends at Training
No Insight into Live Production
Small Number of Data Scientists
Optimizations Are Very Well-Known

<<<

Model Serving (1,000,000's of Predictions per Sec):
Real-Time & Exciting!!
Online in Live Production
Pipeline Extends into Production
Continuous Insight into Live Production
Huuuuuuge Number of Application Users
Runtime Optimizations Not Yet Explored
CLOUD-BASED MODEL SERVING OPTIONS
§ AWS SageMaker
§ Released Nov 2017 @ re:Invent
§ Custom Docker Images for Training/Serving (ie. PipelineAI Images)
§ Distributed TensorFlow Training through Estimator API
§ Traffic Splitting for A/B Model Testing
§ Google Cloud ML Engine
§ Mostly Command-Line Based
§ Driving TensorFlow Open Source API (ie. Estimator API)
§ Azure ML
PipelineAI Supports SageMaker
*and*
Hybrid-Cloud Deployments
BUILD MODEL WITH THE RUNTIME
§ Package Model + Runtime into 1 Docker Image
§ Emphasizes Immutable Deployment and Infrastructure
§ Same Image Across All Environments
§ No Library or Dependency Surprises from Laptop to Production
§ Allows Tuning Model + Runtime Together
pipeline predict-server-build --model-name=mnist \
                              --model-tag=A \
                              --model-type=tensorflow \
                              --model-runtime=tfserving \
                              --model-chip=gpu \
                              --model-path=./tensorflow/mnist/
Build Local
Model Server A
RUN A LOADTEST LOCALLY!
§ Perform Mini-Load Test on Local Model Server
§ Immediate, Local Prediction Performance Metrics
§ Compare to Previous Model + Runtime Variations
§ Gain Intuition Before Push to Prod
pipeline predict-server-start --model-name=mnist \
                              --model-tag=A \
                              --memory-limit=2G

pipeline predict-http-test --model-endpoint-url=http://localhost:8080 \
                           --test-request-path=test_request.json \
                           --test-request-concurrency=1000
Start Local
LoadTest
Start Local
Model Servers
TUNE MODEL + RUNTIME TOGETHER
§ Model Training Optimizations
§ Model Hyper-Parameters (ie. Learning Rate)
§ Reduced Precision (ie. FP16 Half Precision)
§ Model Serving (Post-Train) Optimizations
§ Quantize Model Weights + Activations From 32-bit to 8-bit
§ Fuse Neural Network Layers Together
§ Model Runtime Optimizations
§ Runtime Config: Request Batch Size, etc
§ Different Runtime: TensorFlow Serving CPU/GPU, Nvidia TensorRT
DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument Code to Generate “Timelines”
§ Analyze with Google Web
Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
SERVING (POST-TRAIN) OPTIMIZATIONS
§ Prepare Model for Serving
§ Simplify Network, Reduce Size
§ Reduce Precision -> Fast Math
§ Some Tools
§ Graph Transform Tool (GTT)
§ tfcompile
After Training vs. After Optimizing!
pipeline optimize --optimization-list=['quantize_weights','tfcompile'] \
                  --model-name=mnist \
                  --model-tag=A \
                  --model-path=./tensorflow/mnist/model \
                  --model-inputs=['x'] \
                  --model-outputs=['add'] \
                  --output-path=./tensorflow/mnist/optimized_model
Linear Regression Model Size: 70MB -> 70K (!)
NVIDIA TENSOR-RT RUNTIME
§ Post-Training Model Optimizations
§ Specific to Nvidia GPUs
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
TENSORFLOW LITE RUNTIME
§ Post-Training Model Optimizations
§ Currently Supports iOS and Android
§ On-Device Prediction Runtime
§ Low-Latency, Fast Startup
§ Selective Operator Loading
§ 70KB Min - 300KB Max Runtime Footprint
§ Supports Accelerators (GPU, TPU)
§ Falls Back to CPU without Accelerator
§ Java and C++ APIs
3 DIFFERENT RUNTIMES, SAME MODEL
pipeline predict-server-build --model-name=mnist \
                              --model-tag=A \
                              --model-type=tensorflow \
                              --model-runtime=tfserving \
                              --model-chip=cpu \
                              --model-path=./tensorflow/mnist/
Build Local Model Server A

pipeline predict-server-build --model-name=mnist \
                              --model-tag=B \
                              --model-type=tensorflow \
                              --model-runtime=tfserving \
                              --model-chip=gpu \
                              --model-path=./tensorflow/mnist/
Build Local Model Server B

pipeline predict-server-build --model-name=mnist \
                              --model-tag=C \
                              --model-type=tensorflow \
                              --model-runtime=tensorrt \
                              --model-chip=gpu \
                              --model-path=./tensorflow/mnist/
Build Local Model Server C
Same Model,
Diff Runtime
PUSH IMAGE TO DOCKER REGISTRY
§ Supports All Public + Private Docker Registries
§ DockerHub, Artifactory, Quay, AWS, Google, …
§ Or Self-Hosted, Private Docker Registry
pipeline predict-server-push --model-name=mnist \
                             --model-tag=A \
                             --image-registry-url=<your-registry> \
                             --image-registry-repo=<your-repo>
Push Images to
Docker Registry
DEPLOY MODELS SAFELY TO PROD
§ Deploy from CLI or Jupyter Notebook
§ Tear-Down and Rollback Models Quickly
§ Shadow Canary: Deploy to 20% Live Traffic
§ Split Canary: Deploy to 97-2-1% Live Traffic
pipeline predict-kube-start --model-name=mnist \
                            --model-tag=A
Start Cluster A

pipeline predict-kube-start --model-name=mnist \
                            --model-tag=B
Start Cluster B

pipeline predict-kube-start --model-name=mnist \
                            --model-tag=C
Start Cluster C

pipeline predict-kube-route --model-name=mnist \
                            --model-split-tag-and-weight-dict='{"A":97, "B":2, "C":1}' \
                            --model-shadow-tag-list='[]'
Route Live Traffic
COMPARE MODELS OFFLINE & ONLINE
§ Offline, Batch Metrics
§ Validation + Training Accuracy
§ CPU + GPU Utilization
§ Online, Live Prediction Values
§ Compare Relative Precision
§ Newly-Seen, Streaming Data
§ Online, Real-Time Metrics
§ Response Time, Throughput
§ Cost ($) Per Prediction
ENSEMBLE PREDICTION AUDIT TRAIL
§ Necessary for Model Explain-ability
§ Fine-Grained Request Tracing
§ Used for Model Ensembles
REAL-TIME PREDICTION STREAMS
§ Visually Compare Real-time Predictions
Features and
Inputs
Predictions and
Confidences
Model A, Model B, Model C
PREDICTION PROFILING AND TUNING
§ Pinpoint Performance Bottlenecks
§ Fine-Grained Prediction Metrics
§ 3 Steps in Real-Time Prediction
1. transform_request()
2. predict()
3. transform_response()
SHIFT TRAFFIC TO MAX(REVENUE)
§ Shift Traffic to Winning Model with Multi-armed Bandits
LIVE, ADAPTIVE TRAFFIC ROUTING
§ A/B Tests
§ Inflexible and Boring
§ Multi-Armed Bandits
§ Adaptive and Exciting!
pipeline predict-kube-route --model-name=mnist \
                            --model-split-tag-and-weight-dict='{"A":1, "B":2, "C":97}' \
                            --model-shadow-tag-list='[]'
Route Traffic
Dynamically
SHIFT TRAFFIC TO MIN(CLOUD CO$T)
§ Based on Cost ($) Per Prediction
§ Cost Changes Throughout Day
§ Lose AWS Spot Instances
§ Google Cloud Becomes Cheaper
§ Shift Across Clouds & On-Prem
PSEUDO-CONTINUOUS TRAINING
§ Identify and Fix Borderline (Unconfident) Predictions
§ Fix Predictions Along Class Boundaries
§ Facilitate ”Human in the Loop”
§ Retrain with Newly-Labeled Data
§ Game-ify the Labeling Process
§ Path to Crowd-Sourced Labeling
CONTINUOUS MODEL TRAINING
§ The Holy Grail of Machine Learning!
§ PipelineAI Supports Continuous Model Training!
§ Kafka, Kinesis
§ Spark Streaming, Flink
§ Storm, Heron
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
SETTING UP TENSORFLOW WITH GPUS
§ Very Painful!
§ Especially inside Docker
§ Use nvidia-docker
§ Especially on Kubernetes!
§ Use the Latest Kubernetes (with Init Script Support)
§ http://pipeline.ai for GitHub + DockerHub Links
TENSORFLOW + CUDA + NVIDIA GPU
GPU HALF-PRECISION SUPPORT
§ FP32 is “Full Precision”, FP16 is “Half Precision”
§ Two (2) FP16's in Each FP32 GPU Core for 2x Throughput!
§ Lower Precision is OK for Approx. Deep Learning Use Cases
§ The Network Matters Most – Not Individual Neuron Accuracy
§ Supported by Pascal P100 (2016) and Volta V100 (2017)
Set the following on GPUs with Compute Capability 5.3+:
TF_FP16_MATMUL_USE_FP32_COMPUTE=0
TF_FP16_CONV_USE_FP32_COMPUTE=0
TF_XLA_FLAGS=--xla_enable_fast_math=1
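A minimal mixed-precision sketch (not from the original slides): cast inputs and weights to FP16 for the compute-heavy matmul, keep the result in FP32; x and W are illustrative names.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1024])   # hypothetical input
W = tf.Variable(tf.truncated_normal([1024, 1024]))   # hypothetical weights

# Cast to FP16 for the heavy op, cast back for numerically-sensitive steps
y = tf.cast(tf.matmul(tf.cast(x, tf.float16),
                      tf.cast(W, tf.float16)),
            tf.float32)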
VOLTA V100 (2017) VS. PASCAL P100 (2016)
§ 84 Streaming Multiprocessors (SM’s)
§ 5,376 GPU Cores
§ 672 Tensor Cores (ie. Google TPU)
§ Mixed FP16/FP32 Precision
§ Matrix Dims Should be Multiples of 8
§ More Shared Memory
§ New L0 Instruction Cache
§ Faster L1 Data Cache
§ V100 vs. P100 Performance
§ 12x Training, 6x Inference
FP32 VS. FP16 ON AWS GPU INSTANCES
FP16 Half Precision
87.2 T ops/second for p3 Volta V100
4.1 T ops/second for g3 Tesla M60
1.6 T ops/second for p2 Tesla K80
FP32 Full Precision
15.4 T ops/second for p3 Volta V100
4.0 T ops/second for g3 Tesla M60
3.3 T ops/second for p2 Tesla K80
WHAT ABOUT GOOGLE CLOUD?
§ Currently Supports the Following:
§ Tesla K80
§ Pascal P100
§ Volta V100 Coming Soon?
§ TPUs (Only in Google Cloud)
§ Attach GPUs to CPU Instances
§ Similar to AWS Elastic GPU, except less confusing
V100 AND CUDA 9
§ Independent Thread Scheduling - Finally!!
§ Similar to CPU fine-grained thread synchronization semantics
§ Allows GPU to yield execution of any thread
§ Still Optimized for SIMT (Same Instruction Multi-Thread)
§ SIMT units automatically scheduled together
§ Explicit Synchronization
New in CUDA 9 (P100 -> V100): Thread Cooperative Groups
https://devblogs.nvidia.com/cooperative-groups/
GPU CUDA PROGRAMMING
§ Barbaric, But Fun
§ Must Know Hardware Very Well
§ Hardware Changes are Painful
§ Use the Profilers & Debuggers
CUDA STREAMS
§ Asynchronous I/O Transfer
§ Overlap Compute and I/O
§ Keep GPUs Saturated!
§ Used Heavily by TensorFlow
CUDA SHARED AND UNIFIED MEMORY
PYCUDA AND NUMBA
§ https://devblogs.nvidia.com/numba-python-cuda-acceleration/
§ https://devblogs.nvidia.com/seven-things-numba/
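For flavor, a tiny Numba CUDA kernel of the kind those posts walk through; a sketch assuming numba and a CUDA-capable GPU are installed.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)          # global thread index
    if i < out.size:          # guard against out-of-range threads
        out[i] = x[i] + y[i]

n = 1024
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.zeros_like(x)
add_kernel[(n + 255) // 256, 256](x, y, out)   # (blocks, threads-per-block)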
LET’S SEE WHAT THIS THING CAN DO!
§ Navigate to the following notebook:
01a_Explore_GPU
01b_Explore_Numba
§ https://github.com/PipelineAI/notebooks
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays
§ ie. Scalar, Vector, Matrix
§ Operations: MatMul, Add, SummaryLog,…
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed Inputs into Placeholder
§ Fetches: Fetch Output from Operation
§ Variables: What We Learn Through Training
§ aka “Weights”, “Parameters”
§ Devices: Hardware Device (GPU, CPU, TPU, ...)
(Diagram: User Feeds Inputs -> TensorFlow Performs Operations, Flows Tensors,
and Trains Variables -> User Fetches Outputs)
with tf.device("/cpu:0,/gpu:15"):
TENSORFLOW SESSION
Session
graph: GraphDef
Variables:
“W” : 0.328
“b” : -1.407
Variables are
Randomly
Initialized,
then
Periodically
Checkpointed
GraphDef is
Created During
Training, then
Frozen for
Inference
TENSORFLOW GRAPH EXECUTION
§ Lazy Execution by Default
§ Similar to Spark
§ Eager Execution Now Supported (TensorFlow 1.4+)
§ Similar to PyTorch
§ "Linearize” Execution to Minimize RAM Usage
§ Useful on Single GPU with Limited RAM
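A quick illustrative sketch of the difference (the eager import path is the TF 1.4-era contrib API):
import tensorflow as tf

# Lazy (default): building the graph computes nothing yet
c = tf.constant(2) + tf.constant(3)   # just a graph node
with tf.Session() as sess:
    print(sess.run(c))                # 5, computed only at run()

# Eager (TensorFlow 1.4+): ops execute immediately, PyTorch-style
# import tensorflow.contrib.eager as tfe
# tfe.enable_eager_execution()        # must be called at program startup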
OPERATION PARALLELISM
§ Inter-Op (Between-Op) Parallelism
§ By default, TensorFlow runs multiple ops in parallel
§ Useful for low core and small memory/cache envs
§ Set to one (1)
§ Intra-Op (Within-Op) Parallelism
§ Different threads can use same set of data in RAM
§ Useful for compute-bound workloads (CNNs)
§ Set to # of cores (>=2)
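Both knobs live on the Session config; a sketch with illustrative values:
import tensorflow as tf

config = tf.ConfigProto(
    inter_op_parallelism_threads=1,   # run ops one-at-a-time (low-core envs)
    intra_op_parallelism_threads=8)   # threads within an op (# of cores)
sess = tf.Session(config=config)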
TENSORFLOW MODEL
§ MetaGraph
§ Combines GraphDef and Metadata
§ GraphDef
§ Architecture of your model (nodes, edges)
§ Metadata
§ Asset: Accompanying assets to your model
§ SignatureDef: Maps external to internal tensors
§ Variables
§ Stored separately during training (checkpoint)
§ Allows training to continue from any checkpoint
§ Variables are “frozen” into Constants when preparing for inference
(Diagram: a MetaGraph packages the GraphDef (x, W, mul, add, b) with Metadata
(Assets, SignatureDef, Tags, Version) and Variables: "W": 0.328, "b": -1.407)
EXTEND EXISTING DATA PIPELINES
§ Data Processing
§ HDFS/Hadoop
§ Spark
§ Containers
§ Docker
§ Schedulers
§ Kubernetes
§ Mesos
<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>tensorflow-hadoop</artifactId>
</dependency>
https://github.com/tensorflow/ecosystem
KUBERNETES AND SPARK 2.3
§ Kubernetes-Native
§ Schedule Spark Workers
# Submit Spark Job to Kubernetes Cluster
bin/spark-submit \
  --master k8s://https://xx.yy.zz.ww \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.driver.pod.name=spark-pi-driver \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

# View Kubernetes Resources
kubectl get pods -l 'spark-role in (driver, executor)' -w

# View Driver Logs in Real-Time
kubectl logs -f spark-pi-driver

http://blog.kubernetes.io/2018/03/apache-spark-23-with-native-kubernetes.html
http://community.pipeline.ai
TENSORFLOW + SPARK OPTIONS
§ TensorFlow on Spark (Yahoo!)
§ TensorFrames <-Dead Project->
§ Separate Clusters for Spark and TensorFlow
§ Spark: Boring Batch ETL
§ TensorFlow: Exciting AI Model Training and Serving
§ Hand-Off Point is S3, HDFS, Google Cloud Storage
TENSORFLOW + KAFKA
§ TensorFlow Dataset API Now Supports Kafka!!
from tensorflow.contrib.kafka.python.ops import kafka_dataset_ops

repeat_dataset = kafka_dataset_ops.KafkaDataset(topics,
                                                group="test",
                                                eof=True).repeat(num_epochs)
batch_dataset = repeat_dataset.batch(batch_size)
…
TO UNDERSTAND TENSORFLOW I/O…
§ TFRecord File Format
§ TensorFlow Python and C++ Dataset API
§ Python Module and Packaging
§ Comfort with Python’s Lack of Strong Typing
§ C++ Concurrency Constructs
§ Protocol Buffers
§ Old Queue API
§ GPU/CUDA Memory Tricks
…And a Lot of Coffee!
FEED TENSORFLOW TRAINING PIPELINE
§ Training is Limited by the Ingestion Pipeline
§ Number One Problem We See Today
§ Scaling GPUs Up / Out Doesn’t Help
§ GPUs are Heavily Under-Utilized
§ Use tf.dataset API for best perf
§ Efficient parallel async I/O (C++)
DON’T USE FEED_DICT!!
§ feed_dict Requires Python <-> C++ Serialization
§ Not Optimized for Production Ingestion Pipelines
§ Retrieves Next Batch After Current Batch is Done
§ Single-Threaded, Synchronous
§ CPUs/GPUs Not Fully Utilized!
§ Use Queue or Dataset APIs
§ Queues are old & complex
sess.run(train_step, feed_dict={…})
DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument Code to Generate “Timelines”
§ Analyze with Google Web
Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
QUEUES
§ More than Traditional Queue
§ Uses CUDA Streams
§ Perform I/O, Pre-processing, Cropping, Shuffling, …
§ Pull from HDFS, S3, Google Storage, Kafka, ...
§ Combine Many Small Files into Large TFRecord Files
§ Use CPUs to Free GPUs for Compute
§ Helps Saturate CPUs and GPUs
QUEUE CAPACITY PLANNING
§ batch_size
§ # examples / batch (ie. 64 jpg)
§ Limited by GPU RAM
§ num_processing_threads
§ CPU threads pull and pre-process batches of data
§ Limited by CPU Cores
§ queue_capacity
§ Limited by CPU RAM (ie. 5 * batch_size)
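A sketch of these knobs with the old Queue API's tf.train.shuffle_batch; read_and_decode is a hypothetical per-record parse function and the values are illustrative.
image, label = read_and_decode(filename_queue)  # hypothetical parse fn

images, labels = tf.train.shuffle_batch(
    [image, label],
    batch_size=64,            # limited by GPU RAM
    num_threads=8,            # num_processing_threads, limited by CPU cores
    capacity=5 * 64,          # queue_capacity, limited by CPU RAM
    min_after_dequeue=64)     # lower bound to keep shuffling effective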
TF.DTYPE
§ tf.float32, tf.int32, tf.string, etc
§ Default is usually tf.float32
§ Most TF operations support numpy natively
# Tuple of (tf.float32 scalar, tf.int32 array of 100 elements)
(tf.random_uniform([1]), tf.random_uniform([1, 100], maxval=100, dtype=tf.int32))
TF.TRAIN.FEATURE
§ Three(3) Feature Types
§ Bytes
§ Float
§ Int64
§ Actually, They Are Lists of 0..* Values of 3 Types Above
§ BytesList
§ FloatList
§ Int64List
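Constructing one of each (values are illustrative):
import tensorflow as tf

bytes_feature = tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'penguin']))
float_feature = tf.train.Feature(float_list=tf.train.FloatList(value=[0.5, 1.5]))
int64_feature = tf.train.Feature(int64_list=tf.train.Int64List(value=[9283]))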
TF.TRAIN.FEATURES
§ Map of {String -> Feature}
§ Better Name is “FeatureMap”
§ Organize Feature into Categories
§ Access a Feature Using Features['feature_name']
TF.TRAIN.FEATURELIST
§ List of 0..* Feature
§ Access Feature Using
FeatureList[0]
TF.TRAIN.FEATURELISTS
§ Map of {String -> FeatureList}
§ Better Name is “FeatureListMap”
§ Organize FeatureList into Categories
§ Access a FeatureList Using FeatureLists['feature_list_name']
TF.TRAIN.EXAMPLE
§ Key-Value Dictionary
§ String -> tf.train.Feature
§ Not a Self-Describing Format (?!)
§ Must Establish Schema Upfront by Writers and Readers
§ Must Obey the Following Conventions
§ Feature K must be of Type T in all Examples
§ Feature K can be omitted, default can be configured
§ If Feature K exists as empty, no default is applied
TF.TFRECORD
§ Contains many tf.train.Example’s
=> tf.train.Example contains many tf.train.Feature’s
=> tf.train.Feature contains BytesList, FloatList, Int64List
§ Record-Oriented Format of Binary Strings (ProtoBuffer)
§ Must Convert tf.train.Example to Serialized String
§ Use tf.train.Example.SerializeToString()
§ Used for Large Scale ML/AI Training
§ Not Meant for Random or Non-Sequential Access
§ Compression: GZIP, ZLIB
uint64 length
uint32 masked_crc32_of_length
byte data[length]
uint32 masked_crc32_of_data
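Writing a compressed TFRecord file, for example (a sketch; assumes example is a populated tf.train.Example):
options = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.GZIP)
with tf.python_io.TFRecordWriter('train.tfrecords.gz', options=options) as writer:
    writer.write(example.SerializeToString())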
EMBRACE BINARY FORMATS!
§ Unreadable and Scary, But Much More Efficient
§ Better Use of Memory and Disk Cache
§ Faster Copying and Moving
§ Smaller on the Wire
CONVERTING MNIST DATA TO TFRECORD
def convert_to_tfrecord(data, name):
    images = data.images
    labels = data.labels
    num_examples = data.num_examples
    rows = images.shape[1]
    cols = images.shape[2]
    depth = images.shape[3]
    filename = os.path.join(FLAGS.directory, name + '.tfrecords')
    with tf.python_io.TFRecordWriter(filename) as writer:
        for index in range(num_examples):
            image_raw = images[index].tostring()
            example = tf.train.Example(
                features=tf.train.Features(
                    feature={'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[rows])),
                             'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[cols])),
                             'depth': tf.train.Feature(int64_list=tf.train.Int64List(value=[depth])),
                             'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[labels[index]])),
                             'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_raw]))
                            }))
            writer.write(example.SerializeToString())

Note the tf.python_io.TFRecordWriter
READING TF.TFRECORD’S
§ tf.data.TFRecordDataset <- Preferred (Dataset API)
§ tf.TFRecordReader() <- Not Preferred (Queue API)
§ tf.python_io.tf_record_iterator <- Preferred
§ Used as Python Generator
for serialized_example in tf.python_io.tf_record_iterator(filename):
    example = tf.train.Example()
    example.ParseFromString(serialized_example)
    image_raw = example.features.feature['image_raw'].bytes_list.value
    height = example.features.feature['height'].int64_list.value[0]
    …
DE-SERIALIZING TF.TFRECORD’S
feature_map = {'height': tf.FixedLenFeature([], tf.int64),
               'width': tf.FixedLenFeature([], tf.int64),
               'depth': tf.FixedLenFeature([], tf.int64),
               'label': tf.FixedLenFeature([], tf.int64),
               'image_raw': tf.FixedLenFeature([], tf.string)}
deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)

# Cast height from int64 to int32
height = tf.cast(deserialized_features['height'], tf.int32)
…
# Convert raw image from string to float32
image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
MORE TF.TRAIN.FEATURE CONSTRUCTS
§ tf.VarLenFeature
§ tf.FixedLenFeature, tf.FixedLenSequenceFeature
§ tf.SparseFeature
feature_map = {'height': tf.FixedLenFeature((), tf.int64, …),
               …
               'image_raw': tf.VarLenFeature(tf.string)}
deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)

# Cast height from int64 to int32
height = tf.cast(deserialized_features['height'], tf.int32)
…
# Convert raw image from string to float32
image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
TF.DATA.DATASET
tf.Tensor => tf.data.Dataset:
Dataset.from_tensors((features, labels))
Dataset.from_tensor_slices((features, labels))
TextLineDataset(filenames)

Functional Transformations:
dataset.map(lambda x: tf.decode_jpeg(x))
dataset.repeat(NUM_EPOCHS)
dataset.batch(BATCH_SIZE)

Python Generator => tf.data.Dataset:
def generator():
    while True:
        yield ...
dataset.from_generator(generator, tf.int32)

Dataset => One-Shot Iterator:
iter = dataset.make_one_shot_iterator()
next_element = iter.get_next()
while …:
    sess.run(next_element)

Dataset => Initializable Iterator:
iter = dataset.make_initializable_iterator()
sess.run(iter.initializer, feed_dict=PARAMS)
next_element = iter.get_next()
while …:
    sess.run(next_element)
TIP: Use Dataset.prefetch() and parallel version of Dataset.map()
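For example (parse_fn is a hypothetical per-record parser; the thread count is illustrative):
dataset = dataset.map(parse_fn, num_parallel_calls=4)  # parallel map (TF 1.4+)
dataset = dataset.prefetch(buffer_size=1)              # overlap input pipeline with training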
MORE TF.DATA.DATASET CONSTRUCTS
§ FixedLengthRecordDataset
§ Binary Files
§ TextLineDataset
§ CSV, JSON, XML, etc
§ TFRecordDataset
§ TFRecords
§ Iterator
“The TF Dataset Dude”
Tutorial: https://t.co/havjwJ46EY
DATASET TRANSFORMATIONS
(Table: Standard transformations vs. Custom (Contrib) transformations)
CUSTOM TF.PY_FUNC() TRANSFORMATION
§ Custom Python Function
§ Similar to Spark Python UDF (Eek!)
§ You Will Suffer a Big Performance Penalty
§ Try to Use TensorFlow-Native Operations
§ Remember, you can build your own in C++!
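If you must, a sketch of wrapping arbitrary Python in the pipeline; expect the serialization penalty described above (my_py_fn is a hypothetical function):
import numpy as np
import tensorflow as tf

def my_py_fn(x):
    return np.int64(x * 2)    # arbitrary Python/NumPy, runs in the interpreter

doubled = dataset.map(lambda x: tf.py_func(my_py_fn, [x], tf.int64))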
TF.DATA.ITERATOR TYPES
§ One Shot: Iterates Once Through the Dataset
§ Currently, best Iterator to use with Estimator API
§ Initializable: Runs iterator.initializer() Once
§ Re-Initializable: Runs iterator.initializer() Many
§ Ie. Random shuffling between iterations (epochs) of training
§ Feedable: Switch Between Different Dataset
§ Uses Feed and Placeholder to explicitly feed the iterator
§ Doesn’t require initialization when switching
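A sketch of the feedable pattern described above; train_ds and val_ds are hypothetical Datasets with matching types and shapes.
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
    handle, train_ds.output_types, train_ds.output_shapes)
next_element = iterator.get_next()

train_handle = sess.run(train_ds.make_one_shot_iterator().string_handle())
val_handle = sess.run(val_ds.make_one_shot_iterator().string_handle())

sess.run(next_element, feed_dict={handle: train_handle})  # pull from training set
sess.run(next_element, feed_dict={handle: val_handle})    # switch, no re-init needed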
TF.DATA.ITERATOR SIMPLE EXAMPLE
dataset = tf.data.Dataset.range(5)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

# Typically `result` will be the output of a model, or an optimizer's
# training operation.
result = tf.add(next_element, next_element)

sess.run(iterator.initializer)
while True:
    try:
        sess.run(result)  # => 0, 2, 4, 6, 8
    except tf.errors.OutOfRangeError:
        print('End of dataset…')
        break
TF.DATA.ITERATOR TEXT EXAMPLE
filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.TextLineDataset(filenames)
filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.flat_map(
    lambda filename: (
        tf.data.TextLineDataset(filename)
          .skip(1)
          .filter(lambda line: tf.not_equal(tf.substr(line, 0, 1), "#"))))
§ Skip 1st Header Line and Comment Lines Starting with `#`
TF.DATA.ITERATOR NUMPY EXAMPLE
# Load the training data into two NumPy arrays, for example using `np.load()`.
with np.load("/var/data/training_data.npy") as data:
    features = data["features"]
    labels = data["labels"]
# Assume that each row of `features` corresponds to the same row as `labels`.
assert features.shape[0] == labels.shape[0]
features_placeholder = tf.placeholder(features.dtype, features.shape)
labels_placeholder = tf.placeholder(labels.dtype, labels.shape)
dataset = tf.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder))
# …Your Dataset Transformations…
iterator = dataset.make_initializable_iterator()
sess.run(iterator.initializer, feed_dict={features_placeholder: features,
labels_placeholder: labels})
TF.DATA.ITERATOR TFRECORD EXAMPLE
filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(...) # Parse the record into tensors.
dataset = dataset.repeat() # Repeat the input indefinitely.
dataset = dataset.batch(32) # Batches of size 32
iterator = dataset.make_initializable_iterator()
# You can feed the initializer with the appropriate filenames for the current
# phase of execution, e.g. training vs. validation.
# Initialize `iterator` with training data.
training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
sess.run(iterator.initializer, feed_dict={filenames: training_filenames})
# Initialize `iterator` with validation data.
validation_filenames = ["/var/data/validation1.tfrecord", ...]
sess.run(iterator.initializer, feed_dict={filenames: validation_filenames})
FUTURE OF DATASET API
§ Replaces Queue API
§ More Functional Operators
§ Automatic GPU Data Staging
§ Under-utilized GPUs Assisting with Data Ingestion
§ Advanced, RL-based Device Placement Strategies
TF.ESTIMATOR.ESTIMATOR (1/2)
§ Supports Keras!
§ Unified API for Local + Distributed
§ Provide Clear Path to Production
§ Enable Rapid Model Experiments
§ Provide Flexible Parameter Tuning
§ Enable Downstream Optimizing & Serving Infra(structure)
§ Nudge Users to Best Practices Through Opinions
§ Provide Hooks/Callbacks to Override Opinions
TF.ESTIMATOR.ESTIMATOR (2/2)
§ “Train-to-Serve” Design
§ Create Custom Estimator or Re-Use Canned Estimator
§ Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict)
§ Hooks for All Phases of Model Training and Evaluation
§ Load Input: input_fn()
§ Train: model_fn() and train()
§ Evaluate: eval_fn() and evaluate()
§ Performance Metrics: Loss, Accuracy, …
§ Save and Export: export_savedmodel()
§ Predict: predict() <- Uses the slow sess.run()
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/
TF.CONTRIB.LEARN.EXPERIMENT
§ Easier-to-Use Distributed TensorFlow
§ Same API for Local and Distributed
§ Combines Estimator with input_fn()
§ Used for Training, Evaluation, & Hyper-Parameter Tuning
§ Distributed Training Defaults to Data-Parallel & Async
§ Cluster Configuration is Fixed at Start of Training Job
§ No Auto-Scaling Allowed, but That’s OK for Training
§ Note: This is Likely to be Deprecated Soon
ESTIMATOR + EXPERIMENT CONFIGS
§ TF_CONFIG
§ Special environment variable for config
§ Defines ClusterSpec in JSON incl. master, workers, PS’s
§ Distributed mode: '{"environment":"cloud"}'
§ Local mode: '{"environment":"local", "task":{"type":"worker"}}'
§ RunConfig: Defines checkpoint interval, output directory, …
§ HParams: Hyper-parameter tuning parameters and ranges
§ learn_runner creates RunConfig before calling run() & tune()
§ schedule is set based on {”task”:{”type”:…}}
TF_CONFIG='{
  "environment": "cloud",
  "cluster": {
    "master": ["worker0:2222"],
    "worker": ["worker1:2222"],
    "ps": ["ps0:2222"]
  },
  "task": {"type": "ps", "index": "0"}
}'
ESTIMATOR + KERAS
§ Distributed TensorFlow (Estimator) + Easy to Use (Keras)
§ tf.keras.estimator.model_to_estimator()
# Instantiate a Keras inception v3 model.
keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None)
# Compile model with the optimizer, loss, and metrics you'd like to train with.
keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9),
                           loss='categorical_crossentropy',
                           metrics=['accuracy'])
# Create an Estimator from the compiled Keras model.
est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3)
# Treat the derived Estimator as you would any other Estimator. For example,
# the following derived Estimator calls the train method:
est_inception_v3.train(input_fn=my_training_set, steps=2000)
“CANNED” ESTIMATORS
§ Commonly-Used Estimators
§ Pre-Tested and Pre-Tuned
§ DNNClassifier, TensorForestEstimator
§ Always Use Canned Estimators If Possible
§ Reduce Lines of Code, Complexity, and Bugs
§ Use FeatureColumn to Define & Create Features
Custom vs. Canned
@ Google, August 2017
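For example, a canned DNNClassifier for MNIST-style input (shapes and hidden units are illustrative; train_input_fn as defined elsewhere):
feature_columns = [tf.feature_column.numeric_column('x', shape=[784])]

estimator = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                       hidden_units=[256, 64],
                                       n_classes=10)
estimator.train(input_fn=train_input_fn, steps=2000)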
ESTIMATOR + DATASET API
def input_fn():
    def generator():
        while True:
            yield ...
    my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)
    # A one-shot iterator automatically initializes itself on first use.
    iter = my_dataset.make_one_shot_iterator()
    # The return value of get_next() matches the dataset element type.
    images, labels = iter.get_next()
    return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)
estimator.train(input_fn=input_fn, …)
OPTIMIZER + ESTIMATOR API + TPU’S
optimizer = tpu_optimizer.CrossShardOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=…))
train_op = optimizer.minimize(loss)
estimator_spec = tf.estimator.EstimatorSpec(train_op=train_op, loss=…)

run_config = tpu_config.RunConfig()
estimator = tpu_estimator.TPUEstimator(model_fn=model_fn,
                                       config=run_config)
estimator.train(input_fn=input_fn, num_epochs=10, …)
https://www.tensorflow.org/programmers_guide/using_tpu
TF.CONTRIB.LEARN.HEAD (OBJECTIVES)
§ Single-Objective Estimator
§ Single classification prediction
§ Multi-Objective Estimator
§ One (1) classification prediction
§ One (1) final layer to feed into next model
§ Multiple Heads Used to Ensemble Models
§ Treats neural network as a feature engineering step
§ Supported by TensorFlow Serving
TF.LAYERS
§ Standalone Layer or Entire Sub-Graphs
§ Functions of Tensor Inputs & Outputs
§ Mix and Match with Operations
§ Assumes 1st Dimension is Batch Size
§ Handles One (1) to Many (*) Inputs
§ Metrics are Layers
§ Loss Metric (Per Mini-Batch)
§ Accuracy and MSE (Across Mini-Batches)
TF.FEATURE_COLUMN
§ Used by Canned Estimator
§ Declaratively Specify Training Inputs
§ Converts Sparse to Dense Tensors
§ Sparse Features: Query Keyword, ProductID
§ Dense Features: One-Hot, Multi-Hot
§ Wide/Linear: Use Feature-Crossing
§ Deep: Use Embeddings
TF.FEATURE_COLUMN EXAMPLE
§ Continuous + One-Hot + Embedding
deep_columns = [
age,
education_num,
capital_gain,
capital_loss,
hours_per_week,
tf.feature_column.indicator_column(workclass),
tf.feature_column.indicator_column(education),
tf.feature_column.indicator_column(marital_status),
tf.feature_column.indicator_column(relationship),
# To show an example of embedding
tf.feature_column.embedding_column(occupation, dimension=8),
]
FEATURE CROSSING
§ Create New Features by Combining Existing Features
§ Limitation: Combinations Must Exist in Training Dataset
base_columns = [
education, marital_status, relationship, workclass, occupation, age_buckets
]
crossed_columns = [
tf.feature_column.crossed_column(
['education', 'occupation'], hash_bucket_size=1000),
tf.feature_column.crossed_column(
['age_buckets', 'education', 'occupation'], hash_bucket_size=1000)
]
SEPARATE TRAINING + EVALUATION
§ Separate Training and Evaluation Clusters
§ Evaluate Upon Checkpoint
§ Avoid Resource Contention
§ Training Continues in Parallel with Evaluation
(Diagram: separate Training, Evaluation, and Parameter Server clusters)
BATCH (RE-)NORMALIZATION (2015, 2017)
§ Each Mini-Batch May Have Wildly Different Distributions
§ Normalize per Batch (and Layer)
§ Faster Training, Learns Quicker
§ Final Model is More Accurate
§ TensorFlow is Already on its 2nd-Generation Batch Normalization Algorithm
§ First-Class Support for Fusing Batch Norm Layers
§ Final mean + variance Are Folded Into Graph Later
-- (Almost) Always Use Batch (Re-)Normalization! --
z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth/channels]))
beta = tf.Variable(tf.zeros([depth/channels]))
bn = tf.nn.batch_normalization(a, a_mean, a_var,
                               beta, scale, 0.001)
DROPOUT (2014)
§ Training Technique
§ Prevents Overfitting
§ Helps Avoid Local Minima
§ Inherent Ensembling Technique
§ Creates and Combines Different Neural Architectures
§ Expressed as Probability Percentage (ie. 50%)
§ Boost Other Weights During Validation & Prediction
(Diagram: 50% dropout during the training phase; weights boosted and 0% dropout
during the validation & prediction phases)
BATCH NORM, DROPOUT + ESTIMATOR API
§ Must Specify Eval or Training Mode with Estimator API
§ These Will Behave Differently Depending on the Mode
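A sketch of a mode-aware model_fn fragment (layer shapes are illustrative):
def model_fn(features, labels, mode):
    training = (mode == tf.estimator.ModeKeys.TRAIN)
    net = tf.layers.dense(features['x'], 256, activation=tf.nn.relu)
    net = tf.layers.batch_normalization(net, training=training)  # batch stats only while training
    net = tf.layers.dropout(net, rate=0.5, training=training)    # dropout off at eval/predict
    logits = tf.layers.dense(net, 10)
    # ...build loss, train_op, and return a tf.estimator.EstimatorSpec as usual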
SAVED MODEL FORMAT
§ Different Format than Traditional Exporter
§ Contains Checkpoints, 1..* MetaGraph’s, and Assets
§ Export Manually with SavedModelBuilder
§ Estimator.export_savedmodel()
§ Hooks to Generate SignatureDef
§ Use saved_model_cli to Verify
§ Used by TensorFlow Serving
§ New Standard Export Format? (Catching on Slowly…)
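A sketch of the manual export path (predict_signature as built with signature_def_utils; the export directory is illustrative):
builder = tf.saved_model.builder.SavedModelBuilder('./export/1')
builder.add_meta_graph_and_variables(
    sess,
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict': predict_signature})
builder.save()

# Then verify from the shell:
#   saved_model_cli show --dir ./export/1 --all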
TENSORFLOW DEBUGGER
§ Step through Operations
§ Inspect Inputs and Outputs
§ Wrap Session in Debug Session
from tensorflow.python import debug as tf_debug

sess = tf.Session(config=config)
sess = tf_debug.LocalCLIDebugWrapperSession(sess)

https://www.tensorflow.org/programmers_guide/debugger
LET’S DEBUG A MODEL
§ Navigate to the following notebook:
04_Debug_Model
§ https://github.com/PipelineAI/notebooks
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
SINGLE NODE, MULTI-GPU TRAINING
§ cpu:0
§ By default, all CPUs
§ Requires extra config to target a CPU
§ gpu:0..n
§ Each GPU has a unique id
§ TF usually prefers a single GPU
§ xla_cpu:0, xla_gpu:0..n
§ “JIT Compiler Device”
§ Hints TensorFlow to attempt JIT Compile
with tf.device("/cpu:0"):
with tf.device("/gpu:0"):
with tf.device("/gpu:1"):
DISTRIBUTED, MULTI-NODE TRAINING
§ TensorFlow Automatically Inserts Send and Receive Ops into Graph
§ Parameter Server Synchronously Aggregates Updates to Variables
§ Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS
(Diagram: Single Node: one worker with 1, 2, or 4 GPUs vs. Multiple Nodes:
Worker0, Worker1, Worker2, each with gpu0-gpu3 and a Parameter Server)
DATA PARALLEL VS. MODEL PARALLEL
§ Data Parallel (“Between-Graph Replication”)
§ Send exact same model to each device
§ Each device operates on partition of data
§ ie. Spark sends same function to many workers
§ Each worker operates on their partition of data
§ Model Parallel (“In-Graph Replication”)
§ Send different partition of model to each device
§ Each device operates on all data
§ Difficult, but required for larger models with lower-memory GPUs
SYNCHRONOUS VS. ASYNCHRONOUS
§ Synchronous
§ Nodes compute gradients
§ Nodes update Parameter Server (PS)
§ Nodes sync on PS for latest gradients
§ Asynchronous
§ Some nodes delay in computing gradients
§ Nodes don’t update PS
§ Nodes get stale gradients from PS
§ May not converge due to stale reads!
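For synchronous training, TensorFlow ships a wrapper optimizer; a sketch (num_workers and global_step are assumed defined):
opt = tf.train.SyncReplicasOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=0.01),
    replicas_to_aggregate=num_workers,   # gradients to collect before applying
    total_num_replicas=num_workers)
train_op = opt.minimize(loss, global_step=global_step)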
CHIEF WORKER
§ Chief Defaults to Worker Task 0
§ Task 0 is guaranteed to exist
§ Performs Maintenance Tasks
§ Writes log summaries
§ Instructs PS to checkpoint vars
§ Performs PS health checks
§ (Re-)Initialize variables at (re-)start of training
NODE AND PROCESS FAILURES
§ Checkpoint to Persistent Storage (HDFS, S3)
§ Use MonitoredTrainingSession and Hooks
§ Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos)
§ Understand Failure Modes and Recovery States
Stateless, Not Bad: Training Continues
Stateful, Bad: Training Must Stop
Dios Mio! Long Night Ahead…
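A sketch of failure-tolerant training with MonitoredTrainingSession, as mentioned above (master and is_chief come from your ClusterSpec; the checkpoint path is illustrative):
with tf.train.MonitoredTrainingSession(
        master=master,                     # from ClusterSpec / TF_CONFIG
        is_chief=is_chief,                 # chief restores/initializes variables
        checkpoint_dir='hdfs://namenode/checkpoints/mnist') as mon_sess:
    while not mon_sess.should_stop():
        mon_sess.run(train_op)             # auto-recovers from PS/worker restarts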
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Train, Inspect, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
XLA FRAMEWORK
§ XLA: “Accelerated Linear Algebra”
§ Reduce Reliance on Custom Operators
§ Intermediate Representation used by Hardware Vendors
§ Improve Portability
§ Increase Execution Speed
§ Decrease Memory Usage
§ Decrease Mobile Footprint
Helps TensorFlow Be Flexible AND Performant!!
XLA HIGH LEVEL OPTIMIZER (HLO)
§ HLO: “High Level Optimizer”
§ Compiler Intermediate Representation (IR)
§ Independent of source and target language
§ XLA Step 1 Emits Target-Independent HLO
§ XLA Step 2 Emits Target-Dependent LLVM
§ LLVM Emits Native Code Specific to Target
§ Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
JIT COMPILER
§ JIT: “Just-In-Time” Compiler
§ Built on XLA Framework
§ Reduce Memory Movement – Especially with GPUs
§ Reduce Overhead of Multiple Function Calls
§ Similar to Spark Operator Fusing in Spark 2.0
§ Unroll Loops, Fuse Operators, Fold Constants, …
§ Scopes: session, device, with jit_scope():
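Sketches of the session-level and op-level knobs (the contrib path is the TF 1.x API):
# Session scope: JIT-compile everything it can
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# Op scope: hint XLA to fuse just this subgraph
jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
with jit_scope():
    y_pred = tf.matmul(x, W) + b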
VISUALIZING JIT COMPILER IN ACTION
(Diagram: execution timeline before JIT vs. after JIT)
Google Web Tracing Framework:
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.SOFTWARE_TRACE)
run_metadata = tf.RunMetadata()
sess.run(…, options=run_options, run_metadata=run_metadata)

trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
VISUALIZING FUSING OPERATORS
pip install graphviz
dot -Tpng \
    /tmp/hlo_graph_1.w5LcGs.dot \
    -o hlo_graph_1.png
GraphViz:
http://www.graphviz.org
hlo_*.dot files generated by XLA
LET’S TRAIN WITH XLA CPU
§ Navigate to the following notebook:
06_Train_Model_XLA_CPU
§ https://github.com/PipelineAI/notebooks
LET’S TRAIN WITH XLA GPU
§ Navigate to the following notebook:
06a_Train_Model_XLA_GPU
§ https://github.com/PipelineAI/notebooks
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
WE ARE NOW…
…OPTIMIZING Models
AFTER Model Training
TO IMPROVE Model Serving
PERFORMANCE!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
AOT COMPILER
§ Standalone, Ahead-Of-Time (AOT) Compiler
§ Built on XLA framework
§ tfcompile
§ Creates executable with minimal TensorFlow Runtime needed
§ Includes only dependencies needed by subgraph computation
§ Creates functions with feeds (inputs) and fetches (outputs)
§ Packaged as cc_library header and object files to link into your app
§ Commonly used for mobile device inference graph
§ Currently, only CPU x86-64 and ARM are supported - no GPU
GRAPH TRANSFORM TOOL (GTT)
§ Post-Training Optimization to Prepare for Inference
§ Remove Training-only Ops (checkpoint, drop out, logs)
§ Remove Unreachable Nodes between Given feed -> fetch
§ Fuse Adjacent Operators to Improve Memory Bandwidth
§ Fold Final Batch Norm mean and variance into Variables
§ Round Weights/Variables to improve compression (ie. 70%)
§ Quantize (FP32 -> INT8) to Speed Up Math Operations
AFTER TRAINING, BEFORE OPTIMIZATION
(Diagram: the full training graph: User Feeds Inputs -> TensorFlow Performs
Operations, Flows Tensors, and Trains Variables -> User Fetches Outputs.
Which parts are still needed for inference?!)
POST-TRAINING GRAPH TRANSFORMS
transform_graph \
  --in_graph=unoptimized_cpu_graph.pb \   <- Original Graph
  --out_graph=optimized_cpu_graph.pb \    <- Transformed Graph
  --inputs='x_observed:0' \               <- Feed (Input)
  --outputs='Add:0' \                     <- Fetch (Output)
  --transforms='                          <- List of Transforms
    strip_unused_nodes
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms
    quantize_weights
    quantize_nodes'
AFTER STRIPPING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ Results
§ Graph much simpler
§ File size much smaller
AFTER REMOVING UNUSED NODES
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ Results
§ Pesky nodes removed
§ File size a bit smaller
AFTER FOLDING CONSTANTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ Results
§ Placeholders (feeds) -> Variables*
(*Why Variables and not Constants?)
AFTER FOLDING BATCH NORMS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ Results
§ Graph remains the same
§ File size approximately the same
AFTER QUANTIZING WEIGHTS
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ Results
§ Graph is same, file size is smaller, compute is faster
WEIGHT QUANTIZATION
§ FP16 and INT8 Are Smaller and Computationally Simpler
§ Weights/Variables are Constants
§ Easy to Linearly Quantize
BUT WAIT, THERE’S MORE!
ACTIVATION QUANTIZATION
§ Activations Not Known Ahead of Time
§ Depends on input, not easy to quantize
§ Requires Additional Calibration Step
§ Use a “representative” dataset
§ Per Neural Network Layer…
§ Collect histogram of activation values
§ Generate many quantized distributions with different saturation thresholds
§ Choose threshold to minimize…
KL_divergence(ref_distribution, quant_distribution)
§ Not Much Time or Data is Required (Minutes on Commodity Hardware)
AFTER ACTIVATION QUANTIZATION
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ quantize_nodes (activations)
§ Results
§ Larger graph, needs calibration!
Requires Additional
freeze_requantization_ranges
LET’S OPTIMIZE FOR INFERENCE
§ Navigate to the following notebook:
08_Optimize_Model_Activations
§ https://github.com/PipelineAI/notebooks
FREEZING MODEL FOR DEPLOYMENT
§ Optimizations
§ strip_unused_nodes
§ remove_nodes
§ fold_constants
§ fold_batch_norms
§ quantize_weights
§ quantize_nodes
§ freeze_graph
§ Results
§ Variables -> Constants
Finally!
We’re Ready to Deploy!!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
MODEL SERVING TERMINOLOGY
§ Inference
§ Only Forward Propagation through Network
§ Predict, Classify, Regress, …
§ Bundle
§ GraphDef, Variables, Metadata, …
§ Assets
§ ie. Map of ClassificationID -> String
§ {9283: “penguin”, 9284: “bridge”}
§ Version
§ Every Model Has a Version Number (Integer)
§ Version Policy
§ ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
TENSORFLOW SERVING FEATURES
§ Supports Auto-Scaling
§ Custom Loaders beyond File-based
§ Tune for Low-latency or High-throughput
§ Serve Diff Models/Versions in Same Process
§ Customize Model Types beyond HashMap and TensorFlow
§ Customize Version Policies for A/B and Bandit Tests
§ Support Request Draining for Graceful Model Updates
§ Enable Request Batching for Diff Use Cases and HW
§ Supports Optimized Transport with GRPC and Protocol Buffers
PREDICTION SERVICE
§ Predict (Original, Generic)
§ Input: List of Tensor
§ Output: List of Tensor
§ Classify
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (class_label: String, score: float)
§ Regress
§ Input: List of tf.Example (key, value) pairs
§ Output: List of (label: String, score: float)
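A sketch of a Predict call over gRPC using the TF Serving 1.x-era beta stubs; host/port, model name, and x_batch are assumptions:
import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

channel = implementations.insecure_channel('localhost', 8500)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'mnist'
request.inputs['inputs'].CopyFrom(
    tf.contrib.util.make_tensor_proto(x_batch, dtype=tf.float32))

result = stub.Predict(request, 10.0)   # 10-second timeout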
PREDICTION INPUTS + OUTPUTS
§ SignatureDef
§ Defines inputs and outputs
§ Maps external (logical) to internal (physical) tensor names
§ Allows internal (physical) tensor names to change
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils

graph = tf.get_default_graph()
x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')

inputs_map = {'inputs': x_observed}
outputs_map = {'outputs': y_pred}

predict_signature = signature_def_utils.predict_signature_def(inputs=inputs_map,
                                                              outputs=outputs_map)
MULTI-HEADED INFERENCE
§ Inputs Pass Through Model One Time
§ Model Returns Multiple Predictions:
1. Human-readable prediction (ie. “penguin”, “church”,…)
2. Final layer of scores (float vector)
§ Final Layer of floats Pass to the Next Model in Ensemble
§ Optimizes Bandwidth, CPU/GPU, Latency, Memory
§ Enables Complex Model Composing and Ensembling
BUILD YOUR OWN MODEL SERVER
§ Adapt GRPC (Google) <-> HTTP (REST of the World)
§ Perform Batch Inference vs. Request/Response
§ Handle Requests Asynchronously
§ Support Mobile, Embedded Inference
§ Customize Request Batching
§ Add Circuit Breakers, Fallbacks
§ Control Latency Requirements
§ Reduce Number of Moving Parts
#include "tensorflow_serving/model_servers/server_core.h"

class MyTensorFlowModelServer {
  ServerCore::Options options;
  // set options (model name, path, etc)
  std::unique_ptr<ServerCore> core;
  TF_CHECK_OK(
      ServerCore::Create(std::move(options), &core)
  );
};
Compile and Link with
libtensorflow.so
RUNTIME OPTION: NVIDIA TENSOR-RT
§ Post-Training Model Optimizations
§ Specific to Nvidia GPU
§ Similar to TF Graph Transform Tool
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
AGENDA
Part 2: Optimize TensorFlow Serving
§ AOT XLA Compiler and Graph Transform Tool
§ Key Components of TensorFlow Serving
§ Deploy Optimized TensorFlow Model
§ Optimize TensorFlow Serving Runtime
REQUEST BATCH TUNING
§ max_batch_size
§ Enables throughput/latency tradeoff
§ Bounded by RAM
§ batch_timeout_micros
§ Defines batch time window, latency upper-bound
§ Bounded by RAM
§ num_batch_threads
§ Defines parallelism
§ Bounded by CPU cores
§ max_enqueued_batches
§ Defines queue upper bound, throttling
§ Bounded by RAM
Reaching either threshold will trigger a batch.
(Diagram: separate, non-batched requests vs. combined, batched requests)
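A sketch of a batching parameters file (text proto), assuming the model server is started with --enable_batching and --batching_parameters_file; values are illustrative:
max_batch_size { value: 64 }
batch_timeout_micros { value: 5000 }
num_batch_threads { value: 4 }
max_enqueued_batches { value: 100 }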
ADVANCED BATCHING & SERVING TIPS
§ Batch Just the GPU/TPU Portions of the Computation Graph
§ Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops
§ Distribute Large Models Into Shards Across TensorFlow Model Servers
§ Batch RNNs Used for Sequential and Time-Series Data
§ Find Best Batching Strategy For Your Data Through Experimentation
§ BasicBatchScheduler: Homogeneous requests (ie Regress or Classify)
§ SharedBatchScheduler: Mixed requests, multi-step, ensemble predict
§ StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads
§ Serve Only One (1) Model Inside One (1) TensorFlow Serving Process
§ Much Easier to Debug, Tune, Scale, and Manage Models in Production.
PIPELINE.AI FUNCTIONS (SERVERLESS)
§ Built on OpenFaaS
§ Supports Kubernetes
§ Supports Docker Swarm
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
AGENDA
Part 3: Advanced Model Serving + Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
KUBERNETES PRIORITY SCHEDULING
Workloads can…
§ access the entire cluster up to the autoscaler max size
§ trigger autoscaling until a higher-priority workload arrives
§ "fill the cracks" of resource usage of higher-priority work
(i.e., wait to run until resources are freed)
KUBERNETES INGRESS
§ Single Service
§ Can also use Service (LoadBalancer or NodePort)
§ Fan Out & Name-Based Virtual Hosting
§ Route Traffic Using Path or Host Header
§ Reduces # of load balancers needed
§ 404 Implemented as default backend
§ Federation / Hybrid-Cloud
§ Creates Ingress objects in every cluster
§ Monitors health and capacity of pods within each cluster
§ Routes clients to appropriate backend anywhere in federation
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-fanout
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
Fan Out (Path)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-virtualhost
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
Virtual Hosting
KUBERNETES INGRESS CONTROLLER
§ Ingress Controller Types
§ Google Cloud: kubernetes.io/ingress.class: gce
§ Nginx: kubernetes.io/ingress.class: nginx
§ Istio: kubernetes.io/ingress.class: istio
§ Must Start Ingress Controller Manually
§ Just deploying Ingress is not enough
§ Not started by kube-controller-manager
§ Start Istio Ingress Controller
kubectl apply -f \
  $ISTIO_INSTALL_PATH/install/kubernetes/istio.yaml
ISTIO EGRESS
§ Whitelist Domains To Access From Within the Service Mesh
§ Apply RouteRules
§ Apply DestinationPolicies
§ Supports TLS, HTTP, GRPC
kind: EgressRule
metadata:
  name: pipeline-api-egress
spec:
  destination:
    service: api.pipeline.ai
  ports:
  - port: 80
    protocol: http
  - port: 443
    protocol: https
AGENDA
Part 3: Advanced Model Serving + Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
ISTIO ARCHITECTURE: INGRESS
ISTIO ARCHITECTURE: ENVOY
§ Lyft Project
§ High-perf Proxy (C++)
§ Lots of Metrics
§ Zone-Aware
§ Service Discovery
§ Load Balancing
§ Fault Injection, Circuits
§ %-based Traffic Split, Shadow
§ Sidecar Pattern
§ Rate Limiting, Retries, Outlier Detection, Timeout with Budget, …
ISTIO ARCHITECTURE: MIXER
§ Enforce Access Control
§ Evaluate Request-Attrs
§ Collect Metrics
§ Platform-Independent
§ Extensible Plugin Model
ISTIO ARCHITECTURE: PILOT
§ Envoy service discovery
§ Intelligent routing
§ A/B Tests
§ Canary deployments
§ RouteRule->Envoy conf
§ Propagates to sidecars
§ Supports Kube, Consul, ...
ISTIO ARCHITECTURE: SECURITY
§ Mutual TLS Auth
§ Credential Management
§ Uses Service-Identity
§ Canary Deployments
§ Fine-grained ACLs
§ Attribute & Role-based
§ Auditing & Monitoring
AGENDA
Part 3: Advanced Model Serving + Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
ISTIO ROUTE RULES
§ Kubernetes Custom Resource Definition (CRD)
kind: CustomResourceDefinition
metadata:
  name: routerules.config.istio.io
spec:
  group: config.istio.io
  names:
    kind: RouteRule
    listKind: RouteRuleList
    plural: routerules
    singular: routerule
  scope: Namespaced
  version: v1alpha2
ADVANCED ROUTING RULES
§ Content-based Routing
§ Uses headers, username, payload, …
§ Cross-Environment Routing
§ Shadow traffic prod=>staging
ISTIO DESTINATION POLICIES
§ Load Balancing
§ ROUND_ROBIN (default)
§ LEAST_CONN (between 2 randomly-selected hosts)
§ RANDOM
§ Circuit Breaker
§ Max connections
§ Max requests per conn
§ Consecutive errors
§ Penalty timer (15 mins)
§ Scan windows (5 mins)
circuitBreaker:
  simpleCb:
    maxConnections: 100
    httpMaxRequests: 1000
    httpMaxRequestsPerConnection: 10
    httpConsecutiveErrors: 7
    sleepWindow: 15m
    httpDetectionInterval: 5m
ISTIO AUTO-SCALING
§ Traffic Routing and Auto-Scaling Occur Independently
§ Istio Continues to Obey Traffic Splits After Auto-Scaling
§ Auto-Scaling May Occur In Response to New Traffic Route
A/B & BANDIT MODEL TESTING
§ Perform Live Experiments in Production
§ Compare Existing Model A with Model B, Model C
§ Safe Split-Canary Deployment
§ Pro Tip: Keep Ingress Simple – Use Route Rules Instead!
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-20-5-75
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 20       # 20% still routes to model A
  - labels:
      version: B
    weight: 5        # 5% routes to new model B
  - labels:
      version: C
    weight: 75       # 75% routes to new model C

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-1-2-97
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 1        # 1% routes to model A
  - labels:
      version: B
    weight: 2        # 2% routes to new model B
  - labels:
      version: C
    weight: 97       # 97% routes to new model C

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-97-2-1
spec:
  destination:
    name: predict-mnist
  precedence: 2      # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 97       # 97% still routes to model A
  - labels:
      version: B
    weight: 2        # 2% routes to new model B
  - labels:
      version: C
    weight: 1        # 1% routes to new model C
AGENDA
Part 3: Advanced Model Serving + Routing
§ Kubernetes Ingress, Egress, Networking
§ Istio and Envoy Architecture
§ Intelligent Traffic Routing and Scaling
§ Metrics, Chaos Monkey, Production Readiness
ISTIO METRICS AND MONITORING
§ Verify Traffic Splits
§ Fine-Grained Request Tracing
ISTIO & CHAOS + LATENCY MONKEY
§ Fault Injection
§ Delay
§ Abort
kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    abort:
      httpStatus: 420
      percent: 100

kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    delay:
      fixedDelay: 7.000s
      percent: 100
SPECIAL THANKS TO CHRISTIAN POSTA
§ http://blog.christianposta.com/istio-workshop
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
PIPELINE.AI SUPPORTS ALL MAJOR MODELS
THANK YOU!!
§ Please Star this GitHub Repo!
§ All slides, code, notebooks, and Docker images here:
https://github.com/PipelineAI/pipeline
Contact Me
chris@pipeline.ai
@cfregly

TUNE MODEL + RUNTIME TOGETHER
§ Model Training Optimizations
§ Model Hyper-Parameters (ie. Learning Rate)
§ Reduced Precision (ie. FP16 Half Precision)
§ Model Serving (Post-Train) Optimizations
§ Quantize Model Weights + Activations from 32-bit to 8-bit
§ Fuse Neural Network Layers Together
§ Model Runtime Optimizations
§ Runtime Config: Request Batch Size, etc.
§ Different Runtime: TensorFlow Serving CPU/GPU, Nvidia TensorRT
DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument Code to Generate “Timelines”
§ Analyze with Google Web Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
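The snippet above assumes run_metadata has already been populated. A minimal, self-contained sketch of capturing it with the TF 1.x session API (the matmul workload is just an illustrative stand-in for a real training step):

import tensorflow as tf
from tensorflow.python.client import timeline

# Ask TensorFlow to record a full trace of the next step.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

x = tf.random_uniform([1000, 1000])
y = tf.matmul(x, x)  # stand-in for your real training op

with tf.Session() as sess:
    sess.run(y, options=run_options, run_metadata=run_metadata)
    # Convert the captured step stats into Chrome-trace JSON.
    trace = timeline.Timeline(step_stats=run_metadata.step_stats)
    with open('timeline.json', 'w') as f:
        f.write(trace.generate_chrome_trace_format(show_memory=True))

Load timeline.json in chrome://tracing (or WTF) to see per-device op timing.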
SERVING (POST-TRAIN) OPTIMIZATIONS
§ Prepare Model for Serving
§ Simplify Network, Reduce Size
§ Reduce Precision -> Fast Math
§ Some Tools
§ Graph Transform Tool (GTT)
§ tfcompile
pipeline optimize --optimization-list=[‘quantize_weights’,‘tfcompile’] 
--model-name=mnist 
--model-tag=A 
--model-path=./tensorflow/mnist/model 
--model-inputs=[‘x’] 
--model-outputs=[‘add’] 
--output-path=./tensorflow/mnist/optimized_model
Linear Regression Model Size, After Training vs. After Optimizing: 70MB -> 70K (!)
NVIDIA TENSOR-RT RUNTIME
§ Post-Training Model Optimizations
§ Specific to Nvidia GPUs
§ GPU-Optimized Prediction Runtime
§ Alternative to TensorFlow Serving
§ PipelineAI Supports TensorRT!
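The deck does not show the conversion code, but as a rough sketch: the TensorFlow-TensorRT integration that shipped in tf.contrib around the time of this talk (TF 1.7) rewrites a frozen graph so TensorRT executes the supported subgraphs. The model path and the 'softmax' output name below are illustrative assumptions:

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # TF 1.7 contrib integration

# Load a frozen GraphDef (path and output node name are hypothetical).
frozen_graph_def = tf.GraphDef()
with tf.gfile.GFile('./tensorflow/mnist/frozen_model.pb', 'rb') as f:
    frozen_graph_def.ParseFromString(f.read())

trt_graph_def = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=['softmax'],                  # output node(s) of the graph
    max_batch_size=32,                    # largest batch you will serve
    max_workspace_size_bytes=1 << 30,     # scratch space TensorRT may use
    precision_mode='FP16')                # or 'FP32' / 'INT8'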
TENSORFLOW LITE RUNTIME
§ Post-Training Model Optimizations
§ Currently Supports iOS and Android
§ On-Device Prediction Runtime
§ Low-Latency, Fast Startup
§ Selective Operator Loading
§ 70KB Min - 300KB Max Runtime Footprint
§ Supports Accelerators (GPU, TPU)
§ Falls Back to CPU without Accelerator
§ Java and C++ APIs
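For reference, a minimal conversion sketch using the TOCO converter as exposed in tf.contrib.lite in this TF 1.x timeframe (the toy add-constants graph is illustrative; a real model would be frozen first):

import tensorflow as tf

img = tf.placeholder(name='img', dtype=tf.float32, shape=(1, 64, 64, 3))
out = tf.identity(img + tf.constant([1., 2., 3.]), name='out')

with tf.Session() as sess:
    # Convert the GraphDef to a .tflite flatbuffer for on-device serving.
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
    open('converted_model.tflite', 'wb').write(tflite_model)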
3 DIFFERENT RUNTIMES, SAME MODEL
pipeline predict-server-build --model-name=mnist 
--model-tag=A 
--model-type=tensorflow 
--model-runtime=tfserving 
--model-chip=cpu 
--model-path=./tensorflow/mnist/
Build Local Model Server A
pipeline predict-server-build --model-name=mnist 
--model-tag=B 
--model-type=tensorflow 
--model-runtime=tfserving 
--model-chip=gpu 
--model-path=./tensorflow/mnist/
Build Local Model Server B
pipeline predict-server-build --model-name=mnist 
--model-tag=C 
--model-type=tensorflow 
--model-runtime=tensorrt 
--model-chip=gpu 
--model-path=./tensorflow/mnist/
Build Local Model Server C
Same Model, Diff Runtime
PUSH IMAGE TO DOCKER REGISTRY
§ Supports All Public + Private Docker Registries
§ DockerHub, Artifactory, Quay, AWS, Google, …
§ Or Self-Hosted, Private Docker Registry
pipeline predict-server-push --model-name=mnist 
--model-tag=A 
--image-registry-url=<your-registry> 
--image-registry-repo=<your-repo>
Push Images to Docker Registry
DEPLOY MODELS SAFELY TO PROD
§ Deploy from CLI or Jupyter Notebook
§ Tear-Down and Rollback Models Quickly
§ Shadow Canary: Deploy to 20% Live Traffic
§ Split Canary: Deploy to 97-2-1% Live Traffic
pipeline predict-kube-start --model-name=mnist --model-tag=A
Start Cluster A
pipeline predict-kube-start --model-name=mnist --model-tag=B
Start Cluster B
pipeline predict-kube-start --model-name=mnist --model-tag=C
Start Cluster C
pipeline predict-kube-route --model-name=mnist 
--model-split-tag-and-weight-dict='{"A":97, "B":2, "C":1}' 
--model-shadow-tag-list='[]'
Route Live Traffic
COMPARE MODELS OFFLINE & ONLINE
§ Offline, Batch Metrics
§ Validation + Training Accuracy
§ CPU + GPU Utilization
§ Online, Live Prediction Values
§ Compare Relative Precision
§ Newly-Seen, Streaming Data
§ Online, Real-Time Metrics
§ Response Time, Throughput
§ Cost ($) Per Prediction
ENSEMBLE PREDICTION AUDIT TRAIL
§ Necessary for Model Explain-ability
§ Fine-Grained Request Tracing
§ Used for Model Ensembles
REAL-TIME PREDICTION STREAMS
§ Visually Compare Real-Time Predictions Across Models A, B, and C
§ Features and Inputs
§ Predictions and Confidences
PREDICTION PROFILING AND TUNING
§ Pinpoint Performance Bottlenecks
§ Fine-Grained Prediction Metrics
§ 3 Steps in Real-Time Prediction
1. transform_request()
2. predict()
3. transform_response()
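A minimal sketch of instrumenting those three phases; the stub transform functions and the toy predict function are hypothetical placeholders, not PipelineAI internals:

import time

def transform_request(raw):        # hypothetical: decode/validate the request
    return raw

def transform_response(outputs):   # hypothetical: encode the response
    return outputs

def timed(fn, *args):
    # Run one phase and return (result, elapsed milliseconds).
    start = time.time()
    result = fn(*args)
    return result, (time.time() - start) * 1000.0

def handle(raw_request, predict_fn):
    inputs, t_req = timed(transform_request, raw_request)
    outputs, t_pred = timed(predict_fn, inputs)
    response, t_resp = timed(transform_response, outputs)
    print('transform_request=%.2fms predict=%.2fms transform_response=%.2fms'
          % (t_req, t_pred, t_resp))
    return response

print(handle([1.0, 2.0, 3.0], lambda xs: [x * 2 for x in xs]))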
SHIFT TRAFFIC TO MAX(REVENUE)
§ Shift Traffic to Winning Model with Multi-Armed Bandits
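The deck does not include bandit code; as a sketch of the idea, a simple epsilon-greedy bandit over three model variants, where "reward" might be revenue attributed to a prediction (all names illustrative):

import random

class EpsilonGreedyRouter(object):
    """Mostly exploit the best-performing variant; explore occasionally."""
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}
        self.total_reward = {v: 0.0 for v in variants}

    def choose(self):
        if random.random() < self.epsilon:        # explore
            return random.choice(list(self.counts))
        return max(self.counts,                    # exploit best mean reward
                   key=lambda v: self.total_reward[v] / max(self.counts[v], 1))

    def record(self, variant, reward):
        self.counts[variant] += 1
        self.total_reward[variant] += reward

router = EpsilonGreedyRouter(['A', 'B', 'C'])
variant = router.choose()            # pick a model for this request
router.record(variant, reward=1.0)   # e.g. revenue from this prediction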
LIVE, ADAPTIVE TRAFFIC ROUTING
§ A/B Tests
§ Inflexible and Boring
§ Multi-Armed Bandits
§ Adaptive and Exciting!
pipeline predict-kube-route --model-name=mnist 
--model-split-tag-and-weight-dict='{"A":1, "B":2, "C":97}' 
--model-shadow-tag-list='[]'
Route Traffic Dynamically
SHIFT TRAFFIC TO MIN(CLOUD CO$T)
§ Based on Cost ($) Per Prediction
§ Cost Changes Throughout Day
§ Lose AWS Spot Instances
§ Google Cloud Becomes Cheaper
§ Shift Across Clouds & On-Prem
PSEUDO-CONTINUOUS TRAINING
§ Identify and Fix Borderline (Unconfident) Predictions
§ Fix Predictions Along Class Boundaries
§ Facilitate “Human in the Loop”
§ Retrain with Newly-Labeled Data
§ Game-ify the Labeling Process
§ Path to Crowd-Sourced Labeling
CONTINUOUS MODEL TRAINING
§ The Holy Grail of Machine Learning!
§ PipelineAI Supports Continuous Model Training!
§ Kafka, Kinesis
§ Spark Streaming, Flink
§ Storm, Heron
AGENDA
Part 0: Introductions and Setup
Part 1: Optimize TensorFlow Training
Part 2: Optimize TensorFlow Serving
Part 3: Advanced Model Serving + Routing
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
SETTING UP TENSORFLOW WITH GPUS
§ Very Painful!
§ Especially inside Docker
§ Use nvidia-docker
§ Especially on Kubernetes!
§ Use the Latest Kubernetes (with Init Script Support)
§ http://pipeline.ai for GitHub + DockerHub Links
TENSORFLOW + CUDA + NVIDIA GPU
GPU HALF-PRECISION SUPPORT
§ FP32 is “Full Precision”, FP16 is “Half Precision”
§ Two (2) FP16’s in Each FP32 GPU Core for 2x Throughput!
§ Lower Precision is OK for Approx. Deep Learning Use Cases
§ The Network Matters Most, Not Individual Neuron Accuracy
§ Supported by Pascal P100 (2016) and Volta V100 (2017)
Set the following on GPUs with CC 5.3+:
TF_FP16_MATMUL_USE_FP32_COMPUTE=0
TF_FP16_CONV_USE_FP32_COMPUTE=0
TF_XLA_FLAGS=--xla_enable_fast_math=1
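A tiny sketch of the mixed-precision pattern the slide describes, assuming a TF 1.x graph: do the bulk matmul math in float16, then cast back to float32 for anything accuracy-sensitive:

import tensorflow as tf

a = tf.random_uniform([1024, 1024], dtype=tf.float32)
b = tf.random_uniform([1024, 1024], dtype=tf.float32)

# Heavy compute in half precision (2x FP16 throughput on P100/V100)...
product_fp16 = tf.matmul(tf.cast(a, tf.float16), tf.cast(b, tf.float16))
# ...then back to full precision for accumulation and comparison.
product = tf.cast(product_fp16, tf.float32)

with tf.Session() as sess:
    print(sess.run(tf.reduce_mean(product)))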
VOLTA V100 (2017) VS. PASCAL P100 (2016)
§ 84 Streaming Multiprocessors (SM’s)
§ 5,376 GPU Cores
§ 672 Tensor Cores (ie. Google TPU)
§ Mixed FP16/FP32 Precision
§ Matrix Dims Should be Multiples of 8
§ More Shared Memory
§ New L0 Instruction Cache
§ Faster L1 Data Cache
§ V100 vs. P100 Performance: 12x Training, 6x Inference
FP32 VS. FP16 ON AWS GPU INSTANCES
FP16 Half Precision
§ 87.2 T ops/second for p3 Volta V100
§ 4.1 T ops/second for g3 Tesla M60
§ 1.6 T ops/second for p2 Tesla K80
FP32 Full Precision
§ 15.4 T ops/second for p3 Volta V100
§ 4.0 T ops/second for g3 Tesla M60
§ 3.3 T ops/second for p2 Tesla K80
WHAT ABOUT GOOGLE CLOUD?
§ Currently Supports the Following:
§ Tesla K80
§ Pascal P100
§ Volta V100 Coming Soon?
§ TPUs (Only in Google Cloud)
§ Attach GPUs to CPU Instances
§ Similar to AWS Elastic GPU, except less confusing
V100 AND CUDA 9
§ Independent Thread Scheduling - Finally!!
§ Similar to CPU fine-grained thread synchronization semantics
§ Allows GPU to yield execution of any thread
§ Still Optimized for SIMT (Same Instruction Multi-Thread)
§ SIMT units automatically scheduled together
§ Explicit Synchronization: New CUDA Thread Cooperative Groups (P100 vs. V100)
https://devblogs.nvidia.com/cooperative-groups/
GPU CUDA PROGRAMMING
§ Barbaric, But Fun
§ Must Know Hardware Very Well
§ Hardware Changes are Painful
§ Use the Profilers & Debuggers
CUDA STREAMS
§ Asynchronous I/O Transfer
§ Overlap Compute and I/O
§ Keep GPUs Saturated!
§ Used Heavily by TensorFlow
CUDA SHARED AND UNIFIED MEMORY
PYCUDA AND NUMBA
§ https://devblogs.nvidia.com/numba-python-cuda-acceleration/
§ https://devblogs.nvidia.com/seven-things-numba/
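Since the slide only links out, a minimal Numba CUDA kernel gives the flavor (standard numba.cuda API; the array sizes are arbitrary):

import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)       # absolute index of this thread
    if i < out.size:       # guard: the grid may overshoot the array
        out[i] = a[i] + b[i]

n = 1000000
a = np.ones(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# NumPy arrays are transparently copied to and from the GPU.
vector_add[blocks, threads_per_block](a, b, out)
print(out[:3])  # [2. 2. 2.]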
LET’S SEE WHAT THIS THING CAN DO!
§ Navigate to the following notebooks:
§ 01a_Explore_GPU
§ 01b_Explore_Numba
§ https://github.com/PipelineAI/notebooks
AGENDA
Part 1: Optimize TensorFlow Training
§ GPUs and TensorFlow
§ Feed, Train, and Debug TensorFlow Models
§ TensorFlow Distributed Cluster Model Training
§ Optimize Training with JIT XLA Compiler
TRAINING TERMINOLOGY
§ Tensors: N-Dimensional Arrays (ie. Scalar, Vector, Matrix)
§ Operations: MatMul, Add, SummaryLog, …
§ Graph: Graph of Operations (DAG)
§ Session: Contains Graph(s)
§ Feeds: Feed Inputs into Placeholder
§ Fetches: Fetch Output from Operation
§ Variables: What We Learn Through Training (aka “Weights”, “Parameters”)
§ Devices: Hardware Device (GPU, CPU, TPU, …)
with tf.device(“/cpu:0,/gpu:15”):
(Diagram: the user feeds inputs and fetches outputs; TensorFlow performs operations, flows tensors, and trains variables.)
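The terminology maps onto a few lines of TF 1.x code; a toy linear model (names illustrative):

import tensorflow as tf

# Graph: y = W*x + b; W and b are the variables learned during training.
x = tf.placeholder(tf.float32, shape=[None], name='x')  # fed by the user
W = tf.Variable(0.5, name='W')                          # "weights"/"parameters"
b = tf.Variable(0.0, name='b')
y = tf.add(tf.multiply(W, x), b, name='y')              # operations in the DAG

with tf.Session() as sess:                              # session holds the graph
    sess.run(tf.global_variables_initializer())
    # Feed inputs into the placeholder; fetch the output of operation `y`.
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))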
TENSORFLOW SESSION
§ Session Contains the graph (GraphDef) and Current Variable Values (ie. “W”: 0.328, “b”: -1.407)
§ Variables are Randomly Initialized, then Periodically Checkpointed
§ GraphDef is Created During Training, then Frozen for Inference
TENSORFLOW GRAPH EXECUTION
§ Lazy Execution by Default
§ Similar to Spark
§ Eager Execution Now Supported (TensorFlow 1.4+)
§ Similar to PyTorch
§ “Linearize” Execution to Minimize RAM Usage
§ Useful on Single GPU with Limited RAM
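A quick taste of eager mode via the contrib preview available in this TF 1.x timeframe (it must be enabled at program startup, before any graph is built):

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()  # contrib preview; later promoted into core TF

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)   # runs immediately; no Session needed
print(y.numpy())      # results available right away, PyTorch-style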
OPERATION PARALLELISM
§ Inter-Op (Between-Op) Parallelism
§ By default, TensorFlow runs multiple ops in parallel
§ Useful for low core and small memory/cache envs
§ Set to one (1)
§ Intra-Op (Within-Op) Parallelism
§ Different threads can use same set of data in RAM
§ Useful for compute-bound workloads (CNNs)
§ Set to # of cores (>=2)
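Both knobs are set through tf.ConfigProto; the specific values below follow the guidance above but are illustrative, not universal recommendations:

import tensorflow as tf

config = tf.ConfigProto(
    inter_op_parallelism_threads=1,   # run one op at a time across the graph
    intra_op_parallelism_threads=8)   # ~physical cores, for compute-bound ops

with tf.Session(config=config) as sess:
    a = tf.random_uniform([2048, 2048])
    sess.run(tf.matmul(a, a))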
TENSORFLOW MODEL
§ MetaGraph
§ Combines GraphDef and Metadata
§ GraphDef
§ Architecture of your model (nodes, edges)
§ Metadata
§ Asset: Accompanying assets to your model
§ SignatureDef: Maps external to internal tensors
§ Variables
§ Stored separately during training (checkpoint)
§ Allows training to continue from any checkpoint
§ Variables are “frozen” into Constants when preparing for inference
(Diagram: a MetaGraph wraps the GraphDef (nodes x, W, mul, add, b) plus Metadata: Assets, SignatureDef, Tags, Version; Variables, ie. “W”: 0.328, “b”: -1.407, are stored alongside.)
EXTEND EXISTING DATA PIPELINES
§ Data Processing
§ HDFS/Hadoop
§ Spark
§ Containers
§ Docker
§ Schedulers
§ Kubernetes
§ Mesos
<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>tensorflow-hadoop</artifactId>
</dependency>
https://github.com/tensorflow/ecosystem
KUBERNETES AND SPARK 2.3
§ Kubernetes-Native
§ Schedule Spark Workers
# Submit Spark Job to Kubernetes Cluster
bin/spark-submit 
--master k8s://https://xx.yy.zz.ww 
--deploy-mode cluster 
--name spark-pi 
--class org.apache.spark.examples.SparkPi 
--conf spark.executor.instances=5 
--conf spark.kubernetes.container.image=<spark-image> 
--conf spark.kubernetes.driver.pod.name=spark-pi-driver 
local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
# View Kubernetes Resources
kubectl get pods -l 'spark-role in (driver, executor)' -w
# View Driver Logs in Real-Time
kubectl logs -f spark-pi-driver
http://blog.kubernetes.io/2018/03/apache-spark-23-with-native-kubernetes.html
http://community.pipeline.ai
TENSORFLOW + SPARK OPTIONS
§ TensorFlow on Spark (Yahoo!)
§ TensorFrames <-Dead Project->
§ Separate Clusters for Spark and TensorFlow
§ Spark: Boring Batch ETL
§ TensorFlow: Exciting AI Model Training and Serving
§ Hand-Off Point is S3, HDFS, Google Cloud Storage
TENSORFLOW + KAFKA
§ TensorFlow Dataset API Now Supports Kafka!!
from tensorflow.contrib.kafka.python.ops import kafka_dataset_ops

repeat_dataset = kafka_dataset_ops.KafkaDataset(
    topics, group="test", eof=True).repeat(num_epochs)
batch_dataset = repeat_dataset.batch(batch_size)
…
TO UNDERSTAND TENSORFLOW I/O…
§ TFRecord File Format
§ TensorFlow Python and C++ Dataset API
§ Python Module and Packaging
§ Comfort with Python’s Lack of Strong Typing
§ C++ Concurrency Constructs
§ Protocol Buffers
§ Old Queue API
§ GPU/CUDA Memory Tricks
…And a Lot of Coffee!
FEED TENSORFLOW TRAINING PIPELINE
§ Training is Limited by the Ingestion Pipeline
§ Number One Problem We See Today
§ Scaling GPUs Up / Out Doesn’t Help
§ GPUs are Heavily Under-Utilized
§ Use the tf.data API for Best Perf
§ Efficient Parallel, Async I/O (C++)
(Figure: GPU utilization, Tesla K80 vs. Volta V100)
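A representative tf.data input pipeline illustrating the pattern (the file names and feature schema are hypothetical; the TFRecord slides below cover the schema details):

import tensorflow as tf

filenames = ['train-00.tfrecords', 'train-01.tfrecords']  # illustrative paths

def parse_fn(serialized):
    features = tf.parse_single_example(serialized, {
        'label': tf.FixedLenFeature([], tf.int64),
        'image_raw': tf.FixedLenFeature([], tf.string)})
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    return image, features['label']

dataset = (tf.data.TFRecordDataset(filenames)
           .map(parse_fn, num_parallel_calls=8)  # parse on CPU threads
           .shuffle(buffer_size=10000)
           .batch(64)
           .prefetch(1))                          # overlap ingestion with compute

images, labels = dataset.make_one_shot_iterator().get_next()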
DON’T USE FEED_DICT!!
§ feed_dict Requires Python <-> C++ Serialization
§ Not Optimized for Production Ingestion Pipelines
§ Retrieves Next Batch After Current Batch is Done
§ Single-Threaded, Synchronous
§ CPUs/GPUs Not Fully Utilized!
§ Use Queue or Dataset APIs
§ Queues are old & complex
sess.run(train_step, feed_dict={…})
DETECT UNDERUTILIZED CPUS, GPUS
§ Instrument Code to Generate “Timelines”
§ Analyze with Google Web Tracing Framework (WTF)
§ Monitor CPU with top, GPU with nvidia-smi
http://google.github.io/tracing-framework/
from tensorflow.python.client import timeline
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
QUEUES
§ More than Traditional Queue
§ Uses CUDA Streams
§ Perform I/O, Pre-processing, Cropping, Shuffling, …
§ Pull from HDFS, S3, Google Storage, Kafka, …
§ Combine Many Small Files into Large TFRecord Files
§ Use CPUs to Free GPUs for Compute
§ Helps Saturate CPUs and GPUs
QUEUE CAPACITY PLANNING
§ batch_size
§ # examples / batch (ie. 64 jpg)
§ Limited by GPU RAM
§ num_processing_threads
§ CPU threads pull and pre-process batches of data
§ Limited by CPU Cores
§ queue_capacity
§ Limited by CPU RAM (ie. 5 * batch_size)
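A sketch of how those three knobs map onto the (older) Queue API; the file name is illustrative, and a real run would also start queue runners:

import tensorflow as tf

batch_size = 64
num_processing_threads = 4
queue_capacity = 5 * batch_size

filename_queue = tf.train.string_input_producer(['train.tfrecords'])
reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)

batch = tf.train.shuffle_batch(
    [serialized],
    batch_size=batch_size,
    num_threads=num_processing_threads,
    capacity=queue_capacity,
    min_after_dequeue=batch_size)  # keep enough buffered for a decent shuffle

# In a session: tf.train.start_queue_runners(sess) before sess.run(batch).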
TF.DTYPE
§ tf.float32, tf.int32, tf.string, etc.
§ Default is usually tf.float32
§ Most TF operations support numpy natively
# Tuple of (tf.float32 scalar, tf.int32 array of 100 elements)
(tf.random_uniform([1]),
 tf.random_uniform([1, 100], maxval=10, dtype=tf.int32))  # int dtypes need maxval
TF.TRAIN.FEATURE
§ Three (3) Feature Types
§ Bytes
§ Float
§ Int64
§ Actually, They Are Lists of 0..* Values of the 3 Types Above
§ BytesList
§ FloatList
§ Int64List
TF.TRAIN.FEATURES
§ Map of {String -> Feature}
§ Better Name is “FeatureMap”
§ Organize Feature into Categories
§ Access a Feature Using Features['feature_name']
TF.TRAIN.FEATURELIST
§ List of 0..* Feature
§ Access a Feature Using FeatureList[0]
TF.TRAIN.FEATURELISTS
§ Map of {String -> FeatureList}
§ Better Name is “FeatureListMap”
§ Organize FeatureList into Categories
§ Access a FeatureList Using FeatureLists['feature_list_name']
TF.TRAIN.EXAMPLE
§ Key-Value Dictionary: String -> tf.train.Feature
§ Not a Self-Describing Format (?!)
§ Must Establish Schema Upfront by Writers and Readers
§ Must Obey the Following Conventions
§ Feature K must be of Type T in all Examples
§ Feature K can be omitted; a default can be configured
§ If Feature K exists as empty, no default is applied
TF.TFRECORD
§ Contains many tf.train.Example’s
=> tf.train.Example contains many tf.train.Feature’s
=> tf.train.Feature contains a BytesList, FloatList, or Int64List
§ Record-Oriented Format of Binary Strings (ProtoBuffer)
§ Must Convert tf.train.Example to a Serialized String
§ Use tf.train.Example.SerializeToString()
§ Used for Large-Scale ML/AI Training
§ Not Meant for Random or Non-Sequential Access
§ Compression: GZIP, ZLIB
Record layout on disk:
uint64 length
uint32 masked_crc32_of_length
byte data[length]
uint32 masked_crc32_of_data
EMBRACE BINARY FORMATS!
§ Unreadable and Scary, But Much More Efficient
§ Better Use of Memory and Disk Cache
§ Faster Copying and Moving
§ Smaller on the Wire
CONVERTING MNIST DATA TO TFRECORD
def convert_to_tfrecord(data, name):
    images = data.images
    labels = data.labels
    num_examples = data.num_examples
    rows = images.shape[1]
    cols = images.shape[2]
    depth = images.shape[3]
    filename = os.path.join(FLAGS.directory, name + '.tfrecords')
    with tf.python_io.TFRecordWriter(filename) as writer:
        for index in range(num_examples):
            image_raw = images[index].tostring()
            example = tf.train.Example(
                features=tf.train.Features(
                    feature={
                        'height': tf.train.Feature(int64_list=tf.train.Int64List(value=[rows])),
                        'width': tf.train.Feature(int64_list=tf.train.Int64List(value=[cols])),
                        'depth': tf.train.Feature(int64_list=tf.train.Int64List(value=[depth])),
                        # Write the actual label, not the loop index.
                        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[int(labels[index])])),
                        'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_raw]))
                    }))
            writer.write(example.SerializeToString())
READING TF.TFRECORD’S
§ tf.data.TFRecordDataset ← Preferred (Dataset API)
§ tf.TFRecordReader() ← Not Preferred (Queue API)
§ tf.python_io.tf_record_iterator ← Preferred
§ Used as a Python Generator
for serialized_example in tf.python_io.tf_record_iterator(filename):
    example = tf.train.Example()
    example.ParseFromString(serialized_example)
    image_raw = example.features.feature['image_raw'].bytes_list.value
    height = example.features.feature['height'].int64_list.value[0]
    …
DE-SERIALIZING TF.TFRECORD’S
Note: tf.parse_single_example takes a map of parsing specs (tf.FixedLenFeature / tf.VarLenFeature), not the tf.train.Feature writers used above:
feature_map = {
    'height': tf.FixedLenFeature([], tf.int64),
    'width': tf.FixedLenFeature([], tf.int64),
    'depth': tf.FixedLenFeature([], tf.int64),
    'label': tf.FixedLenFeature([], tf.int64),
    'image_raw': tf.FixedLenFeature([], tf.string)
}
deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)
# Cast height from int64 to int32
height = tf.cast(deserialized_features['height'], tf.int32)
…
# Convert raw image from string to float32
image_raw = tf.decode_raw(deserialized_features['image_raw'], tf.float32)
MORE TF.TRAIN.FEATURE CONSTRUCTS
§ tf.VarLenFeature
§ tf.FixedLenFeature, tf.FixedLenSequenceFeature
§ tf.SparseFeature
feature_map = {
    'height': tf.FixedLenFeature((), tf.int64, …),
    …
    'image_raw': tf.VarLenFeature(tf.string)
}
deserialized_features = tf.parse_single_example(serialized_example, features=feature_map)
# Cast height from int64 to int32
height = tf.cast(deserialized_features['height'], tf.int32)
…
# Convert raw image from string to float32
# (.values: a VarLenFeature is parsed into a SparseTensor)
image_raw = tf.decode_raw(deserialized_features['image_raw'].values, tf.float32)
TF.DATA.DATASET
tf.Tensor => tf.data.Dataset:
Dataset.from_tensors((features, labels))
Dataset.from_tensor_slices((features, labels))
TextLineDataset(filenames)
Functional Transformations:
dataset.map(lambda x: tf.decode_jpeg(x))
dataset.repeat(NUM_EPOCHS)
dataset.batch(BATCH_SIZE)
Python Generator => tf.data.Dataset:
def generator():
    while True:
        yield ...
dataset.from_generator(generator, tf.int32)
Dataset => One-Shot Iterator:
iter = dataset.make_one_shot_iterator()
next_element = iter.get_next()
while …:
    sess.run(next_element)
Dataset => Initializable Iterator:
iter = dataset.make_initializable_iterator()
sess.run(iter.initializer, feed_dict=PARAMS)
next_element = iter.get_next()
while …:
    sess.run(next_element)
TIP: Use Dataset.prefetch() and the parallel version of Dataset.map()
  • 74. MORE TF.DATA.DATASET CONSTRUCTS § FixedLengthRecordDataset § Binary Files § TextLineDataset § CSV, JSON, XML, etc § TFRecordDataset § TFRecords § Iterator “The TF Dataset Dude” Tutorial: https://t.co/havjwJ46EY
  • 76. CUSTOM TF.PY_FUNC() TRANSFORMATION § Custom Python Function § Similar to Spark Python UDF (Eek!) § You Will Suffer a Big Performance Penalty § Try to Use TensorFlow-Native Operations § Remember, you can build your own in C++!
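For illustration, a hedged sketch of the pattern (the NumPy function and its name are hypothetical): tf.py_func wraps a plain Python function as a graph op, so each element round-trips through the Python interpreter under the GIL, which is the performance penalty above.

import numpy as np
import tensorflow as tf

def my_numpy_transform(x):  # hypothetical Python/NumPy function
    return (x * 2).astype(np.int64)

dataset = tf.data.Dataset.range(10)
# Each element is handed to the Python interpreter and back -- slow, but flexible.
dataset = dataset.map(
    lambda x: tf.py_func(my_numpy_transform, [x], tf.int64))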
  • 77. TF.DATA.ITERATOR TYPES § One Shot: Iterates Once Through the Dataset § Currently, best Iterator to use with Estimator API § Initializable: Runs iterator.initializer Once § Re-Initializable: Runs iterator.initializer Many Times § Ie. Random shuffling between iterations (epochs) of training § Feedable: Switch Between Different Datasets § Uses Feed and Placeholder to explicitly feed the iterator § Doesn't require initialization when switching
  • 78. TF.DATA.ITERATOR SIMPLE EXAMPLE

dataset = tf.data.Dataset.range(5)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

# Typically `result` will be the output of a model, or an optimizer's
# training operation.
result = tf.add(next_element, next_element)

sess.run(iterator.initializer)
while True:
  try:
    sess.run(result)  # => 0, 2, 4, 6, 8
  except tf.errors.OutOfRangeError:
    print('End of dataset…')
    break
  • 79. TF.DATA.ITERATOR TEXT EXAMPLE

filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.TextLineDataset(filenames)

§ Skip 1st Header Line and Comment Lines Starting with `#`

filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.flat_map(
    lambda filename: (
        tf.data.TextLineDataset(filename)
        .skip(1)
        .filter(lambda line: tf.not_equal(tf.substr(line, 0, 1), "#"))))
  • 80. TF.DATA.ITERATOR NUMPY EXAMPLE

# Load the training data into two NumPy arrays, for example using `np.load()`.
with np.load("/var/data/training_data.npy") as data:
  features = data["features"]
  labels = data["labels"]

# Assume that each row of `features` corresponds to the same row as `labels`.
assert features.shape[0] == labels.shape[0]

features_placeholder = tf.placeholder(features.dtype, features.shape)
labels_placeholder = tf.placeholder(labels.dtype, labels.shape)

dataset = tf.data.Dataset.from_tensor_slices((features_placeholder, labels_placeholder))
# …Your Dataset Transformations…

iterator = dataset.make_initializable_iterator()
sess.run(iterator.initializer, feed_dict={features_placeholder: features,
                                          labels_placeholder: labels})
  • 81. TF.DATA.ITERATOR TFRECORD EXAMPLE

filenames = tf.placeholder(tf.string, shape=[None])
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(...)  # Parse the record into tensors.
dataset = dataset.repeat()  # Repeat the input indefinitely.
dataset = dataset.batch(32) # Batches of size 32
iterator = dataset.make_initializable_iterator()

# You can feed the initializer with the appropriate filenames for the current
# phase of execution, e.g. training vs. validation.

# Initialize `iterator` with training data.
training_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]
sess.run(iterator.initializer, feed_dict={filenames: training_filenames})

# Initialize `iterator` with validation data.
validation_filenames = ["/var/data/validation1.tfrecord", ...]
sess.run(iterator.initializer, feed_dict={filenames: validation_filenames})
  • 82. FUTURE OF DATASET API § Replaces Queue API § More Functional Operators § Automatic GPU Data Staging § Under-utilized GPUs Assisting with Data Ingestion § Advanced, RL-based Device Placement Strategies
  • 83. TF.ESTIMATOR.ESTIMATOR (1/2) § Supports Keras! § Unified API for Local + Distributed § Provide Clear Path to Production § Enable Rapid Model Experiments § Provide Flexible Parameter Tuning § Enable Downstream Optimizing & Serving Infrastructure § Nudge Users to Best Practices Through Opinions § Provide Hooks/Callbacks to Override Opinions
  • 84. TF.ESTIMATOR.ESTIMATOR (2/2) § “Train-to-Serve” Design § Create Custom Estimator or Re-Use Canned Estimator § Hides Session, Graph, Layers, Iterative Loops (Train, Eval, Predict) § Hooks for All Phases of Model Training and Evaluation § Load Input: input_fn() § Train: model_fn() and train() § Evaluate: eval_fn() and evaluate() § Performance Metrics: Loss, Accuracy, … § Save and Export: export_savedmodel() § Predict: predict() (Uses the slow sess.run()) https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/customestimator/
  • 85. TF.CONTRIB.LEARN.EXPERIMENT § Easier-to-Use Distributed TensorFlow § Same API for Local and Distributed § Combines Estimator with input_fn() § Used for Training, Evaluation, & Hyper-Parameter Tuning § Distributed Training Defaults to Data-Parallel & Async § Cluster Configuration is Fixed at Start of Training Job § No Auto-Scaling Allowed, but That’s OK for Training § Note: This is Likely to be Deprecated Soon
  • 86. ESTIMATOR + EXPERIMENT CONFIGS § TF_CONFIG § Special environment variable for config § Defines ClusterSpec in JSON incl. master, workers, PS’s § Distributed mode: '{"environment":"cloud"}' § Local: '{"environment":"local", "task":{"type":"worker"}}' § RunConfig: Defines checkpoint interval, output directory, etc. § HParams: Hyper-parameter tuning parameters and ranges § learn_runner creates RunConfig before calling run() & tune() § schedule is set based on {"task":{"type":…}}

TF_CONFIG='{
  "environment": "cloud",
  "cluster": {
    "master": ["worker0:2222"],
    "worker": ["worker1:2222"],
    "ps": ["ps0:2222"]
  },
  "task": {"type": "ps", "index": "0"}
}'
  • 87. ESTIMATOR + KERAS § Distributed TensorFlow (Estimator) + Easy to Use (Keras) § tf.keras.estimator.model_to_estimator()

# Instantiate a Keras Inception v3 model.
keras_inception_v3 = tf.keras.applications.inception_v3.InceptionV3(weights=None)

# Compile model with the optimizer, loss, and metrics you'd like to train with.
keras_inception_v3.compile(optimizer=tf.keras.optimizers.SGD(lr=0.0001, momentum=0.9),
                           loss='categorical_crossentropy',
                           metrics=['accuracy'])

# Create an Estimator from the compiled Keras model.
est_inception_v3 = tf.keras.estimator.model_to_estimator(keras_model=keras_inception_v3)

# Treat the derived Estimator as you would any other Estimator. For example,
# the following derived Estimator calls the train method:
est_inception_v3.train(input_fn=my_training_set, steps=2000)
  • 88. “CANNED” ESTIMATORS § Commonly-Used Estimators § Pre-Tested and Pre-Tuned § DNNClassifier, TensorForestEstimator § Always Use Canned Estimators If Possible § Reduce Lines of Code, Complexity, and Bugs § Use FeatureColumn to Define & Create Features Custom vs. Canned @ Google, August 2017
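A minimal canned-Estimator sketch, assuming Census-style columns (the column names here are illustrative, not from this deck's notebooks):

# Hedged sketch: `age`, `occupation`, and `train_input_fn` are assumed names.
age = tf.feature_column.numeric_column('age')
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    'occupation', hash_bucket_size=1000)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[age, tf.feature_column.indicator_column(occupation)],
    hidden_units=[128, 64],
    n_classes=2,
    model_dir='/tmp/census_dnn')

estimator.train(input_fn=train_input_fn, steps=1000)  # input_fn as on slide 89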
  • 89. ESTIMATOR + DATASET API

def input_fn():
  def generator():
    while True:
      yield ...
  my_dataset = tf.data.Dataset.from_generator(generator, tf.int32)
  # A one-shot iterator automatically initializes itself on first use.
  iter = my_dataset.make_one_shot_iterator()
  # The return value of get_next() matches the dataset element type.
  images, labels = iter.get_next()
  return images, labels

# The input_fn can be used as a regular Estimator input function.
estimator = tf.estimator.Estimator(…)
estimator.train(input_fn=input_fn, …)
  • 90. OPTIMIZER + ESTIMATOR API + TPU’S

run_config = tpu_config.RunConfig()
estimator = tpu_estimator.TPUEstimator(model_fn=model_fn, config=run_config)
estimator.train(input_fn=input_fn, num_epochs=10, …)

optimizer = tpu_optimizer.CrossShardOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=…))
train_op = optimizer.minimize(loss)
estimator_spec = tf.estimator.EstimatorSpec(train_op=train_op, loss=…)

https://www.tensorflow.org/programmers_guide/using_tpu
  • 91. TF.CONTRIB.LEARN.HEAD (OBJECTIVES) § Single-Objective Estimator § Single classification prediction § Multi-Objective Estimator § One (1) classification prediction § One (1) final layer to feed into next model § Multiple Heads Used to Ensemble Models § Treats neural network as a feature engineering step § Supported by TensorFlow Serving
  • 92. TF.LAYERS § Standalone Layer or Entire Sub-Graphs § Functions of Tensor Inputs & Outputs § Mix and Match with Operations § Assumes 1st Dimension is Batch Size § Handles One (1) to Many (*) Inputs § Metrics are Layers § Loss Metric (Per Mini-Batch) § Accuracy and MSE (Across Mini-Batches)
  • 93. TF.FEATURE_COLUMN § Used by Canned Estimator § Declaratively Specify Training Inputs § Converts Sparse to Dense Tensors § Sparse Features: Query Keyword, ProductID § Dense Features: One-Hot, Multi-Hot § Wide/Linear: Use Feature-Crossing § Deep: Use Embeddings
  • 94. TF.FEATURE_COLUMN EXAMPLE § Continuous + One-Hot + Embedding

deep_columns = [
    age,
    education_num,
    capital_gain,
    capital_loss,
    hours_per_week,
    tf.feature_column.indicator_column(workclass),
    tf.feature_column.indicator_column(education),
    tf.feature_column.indicator_column(marital_status),
    tf.feature_column.indicator_column(relationship),
    # To show an example of embedding
    tf.feature_column.embedding_column(occupation, dimension=8),
]
  • 95. FEATURE CROSSING § Create New Features by Combining Existing Features § Limitation: Combinations Must Exist in Training Dataset

base_columns = [
    education, marital_status, relationship, workclass, occupation, age_buckets
]

crossed_columns = [
    tf.feature_column.crossed_column(
        ['education', 'occupation'], hash_bucket_size=1000),
    tf.feature_column.crossed_column(
        ['age_buckets', 'education', 'occupation'], hash_bucket_size=1000)
]
  • 96. SEPARATE TRAINING + EVALUATION § Separate Training and Evaluation Clusters § Evaluate Upon Checkpoint § Avoid Resource Contention § Training Continues in Parallel with Evaluation (Diagram: Training Cluster, Evaluation Cluster, Parameter Server Cluster)
  • 97. BATCH (RE-)NORMALIZATION (2015, 2017) § Each Mini-Batch May Have Wildly Different Distributions § Normalize per Batch (and Layer) § Faster Training, Learns Quicker § Final Model is More Accurate § TensorFlow is already on 2nd Generation Batch Algorithm § First-Class Support for Fusing Batch Norm Layers § Final mean + variance Are Folded Into Graph Later -- (Almost) Always Use Batch (Re-)Normalization! --

z = tf.matmul(a_prev, W)
a = tf.nn.relu(z)
a_mean, a_var = tf.nn.moments(a, [0])
scale = tf.Variable(tf.ones([depth/channels]))
beta = tf.Variable(tf.zeros([depth/channels]))
bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
  • 98. DROPOUT (2014) § Training Technique § Prevents Overfitting § Helps Avoid Local Minima § Inherent Ensembling Technique § Creates and Combines Different Neural Architectures § Expressed as Probability Percentage (ie. 50%) § Boost Other Weights During Validation & Prediction (Diagram: Perform Dropout (Training Phase) vs. Boost for Dropout (Validation & Prediction Phase); 0% Dropout vs. 50% Dropout)
  • 99. BATCH NORM, DROPOUT + ESTIMATOR API § Must Specify Eval or Training Mode with Estimator API § These Will Behave Differently Depending on the Mode
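A minimal model_fn sketch of that point, assuming tf.layers-style ops (the feature key 'x' and layer sizes are illustrative): dropout and batch norm key off the mode, and batch norm's moving-average updates must be attached to the train op.

def model_fn(features, labels, mode, params):
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)
    net = tf.layers.dense(features['x'], 128, activation=tf.nn.relu)
    net = tf.layers.batch_normalization(net, training=is_training)
    net = tf.layers.dropout(net, rate=0.5, training=is_training)  # no-op at eval/predict
    logits = tf.layers.dense(net, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    # Batch norm's moving mean/variance update ops must run with the train step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_global_step())
    # (PREDICT branch with `predictions=` elided for brevity.)
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)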
  • 100. SAVED MODEL FORMAT § Different Format than Traditional Exporter § Contains Checkpoints, 1..* MetaGraph’s, and Assets § Export Manually with SavedModelBuilder § Estimator.export_savedmodel() § Hooks to Generate SignatureDef § Use saved_model_cli to Verify § Used by TensorFlow Serving § New Standard Export Format? (Catching on Slowly…)
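A hedged export sketch for the Estimator path, assuming a model with a single float input tensor (the placeholder name, shape, and paths are assumptions):

# Build a serving input receiver for raw tensors (names/shapes are assumed).
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {'x': tf.placeholder(tf.float32, shape=[None, 784], name='x')})

export_dir = estimator.export_savedmodel(
    export_dir_base='/tmp/saved_models',
    serving_input_receiver_fn=serving_input_fn)

# Verify the exported SignatureDef from the command line:
#   saved_model_cli show --dir /tmp/saved_models/<version> --all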
  • 101. TENSORFLOW DEBUGGER § Step through Operations § Inspect Inputs and Outputs § Wrap Session in Debug Session

sess = tf.Session(config=config)
sess = tf_debug.LocalCLIDebugWrapperSession(sess)

https://www.tensorflow.org/programmers_guide/debugger
  • 102. LET’S DEBUG A MODEL § Navigate to the following notebook: 04_Debug_Model § https://github.com/PipelineAI/notebooks
  • 103. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 104. SINGLE NODE, MULTI-GPU TRAINING § cpu:0 § By default, all CPUs § Requires extra config to target a CPU § gpu:0..n § Each GPU has a unique id § TF usually prefers a single GPU § xla_cpu:0, xla_gpu:0..n § “JIT Compiler Device” § Hints TensorFlow to attempt JIT Compile with tf.device(“/cpu:0”): with tf.device(“/gpu:0”): with tf.device(“/gpu:1”): GPU 0 GPU 1
  • 105. DISTRIBUTED, MULTI-NODE TRAINING § TensorFlow Automatically Inserts Send and Receive Ops into Graph § Parameter Server Synchronously Aggregates Updates to Variables § Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS (Diagram: a single node with multiple GPUs vs. multiple nodes, each with one or more GPUs)
  • 106. DATA PARALLEL VS. MODEL PARALLEL § Data Parallel (“Between-Graph Replication”) § Send exact same model to each device § Each device operates on partition of data § ie. Spark sends same function to many workers § Each worker operates on their partition of data § Model Parallel (“In-Graph Replication”) § Send different partition of model to each device § Each device operates on all data § Difficult, but required for larger models with lower-memory GPUs
  • 107. SYNCHRONOUS VS. ASYNCHRONOUS § Synchronous § Nodes compute gradients § Nodes update Parameter Server (PS) § Nodes sync on PS for latest gradients § Asynchronous § Some nodes may be slow to compute gradients § Nodes don't sync on PS § Nodes may get stale gradients from PS § May not converge due to stale reads!
  • 108. CHIEF WORKER § Chief Defaults to Worker Task 0 § Task 0 is guaranteed to exist § Performs Maintenance Tasks § Writes log summaries § Instructs PS to checkpoint vars § Performs PS health checks § (Re-)Initialize variables at (re-)start of training
  • 109. NODE AND PROCESS FAILURES § Checkpoint to Persistent Storage (HDFS, S3) § Use MonitoredTrainingSession and Hooks § Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos) § Understand Failure Modes and Recovery States (Diagram: Stateless, Not Bad: Training Continues | Stateful, Bad: Training Must Stop | Dios Mio! Long Night Ahead…)
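A hedged fault-tolerance sketch, assuming a `train_op`, a tf.train.Server named `server`, a `task_index`, and a shared checkpoint directory (all names and the HDFS path are illustrative); MonitoredTrainingSession restores from the latest checkpoint on restart and handles chief-only duties:

hooks = [tf.train.StopAtStepHook(last_step=100000)]

with tf.train.MonitoredTrainingSession(
        master=server.target,              # from tf.train.Server (assumed)
        is_chief=(task_index == 0),        # chief handles checkpoints/summaries
        checkpoint_dir='hdfs://namenode/checkpoints/mnist',  # illustrative path
        hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)  # recovers from preemptions via the latest checkpoint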
  • 110. AGENDA Part 1: Optimize TensorFlow Training § GPUs and TensorFlow § Train, Inspect, and Debug TensorFlow Models § TensorFlow Distributed Cluster Model Training § Optimize Training with JIT XLA Compiler
  • 111. XLA FRAMEWORK § XLA: “Accelerated Linear Algebra” § Reduce Reliance on Custom Operators § Intermediate Representation used by Hardware Vendors § Improve Portability § Increase Execution Speed § Decrease Memory Usage § Decrease Mobile Footprint Helps TensorFlow Be Flexible AND Performant!!
  • 112. XLA HIGH LEVEL OPTIMIZER (HLO) § HLO: “High Level Optimizer” § Compiler Intermediate Representation (IR) § Independent of source and target language § XLA Step 1 Emits Target-Independent HLO § XLA Step 2 Emits Target-Dependent LLVM § LLVM Emits Native Code Specific to Target § Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
  • 113. JIT COMPILER § JIT: “Just-In-Time” Compiler § Built on XLA Framework § Reduce Memory Movement – Especially with GPUs § Reduce Overhead of Multiple Function Calls § Similar to Spark Operator Fusing in Spark 2.0 § Unroll Loops, Fuse Operators, Fold Constants, … § Scopes: session, device, with jit_scope():
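Two hedged ways to turn the JIT on, per the scopes above (exact flags vary by TF version; the tensors x, W, b are assumed names):

# Session-wide JIT, via ConfigProto:
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

# Scoped JIT hint for a subgraph (tf.contrib API at the time of this deck):
jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
with jit_scope():
    y = tf.matmul(x, W) + b  # ops here are marked for XLA compilation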
  • 114. VISUALIZING JIT COMPILER IN ACTION (Before JIT vs. After JIT trace screenshots) Google Web Tracing Framework: http://google.github.io/tracing-framework/

run_options = tf.RunOptions(trace_level=tf.RunOptions.SOFTWARE_TRACE)
run_metadata = tf.RunMetadata()
sess.run(..., options=run_options, run_metadata=run_metadata)

from tensorflow.python.client import timeline
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline.json', 'w') as trace_file:
    trace_file.write(
        trace.generate_chrome_trace_format(show_memory=True))
  • 115. VISUALIZING FUSING OPERATORS

pip install graphviz
dot -Tpng /tmp/hlo_graph_1.w5LcGs.dot -o hlo_graph_1.png

GraphViz: http://www.graphviz.org
hlo_*.dot files generated by XLA
  • 116. LET’S TRAIN WITH XLA CPU § Navigate to the following notebook: 06_Train_Model_XLA_CPU § https://github.com/PipelineAI/notebooks
  • 117. LET’S TRAIN WITH XLA GPU § Navigate to the following notebook: 06a_Train_Model_XLA_GPU § https://github.com/PipelineAI/notebooks
  • 118. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Routing
  • 119. WE ARE NOW… …OPTIMIZING Models AFTER Model Training TO IMPROVE Model Serving PERFORMANCE!
  • 120. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 121. AOT COMPILER § Standalone, Ahead-Of-Time (AOT) Compiler § Built on XLA framework § tfcompile § Creates executable with minimal TensorFlow Runtime needed § Includes only dependencies needed by subgraph computation § Creates functions with feeds (inputs) and fetches (outputs) § Packaged as cc_library header and object files to link into your app § Commonly used for mobile device inference graph § Currently, only CPU x86-64 and ARM are supported - no GPU
  • 122. GRAPH TRANSFORM TOOL (GTT) § Post-Training Optimization to Prepare for Inference § Remove Training-only Ops (checkpoint, drop out, logs) § Remove Unreachable Nodes between Given feed -> fetch § Fuse Adjacent Operators to Improve Memory Bandwidth § Fold Final Batch Norm mean and variance into Variables § Round Weights/Variables to improve compression (ie. 70%) § Quantize (FP32 -> INT8) to Speed Up Math Operations
  • 123. AFTER TRAINING, BEFORE OPTIMIZATION (Diagram of the unoptimized graph: TensorFlow trains Variables and performs Operations, flowing Tensors; the user feeds Inputs and fetches Outputs)
  • 124. POST-TRAINING GRAPH TRANSFORMS

transform_graph
  --in_graph=unoptimized_cpu_graph.pb   ← Original Graph
  --out_graph=optimized_cpu_graph.pb    ← Transformed Graph
  --inputs='x_observed:0'               ← Feed (Input)
  --outputs='Add:0'                     ← Fetch (Output)
  --transforms='                        ← List of Transforms
    strip_unused_nodes
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms
    quantize_weights
    quantize_nodes'
  • 125. AFTER STRIPPING UNUSED NODES § Optimizations § strip_unused_nodes § Results § Graph much simpler § File size much smaller
  • 126. AFTER REMOVING UNUSED NODES § Optimizations § strip_unused_nodes § remove_nodes § Results § Pesky nodes removed § File size a bit smaller
  • 127. AFTER FOLDING CONSTANTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § Results § Placeholders (feeds) -> Variables* (*Why Variables and not Constants?)
  • 128. AFTER FOLDING BATCH NORMS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § Results § Graph remains the same § File size approximately the same
  • 129. AFTER QUANTIZING WEIGHTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § Results § Graph is same, file size is smaller, compute is faster
  • 130. WEIGHT QUANTIZATION § FP16 and INT8 Are Smaller and Computationally Simpler § Weights/Variables are Constants § Easy to Linearly Quantize
  • 132. ACTIVATION QUANTIZATION § Activations Not Known Ahead of Time § Depends on input, not easy to quantize § Requires Additional Calibration Step § Use a “representative” dataset § Per Neural Network Layer… § Collect histogram of activation values § Generate many quantized distributions with different saturation thresholds § Choose threshold to minimize… KL_divergence(ref_distribution, quant_distribution) § Not Much Time or Data is Required (Minutes on Commodity Hardware)
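A simplified NumPy sketch of that calibration loop (the bin counts and the coarse requantization step are assumptions; the production algorithm, e.g. TensorRT's, handles empty bins and normalization more carefully): for each candidate saturation threshold, clip the activation histogram, requantize it coarsely, and keep the threshold that minimizes the KL divergence to the reference distribution.

import numpy as np

def kl_divergence(p, q, eps=1e-10):
    # Normalize and compare two histograms.
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return np.sum(p * np.log((p + eps) / (q + eps)))

def calibrate_threshold(activations, num_bins=2048, num_levels=256):
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    best_threshold, best_kl = edges[-1], np.inf
    for i in range(num_levels, num_bins):     # candidate saturation thresholds
        ref = hist[:i].astype(np.float64)
        ref[-1] += hist[i:].sum()             # fold the clipped tail into the last bin
        # Coarsely requantize the clipped histogram to `num_levels` bins, then expand back.
        chunks = np.array_split(ref, num_levels)
        quant = np.concatenate([np.full(len(c), c.mean()) for c in chunks])
        kl = kl_divergence(ref, quant)
        if kl < best_kl:
            best_kl, best_threshold = kl, edges[i]
    return best_threshold

Run per layer on activations collected from the "representative" dataset; the returned threshold becomes that layer's saturation range for INT8.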
  • 133. AFTER ACTIVATION QUANTIZATION § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes (activations) § Results § Larger graph, needs calibration! Requires Additional freeze_requantization_ranges
  • 134. LET’S OPTIMIZE FOR INFERENCE § Navigate to the following notebook: 08_Optimize_Model_Activations § https://github.com/PipelineAI/notebooks
  • 135. FREEZING MODEL FOR DEPLOYMENT § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes § freeze_graph § Results § Variables -> Constants Finally! We’re Ready to Deploy!!
  • 136. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 137. MODEL SERVING TERMINOLOGY § Inference § Only Forward Propagation through Network § Predict, Classify, Regress, … § Bundle § GraphDef, Variables, Metadata, … § Assets § ie. Map of ClassificationID -> String § {9283: “penguin”, 9284: “bridge”} § Version § Every Model Has a Version Number (Integer) § Version Policy § ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
  • 138. TENSORFLOW SERVING FEATURES § Supports Auto-Scaling § Custom Loaders beyond File-based § Tune for Low-latency or High-throughput § Serve Diff Models/Versions in Same Process § Customize Models Types beyond HashMap and TensorFlow § Customize Version Policies for A/B and Bandit Tests § Support Request Draining for Graceful Model Updates § Enable Request Batching for Diff Use Cases and HW § Supports Optimized Transport with GRPC and Protocol Buffers
  • 139. PREDICTION SERVICE § Predict (Original, Generic) § Input: List of Tensor § Output: List of Tensor § Classify § Input: List of tf.Example (key, value) pairs § Output: List of (class_label: String, score: float) § Regress § Input: List of tf.Example (key, value) pairs § Output: List of (label: String, score: float)
  • 140. PREDICTION INPUTS + OUTPUTS § SignatureDef § Defines inputs and outputs § Maps external (logical) to internal (physical) tensor names § Allows internal (physical) tensor names to change

from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils

graph = tf.get_default_graph()
x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')

inputs_map = {'inputs': x_observed}
outputs_map = {'outputs': y_pred}

predict_signature = signature_def_utils.predict_signature_def(inputs=inputs_map,
                                                              outputs=outputs_map)
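A hedged gRPC client sketch against that signature, assuming a recent tensorflow-serving-api package (the host/port, model name, and input shape are assumptions):

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')  # assumed TF Serving gRPC port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'mnist'                     # assumed model name
request.model_spec.signature_name = 'serving_default'
request.inputs['inputs'].CopyFrom(
    tf.contrib.util.make_tensor_proto(np.random.rand(1, 784), dtype=tf.float32))

response = stub.Predict(request, 10.0)  # 10-second timeout
print(response.outputs['outputs'])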
  • 141. MULTI-HEADED INFERENCE § Inputs Pass Through Model One Time § Model Returns Multiple Predictions: 1. Human-readable prediction (ie. “penguin”, “church”,…) 2. Final layer of scores (float vector) § Final Layer of floats Pass to the Next Model in Ensemble § Optimizes Bandwidth, CPU/GPU, Latency, Memory § Enables Complex Model Composing and Ensembling
  • 142. BUILD YOUR OWN MODEL SERVER § Adapt GRPC (Google) <-> HTTP (REST of the World) § Perform Batch Inference vs. Request/Response § Handle Requests Asynchronously § Support Mobile, Embedded Inference § Customize Request Batching § Add Circuit Breakers, Fallbacks § Control Latency Requirements § Reduce Number of Moving Parts

#include "tensorflow_serving/model_servers/server_core.h"

class MyTensorFlowModelServer {
  ServerCore::Options options;
  // set options (model name, path, etc)
  std::unique_ptr<ServerCore> core;
  TF_CHECK_OK(
    ServerCore::Create(std::move(options), &core));
};

Compile and Link with libtensorflow.so
  • 143. RUNTIME OPTION: NVIDIA TENSOR-RT § Post-Training Model Optimizations § Specific to Nvidia GPU § Similar to TF Graph Transform Tool § GPU-Optimized Prediction Runtime § Alternative to TensorFlow Serving § PipelineAI Supports TensorRT!
  • 144. AGENDA Part 2: Optimize TensorFlow Serving § AOT XLA Compiler and Graph Transform Tool § Key Components of TensorFlow Serving § Deploy Optimized TensorFlow Model § Optimize TensorFlow Serving Runtime
  • 146. REQUEST BATCH TUNING § max_batch_size § Enables throughput/latency tradeoff § Bounded by RAM § batch_timeout_micros § Defines batch time window, latency upper-bound § Bounded by RAM § num_batch_threads § Defines parallelism § Bounded by CPU cores § max_enqueued_batches § Defines queue upper bound, throttling § Bounded by RAM § Reaching either threshold will trigger a batch (Diagram: Separate, Non-Batched Requests vs. Combined, Batched Requests)
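A hedged sketch of wiring these knobs into TensorFlow Serving via a batching parameters file (the values are illustrative starting points, not recommendations):

# batching_parameters.txt (text-format BatchingParameters proto)
max_batch_size { value: 32 }
batch_timeout_micros { value: 5000 }
num_batch_threads { value: 8 }
max_enqueued_batches { value: 256 }

tensorflow_model_server \
  --port=8500 \
  --model_name=mnist \
  --model_base_path=/models/mnist \
  --enable_batching=true \
  --batching_parameters_file=batching_parameters.txt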
  • 147. ADVANCED BATCHING & SERVING TIPS § Batch Just the GPU/TPU Portions of the Computation Graph § Batch Arbitrary Sub-Graphs using Batch / Unbatch Graph Ops § Distribute Large Models Into Shards Across TensorFlow Model Servers § Batch RNNs Used for Sequential and Time-Series Data § Find Best Batching Strategy For Your Data Through Experimentation § BasicBatchScheduler: Homogeneous requests (ie Regress or Classify) § SharedBatchScheduler: Mixed requests, multi-step, ensemble predict § StreamingBatchScheduler: Mixed CPU/GPU/IO-bound Workloads § Serve Only One (1) Model Inside One (1) TensorFlow Serving Process § Much Easier to Debug, Tune, Scale, and Manage Models in Production.
  • 148. PIPELINE.AI FUNCTIONS (SERVERLESS) § Built on OpenFaaS § Supports Kubernetes § Supports Docker Swarm
  • 149. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Routing
  • 150. AGENDA Part 3: Advanced Model Serving + Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 151. KUBERNETES PRIORITY SCHEDULING Workloads can … § access the entire cluster up to the autoscaler max size § trigger autoscaling until a higher-priority workload needs the capacity § “fill the cracks” of resource usage of higher-priority work (i.e., wait to run until resources are freed)
  • 152. KUBERNETES INGRESS § Single Service § Can also use Service (LoadBalancer or NodePort) § Fan Out & Name-Based Virtual Hosting § Route Traffic Using Path or Host Header § Reduces # of load balancers needed § 404 Implemented as default backend § Federation / Hybrid-Cloud § Creates Ingress objects in every cluster § Monitors health and capacity of pods within each cluster § Routes clients to appropriate backend anywhere in federation

Fan Out (Path):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-fanout
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Virtual Hosting (Host Header):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-virtualhost
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
  • 153. KUBERNETES INGRESS CONTROLLER § Ingress Controller Types § Google Cloud: kubernetes.io/ingress.class: gce § Nginx: kubernetes.io/ingress.class: nginx § Istio: kubernetes.io/ingress.class: istio § Must Start Ingress Controller Manually § Just deploying Ingress is not enough § Not started by kube-controller-manager § Start Istio Ingress Controller kubectl apply -f $ISTIO_INSTALL_PATH/install/kubernetes/istio.yaml
  • 154. ISTIO EGRESS § Whitelist Domains Accessible From Within the Service Mesh § Apply RoutingRules § Apply DestinationPolicies § Supports TLS, HTTP, GRPC

kind: EgressRule
metadata:
  name: pipeline-api-egress
spec:
  destination:
    service: api.pipeline.ai
  ports:
  - port: 80
    protocol: http
  - port: 443
    protocol: https
  • 155. AGENDA Part 3: Advanced Model Serving + Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 157. ISTIO ARCHITECTURE: ENVOY § Lyft Project § High-perf Proxy (C++) § Lots of Metrics § Zone-Aware § Service Discovery § Load Balancing § Fault Injection, Circuits § %-based Traffic Split, Shadow § Sidecar Pattern § Rate Limiting, Retries, Outlier Detection, Timeout with Budget, …
  • 158. ISTIO ARCHITECTURE: MIXER § Enforce Access Control § Evaluate Request-Attrs § Collect Metrics § Platform-Independent § Extensible Plugin Model
  • 159. ISTIO ARCHITECTURE: PILOT § Envoy service discovery § Intelligent routing § A/B Tests § Canary deployments § RouteRule->Envoy conf § Propagates to sidecars § Supports Kube, Consul, ...
  • 160. ISTIO ARCHITECTURE: SECURITY § Mutual TLS Auth § Credential Management § Uses Service-Identity § Canary Deployments § Fine-grained ACLs § Attribute & Role-based § Auditing & Monitoring
  • 161. AGENDA Part 3: Advanced Model Serving + Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 162. ISTIO ROUTE RULES § Kubernetes Custom Resource Definition (CRD)

kind: CustomResourceDefinition
metadata:
  name: routerules.config.istio.io
spec:
  group: config.istio.io
  names:
    kind: RouteRule
    listKind: RouteRuleList
    plural: routerules
    singular: routerule
  scope: Namespaced
  version: v1alpha2
  • 163. ADVANCED ROUTING RULES § Content-based Routing § Uses headers, username, payload, … § Cross-Environment Routing § Shadow traffic prod=>staging
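A hedged content-based routing sketch in the same v1alpha2 RouteRule style as the A/B rules later in this deck (the cookie regex and version labels are assumptions):

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-beta-users
spec:
  destination:
    name: predict-mnist
  precedence: 3
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=beta)(;.*)?$"   # assumed beta-user cookie
  route:
  - labels:
      version: B   # beta users always hit model B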
  • 164. ISTIO DESTINATION POLICIES § Load Balancing § ROUND_ROBIN (default) § LEAST_CONN (between 2 randomly-selected hosts) § RANDOM § Circuit Breaker § Max connections § Max requests per conn § Consecutive errors § Penalty timer (15 mins) § Scan windows (5 mins)

circuitBreaker:
  simpleCb:
    maxConnections: 100
    httpMaxRequests: 1000
    httpMaxRequestsPerConnection: 10
    httpConsecutiveErrors: 7
    sleepWindow: 15m
    httpDetectionInterval: 5m
  • 165. ISTIO AUTO-SCALING § Traffic Routing and Auto-Scaling Occur Independently § Istio Continues to Obey Traffic Splits After Auto-Scaling § Auto-Scaling May Occur In Response to New Traffic Route
  • 166. A/B & BANDIT MODEL TESTING § Perform Live Experiments in Production § Compare Existing Model A with Model B, Model C § Safe Split-Canary Deployment § Pro Tip: Keep Ingress Simple – Use Route Rules Instead!

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-20-5-75
spec:
  destination:
    name: predict-mnist
  precedence: 2  # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 20   # 20% still routes to model A
  - labels:
      version: B
    weight: 5    # 5% routes to new model B
  - labels:
      version: C
    weight: 75   # 75% routes to new model C

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-1-2-97
spec:
  destination:
    name: predict-mnist
  precedence: 2  # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 1    # 1% routes to model A
  - labels:
      version: B
    weight: 2    # 2% routes to new model B
  - labels:
      version: C
    weight: 97   # 97% routes to new model C

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: predict-mnist-97-2-1
spec:
  destination:
    name: predict-mnist
  precedence: 2  # Greater than global deny-all
  route:
  - labels:
      version: A
    weight: 97   # 97% still routes to model A
  - labels:
      version: B
    weight: 2    # 2% routes to new model B
  - labels:
      version: C
    weight: 1    # 1% routes to new model C
  • 167. AGENDA Part 3: Advanced Model Serving + Routing § Kubernetes Ingress, Egress, Networking § Istio and Envoy Architecture § Intelligent Traffic Routing and Scaling § Metrics, Chaos Monkey, Production Readiness
  • 168. ISTIO METRICS AND MONITORING § Verify Traffic Splits § Fine-Grained Request Tracing
  • 169. ISTIO & CHAOS + LATENCY MONKEY § Fault Injection § Delay § Abort

kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    abort:
      httpStatus: 420
      percent: 100

kind: RouteRule
metadata:
  name: predict-mnist
spec:
  destination:
    name: predict-mnist
  httpFault:
    delay:
      fixedDelay: 7.000s
      percent: 100
  • 170. SPECIAL THANKS TO CHRISTIAN POSTA § http://blog.christianposta.com/istio-workshop
  • 171. AGENDA Part 0: Introductions and Setup Part 1: Optimize TensorFlow Training Part 2: Optimize TensorFlow Serving Part 3: Advanced Model Serving + Routing
  • 172. PIPELINE.AI SUPPORTS ALL MAJOR MODELS
  • 173. THANK YOU!! § Please Star this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline Contact Me chris@pipeline.ai @cfregly