Currently, Siddha is an Architect at NVIDIA, focusing on the self-driving initiative. She works on stable and scalable training of neural networks in very large data centers, and uses simulation to validate those networks.
In 2017, Siddha led NASA's Long-Period Comets team within its AI accelerator, the Frontier Development Lab, where she used machine learning to develop meteor detectors. Recently this project provided the first-ever instrumental evidence of an outburst of five meteors coming from a previously known comet, C/1907 G1 (Grigg–Mellish). As a member of the NASA FDL AI Technical Committee, Siddha works toward incorporating AI into many space science projects.
Previously, Siddha was a Deep Learning Data Scientist at Deep Vision, where she developed and deployed deep learning models on resource-constrained edge devices. Siddha graduated from Carnegie Mellon University with a Master's in Computational Data Science and holds a Bachelor's in Computer Science and Technology from the National Institute of Technology (NIT), Hamirpur, India. She has also authored the book Practical Deep Learning for Cloud, Mobile & Edge (O'Reilly).
Speech Overview:
Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in the area of computer vision. Many mobile applications running on smartphones and wearable devices would potentially benefit from the new opportunities enabled by deep learning techniques. However, CNNs are by nature computationally and memory intensive, making them challenging to deploy on a mobile device. We explain how to practically bring the power of convolutional neural networks and deep learning to memory- and power-constrained devices like smartphones and web browsers.
15. Fine-tuning
1. Assemble a dataset
2. Find a pre-trained model
3. Fine-tune the pre-trained model
4. Run using existing frameworks
“Don’t Be A Hero” — Andrej Karpathy
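The four steps above can be sketched in miniature. This is a toy NumPy illustration, not the real workflow (which would use a framework like Keras): a frozen random "backbone" stands in for the pre-trained model, and only a newly added head is trained on the task data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor: its weights stay frozen.
W_frozen = rng.normal(size=(4, 8))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU layer

# New task head: the only weights fine-tuning updates.
W_head = np.zeros((8, 1))

# Toy "assembled dataset".
X = rng.normal(size=(64, 4))
y = (X[:, :1] > 0).astype(float)

def loss():
    p = 1.0 / (1.0 + np.exp(-features(X) @ W_head))
    return float(np.mean((p - y) ** 2))

before = loss()
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-features(X) @ W_head))
    grad = features(X).T @ ((p - y) * p * (1 - p)) / len(X)
    W_head -= 0.5 * grad  # only the head moves; the backbone stays frozen
after = loss()
```

The point of "Don't Be A Hero": reuse the frozen layers and train only the small new part, rather than designing and training a network from scratch.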
16. CustomVision.ai
Use the Fatkun browser extension to download images from a search engine, or use the Bing Image Search API to programmatically download photos with the proper rights.
22. Apple Ecosystem
Metal (2014) → BNNS + MPS (2016) → Core ML (2017) → Core ML 2 (2018) → Core ML 3 (2019)
§ Tiny models (~KB)!
§ 1-bit model quantization support
§ Batch API for improved performance
§ Conversion support for MXNet, ONNX
§ tf-coreml
26. TensorFlow Lite is small
TensorFlow Mobile: 1.5 MB
TensorFlow Lite (core interpreter + supported operations): 300 KB
27. TensorFlow Lite is Fast
§ Takes advantage of on-device hardware acceleration
§ FlatBuffers: reduces code footprint and memory usage, cuts CPU cycles spent on serialization and deserialization, and improves startup time
§ Pre-fused activations: the batch normalization layer is combined with the previous convolution
§ Static memory and a static execution plan: decreases load time
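The pre-fusing trick above is just algebra: a batch-norm layer that follows a convolution can be folded into the convolution's weights and bias ahead of time, so at inference only one op runs. A minimal sketch, using a matrix multiply as a stand-in for the convolution:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Convolution" weights and bias (a plain matrix multiply stands in here).
W = rng.normal(size=(3, 3))
b = rng.normal(size=3)

# Batch-norm parameters learned during training.
gamma = rng.normal(size=3)
beta = rng.normal(size=3)
mean = rng.normal(size=3)
var = rng.uniform(0.5, 2.0, size=3)
eps = 1e-5

# Fold BN into the conv: scale each output channel, adjust the bias.
scale = gamma / np.sqrt(var + eps)
W_folded = W * scale
b_folded = (b - mean) * scale + beta

x = rng.normal(size=(5, 3))
y_ref = ((x @ W + b) - mean) * scale + beta  # conv followed by batch norm
y_folded = x @ W_folded + b_folded           # single fused op, same result
```

The fused version produces numerically identical outputs with one op fewer per layer, which is where the speedup comes from.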
31. ML Kit
Easy to use
Abstraction over TensorFlow Lite
Built-in APIs for image labeling, OCR, face detection, barcode scanning, landmark detection, smart reply
Model management with Firebase
A/B testing

let vision = Vision.vision()
let faceDetector = vision.faceDetector(options: options)
let image = VisionImage(image: uiImage)
faceDetector.process(image) { faces, error in
    // handle the detected faces (or the error) here
}
34. Does my model make me look fat?
§ Apple does not allow apps over 200 MB to be downloaded over a cellular network.
§ Download the model on demand, and interpret on device instead.
41. Which devices should I support?
§ To get 95% device coverage, support phones released in the last four years.
§ For unsupported phones, offer graceful degradation (lower frame rate, cloud inference, etc.).
43. Energy Considerations
§ You don't usually run AI models constantly; you run them for a few seconds at a time.
§ On a modern flagship phone, running MobileNet at 30 FPS would drain the battery in 2–3 hours.
§ The bigger question: do you really need to run at 30 FPS? Could you run at 1 FPS instead?
46. Seeing AI
Audible barcode recognition
Aim: Help blind users identify products using barcodes
Issue: Blind users don't know where the barcode is
Solution: Guide the user toward the barcode with audio cues
47. AR Hand Puppets
Hart Woolery from 2020CV
Object Detection (Hand) + Key Point Estimation
[https://twitter.com/2020cv_inc/status/1093219359676280832]
48. Remove objects
Brian Schulman, Adventurous Co.
Object Segmentation + Image Inpainting
https://twitter.com/smashfactory/status/1139461813710442496
54. Model Pruning
Aim: Remove all connections with absolute weights below a threshold
Song Han, Jeff Pool, John Tran, William J. Dally, "Learning both Weights and Connections for Efficient Neural Networks", 2015
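The aim stated above, removing connections whose absolute weight falls below a threshold, fits in a few lines. A toy NumPy sketch, with the threshold chosen as a percentile of the weight magnitudes (the real method in the paper retrains after pruning):

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 8))  # stand-in for a layer's weight matrix

# Keep only the largest 25% of weights by magnitude.
threshold = np.percentile(np.abs(W), 75)
mask = np.abs(W) >= threshold
W_pruned = W * mask

sparsity = 1.0 - mask.mean()  # fraction of connections removed
```

The surviving weights are exactly those above the threshold; in practice the pruned network is then fine-tuned to recover accuracy, and the sparse matrix is stored compactly.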
55. Pruning in Keras
Before pruning:

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

With the layers wrapped for pruning (prune_low_magnitude from the TensorFlow Model Optimization Toolkit):

import tensorflow_model_optimization as tfmot
prune = tfmot.sparsity.keras.prune_low_magnitude

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    prune(tf.keras.layers.Dense(512, activation=tf.nn.relu)),
    tf.keras.layers.Dropout(0.2),
    prune(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
])
56. Model Quantization

Quantized to | Approx. size reduction | Match with 32-bit results
             |                        | linear  | linear_lut | kmeans_lut
16-bit       | 50%                    | 100%    | 100%       | 100%
8-bit        | 75%                    | 88.37%  | 80.62%     | 98.45%
4-bit        | 88%                    | 0%      | 0%         | 81.4%
2-bit        | 94%                    | 0%      | 0%         | 10.08%
1-bit        | 97%                    | 0%      | 0%         | 7.75%

(Diagram: quantized value intervals for 8-bit quantization. The [0, 1] range is divided into 256 intervals, 0 through 255, each about 0.0039 wide.)
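The "linear" scheme in the table maps the float range uniformly onto 2^bits integer levels, exactly as in the interval diagram. A minimal sketch of 8-bit linear quantization and the round-trip error it introduces:

```python
import numpy as np

def quantize_linear(w, bits=8):
    """Uniformly map float values onto 2**bits integer levels and back."""
    lo, hi = float(w.min()), float(w.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels                        # width of one interval
    q = np.round((w - lo) / scale).astype(np.uint16)  # integer bins 0..levels
    w_hat = q * scale + lo                            # dequantized approximation
    return q, w_hat

w = np.linspace(0.0, 1.0, 1000)
q, w_hat = quantize_linear(w, bits=8)
max_err = float(np.abs(w - w_hat).max())  # at most half an interval (~0.002)
```

Each value lands at most half an interval away from its original, which is why 8-bit results stay close to the 32-bit baseline while 1–4 bits can diverge badly.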
57. So many techniques — so little time!
Channel pruning
Model quantization
ThiNet (Filter pruning)
Weight sharing
Automatic Mixed Precision
Network distillation
58. PocketFlow – 1 Line to Make a Model Efficient
Tencent AI Lab created an Automatic Model Compression (AutoMC) framework.
59. Can I design a better architecture myself?
Maybe? But AI can do it much better!
60. AutoML – Let AI Design an Efficient Arch
§ Neural Architecture Search (NAS): an automated approach to designing models using reinforcement learning while maximizing accuracy.
§ Hardware-aware NAS: maximizes accuracy while minimizing run time on device.
§ Incorporates latency information into the reward objective function.
§ Measures real-world inference latency by executing on a particular platform.
§ 1.5x faster than MobileNetV2 (MnasNet)
§ ResNet-50 accuracy with 19x fewer parameters
§ SSD300 mAP with 35x fewer FLOPs
62. ProxylessNAS – Per Hardware Tuned CNNs
Han Cai and Ligeng Zhu and Song Han, "ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware", ICLR 2019
64. On-Device Training in Core ML

let updateTask = try MLUpdateTask(
    forModelAt: modelUrl,
    trainingData: trainingData,
    configuration: configuration,
    completionHandler: { [weak self] context in
        self?.model = context.model
        try? context.model.write(to: newModelUrl)
    })
updateTask.resume()

§ Core ML 3 introduced on-device learning.
§ With MLUpdateTask, training data never has to leave the device.
§ Schedule training for when the device is charging to save power.
66. TensorFlow Federated
Train a global model using thousands of devices without access to their data
Encryption + secure aggregation protocol
Can take a few days for enough aggregations to build up
https://github.com/tensorflow/federated
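The idea underneath this, federated averaging, can be sketched without any framework. Everything below is a toy stand-in (one local gradient step per client, noiseless linear data), not the TensorFlow Federated API: clients train on data that never leaves them, and the server only ever averages model weights.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(X)
    return w - lr * grad

rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0])  # the relationship hidden in the clients' data
w_global = np.zeros(2)

for _ in range(50):  # communication rounds
    updates = []
    for _client in range(5):
        X = rng.normal(size=(20, 2))  # private data, never leaves the "device"
        y = X @ w_true
        updates.append(local_step(w_global, X, y))
    w_global = np.mean(updates, axis=0)  # the server sees only weights
```

The global model converges toward the clients' shared signal even though the server never observes a single training example, which is the whole point of the technique.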
67. Mobile AI Development Lifecycle
1. Collect data
2. Label data
3. Train model
4. Convert model
5. Optimize performance
6. Deploy
7. Monitor
68. What we learned today
§ Why deep learning on mobile?
§ Building a model
§ Running a model
§ Hardware factors
§ Benchmarking
§ State-of-the-art applications
§ Making a model more efficient
§ Federated learning
69. How do I access the slides instantly?
http://PracticalDeepLearning.ai
@PracticalDLBook
Icons credit - Orion Icon Library - https://orioniconlibrary.com