Deep Learning in Computer Vision

Sungjoon Choi, Research Scientist at Kakao Brain
Introduction to Deep Learning
Presenter: Sungjoon Choi (sungjoon.choi@cpslab.snu.ac.kr)
Optimization methods
CNN basics
Semantic segmentation
Weakly supervised localization
Image detection
RNN
Visual QnA
Word2Vec
Image Captioning
Contents
What is deep learning?
3
“Deep learning is a branch of machine learning based on a set of
algorithms that attempt to model high-level abstractions in data by
using multiple processing layers, with complex structures or otherwise,
composed of multiple non-linear transformations.”
Wikipedia says:
Key ingredients: machine learning, high-level abstractions, multiple processing layers (a network).
Is it brand new?
4
Neural Nets: McCulloch & Pitts, 1943
Perceptron: Rosenblatt, 1958
RNN: Grossberg, 1973
CNN: Fukushima, 1979
RBM: Hinton, 1999
DBN: Hinton, 2006
D-AE: Vincent, 2008
AlexNet: Krizhevsky, 2012
GoogLeNet: Szegedy, 2015
Deep architectures
5
Feed-Forward: multilayer neural nets, convolutional nets
Feed-Back: Stacked Sparse Coding, Deconvolutional Nets
Bi-Directional: Deep Boltzmann Machines, Stacked Auto-Encoders
Recurrent: Recurrent Nets, Long Short-Term Memory
CNN basics
CNN
7
CNNs are basically layers of convolutions followed by
subsampling, topped with fully connected layers.
Intuitively, the convolution and subsampling layers work as
feature extractors, while the fully connected layers classify
which category the current input belongs to using the
extracted features (see the sketch below).
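To make this pipeline concrete, here is a minimal PyTorch sketch, assuming a 32×32 RGB input and illustrative layer sizes (this is not the exact network from the slides):

```python
import torch
import torch.nn as nn

# Minimal CNN sketch: convolution + subsampling extract features,
# a fully connected layer classifies using those features.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution
            nn.ReLU(),
            nn.MaxPool2d(2),                             # subsampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 input

    def forward(self, x):
        x = self.features(x)      # feature extraction layers
        x = x.flatten(1)          # flatten for the fully connected classifier
        return self.classifier(x)

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # -> shape (1, 10)
```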
Optimization
methods
Gradient descent?
There are three variants of gradient descent.
They differ in how much data we use to compute the gradient,
trading off the accuracy of the parameter update against computing time.
Batch gradient descent
In batch gradient descent, we use the entire
training dataset to compute the gradient.
Stochastic gradient descent
In stochastic gradient descent (SGD), the
gradient is computed from each training
sample, one by one.
Mini-batch gradient descent
In mini-batch gradient descent, we take the
best of both worlds: the gradient is computed
on a small batch of samples.
Common mini-batch sizes range between 50
and 256 (but can vary); a minimal training-loop sketch follows below.
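A hedged sketch of a mini-batch loop on a least-squares objective; setting `batch_size` to the dataset size recovers batch gradient descent, and `batch_size=1` recovers SGD (all names and the objective are illustrative):

```python
import numpy as np

# Mini-batch gradient descent on a least-squares problem (illustrative).
def minibatch_gd(X, y, lr=0.01, batch_size=64, epochs=10):
    theta = np.zeros(X.shape[1])
    n = X.shape[0]
    for _ in range(epochs):
        idx = np.random.permutation(n)              # shuffle once per epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            grad = 2 * Xb.T @ (Xb @ theta - yb) / len(batch)  # gradient on this mini-batch only
            theta -= lr * grad
    return theta
```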
Challenges
Choosing a proper learning rate is cumbersome.
 Learning rate schedule
Avoiding getting trapped in suboptimal local
minima
Momentum
Nesterov accelerated gradient
Adagrad
It adapts the learning rate to the parameters,
performing larger updates for infrequent and
smaller updates for frequent parameters.
$$\theta_{t+1,i} = \theta_{t,i} - \frac{\eta}{\sqrt{G_{t,ii} + \epsilon}}\, g_{t,i}$$
where $G_{t,ii}$ is the sum of squares of the past gradients w.r.t. parameter $i$ (sketched below).
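A hedged NumPy sketch of this per-parameter update; `G` accumulates squared gradients and never decays, which is exactly what Adadelta (next) tries to address (names are illustrative):

```python
import numpy as np

# One Adagrad step: per-parameter learning rates scaled by accumulated squared gradients.
def adagrad_step(theta, grad, G, lr=0.01, eps=1e-8):
    G += grad ** 2                           # accumulate squared gradients (never decays)
    theta -= lr * grad / (np.sqrt(G) + eps)  # larger steps for rarely-updated parameters
    return theta, G
```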
Adadelta
Adadelta is an extension of Adagrad that seeks
to reduce its monotonically decreasing learning
rate.
It restricts the window of accumulated past
gradients to some fixed size 𝑤.
$$E[g^2]_t = \gamma\, E[g^2]_{t-1} + (1-\gamma)\, g_t^2$$
$$E[\Delta\theta^2]_t = \gamma\, E[\Delta\theta^2]_{t-1} + (1-\gamma)\, \Delta\theta_t^2$$
$$\theta_{t+1} = \theta_t - \frac{\sqrt{E[\Delta\theta^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\, g_t$$
No learning rate! The accumulations are exponential moving averages.
RMSprop
RMSprop is an unpublished adaptive learning rate
method proposed by Geoff Hinton in his lecture.
$$E[g^2]_t = \gamma\, E[g^2]_{t-1} + (1-\gamma)\, g_t^2$$
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}}\, g_t$$
Adam
Adaptive Moment Estimation (Adam) keeps exponentially
decaying averages of both past gradients (momentum) and
past squared gradients (running average of gradient squares).
$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t$$
$$v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2$$
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t,
\qquad \hat{m}_t = \frac{m_t}{1-\beta_1^t}, \quad \hat{v}_t = \frac{v_t}{1-\beta_2^t}$$
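A hedged NumPy sketch of the Adam update above, including the bias-corrected moment estimates (function and variable names are illustrative):

```python
import numpy as np

# One Adam step (t starts at 1 so the bias-correction terms are well defined).
def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # momentum: EMA of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2     # EMA of squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```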
Visualization
Semantic
segmentation
Semantic Segmentation?
Image Classification (e.g., lion, dog, giraffe)
Object Detection (e.g., bicycle, person, ball, dog)
Semantic Segmentation (per-pixel labels: person, bicycle, ...)
Semantic segmentation
Results
45
Results
73
Results
74
Weakly
supervised
localization
Weakly supervised localization
76
Weakly supervised localization
77
Weakly Supervised Object Localization
78
Supervised localization is usually trained with bounding-box annotations.
What if localization were possible from image-level labels alone, without
bounding-box annotations?
Today's seminar: Learning Deep Features for Discriminative Localization
(arXiv:1512.04150, Zhou et al., CVPR 2016)
Architecture
79
AlexNet + GAP + Places205: input 227x227x3 → convolutional feature maps
11x11x512 → 11x11 global average pooling (GAP) → 512-d feature → fully
connected layer → 205 scene classes (e.g., living room).
Class activation map (CAM)
80
• Identify important image regions by projecting the weights of the output
layer back onto the convolutional feature maps (sketched below).
• A CAM can be generated for each class in a single image.
• The regions for each category differ within the same image
(e.g., palace, dome, church, ...).
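A minimal sketch of this back-projection, assuming channel-last conv activations and the fully connected weights that follow GAP (shapes follow the slide's 11×11×512 example; names are illustrative):

```python
import numpy as np

# Class activation map: weighted sum of conv feature maps, using the
# output-layer weights of the chosen class.
def class_activation_map(feature_maps, fc_weights, class_idx):
    # feature_maps: (H, W, C) conv activations before GAP, e.g. (11, 11, 512)
    # fc_weights:   (C, num_classes) weights of the final fully connected layer
    w_c = fc_weights[:, class_idx]                           # (C,) weights for this class
    cam = np.tensordot(feature_maps, w_c, axes=([2], [0]))   # weighted sum over channels -> (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)                          # normalize to [0, 1] for visualization
```

In practice the resulting map is upsampled back to the input resolution before being overlaid on the image.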
Results
81
• CAM on top 5 predictions on an image
• CAM for one object class in images
GAP vs. GMP
82
• Oquab et al., CVPR 2015: Is object localization for free? Weakly-supervised
learning with convolutional neural networks.
• Uses global max pooling (GMP).
• Intuitive difference between GMP and GAP?
• The GAP loss encourages the network to identify the full extent of an object.
• The GMP loss encourages it to identify just one discriminative part.
• With GAP, the average of a map is maximized by finding all discriminative
parts of an object: if some activations are low, the output of that map decreases.
• With GMP, low scores for all image regions except the most discriminative
part do not impact the score, since only the maximum survives the pooling.
GAP & GMP
83
• GAP (upper) vs GMP (lower)
• GAP outperforms GMP
• GAP highlights more complete
object regions and less
background noise.
• The average-pooling loss benefits
when the network identifies all
discriminative regions of an object.
84
Concept localization
85
Concept localization in weakly labeled images
• Positive set: images whose text caption contains a short phrase.
• Negative set: randomly selected images.
• The model captures the concept even though phrases are
much more abstract than object names.
Weakly supervised text detector
• Positive set: 350 Google Street View images that contain text.
• Negative set: outdoor scene images from the SUN dataset.
• Text is highlighted without any bounding-box annotations.
Image detection
Results
102
SPPnet
104
Results
113
Results
114
Fast R-CNN
116
Faster R-CNN
127
Results
139
Results
140
R-CNN
141
Pipeline: Image → Regions → Resize → Convolution Features → Classify
SPP net
142
Pipeline: Image → Convolution Features → Regions → SPP → Classify
R-CNN vs. SPP net
143
R-CNN SPP net
Fast R-CNN
144
Pipeline: Image → Convolution Features → Regions → RoI Pooling Layer →
Class Label + Confidence (an RoI pooling sketch follows below)
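To make the RoI Pooling Layer concrete, here is a hedged NumPy sketch of RoI max pooling: each region of the shared feature map is divided into a fixed grid and max-pooled, so every RoI yields a fixed-size feature. Region coordinates are assumed to already be in feature-map units; `roi_max_pool` and its arguments are illustrative names.

```python
import numpy as np

def roi_max_pool(feature_map, roi, output_size=(7, 7)):
    # feature_map: (H, W, C); roi: (x0, y0, x1, y1) in feature-map coordinates, x1 > x0, y1 > y0
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1, :]
    H, W = output_size
    ys = np.linspace(0, region.shape[0], H + 1).astype(int)   # grid boundaries along height
    xs = np.linspace(0, region.shape[1], W + 1).astype(int)   # grid boundaries along width
    out = np.zeros((H, W, feature_map.shape[2]))
    for i in range(H):
        for j in range(W):
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1), :]
            out[i, j] = cell.max(axis=(0, 1))                 # max over each grid cell
    return out                                                # fixed (H, W, C) output per RoI
```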
R-CNN vs. SPP net vs. Fast R-CNN
145
R-CNN SPP net
Fast R-CNN
Faster R-CNN
146
Pipeline: Image → Fully Convolutional Features → Bounding Box Regression +
BB Classification → Fast R-CNN
R-CNN vs. SPP net vs. Fast R-CNN
147
R-CNN SPP net
Fast R-CNN Faster R-CNN
148
Results
149
RNN
Recurrent Neural Network
155
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Recurrent Neural Network
156
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
LSTM comes in!
157
Long Short Term Memory
This is just a standard RNN.
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
LSTM comes in!
158
Long Short Term Memory
This is just a standard RNN. This is the LSTM!
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Overall Architecture
159
(Cell) state
Hidden State
Forget Gate
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Input Gate
Output Gate
Next (Cell) State
Next Hidden State
Input
Output
Output = Hidden state (a one-step LSTM sketch follows below)
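A one-step sketch of the cell pictured above, assuming NumPy vectors and separate weight matrices per gate acting on the concatenation [h_prev; x] (parameter names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM step: forget gate, input gate, output gate, cell state, hidden state (= output).
def lstm_step(x, h_prev, c_prev, Wf, bf, Wi, bi, Wo, bo, Wc, bc):
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z + bf)         # forget gate: what to erase from the cell state
    i = sigmoid(Wi @ z + bi)         # input gate: what new information to write
    c_tilde = np.tanh(Wc @ z + bc)   # candidate cell content
    c = f * c_prev + i * c_tilde     # next (cell) state
    o = sigmoid(Wo @ z + bo)         # output gate
    h = o * np.tanh(c)               # next hidden state = output
    return h, c
```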
The Core Idea
160
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Visual QnA
VQA: Dataset and Problem definition
162
VQA dataset - Example
Q: How many dogs are seen?
Q: What animal is this?
Q: What color is the car?
Q: What is the mustache made of?
Q: Is this vegetarian pizza?
Solving VQA
163
Approach
[Malinowski et al., 2015] [Ren et al., 2015] [Andreas et al., 2015]
[Ma et al., 2015] [Jiang et al., 2015]
Various methods have been proposed
DPPnet
164
Motivation
The common pipeline for using deep learning in vision:
take a CNN trained on ImageNet, switch the final layer, and fine-tune for the new task.
Observation: in VQA, the task is determined by the question.
DPPnet
165
Main Idea
Switching parameters of a layer based on a question
Dynamic Parameter Layer
Question Parameter Prediction Network
DPPnet
166
Parameter Explosion
The question feature (hidden-state dimension M) is mapped by an fc-layer to the
predicted parameters of the Dynamic Parameter Layer, a Q × P weight matrix, so
the fc-layer has N = Q × P outputs and R = Q × P × M parameters.
For example, Q = 1000, P = 1000, M = 500 gives R = 500,000,000,
about 1.86 GB for a single layer; for comparison, all of VGG19 has
about 144,000,000 parameters.
DPPnet
167
Parameter Explosion
Solution: instead of outputting all N = Q × P weights directly, let the fc-layer
output an N-dimensional vector with N < Q × P, so R = N × M.
We can control N (a quick numeric check follows below).
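A quick, hedged numeric check of the counts quoted above (float32 weights, 4 bytes each; the reduced `N` below is an illustrative choice, not a value from the paper):

```python
Q, P, M = 1000, 1000, 500
R_naive = Q * P * M                 # predicting the full Q x P matrix: 500,000,000 parameters
print(R_naive * 4 / 2**30)          # ~1.86 GB of float32 weights for this single layer

N = 100_000                         # illustrative: N is ours to control, N << Q * P
R_reduced = N * M                   # 50,000,000 parameters with the N-dimensional output
```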
DPPnet
168
Weight Sharing with the Hashing Trick
The weights of the Dynamic Parameter Layer are picked from a vector of
candidate weights (predicted from the question feature by an fc-layer),
with each weight position mapped to a candidate by hashing (sketched below).
[Chen et al., 2015]
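A minimal sketch of this weight-sharing idea, assuming a 1-D vector of candidate weights; the random generator below stands in for the real hash function of [Chen et al., 2015], and the sign factor is part of that scheme (all names are illustrative):

```python
import numpy as np

# Build a large weight matrix whose entries are shared, by "hashing" each
# (row, col) position to an index into a small pool of candidate weights.
def hashed_weight_matrix(candidate_weights, rows, cols, seed=0):
    K = len(candidate_weights)
    rng = np.random.default_rng(seed)                  # stand-in for a deterministic hash
    idx = rng.integers(0, K, size=(rows, cols))        # each position picks one candidate
    sign = rng.choice([-1.0, 1.0], size=(rows, cols))  # sign hash, as in HashedNets
    return sign * candidate_weights[idx]
```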
DPPnet
169
Final Architecture
End-to-End Fine-tuning is possible (Fully-differentiable)
DPPnet
170
Qualitative Results
Q: What is the boy holding?
DPPnet: surfboard DPPnet: bat
DPPnet
171
Qualitative Results
Q: What animal is shown?
DPPnet: giraffe DPPnet: elephant
DPPnet
172
Qualitative Results
Q: How does the woman feel?
DPPnet: happy
Q: What type of hat is she wearing?
DPPnet: cowboy
DPPnet
173
Qualitative Results
Q: How many cranes are in the image?
DPPnet: 2 (3)
Q: How many people are on the bench?
DPPnet: 2 (1)
How to combine image and question?
174
Multimodal Compact Bilinear Pooling
182
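Multimodal Compact Bilinear (MCB) pooling approximates the outer product of the image and question features using Count Sketch projections combined by element-wise multiplication in the FFT domain. A minimal sketch, assuming plain NumPy vectors and a sketch dimension `d` (all names are illustrative):

```python
import numpy as np

def count_sketch(x, h, s, d):
    y = np.zeros(d)
    np.add.at(y, h, s * x)        # y[h[i]] += s[i] * x[i]
    return y

def mcb(img_feat, q_feat, d=16000, seed=0):
    rng = np.random.default_rng(seed)                  # fixed random hashes for each modality
    h1 = rng.integers(0, d, size=img_feat.shape[0])
    h2 = rng.integers(0, d, size=q_feat.shape[0])
    s1 = rng.choice([-1.0, 1.0], size=img_feat.shape[0])
    s2 = rng.choice([-1.0, 1.0], size=q_feat.shape[0])
    sk1 = np.fft.rfft(count_sketch(img_feat, h1, s1, d))
    sk2 = np.fft.rfft(count_sketch(q_feat, h2, s2, d))
    return np.fft.irfft(sk1 * sk2, n=d)                # circular convolution of the two sketches
```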
MCB without Attention
186
MCB with Attention
187
Results
188
Results
189
Results
190
Results
191
Results
192
Results
193
Word2Vec
Word2vec?
195
Image
Captioning
Image Captioning?
210
Overall Architecture
211
Language Model
212
Language Model
213
Language Model
214
Language Model
215
Language Model
216
Training phase
217
Training phase
218
Training phase
219
Training phase
220
Training phase
221
Training phase
222
Test phase
223
Test phase
224
Test phase
225
Test phase
226
Test phase
227
Test phase
228
Test phase
229
Test phase
230
Test phase
231
Results
232
Results
233
But not always..
234
235
Show, attend and tell
236
Results
241
Results
242
Results (mistakes)
243
Neural Art
Preliminaries
245
Understanding Deep Image
Representations by Inverting Them
CVPR2015
Texture Synthesis Using
Convolutional Neural Networks
NIPS2015
A Neural Algorithm of Artistic Style
246
A Neural Algorithm of Artistic Style
247
248
Texture Synthesis Using
Convolutional Neural Networks
-NIPS2015
Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
Texture?
249
Visual texture synthesis
250
Which one do you think is real?
The right one is real.
The goal of texture synthesis is to produce (arbitrarily many)
new samples from an example texture.
Results of this work
251
The right ones are the given source textures!
How?
252
Texture Model
253
Two inputs $X^a$ and $X^b$ are passed through the CNN, giving feature maps
$F^a_l$ and $F^b_l$ at each layer $l$ (one row per filter, so the number of rows
equals the number of filters).
Feature Correlations
254
From each layer's feature maps, compute feature correlations as a Gram matrix,
e.g. $G^a_2 = (F^a_2)^\top F^a_2$.
Feature Correlations
255
With $F^a_2$ of shape (W·H) × (number of filters), the Gram matrix
$G^a_2 = (F^a_2)^\top F^a_2$ has shape (number of filters) × (number of filters);
see the sketch below.
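A minimal sketch of this Gram-matrix computation for one layer, assuming channel-last activations (names are illustrative):

```python
import numpy as np

# Gram matrix of one layer's feature maps, following the slide's convention:
# F has shape (W*H, number_of_filters), so G = F^T F is (filters x filters)
# and captures which filters co-activate (the texture statistics).
def gram_matrix(feature_maps):
    # feature_maps: (H, W, C) activations of one CNN layer
    F = feature_maps.reshape(-1, feature_maps.shape[-1])   # (W*H, C)
    return F.T @ F                                         # (C, C)
```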
Texture Generation
256
Pass the example texture $X^a$ and the generated image $X^b$ through the CNN,
and compute Gram matrices $G^a_l$ and $G^b_l$ at each layer $l$.
Texture Generation
257
Match the Gram matrices of the generated image to those of the example texture.
Element-wise squared loss at each layer: $E_l \propto \sum_{i,j}\left(G^a_{l,ij} - G^b_{l,ij}\right)^2$.
Total layer-wise loss function: $\mathcal{L} = \sum_l w_l\, E_l$.
Results
258
Results
259
260
Understanding Deep Image
Representations by Inverting Them
-CVPR2015
Aravindh Mahendran, Andrea Vedaldi (VGGgroup)
Reconstruction from feature map
261
Reconstruction from feature map
262
Pass the reference image $X^a$ and the image being optimized $X^b$ through the
CNN to obtain feature maps $F^a_l$ and $F^b_l$ at each layer.
Let's make these features similar, by changing the input image!
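A minimal sketch of this feature inversion, assuming `net` is any differentiable PyTorch feature extractor (an illustrative stand-in; the paper additionally uses natural-image regularizers such as total variation, which are omitted here):

```python
import torch

# Feature inversion: starting from noise, change the input image by gradient
# descent so that its CNN features match those of a reference image.
def invert_features(net, target_image, steps=200, lr=0.05):
    with torch.no_grad():
        target_feat = net(target_image)                      # features of the reference input
    x = torch.randn_like(target_image, requires_grad=True)   # the image we optimize
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(x), target_feat)  # make features similar
        loss.backward()                                      # gradients w.r.t. the input image
        opt.step()
    return x.detach()
```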
Receptive Field
263
264
A Neural Algorithm of Artistic Style
Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
How?
265
Style Image
Content Image
Mixed Image (Neural Art)
How?
266
Style Image
Content Image
Mixed Image (Neural Art)
Texture Synthesis Using
Convolutional Neural Networks
Understanding Deep Image
Representations by Inverting Them
How?
267
Gram matrix
Neural Art
268
$\vec{p}$: original photo (content), $\vec{a}$: original artwork (style),
$\vec{x}$: image to be generated.
Total loss = content loss + style loss:
$\mathcal{L}_{total}(\vec{p}, \vec{a}, \vec{x}) = \alpha\, \mathcal{L}_{content}(\vec{p}, \vec{x}) + \beta\, \mathcal{L}_{style}(\vec{a}, \vec{x})$
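A minimal sketch of this combined objective, assuming precomputed feature dictionaries for the generated image (`x_feats`), the photo (`p_feats`), and precomputed style Gram matrices for the artwork (`a_grams`); all names, the dictionary layout, and the weights are illustrative:

```python
import torch

def gram(F):                    # F: (H*W, C) feature matrix of one layer
    return F.t() @ F            # (C, C), same convention as the Gram matrix above

# Total loss = alpha * content loss (feature distance to the photo p)
#            + beta  * style loss  (Gram-matrix distance to the artwork a).
def total_loss(x_feats, p_feats, a_grams, alpha=1.0, beta=1e3):
    content = torch.nn.functional.mse_loss(x_feats["content"], p_feats["content"])
    style = sum(torch.nn.functional.mse_loss(gram(Fx), Ga)
                for Fx, Ga in zip(x_feats["style"], a_grams))
    return alpha * content + beta * style
```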
Results
269
Results
270