Convolutional Neural Networks
Outline
• Part 1: Introduction to Python and TensorFlow
• Part 2: Regression Analysis and Logistic Regression
• Part 3: Neural Network Algorithms
• Part 4: Convolutional Neural Networks
Last time...
Perceptron vs Logistic Neuron
• Activation function
• Weights: w_1 ~ w_30
• Inputs (Wisconsin dataset features): x_1 ~ x_30
๋ ˆ์ด์–ด๊ฐ„ ํ–‰๋ ฌ ๊ณ„์‚ฐ
5
x × W + b = z
[569, 30] × [30, 10] = [569, 10], then + [10] (broadcast) = [569, 10]

• x: 569 samples × 30 features (x_1 … x_30; e.g. one row is [0.6, …, 0.2])
• W: 30 × 10 weights (w_1 … w_30 for each of the 10 output units)
• b: 10 biases (b_1 … b_10)
• z: 569 × 10 results (logits)
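A minimal NumPy sketch of the shape arithmetic above (random values, names are illustrative):

import numpy as np

x = np.random.rand(569, 30)   # 569 samples, 30 features
W = np.random.rand(30, 10)    # 30 x 10 weights
b = np.random.rand(10)        # 10 biases, broadcast across the 569 rows
z = x @ W + b
print(z.shape)                # (569, 10): one row of logits per sample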
Multi-class Classification
Three classes: Dog, Cat, Rabbit. For the same two samples:
• Sigmoid output: ŷ_1 = [0.9, 0.8, 0.7], ŷ_2 = [0.5, 0.2, 0.1] (independent scores; do not sum to 1)
• Softmax output: ŷ_1 = [0.59, 0.26, 0.15], ŷ_2 = [0.74, 0.18, 0.08] (output normalization: each prediction sums to 1)
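A small NumPy sketch of the comparison above; the logits are back-calculated so that their sigmoid matches the slide's first example (an assumption for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))      # subtract the max for numerical stability
    return e / e.sum()

z = np.array([2.197, 1.386, 0.847])   # logits for Dog, Cat, Rabbit

print(sigmoid(z))   # ~[0.9, 0.8, 0.7]    independent scores, sum > 1
print(softmax(z))   # ~[0.59, 0.26, 0.15] a probability distribution, sum == 1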
Fully Connected Neural Network
[Diagram: input data x → input layer → hidden layer (w_1, b_1; sigmoid or ReLU) → output layer (w_2, b_2; softmax) → ŷ, compared with the output labels]
Backpropagation Algorithm
[Diagram: a sigmoid neuron produces t from its input s; the next neuron computes z = w × t + b and passes z through softmax to get ŷ, which is compared with the target y to give the loss J.]

∂J/∂z = (y − ŷ)
∂J/∂b = ∂J/∂z × ∂z/∂b = (y − ŷ)          (update: b = b + ∂J/∂b)
∂J/∂w = ∂J/∂z × ∂z/∂w = (y − ŷ) t         (update: w = w + ∂J/∂w)
∂J/∂t = ∂J/∂z × ∂z/∂t = (y − ŷ) w
∂J/∂s = ∂J/∂t × ∂t/∂s = (y − ŷ) w × t(1 − t)   (sigmoid derivative: ∂t/∂s = t(1 − t))
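A minimal NumPy sketch of those chain-rule steps for a single pair of neurons; the scalar values, the learning rate, and the sigmoid stand-in for the output activation are assumptions for illustration:

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# forward pass for one example
s = 0.4                     # input to the hidden (sigmoid) neuron
t = sigmoid(s)              # hidden activation
w, b = 0.8, 0.1             # output-neuron parameters
z = w * t + b               # output logit
y_hat = sigmoid(z)          # stand-in for the output activation
y = 1.0                     # target

# backward pass, following the slide's formulas and sign convention
dJ_dz = y - y_hat
dJ_db = dJ_dz                   # dz/db = 1
dJ_dw = dJ_dz * t               # dz/dw = t
dJ_dt = dJ_dz * w               # dz/dt = w
dJ_ds = dJ_dt * t * (1 - t)     # dt/ds = t(1 - t), the sigmoid derivative

# update in the slide's "w = w + dJ/dw" form, scaled by a learning rate
lr = 0.1
w += lr * dJ_dw
b += lr * dJ_db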
SGD, mini-batch GD
• Batch gradient descent
  • Uses the entire training set for each update
  • The most accurate gradient, but hard to apply to large datasets
• SGD (Stochastic Gradient Descent)
  • Uses one training example at a time
  • Fast updates, but noisy examples make the learning fluctuate a lot
• Mini-batch GD
  • Splits the training data into small chunks
  • A compromise between batch GD and SGD: each update uses a fixed number of examples (see the sketch below)
Layer Configuration
• Input data: 28 × 28 = 784 pixels
• Input layer: 784 units
• Hidden layer: 100 units (w_1, b_1; sigmoid)
• Output layer: 10 units (w_2, b_2; softmax) → ŷ, compared with the output labels
Neural Network Implementation
CNN
Fully Connected
โ€ข ์ด๋ฏธ์ง€ ํ”ฝ์…€์„ ์ผ๋ ฌ๋กœ ํŽผ์ณ์„œ ๋„คํŠธ์›Œํฌ์— ์ฃผ์ž…ํ•ฉ๋‹ˆ๋‹ค.
...
784ร—100 + [100]
...
Convolution
โ€ข ์ด๋ฏธ์ง€์˜ 2์ฐจ์› ๊ตฌ์กฐ๋ฅผ ๊ทธ๋Œ€๋กœ ์ด์šฉํ•ฉ๋‹ˆ๋‹ค.
โ€ข ๊ฐ€์ค‘์น˜๊ฐ€ ์žฌํ™œ์šฉ๋˜์–ด ์‚ฌ์ด์ฆˆ๊ฐ€ ํฌ๊ฒŒ ์ค„์–ด ๋“ญ๋‹ˆ๋‹ค.
...
...
...
...
3ร—3 + [1]
...
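Plain arithmetic for the two parameter counts quoted above:

fc_params   = 784 * 100 + 100   # fully connected first layer: 78,500 parameters
conv_params = 3 * 3 + 1         # one 3x3 convolution kernel:  10 parameters
print(fc_params, conv_params)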
Convolving
Feature Map
• The 2-D map produced by a convolution is called a feature map.
• A layer usually produces several feature maps.
[Diagram: a 3 × 3 kernel (+ 1 bias) sliding over the input produces a 2-D feature map]
Typical Feature Maps
• The weights that produce a feature map are called a kernel or a filter.
• Usually more than one kernel is used.
[Diagram: kernels/filters (w) producing a 3-D stack of feature maps]
conv2d()
• Create the weight and bias variables yourself and pass them in
• tf.layers.conv2d() is more convenient (see the sketch after the code below)
W = tf.Variable(tf.truncated_normal([3, 3, 1, 10], stddev=0.1))   # 3x3 kernel, 1 input channel, 10 filters
b = tf.Variable(tf.constant(0.1, shape=[10]))                     # one bias per filter
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') + b  # strides and padding are required arguments
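For comparison, the higher-level call mentioned above; a sketch assuming x is a [batch, 28, 28, 1] float tensor (TensorFlow 1.x API):

import tensorflow as tf

# tf.layers.conv2d creates the kernel and bias variables internally
conv = tf.layers.conv2d(x, filters=10, kernel_size=(3, 3),
                        strides=(1, 1), padding='same', activation=None)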
[Diagram: DNN vs. CNN pipelines, both ending in a softmax function]
Stride, Padding
์ŠคํŠธ๋ผ์ด๋“œ(stride)
โ€ข ํ•„ํ„ฐ๊ฐ€ ์Šฌ๋ผ์ด๋”ฉํ•˜๋Š” ํฌ๊ธฐ๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.
์ŠคํŠธ๋ผ์ด๋“œ ๊ณ„์‚ฐ
๐‘œ =
๐‘– โˆ’ ๐‘“
๐‘ 
+ 1 =
4 โˆ’ 3
1
+ 1 = 2
์ž…๋ ฅ(i): 4x4
ํ•„ํ„ฐ(f): 3x3
์ŠคํŠธ๋ผ์ด๋“œ(s): 1
W = tf.Variable(tf.truncated_normal([3, 3, 1, 10], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[10]))
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='VALID') + b  # padding is a required argument; VALID adds none
Padding
• Adds zero-filled padding around the input (zero-padding).
• Lets the filter slide over more positions.

Padding calculation:
o = (i − f + 2p) / s + 1 = (5 − 4 + 2×2) / 1 + 1 = 6
• Input (i): 5×5
• Filter (f): 4×4
• Stride (s): 1
• Padding (p): 2
TensorFlow padding calculation
• Specify the padding size yourself with tf.pad()
• Or choose a padding type (same/valid); the padding size is then determined automatically from the stride
• same
  • output size = input size / stride
  • tf.layers.conv2d(.., padding='same', ..)
  • Because the padding is about half the filter size, it is also called half padding
• valid
  • No padding is added
  • tf.layers.conv2d(.., padding='valid', ..)
W = tf.Variable(tf.truncated_normal([3, 3, 1, 10], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[10]))
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') + b
ReLU
• Rectified Linear Unit
• Maps inputs in (−∞, +∞) to outputs in [0, +∞).
ŷ = max(0, z)
W = tf.Variable(tf.truncated_normal([3, 3, 1, 10], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[10]))
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') + b
acti = tf.nn.relu(conv)
Subsampling
• Usually just called pooling
• Average pooling and max pooling
• Has the effect of compressing the data
Pooling
• No weights are multiplied and no bias is added
• It simply reworks the values read from the input map
• The pooling size and stride are usually the same, so windows do not overlap
• output size = input size / pooling size
Max pooling:     o = (i − f) / s + 1 = (5 − 3) / 1 + 1 = 3
Average pooling: o = (i − f) / s + 1 = (5 − 3) / 1 + 1 = 3
(input i = 5, pooling window f = 3, stride s = 1 in both examples)
max_pool()
• The kernel size and stride must be specified explicitly
• tf.layers.max_pooling2d() is more convenient (see the sketch after the code below)
W = tf.Variable(tf.truncated_normal([3, 3, 1, 10], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[10]))
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') + b
actv = tf.nn.relu(conv)
pool = tf.nn.max_pool(actv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')  # padding is a required argument
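The higher-level pooling call mentioned above, as a sketch; actv is the ReLU output from the code block (TensorFlow 1.x API):

# 2x2 max pooling with a 2x2 stride halves the spatial dimensions
pool = tf.layers.max_pooling2d(actv, pool_size=(2, 2), strides=(2, 2), padding='same')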
Dropout
• During training, a random subset of a layer's nodes is excluded.
• This has the effect of ensembling many different networks.
dropout()
• Specify the dropout probability.
• Dropout must not be applied at inference time, so feed the rate (or a training flag) through a placeholder (see the sketch below).
• The examples use the tf.layers.dropout() function.
drop = tf.nn.dropout(fc_output, keep_prob)       # keep_prob: fraction of nodes to keep
drop = tf.layers.dropout(fc_output, drop_prob)   # drop_prob: fraction of nodes to drop
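A sketch of the inference-time switch mentioned above, in TensorFlow 1.x; the placeholder name and rate are assumptions, and fc_output is the fully connected output from the surrounding example:

import tensorflow as tf

# True while training, False at inference time
is_training = tf.placeholder(tf.bool, name='is_training')

# tf.layers.dropout drops nodes only when training=True; at inference it passes values through
drop = tf.layers.dropout(fc_output, rate=0.5, training=is_training)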
Convolution Implementation
๋ ˆ์ด์–ด ๊ตฌ์„ฑ
...
...
28ร—28ร—1
5ร—5ร—1
same, 32๊ฐœ
28ร—28ร—32
2ร—2
maxpool
14ร—14ร—32
5ร—5ร—32
same, 64๊ฐœ
14ร—14ร—64
2ร—2
maxpool
7ร—7ร—64
1024 10
๐‘ฆ<๐‘ง
softmax
relu
Convolution 1
• 5×5 kernel, 32 filters
• Stride 1
• Padding: same
• ReLU activation
• 28×28×1 → 28×28×32
ํ’€๋ง1
โ€ข 2x2 ์ปค๋„
โ€ข 2x2 ์ŠคํŠธ๋ผ์ด๋“œ
28ร—28ร—32
2ร—2
maxpool
14ร—14ร—32
Convolution 2
• 5×5 kernel, 64 filters
• Stride 1
• Padding: same
• ReLU activation
• 14×14×32 → 14×14×64
ํ’€๋ง2
โ€ข 2x2 ์ปค๋„
โ€ข 2x2 ์ŠคํŠธ๋ผ์ด๋“œ
14ร—14ร—64
2ร—2
maxpool
7ร—7ร—64
Fully Connected Layer
• 1024 units
• Dropout applied
• ReLU activation
• 7×7×64 → 1024
Output Layer
• 10 units
• Softmax activation
• 1024 → 10
• Before softmax: z; after softmax: y_hat (ŷ)
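A sketch that assembles the layers described above into one TensorFlow 1.x graph; variable names and the dropout rate are illustrative, and the course notebook linked under Materials is the authoritative version:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)

# Convolution 1 + Pooling 1: 28x28x1 -> 28x28x32 -> 14x14x32
conv1 = tf.layers.conv2d(x, filters=32, kernel_size=(5, 5), padding='same',
                         activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, pool_size=(2, 2), strides=(2, 2))

# Convolution 2 + Pooling 2: 14x14x32 -> 14x14x64 -> 7x7x64
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=(5, 5), padding='same',
                         activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, pool_size=(2, 2), strides=(2, 2))

# Fully connected layer with dropout: 7*7*64 -> 1024
flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
fc   = tf.layers.dense(flat, 1024, activation=tf.nn.relu)
drop = tf.layers.dropout(fc, rate=0.5, training=is_training)

# Output layer: 1024 -> 10 logits (z); softmax gives y_hat
z     = tf.layers.dense(drop, 10)
y_hat = tf.nn.softmax(z)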
Training Setup
• z: [?, 10] (logits), y: [?, 10] (one-hot labels), where ? is the batch size
• argmax(y) = [?], argmax(y_hat) = [?]
• Compare the two argmax results, convert the booleans to numbers, and take the mean:
  [True, False, True, ...] → [1.0, 0.0, 1.0, ...]
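A sketch of that accuracy computation in TensorFlow 1.x, assuming y holds the one-hot labels and y_hat the softmax output from the model sketch above:

correct  = tf.equal(tf.argmax(y_hat, 1), tf.argmax(y, 1))    # one boolean per sample
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))      # booleans -> 1.0/0.0, then the mean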
Mini-batch Training
• Sample 100 examples at a time
• x_data: [100, 784]
• y_data: [100, 10]
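A sketch of the sampling loop, assuming the MNIST helper from the TensorFlow 1.x tutorials (mnist.train.next_batch) and a train_op defined elsewhere; the placeholders are the ones from the model sketch above:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        # draw a mini-batch of 100 flattened images and one-hot labels
        x_data, y_data = mnist.train.next_batch(100)   # [100, 784], [100, 10]
        sess.run(train_op, feed_dict={x: x_data.reshape(-1, 28, 28, 1),
                                      y: y_data,
                                      is_training: True})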
Results
[Image: the 32 filters of the first convolution layer]
Materials
• GitHub: https://github.com/rickiepark/tfk-notebooks/tree/master/tensorflow_for_beginners
• SlideShare: https://www.slideshare.net/RickyPark3/
Thank you.
