Kubernetes Internals
Kubernetes 해부하기 (Dissecting Kubernetes)
eastbright.k@gmail.com
DongHyeon Kim
Agenda
‣ Understanding Kubernetes Components
‣ Understanding Networking
‣ Understanding Pod Networking
‣ Understanding Service Networking
Understanding Kubernetes Components
Kubernetes Component
▸ Master Component
▸ Provides the cluster's Control Plane
▸ Node Component
▸ Provides the Kubernetes runtime environment
▸ Add-on Component
▸ Pods and Services that implement additional cluster features
Master Component
▸ kube-apiserver
▸ Exposes the Kubernetes API endpoint
▸ etcd
▸ Key-value store that holds all cluster data
▸ kube-scheduler
▸ Detects Pods with no Node assigned and selects the Node each Pod will run on
▸ kube-controller-manager
▸ Runs the many Controllers that manage Kubernetes Resources
▸ cloud-controller-manager
▸ Interacts with the Cloud Provider
Node Component
▸ kubelet
▸ Agent that runs on every host in the cluster
▸ kube-proxy
▸ Implements the Service abstraction (userspace, iptables, ipvs, kernelspace modes)
▸ Container-Runtime
▸ Responsible for running Containers
▸ Any runtime that implements the Container Runtime Interface (CRI)
Add-on Component
▸ DNS
▸ Provides Service Discovery
▸ CNI (Container Network Interface)
▸ Provides the network between Pods
▸ Dashboard
▸ Monitoring
▸ Logging
Component Interdependencies
▸ Nearly all components make requests only to the API Server
▸ Only for a few commands (e.g. kubectl exec, logs) does the API Server make requests to the Kubelet
Understanding API Server
▸ Stores Resources in etcd after authentication, authorization, admission, and validation
▸ Propagates Resource changes to clients
▸ Provides only storage and notification of Resources; it takes no further action itself
Watch Interface of API Server
▸ Provides a watch interface per Resource
▸ Notification (Publish / Subscribe) over HTTP
▸ Supports HTTP/1.0 and HTTP/1.1
Watch Interface of API Server
$ curl --http1.0 http://localhost:8080/api/v1/pods?watch=true
$ tcpdump -nlA -i lo port 8080
05:42:32.087199 IP 127.0.0.1.47318 > 127.0.0.1.8080: Flags [P.], seq 1:101, ack 1, win 342, options [nop,nop,TS val 926521512
ecr 926521512], length 100: HTTP: GET /api/v1/pods?watch=true HTTP/1.0
E...9<@.@.."............rC.u...,...V.......
79..79..GET /api/v1/pods?watch=true HTTP/1.0
Host: localhost:8080
User-Agent: curl/7.58.0
Accept: */*
05:42:32.087785 IP 127.0.0.1.8080 > 127.0.0.1.47318: Flags [P.], seq 1:89, ack 101, win 342, options [nop,nop,TS val
926521513 ecr 926521512], length 88: HTTP: HTTP/1.0 200 OK
E...`c@.@..................,rC.....V.......
79..79..HTTP/1.0 200 OK
Content-Type: application/json
Date: Fri, 22 Mar 2019 05:42:32 GMT
05:42:32.090370 IP 127.0.0.1.8080 > 127.0.0.1.47318: Flags [P.], seq 56470:60566, ack 101, win 342, options [nop,nop,TS val
926521516 ecr 926521515], length 4096: HTTP
{"type":"ADDED","object":{ ... }}
...
Watch Interface of API Server
$ curl http://localhost:8080/api/v1/pods?watch=true
$ tcpdump -nlA -i lo port 8080
05:33:24.628863 IP 127.0.0.1.44242 > 127.0.0.1.8080: Flags [P.], seq 1:101, ack 1, win 342, options [nop,nop,TS val 925974024
ecr 925974024], length 100: HTTP: GET /api/v1/pods?watch=true HTTP/1.1
E....w@.@.o..............Q..jn.....V.......
71>.71>.GET /api/v1/pods?watch=true HTTP/1.1
Host: localhost:8080
User-Agent: curl/7.58.0
Accept: */*
05:33:24.629526 IP 127.0.0.1.8080 > 127.0.0.1.44242: Flags [P.], seq 1:117, ack 101, win 342, options [nop,nop,TS val
925974025 ecr 925974024], length 116: HTTP: HTTP/1.1 200 OK
E...;_@.@...............jn...Q.=...V.......
71> 71>.HTTP/1.1 200 OK
Content-Type: application/json
Date: Fri, 22 Mar 2019 05:33:24 GMT
Transfer-Encoding: chunked
9cf
{"type":"ADDED","object":{ ... }}
aab
{"type":"MODIFIED","object":{ ... }}
....
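The same stream can be consumed programmatically. A minimal sketch using client-go (v0.18+ signatures, where Watch takes a context); the kubeconfig path is an illustrative assumption:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from a local kubeconfig (the path is an assumption).
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // The programmatic equivalent of GET /api/v1/pods?watch=true.
    w, err := clientset.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    // Each event corresponds to one {"type": ..., "object": ...} chunk above.
    for event := range w.ResultChan() {
        fmt.Println(event.Type) // ADDED, MODIFIED, DELETED, ...
    }
}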
Understanding Scheduler
▸ Detects Pods with no Node assigned and assigns a Node to each Pod
▸ Modifies only the spec.nodeName (Pod.PodSpec.NodeName) field
▸ Filters the list of Nodes the Pod can be scheduled onto
▸ Sorts the admissible Nodes by priority and selects the best one
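Since only spec.nodeName is written, the scheduler's action can be reproduced by hand: posting a Binding for a pending Pod is exactly how kube-scheduler assigns a Node. A minimal sketch reusing the clientset from the watch example; it additionally needs v1 "k8s.io/api/core/v1", and the Pod and Node names are placeholders:

// Posting a Binding assigns a Node the same way kube-scheduler does.
binding := &v1.Binding{
    ObjectMeta: metav1.ObjectMeta{Name: "sample", Namespace: "default"},
    Target:     v1.ObjectReference{Kind: "Node", Name: "k8s-node1"},
}
if err := clientset.CoreV1().Pods("default").Bind(
    context.TODO(), binding, metav1.CreateOptions{}); err != nil {
    panic(err)
}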
The Scheduler's Default Filtering Policies
▸ Does the Node have spare capacity beyond the Pod's resource requests?
▸ Does the Node have labels matching the Pod's NodeSelector?
▸ If the Pod requires a specific host port binding, is that port still free on the Node?
▸ If the Pod requests a specific Volume, can the Node provide it?
▸ Does the Pod tolerate the Node's Taints?
▸ …
▸ kubernetes/pkg/scheduler/core/generic_scheduler.go
▸ kubernetes/pkg/scheduler/algorithm/predicates/predicates.go
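The flow in the files above boils down to filtering followed by prioritizing. A simplified, self-contained sketch of that loop; the types and the two policies here are illustrative stand-ins, not the real scheduler interfaces:

package main

import "fmt"

// Illustrative stand-ins; the real scheduler works on the API types
// and evaluates many more predicates and priorities.
type Node struct {
    Name         string
    FreeMilliCPU int64
    Labels       map[string]string
}

type Pod struct {
    Name            string
    RequestMilliCPU int64
    NodeSelector    map[string]string
}

// Filtering step: can the Pod run on this Node at all?
func fits(pod Pod, node Node) bool {
    if node.FreeMilliCPU < pod.RequestMilliCPU {
        return false
    }
    for k, v := range pod.NodeSelector {
        if node.Labels[k] != v {
            return false
        }
    }
    return true
}

// Prioritizing step: among feasible Nodes, prefer the most headroom.
func score(pod Pod, node Node) int64 {
    return node.FreeMilliCPU - pod.RequestMilliCPU
}

func schedule(pod Pod, nodes []Node) (string, bool) {
    best, bestScore := "", int64(-1)
    for _, n := range nodes {
        if !fits(pod, n) {
            continue
        }
        if s := score(pod, n); s > bestScore {
            best, bestScore = n.Name, s
        }
    }
    return best, best != ""
}

func main() {
    nodes := []Node{
        {Name: "node1", FreeMilliCPU: 500, Labels: map[string]string{"disk": "ssd"}},
        {Name: "node2", FreeMilliCPU: 2000, Labels: map[string]string{"disk": "ssd"}},
    }
    pod := Pod{Name: "sample", RequestMilliCPU: 250, NodeSelector: map[string]string{"disk": "ssd"}}
    if name, ok := schedule(pod, nodes); ok {
        fmt.Println("scheduled to", name) // node2
    }
}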
Understanding Controller
▸ Drive current state (status) → desired state (spec)
▸ Controllers do not communicate with each other
▸ They do not communicate with the Scheduler
▸ They do not communicate with the Kubelet; everything flows through the API Server
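All controllers share the same reconciliation pattern: read the current status, compare it with the desired spec, and issue requests (always through the API Server) to close the gap. A minimal, self-contained sketch with stand-in types, not the real controller framework:

package main

import "fmt"

// Stand-in for a ReplicaSet-like Resource: spec is desired, status is current.
type ReplicaSet struct {
    DesiredReplicas int // spec
    CurrentReplicas int // status
}

// reconcile drives the current state (status) toward the desired state (spec).
// A real controller would observe and act only via the API Server.
func reconcile(rs *ReplicaSet) {
    switch diff := rs.DesiredReplicas - rs.CurrentReplicas; {
    case diff > 0:
        fmt.Printf("creating %d Pod(s)\n", diff)
        rs.CurrentReplicas += diff
    case diff < 0:
        fmt.Printf("deleting %d Pod(s)\n", -diff)
        rs.CurrentReplicas += diff
    default:
        fmt.Println("nothing to do")
    }
}

func main() {
    rs := &ReplicaSet{DesiredReplicas: 3, CurrentReplicas: 1}
    reconcile(rs) // creating 2 Pod(s)
    reconcile(rs) // nothing to do
}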
Replication Manager, ReplicaSet Controller
Endpoint Controller
Understanding Kubelet
▸ Responsible for everything that runs on a Worker Node
▸ On startup, registers the host it is running on as a Node Resource
▸ Runs the Pods scheduled to its Node as Containers
▸ Continuously monitors running Containers and reports their status, events, and resource consumption to the API Server
▸ Executes readiness and liveness probes
Understanding Kubelet
▸ Can also create Pods from manifest files in a specific local directory (static Pods; the kubelet's --pod-manifest-path flag)
How the Components Cooperate
Understanding Kube-Proxy
▸ Kube-Proxy runs on every Node (deployed as a DaemonSet)
▸ Implements the Service abstraction
▸ Supports userspace, iptables, ipvs, and kernelspace modes (selected with the --proxy-mode flag)
// cmd/kube-proxy/app/server.go
const (
    proxyModeUserspace   = "userspace"
    proxyModeIPTables    = "iptables"
    proxyModeIPVS        = "ipvs"
    proxyModeKernelspace = "kernelspace" // for windows
)
Understanding Kube-Proxy
Proxy Mode (Userspace Mode)
kubernetes/pkg/proxy/userspace/proxier.go
Non-Proxy Mode (iptables, ipvs)
kubernetes/pkg/proxy/iptables/proxier.go
kubernetes/pkg/proxy/ipvs/proxier.go
Understanding DNS
▸ Watches Services, Endpoints, and Pods through the API Server's watch interface
▸ Keeps DNS records up to date
▸ DNS records may be briefly invalid while a Resource is being updated
▸ Registered as the nameserver in /etc/resolv.conf inside every Container deployed to the cluster
▸ pkg/kubelet/network/dns/dns.go (SetupDNSinContainerizedMounter)
root@k8s-master:/home/h# kubectl exec -it sample cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.k8s svc.k8s k8s
options ndots:5
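Because of ndots:5, a short name like kubernetes.default (fewer than five dots) is expanded with the search domains above before being sent to 10.96.0.10. A small sketch meant to run inside a Pod of this cluster; the looked-up name is an illustrative assumption:

package main

import (
    "fmt"
    "net"
)

func main() {
    // "kubernetes.default" has fewer dots than the ndots:5 threshold, so the
    // resolver first tries the search suffixes from /etc/resolv.conf
    // (default.svc.k8s, svc.k8s, k8s) against the cluster DNS at 10.96.0.10.
    addrs, err := net.LookupHost("kubernetes.default")
    if err != nil {
        panic(err)
    }
    fmt.Println(addrs) // typically the ClusterIP of the kubernetes Service
}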
Understanding Networking
Networking Model
▸ Container to Container Networking
▸ Shared network namespace (localhost communication)
▸ Pod to Pod Networking
▸ CNI
▸ Pod to Service Networking
▸ Service
▸ External to Service Networking
▸ Service
Understanding Pod Networking
Requirements of CNI
▸ Pods on a Node must be able to communicate with all Pods on all Nodes without NAT
▸ Agents on a Node must be able to communicate with all Pods on that Node
▸ Pods running in a Node's host network must be able to communicate with all Pods on all Nodes without NAT
NAT-less?
Pod to Pod Networking (same node)
Pod to Pod Networking (different nodes)
Understanding Service Networking
Service Networking
▸ Everything related to Services is handled by Kube-Proxy
▸ Each Service has its own IP and Port
▸ Service IP == Virtual IP
▸ When Kube-Proxy detects a new Service, it creates rules according to its mode
▸ When a packet's destination is a Service, the destination address is rewritten (DNAT) to one of the Pods backing the Service and the packet is redirected
▸ When a Service is accessed from outside a Pod, both SNAT (to the Node's IP) and DNAT (to the Pod's IP) occur (unless DSR is supported)
iptables chain traversal
Service (iptables mode)
$ iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !192.168.0.0/16 2.2.2.2 tcp dpt:http-alt
KUBE-SVC-ZE62HOGUXOIF3MJ5 tcp -- anywhere 2.2.2.2 tcp dpt:http-alt
KUBE-NODEPORTS all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- anywhere anywhere tcp dpt:30001
KUBE-SVC-ZE62HOGUXOIF3MJ5 tcp -- anywhere anywhere tcp dpt:30001
Chain KUBE-SVC-ZE62HOGUXOIF3MJ5 (2 references)
target prot opt source destination
KUBE-SEP-7AE52TSMNDEGV6BO all -- anywhere anywhere statistic mode random probability 0.50000000000
KUBE-SEP-GEQ73U43LIPSQP2Z all -- anywhere anywhere
Chain KUBE-SEP-7AE52TSMNDEGV6BO (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 1.1.1.1 anywhere
DNAT tcp -- anywhere anywhere tcp to:1.1.1.1:8080
Chain KUBE-SEP-GEQ73U43LIPSQP2Z (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 1.1.2.1 anywhere
DNAT tcp -- anywhere anywhere tcp to:1.1.2.1:8080
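The statistic probabilities above follow from the endpoint count: with n endpoints, rule i (0-based) must match with probability 1/(n-i), so that after the earlier rules have taken their share every endpoint ends up with 1/n of new connections. A small sketch of that arithmetic, with the endpoint count taken from the dump above:

package main

import "fmt"

// With n endpoints, kube-proxy emits n KUBE-SEP jump rules. A packet reaching
// rule i (0-based) has already skipped rules 0..i-1, so matching with
// probability 1/(n-i) gives every endpoint an overall share of 1/n.
func main() {
    n := 2 // endpoints behind KUBE-SVC-ZE62HOGUXOIF3MJ5 in the dump above
    for i := 0; i < n; i++ {
        fmt.Printf("rule %d: probability %.11f\n", i, 1.0/float64(n-i))
    }
    // rule 0: probability 0.50000000000 (the statistic match in the dump)
    // rule 1: probability 1.00000000000 (written without a statistic match)
}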
Packet flow of Service
[Diagram: Node 1 (3.3.3.1) runs Pod A (1.1.1.1) and Pod B1 (1.1.1.2); Node 2 (3.3.3.2) runs Pod B2 (1.1.2.1) and Pod B3 (1.1.2.2); Service A is 2.2.2.2:8080 with NodePort 30001; an external client sits at 3.3.3.3]
Packet flow of Service (External to Service)
$ tcpdump -i enp0s8 port 30001 -n    # Node 1 interface
05:17:50.632656 IP 3.3.3.3.55824 > 3.3.3.1.30001: Flags [SEW], seq 920096640, win
65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 449946274 ecr 0,sackOK,eol],
length 0
05:17:50.632886 IP 3.3.3.1.30001 > 3.3.3.3.55824: Flags [S.E], seq 2034560536, ack
920096641, win 28960, options [mss 1460,sackOK,TS val 167059923 ecr
449946274,nop,wscale 7], length 0
$ tcpdump -i cali27c81818b22 -n    # Pod B2 interface
05:17:50.632712 IP 3.3.3.1.55824 > 1.1.2.1.8080: Flags [SEW], seq 920096640, win
65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 449946274 ecr 0,sackOK,eol],
length 0
05:17:50.632874 IP 1.1.2.1.8080 > 3.3.3.1.55824: Flags [S.E], seq 2034560536, ack
920096641, win 28960, options [mss 1460,sackOK,TS val 167059923 ecr
449946274,nop,wscale 7], length 0
Packet flow of Service (External to Service)
[Diagram, step ①] src: 3.3.3.3, dst: 3.3.3.1:30001 (external client to the NodePort on Node 1)
Packet flow of Service (External to Service)
[Diagram, step ②] src: 3.3.3.1, dst: 1.1.2.1:8080 (after SNAT to Node 1's IP and DNAT to Pod B2)
Packet flow of Service (External to Service)
[Diagram, step ③] src: 1.1.2.1:8080, dst: 3.3.3.1 (Pod B2's reply back to Node 1)
Packet flow of Service (External to Service)
[Diagram, step ④] src: 3.3.3.1:30001, dst: 3.3.3.3 (NAT reversed on Node 1, reply to the external client)
Packet flow of Service (Pod to Service)
$ tcpdump -i calib43f921251f -n    # Pod A interface
05:14:05.077057 IP 1.1.1.1.54122 > 2.2.2.2.8080: Flags [S], seq 1210630612, win
29200, options [mss 1460,sackOK,TS val 2710881183 ecr 0,nop,wscale 7], length 0
05:14:05.077767 IP 2.2.2.2.8080 > 1.1.1.1.54122: Flags [S.], seq 4123667957, ack
1210630613, win 28960, options [mss 1460,sackOK,TS val 411294588 ecr
2710881183,nop,wscale 7], length 0
$ tcpdump -i cali27c81818b22 -n    # Pod B2 interface
05:14:05.099668 IP 1.1.1.1.54122 > 1.1.2.1.8080: Flags [S], seq 1210630612, win
29200, options [mss 1460,sackOK,TS val 2710881183 ecr 0,nop,wscale 7], length 0
05:14:05.099826 IP 1.1.2.1.8080 > 1.1.1.1.54122: Flags [S.], seq 4123667957, ack
1210630613, win 28960, options [mss 1460,sackOK,TS val 411294588 ecr
2710881183,nop,wscale 7], length 0
Packet flow of Service (Pod to Service)
[Diagram, step ①] src: 1.1.1.1, dst: 2.2.2.2:8080 (Pod A to the Service IP)
Packet flow of Service (Pod to Service)
[Diagram, step ②] src: 1.1.1.1, dst: 1.1.2.1:8080 (DNAT only; the Pod source is preserved)
Packet flow of Service (Pod to Service)
[Diagram, step ③] src: 1.1.2.1:8080, dst: 1.1.1.1 (Pod B2's reply)
Packet flow of Service (Pod to Service)
[Diagram, step ④] src: 2.2.2.2:8080, dst: 1.1.1.1 (reverse DNAT; Pod A sees the Service IP)
Packet flow of Service (Pod to self-Service)
$ tcpdump -i calib43f921251f -n    # Pod B2 interface
05:15:59.556723 IP 1.1.2.1.54308 > 2.2.2.2.8080: Flags [S], seq 4048875942, win
29200, options [mss 1460,sackOK,TS val 2710995663 ecr 0,nop,wscale 7], length 0
05:15:59.556770 IP 3.3.3.2.54308 > 1.1.2.1.8080: Flags [S], seq 4048875942, win
29200, options [mss 1460,sackOK,TS val 2710995663 ecr 0,nop,wscale 7], length 0
05:15:59.556779 IP 1.1.2.1.8080 > 3.3.3.2.54308: Flags [S.], seq 2680204874, ack
4048875943, win 28960, options [mss 1460,sackOK,TS val 1749589035 ecr
2710995663,nop,wscale 7], length 0
05:15:59.556785 IP 2.2.2.2.8080 > 1.1.2.1.54308: Flags [S.], seq 2680204874, ack
4048875943, win 28960, options [mss 1460,sackOK,TS val 1749589035 ecr
2710995663,nop,wscale 7], length 0
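▸ Unlike the Pod-to-Service case, the source is also rewritten here (SNAT to Node 2's IP, 3.3.3.2). Without it, Pod B2 would see its own address as the source of an incoming connection and would answer itself directly, bypassing the reverse DNAT, so the handshake could never complete (the hairpin case).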
Packet flow of Service (Pod to self-Service)
[Diagram, step ①] src: 1.1.2.1, dst: 2.2.2.2:8080 (Pod B2 to the Service IP)
Packet flow of Service (Pod to self-Service)
[Diagram, step ②] src: 3.3.3.2, dst: 1.1.2.1:8080 (DNAT back to Pod B2 itself and SNAT to Node 2's IP: the hairpin case)
Packet flow of Service (Pod to self-Service)
[Diagram, step ③] src: 1.1.2.1:8080, dst: 3.3.3.2 (Pod B2's reply to Node 2)
Packet flow of Service (Pod to self-Service)
[Diagram, step ④] src: 2.2.2.2:8080, dst: 1.1.2.1 (NAT reversed; Pod B2 sees the Service IP)
Q & A
References
▸ https://livebook.manning.com/#!/book/kubernetes-in-action/chapter-11
▸ https://github.com/kubernetes/community
▸ http://ebtables.netfilter.org/br_fw_ia/br_fw_ia.html
▸ https://github.com/inaz1502/kubernetes-internals
▸ https://github.com/sillim-programmer/kubernetes-in-action-study/tree/master/k8s-in-action-chap11