Kubernetes
An Introduction
It’s like landing on Pluto when people
are still trying to figure out Mars (other
tools) properly – Rishabh Indoria 
Why learn Kubernetes?
What Does “Kubernetes” Mean?
Greek for “pilot” or
“Helmsman of a ship”
What is Kubernetes?
● A production-grade container orchestration system grown at Google,
based on Borg and Omega, systems that run inside of Google right now
and have been proven to work at Google for over 10 years.
● Google spawns billions of containers per week with these systems.
● Created by three Google employees initially during the summer of 2014;
grew exponentially and became the first project to get donated to the
CNCF.
● Hit the first production-grade version, v1.0.1, in July 2015. Has continually
released a new minor version every three months since v1.2.0 in March
2016. Most recently, v1.13.0 was released in December 2018.
Decouples Infrastructure and Scaling
● All services within Kubernetes are natively
Load Balanced.
● Can scale up and down dynamically.
● Used both to enable self-healing and
seamless upgrading or rollback of
applications.
Self Healing
Kubernetes will ALWAYS try to steer the cluster to its
desired state.
● Me: “I want 3 healthy instances of redis to always be
running.”
● Kubernetes: “Okay, I’ll ensure there are always 3
instances up and running.”
● Kubernetes: “Oh look, one has died. I’m going to
attempt to spin up a new one.”
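The conversation above can be expressed as a declarative manifest. A minimal sketch (the name and image tag are illustrative, not from the deck):

```yaml
# Declare "3 healthy redis instances" as desired state.
# Kubernetes continuously reconciles actual state toward this spec,
# replacing any Pod that dies.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: redis
spec:
  replicas: 3          # desired count
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
```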
Project Stats
● Over 46,600 stars on Github
● 1800+ Contributors to K8s Core
● Most discussed Repository by a large
margin
● 50,000+ users in Slack Team
10/2018
Project Stats
A Couple
Key Concepts...
Pods
● Atomic unit or smallest
“unit of work” of Kubernetes.
● Pods are one or MORE
containers that share
volumes and namespace.
● They are also ephemeral!
Services
● Unified method of accessing
the exposed workloads of Pods.
● Durable resource
○ static cluster IP
○ static namespaced
DNS name
NOT Ephemeral!
Architecture
Overview
Architecture Overview
Control Plane
Components
Control Plane Components
● kube-apiserver
● etcd
● kube-controller-manager
● kube-scheduler
● cloud-controller-manager
kube-apiserver
● Provides a forward facing REST interface into the
kubernetes control plane and datastore.
● All clients and other applications interact with
kubernetes strictly through the API Server.
● Acts as the gatekeeper to the cluster by handling
authentication and authorization, request validation,
mutation, and admission control in addition to being the
front-end to the backing datastore.
etcd
● etcd acts as the cluster datastore.
● Purpose in relation to Kubernetes is to provide a strong,
consistent and highly available key-value store for
persisting cluster state.
● Stores objects and config information.
etcd
Uses “Raft Consensus”
among a quorum of systems
to create a fault-tolerant
consistent “view” of the
cluster.
https://raft.github.io/
kube-controller-manager
● Monitors the cluster state via the apiserver
and steers the cluster towards the
desired state.
● Node Controller: Responsible for noticing and responding when nodes go down.
● Replication Controller: Responsible for maintaining the correct number of pods for
every replication controller object in the system.
● Endpoints Controller: Populates the Endpoints object (that is, joins Services &
Pods).
● Service Account & Token Controllers: Create default accounts and API access
tokens for new namespaces.
kube-scheduler
● Component on the master that watches newly created
pods that have no node assigned, and selects a node
for them to run on.
● Factors taken into account for scheduling decisions
include individual and collective resource requirements,
hardware/software/policy constraints, affinity and
anti-affinity specifications, data locality, inter-workload
interference and deadlines.
cloud-controller-manager
● Node Controller: For checking the cloud provider to determine if a
node has been deleted in the cloud after it stops responding
● Route Controller: For setting up routes in the underlying cloud
infrastructure
● Service Controller: For creating, updating and deleting cloud
provider load balancers
● Volume Controller: For creating, attaching, and mounting volumes,
and interacting with the cloud provider to orchestrate volumes
Architecture Overview
Node
Components
Node Components
● kubelet
● kube-proxy
● Container Runtime Engine
kubelet
● An agent that runs on each node in the
cluster. It makes sure that containers are
running in a pod.
● The kubelet takes a set of PodSpecs that
are provided through various mechanisms
and ensures that the containers described in
those PodSpecs are running and healthy.
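One of those "various mechanisms" is a static pod manifest read directly from disk. A hypothetical example (the file path is configurable and varies by installation; this manifest is illustrative):

```yaml
# Hypothetical static pod manifest, e.g. saved as
# /etc/kubernetes/manifests/nginx-static.yaml on a node.
# The kubelet runs it directly, without a controller creating it
# through the API server.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-static
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
```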
kube-proxy
● Manages the network rules on each node.
● Performs connection forwarding or load balancing for
Kubernetes cluster services.
Container Runtime Engine
● A container runtime is a CRI (Container Runtime
Interface) compatible application that executes and
manages containers.
○ Containerd (docker)
○ Cri-o
○ Rkt
○ Kata (formerly clear and hyper)
○ Virtlet (VM CRI compatible runtime)
Architecture Overview
Security
Authentication
● X509 Client Certs (CN used as user, Org fields as groups). Note:
there is currently no way to revoke them (work in progress).
● Static Password File (password,user,uid,"group1,group2,group3")
● Static Token File (token,user,uid,"group1,group2,group3")
● Bearer Token (Authorization: Bearer 31ada4fd-ade)
● Bootstrap Tokens (Authorization: Bearer
781292.db7bc3a58fc5f07e)
● Service Account Tokens (signed by API server’s private TLS key
or specified by file)
Role - Authorization
RoleBinding - Authorization
RoleBinding - Authorization
Admission Control
● AlwaysPullImages
● DefaultStorageClass
● DefaultTolerationSeconds
● DenyEscalatingExec
● EventRateLimit
● ImagePolicyWebhook
● LimitRanger/ResourceQuota
● PersistentVolumeClaimResize
● PodSecurityPolicy
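As a sketch of what these controllers act on: the ResourceQuota admission controller rejects any request that would push a namespace past declared totals. The name, namespace, and values below are illustrative:

```yaml
# Illustrative ResourceQuota object. With the ResourceQuota admission
# controller enabled, creating a Pod that would exceed these totals
# in the "prod" namespace is rejected at admission time.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: prod
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
```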
Request/Response
Request:
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "spec": {
    "token": "(BEARERTOKEN)"
  }
}
Response:
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "status": {
    "authenticated": true,
    "user": {
      "username": "janedoe@example.com",
      "uid": "42",
      "groups": ["developers", "qa"]
    }
  }
}
Architecture Overview
Networking
Fundamental Networking Rules
● All containers within a pod can communicate with each
other unimpeded.
● All Pods can communicate with all other Pods without
NAT.
● All nodes can communicate with all Pods (and vice-
versa) without NAT.
● The IP that a Pod sees itself as is the same IP that
others see it as.
Fundamentals Applied
● Container-to-Container
○ Containers within a pod exist within the same
network namespace and share an IP.
○ Enables intrapod communication over localhost.
● Pod-to-Pod
○ Allocated cluster unique IP for the duration of its life
cycle.
○ Pods themselves are fundamentally ephemeral.
Fundamentals Applied
● Pod-to-Service
○ managed by kube-proxy and given a persistent
cluster unique IP
○ exists beyond a Pod’s lifecycle.
● External-to-Service
○ Handled by kube-proxy.
○ Works in cooperation with a cloud provider or other
external entity (load balancer).
Concepts and Resources
Core
Objects and API
● Namespaces
● Pods
● Labels
● Selectors
● Services
Namespaces
Namespaces are a logical cluster or environment, and are
the primary method of partitioning a cluster or scoping
access.
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    app: MyBigWebApp

$ kubectl get ns --show-labels
NAME          STATUS   AGE   LABELS
default       Active   11h   <none>
kube-public   Active   11h   <none>
kube-system   Active   11h   <none>
prod          Active   6s    app=MyBigWebApp
Pod Examples
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
Key Pod Container Attributes
● name - The name of the container
● image - The container image
● ports - array of ports to expose. Each
may be given a friendly name, and a
protocol may be specified
● env - array of environment variables
● command - Entrypoint array (equiv to
Docker ENTRYPOINT)
● args - Arguments to pass to the
command (equiv to Docker CMD)
Container

name: nginx
image: nginx:stable-alpine
ports:
- containerPort: 80
  name: http
  protocol: TCP
env:
- name: MYVAR
  value: isAwesome
command: ["/bin/sh", "-c"]
args: ["echo ${MYVAR}"]
Pod Template
● Workload Controllers manage instances of Pods based
off a provided template.
● Pod Templates are Pod specs with limited metadata.
● Controllers use
Pod Templates to
make actual pods.
template:
  metadata:
    labels:
      app: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
Labels
● key-value pairs that are used to
identify, describe and group
together related sets of objects or
resources.
● NOT characteristic of uniqueness.
● Have a strict syntax with a slightly
limited character set*.
* https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
Selectors
Selectors use labels to filter
or select objects, and are
used throughout Kubernetes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-label-example
  labels:
    app: nginx
    env: prod
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
  nodeSelector:
    gpu: nvidia
Selector Example
Equality based selectors allow for
simple filtering (=,==, or !=).
Selector Types
Set-based selectors are supported
on a limited subset of objects.
However, they provide a method of
filtering on a set of values, and
support multiple operators, including:
in, notin, and exists.
selector:
  matchExpressions:
  - key: gpu
    operator: In
    values: ["nvidia"]

selector:
  matchLabels:
    gpu: nvidia
Services
● Unified method of accessing the exposed workloads
of Pods.
● Durable resource (unlike Pods)
○ static cluster-unique IP
○ static namespaced DNS name
<service name>.<namespace>.svc.cluster.local
Services
● Target Pods using equality based selectors.
● Uses kube-proxy to provide simple load-balancing.
● kube-proxy acts as a daemon that creates local
entries in the host’s iptables for every service.
Service Types
There are 4 major service types:
● ClusterIP (default)
● NodePort
● LoadBalancer
● ExternalName
ClusterIP Service
ClusterIP services expose a
service on a strictly cluster-internal
virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  selector:
    app: nginx
    env: prod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
ClusterIP Service
Name:        example-prod
Selector:    app=nginx,env=prod
Type:        ClusterIP
IP:          10.96.28.176
Port:        <unset> 80/TCP
TargetPort:  80/TCP
Endpoints:   10.255.16.3:80,10.255.16.4:80
/ # nslookup example-prod.default.svc.cluster.local
Name: example-prod.default.svc.cluster.local
Address 1: 10.96.28.176 example-prod.default.svc.cluster.local
ClusterIP Service Without Selector
NodePort Service
● NodePort services extend
the ClusterIP service.
● Exposes a port on every
node’s IP.
● Port can either be statically
defined, or dynamically taken
from a range between
30000-32767.
apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  type: NodePort
  selector:
    app: nginx
    env: prod
  ports:
  - nodePort: 32410
    protocol: TCP
    port: 80
    targetPort: 80
NodePort Service
Name:        example-prod
Selector:    app=nginx,env=prod
Type:        NodePort
IP:          10.96.28.176
Port:        <unset> 80/TCP
TargetPort:  80/TCP
NodePort:    <unset> 32410/TCP
Endpoints:   10.255.16.3:80,10.255.16.4:80
LoadBalancer Service
apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  type: LoadBalancer
  selector:
    app: nginx
    env: prod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
● LoadBalancer services
extend NodePort.
● Works in conjunction with an
external system to map a
cluster external IP to the
exposed service.
LoadBalancer Service
Name:                  example-prod
Selector:              app=nginx,env=prod
Type:                  LoadBalancer
IP:                    10.96.28.176
LoadBalancer Ingress:  172.17.18.43
Port:                  <unset> 80/TCP
TargetPort:            80/TCP
NodePort:              <unset> 32410/TCP
Endpoints:             10.255.16.3:80,10.255.16.4:80
ExternalName Service
apiVersion: v1
kind: Service
metadata:
  name: example-prod
spec:
  type: ExternalName
  externalName: example.com
● ExternalName is used to
reference endpoints
OUTSIDE the cluster.
● Creates an internal
CNAME DNS entry that
aliases another.
Ingress – Name Based Routing
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: first.bar.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: second.foo.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
  - http:
      paths:
      - backend:
          serviceName: service3
          servicePort: 80
● An API object that manages
external access to the services
in a cluster
● Provides load balancing, SSL
termination and name/path-based
virtual hosting
● Gives services externally-reachable
URLs
Ingress – Path Based Routing
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080
Lab - github.com/mrbobbytables/k8s-intro-tutorials/blob/master/core
Exploring
the Core
Concepts and Resources
Workloads
● ReplicaSet
● Deployment
● DaemonSet
● StatefulSet
● Job
● CronJob
ReplicaSet
● Primary method of managing pod replicas and their
lifecycle.
● Includes their scheduling, scaling, and deletion.
● Their job is simple: Always ensure the desired
number of pods are running.
ReplicaSet
● replicas: The desired number of
instances of the Pod.
● selector: The label selector for the
ReplicaSet. It will manage ALL Pod
instances that match the selector,
whether desired or not.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  template:
    <pod template>
ReplicaSet
$ kubectl describe rs rs-example
Name:         rs-example
Namespace:    default
Selector:     app=nginx,env=prod
Labels:       app=nginx
              env=prod
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
           env=prod
  Containers:
   nginx:
    Image:        nginx:stable-alpine
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age  From                   Message
  ----    ------            ---- ----                   -------
  Normal  SuccessfulCreate  16s  replicaset-controller  Created pod: rs-example-mkll2
  Normal  SuccessfulCreate  16s  replicaset-controller  Created pod: rs-example-b7bcg
  Normal  SuccessfulCreate  16s  replicaset-controller  Created pod: rs-example-9l4dt
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  template:
    metadata:
      labels:
        app: nginx
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
$ kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
rs-example-9l4dt   1/1     Running   0          1h
rs-example-b7bcg   1/1     Running   0          1h
rs-example-mkll2   1/1     Running   0          1h
Deployment
● A method of managing Pods via ReplicaSets.
● Provide rollback functionality and update control.
● Updates are managed through the pod-template-hash
label.
● Each iteration creates a unique label that is assigned to
both the ReplicaSet and subsequent Pods.
Deployment
Deployment
● revisionHistoryLimit: The number of previous
iterations of the Deployment to retain.
● strategy: Describes the method of updating the
Pods based on the type. Valid options are
Recreate or RollingUpdate.
○ Recreate: All existing Pods are killed before
the new ones are created.
○ RollingUpdate: Cycles through updating the
Pods according to the parameters:
maxSurge and maxUnavailable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-example
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    <pod template>
Deployment
$ kubectl create deployment test --image=nginx
$ kubectl set image deployment test nginx=nginx:1.9.1 --record
$ kubectl rollout history deployment test
deployments "test"
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deployment test nginx=nginx:1.9.1 --record=true
$ kubectl annotate deployment test kubernetes.io/change-cause="image updated to 1.9.1"
$ kubectl rollout undo deployment test
$ kubectl rollout undo deployment test --to-revision=2
$ kubectl rollout history deployment test
deployments "test"
REVISION CHANGE-CAUSE
2 kubectl set image deployment test nginx=nginx:1.9.1 --record=true
3 <none>
$ kubectl scale deployment test --replicas=10
$ kubectl rollout pause deployment test
$ kubectl rollout resume deployment test
RollingUpdate Deployment
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mydep-6766777fff-9r2zn   1/1     Running   0          5h
mydep-6766777fff-hsfz9   1/1     Running   0          5h
mydep-6766777fff-sjxhf   1/1     Running   0          5h
$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
mydep-6766777fff   3         3         3       5h
Updating the pod template generates a
new ReplicaSet revision.
R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d
RollingUpdate Deployment
$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
mydep-54f7ff7d6d   1         1         1       5s
mydep-6766777fff   2         3         3       5h
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1     Running   0          2s
mydep-6766777fff-9r2zn   1/1     Running   0          5h
mydep-6766777fff-hsfz9   1/1     Running   0          5h
mydep-6766777fff-sjxhf   1/1     Running   0          5h
New ReplicaSet is initially scaled up
based on maxSurge.
R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d
RollingUpdate Deployment
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1     Running   0          5s
mydep-54f7ff7d6d-cqvlq   1/1     Running   0          2s
mydep-6766777fff-9r2zn   1/1     Running   0          5h
mydep-6766777fff-hsfz9   1/1     Running   0          5h
$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
mydep-54f7ff7d6d   2         2         2       8s
mydep-6766777fff   2         2         2       5h
Phase-out of old Pods is managed by
maxSurge and maxUnavailable.
R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d
RollingUpdate Deployment
$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
mydep-54f7ff7d6d   3         3         3       10s
mydep-6766777fff   0         1         1       5h
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1     Running   0          7s
mydep-54f7ff7d6d-cqvlq   1/1     Running   0          5s
mydep-54f7ff7d6d-gccr6   1/1     Running   0          2s
mydep-6766777fff-9r2zn   1/1     Running   0          5h
Phase-out of old Pods is managed by
maxSurge and maxUnavailable.
R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d
RollingUpdate Deployment
$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
mydep-54f7ff7d6d   3         3         3       13s
mydep-6766777fff   0         0         0       5h
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1     Running   0          10s
mydep-54f7ff7d6d-cqvlq   1/1     Running   0          8s
mydep-54f7ff7d6d-gccr6   1/1     Running   0          5s
Phase-out of old Pods is managed by
maxSurge and maxUnavailable.
R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d
RollingUpdate Deployment
$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
mydep-54f7ff7d6d   3         3         3       15s
mydep-6766777fff   0         0         0       5h
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mydep-54f7ff7d6d-9gvll   1/1     Running   0          12s
mydep-54f7ff7d6d-cqvlq   1/1     Running   0          10s
mydep-54f7ff7d6d-gccr6   1/1     Running   0          7s
Update to the new deployment revision
is complete.
R1 pod-template-hash: 6766777fff
R2 pod-template-hash: 54f7ff7d6d
Taints and Tolerations
$ kubectl taint nodes node1 key=value:NoSchedule

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
-------------------------------------------------
tolerations:
- operator: "Exists"

tolerations:
- key: "key"
  operator: "Exists"

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600
$ kubectl taint nodes node1 gpu=nvidia:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  tolerations:
  - key: gpu
    value: nvidia
    effect: NoSchedule
DaemonSet
● Ensure that all nodes matching certain criteria will run
an instance of the supplied Pod.
● Are ideal for cluster wide services such as log
forwarding or monitoring.
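A DaemonSet spec looks much like a ReplicaSet without a replica count: one Pod is scheduled per matching node. A minimal sketch (the name and fluentd image are illustrative):

```yaml
# Sketch of a cluster-wide log-forwarding DaemonSet.
# The controller runs exactly one matching Pod on every eligible node,
# including nodes added to the cluster later.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-forwarder
spec:
  selector:
    matchLabels:
      app: log-forwarder
  template:
    metadata:
      labels:
        app: log-forwarder
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:stable
```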
StatefulSet
● Tailored to managing Pods that must persist or maintain
state.
● Pod lifecycle will be ordered and follow consistent
patterns.
● Assigned a unique ordinal name following the
convention of ‘<statefulset name>-<ordinal index>’.
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: stateful
  serviceName: app
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  template:
    metadata:
      labels:
        app: stateful
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: stateful
  serviceName: app
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  template:
    <pod template>
● revisionHistoryLimit: The number of
previous iterations of the StatefulSet to
retain.
● serviceName: The name of the associated
headless service; or a service without a
ClusterIP.
Headless Service
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  clusterIP: None
  selector:
    app: stateful
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

$ kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
sts-example-0   1/1     Running   0          11m
sts-example-1   1/1     Running   0          11m

<StatefulSet Name>-<ordinal>.<service name>.<namespace>.svc.cluster.local

/ # dig app.default.svc.cluster.local +noall +answer
; <<>> DiG 9.11.2-P1 <<>> app.default.svc.cluster.local +noall +answer
;; global options: +cmd
app.default.svc.cluster.local. 2 IN A 10.255.0.5
app.default.svc.cluster.local. 2 IN A 10.255.0.2

/ # dig sts-example-0.app.default.svc.cluster.local +noall +answer
; <<>> DiG 9.11.2-P1 <<>> sts-example-0.app.default.svc.cluster.local +noall +answer
;; global options: +cmd
sts-example-0.app.default.svc.cluster.local. 20 IN A 10.255.0.2

/ # dig sts-example-1.app.default.svc.cluster.local +noall +answer
; <<>> DiG 9.11.2-P1 <<>> sts-example-1.app.default.svc.cluster.local +noall +answer
;; global options: +cmd
sts-example-1.app.default.svc.cluster.local. 30 IN A 10.255.0.5
CronJob
An extension of the Job Controller, it provides a method of
executing jobs on a cron-like schedule.
CronJobs within Kubernetes
use UTC ONLY.
CronJob
● schedule: The cron schedule for the
job.
● successfulJobsHistoryLimit: The
number of successful jobs to retain.
● failedJobsHistoryLimit: The number of
failed jobs to retain.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      completions: 4
      parallelism: 2
      template:
        <pod template>
CronJob
$ kubectl describe cronjob cronjob-example
Name:                       cronjob-example
Namespace:                  default
Labels:                     <none>
Annotations:                <none>
Schedule:                   */1 * * * *
Concurrency Policy:         Allow
Suspend:                    False
Starting Deadline Seconds:  <unset>
Selector:                   <unset>
Parallelism:                2
Completions:                4
Pod Template:
  Labels:  <none>
  Containers:
   hello:
    Image:  alpine:latest
    Port:   <none>
    Command:
      /bin/sh
      -c
    Args:
      echo hello from $HOSTNAME!
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Last Schedule Time:  Mon, 19 Feb 2018 09:54:00 -0500
Active Jobs:         cronjob-example-1519052040
Events:
  Type    Reason            Age  From                Message
  ----    ------            ---- ----                -------
  Normal  SuccessfulCreate  3m   cronjob-controller  Created job cronjob-example-1519051860
  Normal  SawCompletedJob   2m   cronjob-controller  Saw completed job: cronjob-example-1519051860
  Normal  SuccessfulCreate  2m   cronjob-controller  Created job cronjob-example-1519051920
  Normal  SawCompletedJob   1m   cronjob-controller  Saw completed job: cronjob-example-1519051920
  Normal  SuccessfulCreate  1m   cronjob-controller  Created job cronjob-example-1519051980
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      completions: 4
      parallelism: 2
      template:
        spec:
          containers:
          - name: hello
            image: alpine:latest
            command: ["/bin/sh", "-c"]
            args: ["echo hello from $HOSTNAME!"]
          restartPolicy: Never
$ kubectl get jobs
NAME                         DESIRED   SUCCESSFUL   AGE
cronjob-example-1519053240   4         4            2m
cronjob-example-1519053300   4         4            1m
cronjob-example-1519053360   4         4            26s
Health checks
• initialDelaySeconds: Number of seconds after the container
has started before liveness or readiness probes are initiated.
• periodSeconds: How often (in seconds) to perform the
probe. Defaults to 10 seconds. Minimum value is 1.
• timeoutSeconds: Number of seconds after which the probe
times out. Defaults to 1 second. Minimum value is 1.
• successThreshold: Minimum consecutive successes for the
probe to be considered successful after having failed.
Defaults to 1. Must be 1 for liveness. Minimum value is 1.
• failureThreshold: When a Pod starts and the probe fails,
Kubernetes will try failureThreshold times before giving up.
Giving up in case of liveness probe means restarting the
Pod. In case of readiness probe the Pod will be marked
Unready. Defaults to 3. Minimum value is 1.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-readiness-http
spec:
  containers:
  - name: liveness-readiness-http
    image: k8s.gcr.io/liveness-readiness-http
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 4
      failureThreshold: 5
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 100
      periodSeconds: 10
      timeoutSeconds: 4
      failureThreshold: 2
Concepts and Resources
Storage
● Volumes
● PersistentVolumes
● PersistentVolumeClaims
● StorageClass
Storage
Pods by themselves are useful, but many workloads
require exchanging data between containers, or persisting
some form of data.
For this we have Volumes, PersistentVolumes,
PersistentVolumeClaims, and StorageClasses.
StorageClass
● Storage classes are an abstraction on top of an external
storage resource (PV)
● Work hand-in-hand with the external storage system to
enable dynamic provisioning of storage by eliminating
the need for the cluster admin to pre-provision a PV
StorageClass
● provisioner: Defines the ‘driver’ to be
used for provisioning of the external
storage.
● parameters: A hash of the various
configuration parameters for the
provisioner.
● reclaimPolicy: The behaviour for the
backing storage when the PVC is
deleted.
○ Retain - manual clean-up
○ Delete - storage asset deleted by
provider
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a, us-central1-b
reclaimPolicy: Delete
Available StorageClasses
● AWSElasticBlockStore
● AzureFile
● AzureDisk
● CephFS
● Cinder
● FC
● Flocker
● GCEPersistentDisk
● Glusterfs
● iSCSI
● Quobyte
● NFS
● RBD
● VsphereVolume
● PortworxVolume
● ScaleIO
● StorageOS
● Local
Volumes
● Storage that is tied to the Pod’s Lifecycle.
● A pod can have one or more types of volumes attached
to it.
● Can be consumed by any of the containers within the
pod.
● Survive Pod restarts; however their durability beyond
that is dependent on the Volume Type.
Volume Types
● awsElasticBlockStore
● azureDisk
● azureFile
● cephfs
● configMap
● csi
● downwardAPI
● emptyDir
● fc (fibre channel)
● flocker
● gcePersistentDisk
● gitRepo
● glusterfs
● hostPath
● iscsi
● local
● nfs
● persistentVolumeClaim
● projected
● portworxVolume
● quobyte
● rbd
● scaleIO
● secret
● storageos
● vsphereVolume
Volumes
● volumes: A list of volume objects to be
attached to the Pod. Every object
within the list must have its own
unique name.
● volumeMounts: A container-specific list
referencing the Pod volumes by name,
along with their desired mountPath.
apiVersion: v1
kind: Pod
metadata:
  name: volume-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: content
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
      date >> /html/index.html;
      sleep 5;
      done
    volumeMounts:
    - name: html
      mountPath: /html
  volumes:
  - name: html
    emptyDir: {}
Persistent Volumes
● A PersistentVolume (PV) represents a storage
resource.
● PVs are a cluster-wide resource linked to a backing
storage provider: NFS, GCEPersistentDisk, RBD, etc.
● Generally provisioned by an administrator.
● Their lifecycle is handled independently from a Pod.
● CANNOT be attached to a Pod directly. Relies on a
PersistentVolumeClaim
PersistentVolumeClaims
● A PersistentVolumeClaim (PVC) is a namespaced
request for storage.
● Satisfies a set of requirements instead of mapping to a
storage resource directly.
● Ensures that an application’s ‘claim’ for storage is
portable across numerous backends or providers.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfsserver
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /exports
    server: 172.22.0.42
PersistentVolume
● capacity.storage: The total amount of
available storage.
● volumeMode: The type of volume; this
can be either Filesystem or Block.
● accessModes: A list of the supported
methods of accessing the volume.
Options include:
○ ReadWriteOnce
○ ReadOnlyMany
○ ReadWriteMany
PersistentVolume
● persistentVolumeReclaimPolicy: What
happens to the volume after its bound
PVC has been deleted. Options include:
○ Retain - manual clean-up
○ Delete - storage asset deleted by
provider.
● storageClassName: Optional name of
the storage class that PVCs can
reference. If provided, ONLY PVCs
referencing the name can use it.
● mountOptions: Optional mount options
for the PV.
PersistentVolumeClaim
● accessModes: The selected method of
accessing the storage. This MUST be a
subset of what is defined on the target PV
or Storage Class.
○ ReadWriteOnce
○ ReadOnlyMany
○ ReadWriteMany
● resources.requests.storage: The desired
amount of storage for the claim
● storageClassName: The name of the
desired Storage Class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sc-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: slow
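The deck shows PVs and PVCs but not a Pod actually consuming the claim. As a minimal sketch (Pod and container names are hypothetical), a Pod references a PVC by name through a persistentVolumeClaim volume; Kubernetes then mounts whatever PV the claim is bound to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer-example
spec:
  containers:
  - name: app
    image: nginx:stable-alpine
    volumeMounts:
    - name: data                      # must match a volume name below
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-sc-example       # the PVC from the manifest above
```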
PVs and PVCs with Selectors
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-selector-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/data"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-selector-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath
PV Phases
● Available: The PV is ready and available to be consumed.
● Bound: The PV has been bound to a claim.
● Released: The bound PVC has been deleted, and the PV is pending reclamation.
● Failed: An error has been encountered.
StorageClass
1. PVC makes a request of the StorageClass.
2. StorageClass provisions the request through the API of the external storage system.
3. External storage system creates a PV strictly satisfying the PVC request.
4. The provisioned PV is bound to the requesting PVC.
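The StorageClass object itself is not shown in the deck. A minimal sketch follows; the provisioner and parameters vary by cluster, and the GCE Persistent Disk provisioner here is only an illustration:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow                         # name PVCs reference via storageClassName
provisioner: kubernetes.io/gce-pd    # cloud-specific; assumption for illustration
parameters:
  type: pd-standard
reclaimPolicy: Delete
```

A PVC with `storageClassName: slow` (like pvc-sc-example earlier) would trigger dynamic provisioning through this class.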
Persistent Volumes and Claims
(Diagram: cluster users request storage with PVCs; cluster admins provision the backing PVs.)
Lab - github.com/mrbobbytables/k8s-intro-tutorials/blob/master/storage
Working with Volumes
Concepts and Resources
Configuration
● ConfigMap
● Secret
Configuration
Kubernetes has an integrated pattern for
decoupling configuration from application or
container.
This pattern makes use of two Kubernetes
components: ConfigMaps and Secrets.
ConfigMap
● Externalized data stored within kubernetes.
● Can be referenced through several different means:
○ environment variable
○ a command line argument (via env var)
○ injected as a file into a volume mount
● Can be created from a manifest, literals, directories, or
files directly.
ConfigMap
data: Contains key-value pairs of
ConfigMap contents.
apiVersion: v1
kind: ConfigMap
metadata:
  name: manifest-example
data:
  state: Michigan
  city: Ann Arbor
  content: |
    Look at this,
    its multiline!
ConfigMap Example
apiVersion: v1
kind: ConfigMap
metadata:
  name: manifest-example
data:
  city: Ann Arbor
  state: Michigan
$ kubectl create configmap literal-example \
> --from-literal="city=Ann Arbor" --from-literal=state=Michigan
configmap "literal-example" created
$ cat cm/city
Ann Arbor
$ cat cm/state
Michigan
$ kubectl create configmap file-example --from-file=cm/city --from-file=cm/state
configmap "file-example" created
$ kubectl create configmap dir-example --from-file=cm/
configmap "dir-example" created
All produce a ConfigMap with the same content!
Secret
● Functionally identical to a ConfigMap.
● Stored as base64 encoded content.
● Encrypted at rest within etcd (if configured!).
● Stored on each worker node in a tmpfs directory.
● Ideal for username/passwords, certificates or other
sensitive information that should not be stored in a
container.
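The base64 values used in the Secret manifests that follow can be produced with standard tooling; `-n` keeps a trailing newline out of the encoded value:

```shell
# Encode values for a Secret manifest
echo -n 'example' | base64        # ZXhhbXBsZQ==
echo -n 'mypassword' | base64     # bXlwYXNzd29yZA==
# Decode to verify (base64 is encoding, NOT encryption)
echo -n 'ZXhhbXBsZQ==' | base64 -d
```

Note that anyone who can read the Secret can trivially decode it, hence the emphasis on etcd encryption at rest.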
Secret
● type: There are three different types of
secrets within Kubernetes:
○ docker-registry - credentials used to
authenticate to a container registry
○ generic/Opaque - literal values from
different sources
○ tls - a certificate based secret
● data: Contains key-value pairs of base64
encoded content.
apiVersion: v1
kind: Secret
metadata:
  name: manifest-secret
type: Opaque
data:
  username: ZXhhbXBsZQ==
  password: bXlwYXNzd29yZA==
Secret Example
apiVersion: v1
kind: Secret
metadata:
  name: manifest-example
type: Opaque
data:
  username: ZXhhbXBsZQ==
  password: bXlwYXNzd29yZA==
$ kubectl create secret generic literal-secret \
> --from-literal=username=example \
> --from-literal=password=mypassword
secret "literal-secret" created
$ cat secret/username
example
$ cat secret/password
mypassword
$ kubectl create secret generic file-secret --from-file=secret/username --from-file=secret/password
secret "file-secret" created
$ kubectl create secret generic dir-secret --from-file=secret/
secret "dir-secret" created
All produce a Secret with the same content!
Injecting as Environment Variable
apiVersion: batch/v1
kind: Job
metadata:
  name: cm-env-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv CITY"]
        env:
        - name: CITY
          valueFrom:
            configMapKeyRef:
              name: manifest-example
              key: city
      restartPolicy: Never
apiVersion: batch/v1
kind: Job
metadata:
  name: secret-env-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv USERNAME"]
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: manifest-example
              key: username
      restartPolicy: Never
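Beyond single keys, every entry of a ConfigMap (or Secret, via secretRef) can be injected at once with envFrom. A sketch, reusing the manifest-example ConfigMap from earlier (each data key becomes an environment variable):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cm-envfrom-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["printenv city state"]   # keys from the ConfigMap become env vars
        envFrom:
        - configMapRef:
            name: manifest-example
      restartPolicy: Never
```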
Injecting in a Command
apiVersion: batch/v1
kind: Job
metadata:
  name: cm-cmd-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo Hello ${CITY}!"]
        env:
        - name: CITY
          valueFrom:
            configMapKeyRef:
              name: manifest-example
              key: city
      restartPolicy: Never
apiVersion: batch/v1
kind: Job
metadata:
  name: secret-cmd-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo Hello ${USERNAME}!"]
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: manifest-example
              key: username
      restartPolicy: Never
Injecting as a Volume
apiVersion: batch/v1
kind: Job
metadata:
  name: cm-vol-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["cat /myconfig/city"]
        volumeMounts:
        - name: config-volume
          mountPath: /myconfig
      restartPolicy: Never
      volumes:
      - name: config-volume
        configMap:
          name: manifest-example
apiVersion: batch/v1
kind: Job
metadata:
  name: secret-vol-example
spec:
  template:
    spec:
      containers:
      - name: mypod
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["cat /mysecret/username"]
        volumeMounts:
        - name: secret-volume
          mountPath: /mysecret
      restartPolicy: Never
      volumes:
      - name: secret-volume
        secret:
          secretName: manifest-example
Concepts and Resources
Metrics and Monitoring
● Metrics Server
● HPA (Horizontal Pod Autoscaler)
● Prometheus
● Grafana (dashboards)
● Fluentd (log shipping)
Metrics API Server
● The Metrics Server collects metrics such as CPU
and memory usage for each pod and node from the
Summary API, exposed by the kubelet on each
node.
● The Metrics Server is registered with the main API
server through the Kubernetes API aggregation
layer, which was introduced in Kubernetes 1.7.
HPA
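Once the Metrics Server is running, the HPA can scale a workload on observed CPU usage. A minimal sketch against a hypothetical Deployment named web (autoscaling/v1 shown for compatibility with the v1.13-era clusters this deck targets):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                        # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80 # scale out above 80% average CPU
```

The same effect can be had imperatively with `kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80`.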
Summary
Links
● Free Kubernetes Courses
https://www.edx.org/
● Interactive Kubernetes Tutorials
https://www.katacoda.com/courses/kubernetes
● Learn Kubernetes the Hard Way
https://github.com/kelseyhightower/kubernetes-the-hard-way
● Official Kubernetes Youtube Channel
https://www.youtube.com/c/KubernetesCommunity
● Official CNCF Youtube Channel
https://www.youtube.com/c/cloudnativefdn
● Track to becoming a CKA/CKAD (Certified Kubernetes Administrator/Application Developer)
https://www.cncf.io/certification/expert/
● Awesome Kubernetes
https://www.gitbook.com/book/ramitsurana/awesome-kubernetes/details
Questions?
- by Joe Beda (Gluecon 2017)
This presentation is licensed under a Creative Commons Attribution 4.0 International License.
See https://creativecommons.org/licenses/by/4.0/ for more details.
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
 
Devoxx UK 2024 - Going serverless with Quarkus, GraalVM native images and AWS...
Devoxx UK 2024 - Going serverless with Quarkus, GraalVM native images and AWS...Devoxx UK 2024 - Going serverless with Quarkus, GraalVM native images and AWS...
Devoxx UK 2024 - Going serverless with Quarkus, GraalVM native images and AWS...
 
WSO2CON 2024 - Cloud Native Middleware: Domain-Driven Design, Cell-Based Arch...
WSO2CON 2024 - Cloud Native Middleware: Domain-Driven Design, Cell-Based Arch...WSO2CON 2024 - Cloud Native Middleware: Domain-Driven Design, Cell-Based Arch...
WSO2CON 2024 - Cloud Native Middleware: Domain-Driven Design, Cell-Based Arch...
 
Payment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdf
Payment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdfPayment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdf
Payment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdf
 
%in Soweto+277-882-255-28 abortion pills for sale in soweto
%in Soweto+277-882-255-28 abortion pills for sale in soweto%in Soweto+277-882-255-28 abortion pills for sale in soweto
%in Soweto+277-882-255-28 abortion pills for sale in soweto
 
WSO2CON 2024 - Does Open Source Still Matter?
WSO2CON 2024 - Does Open Source Still Matter?WSO2CON 2024 - Does Open Source Still Matter?
WSO2CON 2024 - Does Open Source Still Matter?
 
MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...
MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...
MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...
 
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
 
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
 
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park %in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
 
Love witchcraft +27768521739 Binding love spell in Sandy Springs, GA |psychic...
Love witchcraft +27768521739 Binding love spell in Sandy Springs, GA |psychic...Love witchcraft +27768521739 Binding love spell in Sandy Springs, GA |psychic...
Love witchcraft +27768521739 Binding love spell in Sandy Springs, GA |psychic...
 
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
 
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
 
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
 

Introduction to Kubernetes

  • 2. It’s like landing on Pluto when people are still trying to figure out Mars (other tools) properly – Rishabh Indoria  Why learn Kubernetes?
  • 3. What Does “Kubernetes” Mean? Greek for “pilot” or “Helmsman of a ship”
  • 4. What is Kubernetes? ● A production-grade container orchestration system. Google-grown, based on Borg and Omega, systems that run inside Google right now and have been proven to work there for over 10 years. ● Google spawns billions of containers per week with these systems. ● Created initially by three Google employees during the summer of 2014; grew exponentially and became the first project donated to the CNCF. ● Hit the first production-grade version, v1.0.1, in July 2015. Has released a new minor version every three months since v1.2.0 in March 2016. Most recently, v1.13.0 was released in December 2018.
  • 5. Decouples Infrastructure and Scaling ● All services within Kubernetes are natively Load Balanced. ● Can scale up and down dynamically. ● Used both to enable self-healing and seamless upgrading or rollback of applications.
  • 6. Self Healing Kubernetes will ALWAYS try and steer the cluster to its desired state. ● Me: “I want 3 healthy instances of redis to always be running.” ● Kubernetes: “Okay, I’ll ensure there are always 3 instances up and running.” ● Kubernetes: “Oh look, one has died. I’m going to attempt to spin up a new one.”
  • 7. Project Stats ● Over 46,600 stars on GitHub ● 1,800+ contributors to K8s core ● Most-discussed repository by a large margin ● 50,000+ users in the Slack team (10/2018)
  • 10. Pods ● Atomic unit or smallest “unit of work” of Kubernetes. ● Pods are one or MORE containers that share volumes and a namespace. ● They are also ephemeral!
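The shared-volume and shared-namespace point can be sketched as a manifest. This is an illustrative example, not from the deck; the names and the sidecar image are hypothetical:

```yaml
# Sketch: two containers in one Pod sharing an emptyDir volume.
# They also share the network namespace, so they can reach each
# other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod-example
spec:
  containers:
  - name: web
    image: nginx:stable-alpine
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-writer
    image: alpine:latest
    command: ["/bin/sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
```

When the Pod is deleted, the emptyDir volume and both containers go with it, which is what “ephemeral” means here.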
  • 11. Services ● Unified method of accessing the exposed workloads of Pods. ● Durable resource ○ static cluster IP ○ static namespaced DNS name
  • 12. Services ● Unified method of accessing the exposed workloads of Pods. ● Durable resource ○ static cluster IP ○ static namespaced DNS name NOT Ephemeral!
  • 16. Control Plane Components ● kube-apiserver ● etcd ● kube-controller-manager ● kube-scheduler ● cloud-controller-manager
  • 17. kube-apiserver ● Provides a forward-facing REST interface into the Kubernetes control plane and datastore. ● All clients and other applications interact with Kubernetes strictly through the API Server. ● Acts as the gatekeeper to the cluster by handling authentication and authorization, request validation, mutation, and admission control, in addition to being the front-end to the backing datastore.
  • 18. etcd ● etcd acts as the cluster datastore. ● Its purpose in relation to Kubernetes is to provide a strongly consistent and highly available key-value store for persisting cluster state. ● Stores objects and config information.
  • 19. etcd Uses “Raft Consensus” among a quorum of systems to create a fault-tolerant consistent “view” of the cluster. https://raft.github.io/
  • 20. kube-controller-manager ● Monitors the cluster state via the apiserver and steers the cluster towards the desired state. ● Node Controller: Responsible for noticing and responding when nodes go down. ● Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system. ● Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods). ● Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
  • 21. kube-scheduler ● Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on. ● Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • 22. cloud-controller-manager ● Node Controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding ● Route Controller: For setting up routes in the underlying cloud infrastructure ● Service Controller: For creating, updating and deleting cloud provider load balancers ● Volume Controller: For creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes
  • 24. Node Components ● kubelet ● kube-proxy ● Container Runtime Engine
  • 25. kubelet ● An agent that runs on each node in the cluster. It makes sure that containers are running in a pod. ● The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • 26. kube-proxy ● Manages the network rules on each node. ● Performs connection forwarding or load balancing for Kubernetes cluster services.
  • 27. Container Runtime Engine ● A container runtime is a CRI (Container Runtime Interface) compatible application that executes and manages containers. ○ containerd (Docker) ○ CRI-O ○ rkt ○ Kata (formerly Clear and Hyper) ○ Virtlet (a CRI-compatible runtime for VMs)
  • 31. Authentication ● X509 Client Certs (CN used as user, Org fields as group) No way to revoke them!! – wip  ● Static Password File (password,user,uid,"group1,group2,group3") ● Static Token File (token,user,uid,"group1,group2,group3") ● Bearer Token (Authorization: Bearer 31ada4fd-ade) ● Bootstrap Tokens (Authorization: Bearer 781292.db7bc3a58fc5f07e) ● Service Account Tokens (signed by API server’s private TLS key or specified by file)
  • 35. Admission Control ● AlwaysPullImages ● DefaultStorageClass ● DefaultTolerationSeconds ● DenyEscalatingExec ● EventRateLimit ● ImagePolicyWebhook ● LimitRanger/ResourceQuota ● PersistentVolumeClaimResize ● PodSecurityPolicy
  • 38. Request/Response { "apiVersion": "authentication.k8s.io/v1beta1", "kind": "TokenReview", "status": { "authenticated": true, "user": { "username": "janedoe@example.com", "uid": "42", "groups": [ "developers", "qa" ] } } } { "apiVersion": "authentication.k8s.io/v1beta1", "kind": "TokenReview", "spec": { "token": "(BEARERTOKEN)" } }
  • 40. Fundamental Networking Rules ● All containers within a pod can communicate with each other unimpeded. ● All Pods can communicate with all other Pods without NAT. ● All nodes can communicate with all Pods (and vice- versa) without NAT. ● The IP that a Pod sees itself as is the same IP that others see it as.
  • 41. Fundamentals Applied ● Container-to-Container ○ Containers within a pod exist within the same network namespace and share an IP. ○ Enables intrapod communication over localhost. ● Pod-to-Pod ○ Allocated cluster unique IP for the duration of its life cycle. ○ Pods themselves are fundamentally ephemeral.
  • 42. Fundamentals Applied ● Pod-to-Service ○ managed by kube-proxy and given a persistent cluster unique IP ○ exists beyond a Pod’s lifecycle. ● External-to-Service ○ Handled by kube-proxy. ○ Works in cooperation with a cloud provider or other external entity (load balancer).
  • 43. Concepts and Resources Core Objects and API ● Namespaces ● Pods ● Labels ● Selectors ● Services
  • 44. Namespaces Namespaces are a logical cluster or environment, and are the primary method of partitioning a cluster or scoping access. apiVersion: v1 kind: Namespace metadata: name: prod labels: app: MyBigWebApp $ kubectl get ns --show-labels NAME STATUS AGE LABELS default Active 11h <none> kube-public Active 11h <none> kube-system Active 11h <none> prod Active 6s app=MyBigWebApp
  • 45. Pod Examples apiVersion: v1 kind: Pod metadata: name: pod-example labels: app: nginx spec: template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx apiVersion: v1 kind: Pod metadata: name: pod-example spec: containers: - name: nginx image: nginx:stable-alpine ports: - containerPort: 80
  • 46. Key Pod Container Attributes ● name - The name of the container ● image - The container image ● ports - array of ports to expose. Can be granted a friendly name and protocol may be specified ● env - array of environment variables ● command - Entrypoint array (equiv to Docker ENTRYPOINT) ● args - Arguments to pass to the command (equiv to Docker CMD) Container name: nginx image: nginx:stable-alpine ports: - containerPort: 80 name: http protocol: TCP env: - name: MYVAR value: isAwesome command: ["/bin/sh", "-c"] args: ["echo ${MYVAR}"]
  • 47. Pod Template ● Workload Controllers manage instances of Pods based off a provided template. ● Pod Templates are Pod specs with limited metadata. ● Controllers use Pod Templates to make actual pods. apiVersion: v1 kind: Pod metadata: name: pod-example labels: app: nginx spec: template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx
  • 48. Labels ● key-value pairs that are used to identify, describe and group together related sets of objects or resources. ● NOT characteristic of uniqueness. ● Have a strict syntax with a slightly limited character set*. * https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
  • 50. Selectors Selectors use labels to filter or select objects, and are used throughout Kubernetes. apiVersion: v1 kind: Pod metadata: name: pod-label-example labels: app: nginx env: prod spec: containers: - name: nginx image: nginx:stable-alpine ports: - containerPort: 80 nodeSelector: gpu: nvidia
  • 51. apiVersion: v1 kind: Pod metadata: name: pod-label-example labels: app: nginx env: prod spec: containers: - name: nginx image: nginx:stable-alpine ports: - containerPort: 80 nodeSelector: gpu: nvidia Selector Example
  • 52. Equality-based selectors allow for simple filtering (=, ==, or !=). Selector Types Set-based selectors are supported on a limited subset of objects. However, they provide a method of filtering on a set of values, and support multiple operators, including: in, notin, and exists. selector: matchExpressions: - key: gpu operator: In values: ["nvidia"] selector: matchLabels: gpu: nvidia
  • 53. Services ● Unified method of accessing the exposed workloads of Pods. ● Durable resource (unlike Pods) ○ static cluster-unique IP ○ static namespaced DNS name <service name>.<namespace>.svc.cluster.local
  • 54. Services ● Target Pods using equality based selectors. ● Uses kube-proxy to provide simple load-balancing. ● kube-proxy acts as a daemon that creates local entries in the host’s iptables for every service.
  • 55. Service Types There are 4 major service types: ● ClusterIP (default) ● NodePort ● LoadBalancer ● ExternalName
  • 56. ClusterIP Service ClusterIP services expose a service on a strictly cluster-internal virtual IP. apiVersion: v1 kind: Service metadata: name: example-prod spec: selector: app: nginx env: prod ports: - protocol: TCP port: 80 targetPort: 80
  • 57. Cluster IP Service Name: example-prod Selector: app=nginx,env=prod Type: ClusterIP IP: 10.96.28.176 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.255.16.3:80, 10.255.16.4:80 / # nslookup example-prod.default.svc.cluster.local Name: example-prod.default.svc.cluster.local Address 1: 10.96.28.176 example-prod.default.svc.cluster.local
  • 59. NodePort Service ● NodePort services extend the ClusterIP service. ● Exposes a port on every node’s IP. ● Port can either be statically defined, or dynamically taken from a range between 30000-32767. apiVersion: v1 kind: Service metadata: name: example-prod spec: type: NodePort selector: app: nginx env: prod ports: - nodePort: 32410 protocol: TCP port: 80 targetPort: 80
  • 60. NodePort Service Name: example-prod Selector: app=nginx,env=prod Type: NodePort IP: 10.96.28.176 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 32410/TCP Endpoints: 10.255.16.3:80, 10.255.16.4:80
  • 61. LoadBalancer Service apiVersion: v1 kind: Service metadata: name: example-prod spec: type: LoadBalancer selector: app: nginx env: prod ports: - protocol: TCP port: 80 targetPort: 80 ● LoadBalancer services extend NodePort. ● Works in conjunction with an external system to map a cluster-external IP to the exposed service.
  • 63. LoadBalancer Service Name: example-prod Selector: app=nginx,env=prod Type: LoadBalancer IP: 10.96.28.176 LoadBalancer Ingress: 172.17.18.43 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 32410/TCP Endpoints: 10.255.16.3:80, 10.255.16.4:80
  • 64. ExternalName Service apiVersion: v1 kind: Service metadata: name: example-prod spec: type: ExternalName externalName: example.com ● ExternalName is used to reference endpoints OUTSIDE the cluster. ● Creates an internal CNAME DNS entry that aliases another.
  • 65. Ingress – Name Based Routing apiVersion: extensions/v1beta1 kind: Ingress metadata: name: name-virtual-host-ingress spec: rules: - host: first.bar.com http: paths: - backend: serviceName: service1 servicePort: 80 - host: second.foo.com http: paths: - backend: serviceName: service2 servicePort: 80 - http: paths: - backend: serviceName: service3 servicePort: 80 ● An API object that manages external access to the services in a cluster ● Provides load balancing, SSL termination and name/path- based virtual hosting ● Gives services externally- reachable URLs
  • 66. Ingress – Path Based Routing apiVersion: extensions/v1beta1 kind: Ingress metadata: name: simple-fanout-example spec: rules: - host: foo.bar.com http: paths: - path: /foo backend: serviceName: service1 servicePort: 4200 - path: /bar backend: serviceName: service2 servicePort: 8080
  • 68. Concepts and Resources Workloads ● ReplicaSet ● Deployment ● DaemonSet ● StatefulSet ● Job ● CronJob
  • 69. ReplicaSet ● Primary method of managing pod replicas and their lifecycle. ● Includes their scheduling, scaling, and deletion. ● Their job is simple: Always ensure the desired number of pods are running.
  • 70. ReplicaSet ● replicas: The desired number of instances of the Pod. ● selector: The label selector. The ReplicaSet will manage ALL Pod instances that it targets, whether desired or not. apiVersion: apps/v1 kind: ReplicaSet metadata: name: rs-example spec: replicas: 3 selector: matchLabels: app: nginx env: prod template: <pod template>
  • 71. ReplicaSet $ kubectl describe rs rs-example Name: rs-example Namespace: default Selector: app=nginx,env=prod Labels: app=nginx env=prod Annotations: <none> Replicas: 3 current / 3 desired Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=nginx env=prod Containers: nginx: Image: nginx:stable-alpine Port: 80/TCP Environment: <none> Mounts: <none> Volumes: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 16s replicaset-controller Created pod: rs-example-mkll2 Normal SuccessfulCreate 16s replicaset-controller Created pod: rs-example-b7bcg Normal SuccessfulCreate 16s replicaset-controller Created pod: rs-example-9l4dt apiVersion: apps/v1 kind: ReplicaSet metadata: name: rs-example spec: replicas: 3 selector: matchLabels: app: nginx env: prod template: metadata: labels: app: nginx env: prod spec: containers: - name: nginx image: nginx:stable-alpine ports: - containerPort: 80 $ kubectl get pods NAME READY STATUS RESTARTS AGE rs-example-9l4dt 1/1 Running 0 1h rs-example-b7bcg 1/1 Running 0 1h rs-example-mkll2 1/1 Running 0 1h
  • 72. Deployment ● Way of managing Pods via ReplicaSets. ● Provide rollback functionality and update control. ● Updates are managed through the pod-template-hash label. ● Each iteration creates a unique label that is assigned to both the ReplicaSet and subsequent Pods.
  • 74. Deployment ● revisionHistoryLimit: The number of previous iterations of the Deployment to retain. ● strategy: Describes the method of updating the Pods based on the type. Valid options are Recreate or RollingUpdate. ○ Recreate: All existing Pods are killed before the new ones are created. ○ RollingUpdate: Cycles through updating the Pods according to the parameters: maxSurge and maxUnavailable. apiVersion: apps/v1 kind: Deployment metadata: name: deploy-example spec: replicas: 3 revisionHistoryLimit: 3 selector: matchLabels: app: nginx env: prod strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 template: <pod template>
  • 75. Deployment $ kubectl create deployment test --image=nginx $ kubectl set image deployment test nginx=nginx:1.9.1 --record $ kubectl rollout history deployment test deployments "test" REVISION CHANGE-CAUSE 1 <none> 2 kubectl set image deployment test nginx=nginx:1.9.1 --record=true $ kubectl annotate deployment test kubernetes.io/change-cause="image updated to 1.9.1" $ kubectl rollout undo deployment test $ kubectl rollout undo deployment test --to-revision=2 $ kubectl rollout history deployment test deployments "test" REVISION CHANGE-CAUSE 2 kubectl set image deployment test nginx=nginx:1.9.1 --record=true 3 <none> kubectl scale deployment test --replicas=10 kubectl rollout pause deployment test kubectl rollout resume deployment test
  • 76. RollingUpdate Deployment $ kubectl get pods NAME READY STATUS RESTARTS AGE mydep-6766777fff-9r2zn 1/1 Running 0 5h mydep-6766777fff-hsfz9 1/1 Running 0 5h mydep-6766777fff-sjxhf 1/1 Running 0 5h $ kubectl get replicaset NAME DESIRED CURRENT READY AGE mydep-6766777fff 3 3 3 5h Updating pod template generates a new ReplicaSet revision. R1 pod-template-hash: 676677fff R2 pod-template-hash: 54f7ff7d6d
  • 77. RollingUpdate Deployment $ kubectl get replicaset NAME DESIRED CURRENT READY AGE mydep-54f7ff7d6d 1 1 1 5s mydep-6766777fff 2 3 3 5h $ kubectl get pods NAME READY STATUS RESTARTS AGE mydep-54f7ff7d6d-9gvll 1/1 Running 0 2s mydep-6766777fff-9r2zn 1/1 Running 0 5h mydep-6766777fff-hsfz9 1/1 Running 0 5h mydep-6766777fff-sjxhf 1/1 Running 0 5h New ReplicaSet is initially scaled up based on maxSurge. R1 pod-template-hash: 676677fff R2 pod-template-hash: 54f7ff7d6d
  • 78. RollingUpdate Deployment $ kubectl get pods NAME READY STATUS RESTARTS AGE mydep-54f7ff7d6d-9gvll 1/1 Running 0 5s mydep-54f7ff7d6d-cqvlq 1/1 Running 0 2s mydep-6766777fff-9r2zn 1/1 Running 0 5h mydep-6766777fff-hsfz9 1/1 Running 0 5h $ kubectl get replicaset NAME DESIRED CURRENT READY AGE mydep-54f7ff7d6d 2 2 2 8s mydep-6766777fff 2 2 2 5h Phase out of old Pods managed by maxSurge and maxUnavailable. R1 pod-template-hash: 676677fff R2 pod-template-hash: 54f7ff7d6d
  • 79. RollingUpdate Deployment $ kubectl get replicaset NAME DESIRED CURRENT READY AGE mydep-54f7ff7d6d 3 3 3 10s mydep-6766777fff 0 1 1 5h $ kubectl get pods NAME READY STATUS RESTARTS AGE mydep-54f7ff7d6d-9gvll 1/1 Running 0 7s mydep-54f7ff7d6d-cqvlq 1/1 Running 0 5s mydep-54f7ff7d6d-gccr6 1/1 Running 0 2s mydep-6766777fff-9r2zn 1/1 Running 0 5h Phase out of old Pods managed by maxSurge and maxUnavailable. R1 pod-template-hash: 676677fff R2 pod-template-hash: 54f7ff7d6d
  • 80. RollingUpdate Deployment $ kubectl get replicaset NAME DESIRED CURRENT READY AGE mydep-54f7ff7d6d 3 3 3 13s mydep-6766777fff 0 0 0 5h $ kubectl get pods NAME READY STATUS RESTARTS AGE mydep-54f7ff7d6d-9gvll 1/1 Running 0 10s mydep-54f7ff7d6d-cqvlq 1/1 Running 0 8s mydep-54f7ff7d6d-gccr6 1/1 Running 0 5s Phase out of old Pods managed by maxSurge and maxUnavailable. R1 pod-template-hash: 676677fff R2 pod-template-hash: 54f7ff7d6d
  • 81. RollingUpdate Deployment $ kubectl get replicaset NAME DESIRED CURRENT READY AGE mydep-54f7ff7d6d 3 3 3 15s mydep-6766777fff 0 0 0 5h $ kubectl get pods NAME READY STATUS RESTARTS AGE mydep-54f7ff7d6d-9gvll 1/1 Running 0 12s mydep-54f7ff7d6d-cqvlq 1/1 Running 0 10s mydep-54f7ff7d6d-gccr6 1/1 Running 0 7s Updated to new deployment revision completed. R1 pod-template-hash: 676677fff R2 pod-template-hash: 54f7ff7d6d
  • 82. Taints and Tolerations $ kubectl taint nodes node1 key=value:NoSchedule tolerations: - key: "key" operator: "Equal" value: "value" effect: "NoSchedule" ------------------------------------------------- tolerations: - operator: "Exists" tolerations: - key: "key" operator: "Exists" tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 $ kubectl taint nodes node1 gpu=nvidia:NoSchedule apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx tolerations: - key: gpu value: nvidia effect: NoSchedule
  • 83. DaemonSet ● Ensures that all nodes matching certain criteria will run an instance of the supplied Pod. ● Ideal for cluster-wide services such as log forwarding or monitoring.
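The deck gives no DaemonSet manifest, so here is a hedged sketch for the log-forwarding use case it mentions. The object name, labels, and agent image are illustrative assumptions, not from the slides:

```yaml
# Sketch: run one log-forwarding agent Pod on every node.
# The DaemonSet controller (not kube-scheduler) places a copy
# on each node; a hostPath volume exposes the node's logs.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluentd:stable   # illustrative image tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Nodes added to the cluster later automatically receive a copy of the Pod; nodes that are removed have their copy garbage-collected.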
  • 84. StatefulSet ● Tailored to managing Pods that must persist or maintain state. ● Pod lifecycle will be ordered and follow consistent patterns. ● Assigned a unique ordinal name following the convention of ‘<statefulset name>-<ordinal index>’.
  • 85. StatefulSet apiVersion: apps/v1 kind: StatefulSet metadata: name: sts-example spec: replicas: 2 revisionHistoryLimit: 3 selector: matchLabels: app: stateful serviceName: app updateStrategy: type: RollingUpdate rollingUpdate: partition: 0 template: metadata: labels: app: stateful <continued> <continued> spec: containers: - name: nginx image: nginx:stable-alpine ports: - containerPort: 80 volumeMounts: - name: www mountPath: /usr/share/nginx/html volumeClaimTemplates: - metadata: name: www spec: accessModes: [ "ReadWriteOnce" ] storageClassName: standard resources: requests: storage: 1Gi
  • 86. StatefulSet apiVersion: apps/v1 kind: StatefulSet metadata: name: sts-example spec: replicas: 2 revisionHistoryLimit: 3 selector: matchLabels: app: stateful serviceName: app updateStrategy: type: RollingUpdate rollingUpdate: partition: 0 template: <pod template> ● revisionHistoryLimit: The number of previous iterations of the StatefulSet to retain. ● serviceName: The name of the associated headless service; or a service without a ClusterIP.
  • 87. Headless Service / # dig sts-example-0.app.default.svc.cluster.local +noall +answer ; <<>> DiG 9.11.2-P1 <<>> sts-example-0.app.default.svc.cluster.local +noall +answer ;; global options: +cmd sts-example-0.app.default.svc.cluster.local. 20 IN A 10.255.0.2 apiVersion: v1 kind: Service metadata: name: app spec: clusterIP: None selector: app: stateful ports: - protocol: TCP port: 80 targetPort: 80 $ kubectl get pods NAME READY STATUS RESTARTS AGE sts-example-0 1/1 Running 0 11m sts-example-1 1/1 Running 0 11m <StatefulSet Name>-<ordinal>.<service name>.<namespace>.svc.cluster.local / # dig app.default.svc.cluster.local +noall +answer ; <<>> DiG 9.11.2-P1 <<>> app.default.svc.cluster.local +noall +answer ;; global options: +cmd app.default.svc.cluster.local. 2 IN A 10.255.0.5 app.default.svc.cluster.local. 2 IN A 10.255.0.2 / # dig sts-example-1.app.default.svc.cluster.local +noall +answer ; <<>> DiG 9.11.2-P1 <<>> sts-example-1.app.default.svc.cluster.local +noall +answer ;; global options: +cmd sts-example-1.app.default.svc.cluster.local. 30 IN A 10.255.0.5
  • 90. CronJob An extension of the Job Controller, it provides a method of executing jobs on a cron-like schedule. CronJobs within Kubernetes use UTC ONLY.
  • 91. CronJob ● schedule: The cron schedule for the job. ● successfulJobsHistoryLimit: The number of successful jobs to retain. ● failedJobsHistoryLimit: The number of failed jobs to retain. apiVersion: batch/v1beta1 kind: CronJob metadata: name: cronjob-example spec: schedule: "*/1 * * * *" successfulJobsHistoryLimit: 3 failedJobsHistoryLimit: 1 jobTemplate: spec: completions: 4 parallelism: 2 template: <pod template>
  • 92. CronJob $ kubectl describe cronjob cronjob-example Name: cronjob-example Namespace: default Labels: <none> Annotations: <none> Schedule: */1 * * * * Concurrency Policy: Allow Suspend: False Starting Deadline Seconds: <unset> Selector: <unset> Parallelism: 2 Completions: 4 Pod Template: Labels: <none> Containers: hello: Image: alpine:latest Port: <none> Command: /bin/sh -c Args: echo hello from $HOSTNAME! Environment: <none> Mounts: <none> Volumes: <none> Last Schedule Time: Mon, 19 Feb 2018 09:54:00 -0500 Active Jobs: cronjob-example-1519052040 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 3m cronjob-controller Created job cronjob-example-1519051860 Normal SawCompletedJob 2m cronjob-controller Saw completed job: cronjob-example-1519051860 Normal SuccessfulCreate 2m cronjob-controller Created job cronjob-example-1519051920 Normal SawCompletedJob 1m cronjob-controller Saw completed job: cronjob-example-1519051920 Normal SuccessfulCreate 1m cronjob-controller Created job cronjob-example-1519051980 apiVersion: batch/v1beta1 kind: CronJob metadata: name: cronjob-example spec: schedule: "*/1 * * * *" successfulJobsHistoryLimit: 3 failedJobsHistoryLimit: 1 jobTemplate: spec: completions: 4 parallelism: 2 template: spec: containers: - name: hello image: alpine:latest command: ["/bin/sh", "-c"] args: ["echo hello from $HOSTNAME!"] restartPolicy: Never $ kubectl get jobs NAME DESIRED SUCCESSFUL AGE cronjob-example-1519053240 4 4 2m cronjob-example-1519053300 4 4 1m cronjob-example-1519053360 4 4 26s
  • 93. Health checks • initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. • periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. • timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. • successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. • failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of a liveness probe means restarting the Pod. In case of a readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1. apiVersion: v1 kind: Pod metadata: labels: test: liveness name: liveness-readiness-http spec: containers: - name: liveness-readiness-http image: k8s.gcr.io/liveness-readiness-http livenessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 5 periodSeconds: 10 timeoutSeconds: 4 failureThreshold: 5 readinessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 100 periodSeconds: 10 timeoutSeconds: 4 failureThreshold: 2
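The failureThreshold bookkeeping described above can be sketched as follows. This is a simplified model for illustration only, not the kubelet's implementation, and `probe_outcome` is a hypothetical helper:

```python
# Hedged sketch of the threshold bookkeeping above (a simplified model,
# not the kubelet's actual implementation; probe_outcome is hypothetical).
def probe_outcome(results, failure_threshold=3):
    """Walk a sequence of probe results (True = pass) and report what a
    liveness probe would trigger: 'restart' once failure_threshold
    consecutive failures are seen, else 'healthy'."""
    consecutive_failures = 0
    for ok in results:
        if ok:
            consecutive_failures = 0  # any success resets the counter
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                return "restart"
    return "healthy"

print(probe_outcome([True, False, False, True]))        # healthy
print(probe_outcome([False] * 5, failure_threshold=5))  # restart
```

With the manifest above (livenessProbe failureThreshold: 5), five consecutive failed `/healthz` checks would trigger a restart.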
  • 95. Concepts and Resources Storage ● Volumes ● Persistent Volumes ● Persistent Volume Claims ● StorageClass
  • 96. Storage Pods by themselves are useful, but many workloads require exchanging data between containers, or persisting some form of data. For this we have Volumes, PersistentVolumes, PersistentVolumeClaims, and StorageClasses.
  • 97. StorageClass ● Storage classes are an abstraction on top of an external storage resource (PV) ● Work hand-in-hand with the external storage system to enable dynamic provisioning of storage by eliminating the need for the cluster admin to pre-provision a PV
  • 98. StorageClass ● provisioner: Defines the ‘driver’ to be used for provisioning of the external storage. ● parameters: A hash of the various configuration parameters for the provisioner. ● reclaimPolicy: The behaviour for the backing storage when the PVC is deleted. ○ Retain - manual clean-up ○ Delete - storage asset deleted by provider kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard zones: us-central1-a, us-central1-b reclaimPolicy: Delete
  • 99. Available StorageClasses ● AWSElasticBlockStore ● AzureFile ● AzureDisk ● CephFS ● Cinder ● FC ● Flocker ● GCEPersistentDisk ● Glusterfs ● iSCSI ● Quobyte ● NFS ● RBD ● VsphereVolume ● PortworxVolume ● ScaleIO ● StorageOS ● Local Internal Provisioner
  • 100. Volumes ● Storage that is tied to the Pod’s Lifecycle. ● A pod can have one or more types of volumes attached to it. ● Can be consumed by any of the containers within the pod. ● Survive Pod restarts; however their durability beyond that is dependent on the Volume Type.
  • 101. Volume Types ● awsElasticBlockStore ● azureDisk ● azureFile ● cephfs ● configMap ● csi ● downwardAPI ● emptyDir ● fc (fibre channel) ● flocker ● gcePersistentDisk ● gitRepo ● glusterfs ● hostPath ● iscsi ● local ● nfs ● persistentVolumeClaim ● projected ● portworxVolume ● quobyte ● rbd ● scaleIO ● secret ● storageos ● vsphereVolume Persistent Volume Supported
  • 102. Volumes ● volumes: A list of volume objects to be attached to the Pod. Every object within the list must have its own unique name. ● volumeMounts: A container specific list referencing the Pod volumes by name, along with their desired mountPath. apiVersion: v1 kind: Pod metadata: name: volume-example spec: containers: - name: nginx image: nginx:stable-alpine volumeMounts: - name: html mountPath: /usr/share/nginx/html readOnly: true - name: content image: alpine:latest command: ["/bin/sh", "-c"] args: - while true; do date >> /html/index.html; sleep 5; done volumeMounts: - name: html mountPath: /html volumes: - name: html emptyDir: {}
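The pod above shares an emptyDir volume between a writer container and nginx: one appends to index.html, the other serves it from its own mount path. A rough Python analogy of that pattern, with a temporary directory standing in for the emptyDir volume (illustrative only):

```python
# Rough analogy: a temp directory stands in for an emptyDir volume that
# lives as long as the "pod" and is visible to both "containers".
import datetime
import pathlib
import tempfile

def run_pod() -> str:
    """Simulate the pod: write through one mount, read through another."""
    with tempfile.TemporaryDirectory() as volume:  # the shared emptyDir
        html = pathlib.Path(volume) / "index.html"
        # "content" container: append a timestamp, like
        # `date >> /html/index.html` in the manifest above
        with html.open("a") as f:
            f.write(datetime.datetime.now().isoformat() + "\n")
        # "nginx" container: read the same file via its own mount point
        return html.read_text()

print(run_pod().count("\n"))  # 1 line written
```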
  • 105. Persistent Volumes ● A PersistentVolume (PV) represents a storage resource. ● PVs are a cluster-wide resource linked to a backing storage provider: NFS, GCEPersistentDisk, RBD, etc. ● Generally provisioned by an administrator. ● Their lifecycle is handled independently from a pod. ● CANNOT be attached to a Pod directly. Relies on a PersistentVolumeClaim.
  • 106. PersistentVolumeClaims ● A PersistentVolumeClaim (PVC) is a namespaced request for storage. ● Satisfies a set of requirements instead of mapping to a storage resource directly. ● Ensures that an application’s ‘claim’ for storage is portable across numerous backends or providers.
  • 107. apiVersion: v1 kind: PersistentVolume metadata: name: nfsserver spec: capacity: storage: 50Gi volumeMode: Filesystem accessModes: - ReadWriteOnce - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: slow mountOptions: - hard - nfsvers=4.1 nfs: path: /exports server: 172.22.0.42 PersistentVolume ● capacity.storage: The total amount of available storage. ● volumeMode: The type of volume; this can be either Filesystem or Block. ● accessModes: A list of the supported methods of accessing the volume. Options include: ○ ReadWriteOnce ○ ReadOnlyMany ○ ReadWriteMany
  • 108. PersistentVolume ● persistentVolumeReclaimPolicy: The behaviour for PVCs that have been deleted. Options include: ○ Retain - manual clean-up ○ Delete - storage asset deleted by provider. ● storageClassName: Optional name of the storage class that PVCs can reference. If provided, ONLY PVCs referencing the name can use it. ● mountOptions: Optional mount options for the PV. apiVersion: v1 kind: PersistentVolume metadata: name: nfsserver spec: capacity: storage: 50Gi volumeMode: Filesystem accessModes: - ReadWriteOnce - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: slow mountOptions: - hard - nfsvers=4.1 nfs: path: /exports server: 172.22.0.42
  • 109. PersistentVolumeClaim ● accessModes: The selected method of accessing the storage. This MUST be a subset of what is defined on the target PV or Storage Class. ○ ReadWriteOnce ○ ReadOnlyMany ○ ReadWriteMany ● resources.requests.storage: The desired amount of storage for the claim ● storageClassName: The name of the desired Storage Class kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-sc-example spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: slow
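Conceptually, a PVC binds to a PV only when capacity, accessModes, and storageClassName are all satisfied. A hypothetical sketch of that matching rule (the controller's real binding logic is more involved; the dict shapes here are made up for illustration):

```python
# Hypothetical sketch of PVC-to-PV matching: capacity, accessModes and
# storageClassName must all be satisfied. Not the real controller code.
def pv_satisfies(pv: dict, claim: dict) -> bool:
    return (pv["capacity_gi"] >= claim["request_gi"]
            and set(claim["access_modes"]) <= set(pv["access_modes"])
            and pv.get("storage_class") == claim.get("storage_class"))

pvs = [
    {"name": "nfsserver", "capacity_gi": 50,
     "access_modes": ["ReadWriteOnce", "ReadWriteMany"],
     "storage_class": "slow"},
    {"name": "small-disk", "capacity_gi": 1,
     "access_modes": ["ReadWriteOnce"],
     "storage_class": "fast"},
]
# Mirrors pvc-sc-example above: 1Gi, ReadWriteOnce, class "slow"
claim = {"request_gi": 1, "access_modes": ["ReadWriteOnce"],
         "storage_class": "slow"}

matches = [pv["name"] for pv in pvs if pv_satisfies(pv, claim)]
print(matches)  # ['nfsserver']
```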
  • 110. PVs and PVCs with Selectors kind: PersistentVolume apiVersion: v1 metadata: name: pv-selector-example labels: type: hostpath spec: capacity: storage: 2Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/data" kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-selector-example spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi selector: matchLabels: type: hostpath
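The matchLabels selector above picks PVs whose labels contain every key/value pair in the selector. A minimal sketch of that rule (not the real controller code):

```python
# Minimal sketch of matchLabels: every key/value in the selector must
# appear in the object's labels. Illustration only.
def match_labels(selector: dict, labels: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

pv_labels = {"type": "hostpath"}  # from pv-selector-example above

print(match_labels({"type": "hostpath"}, pv_labels))  # True
print(match_labels({"type": "nfs"}, pv_labels))       # False
```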
  • 112. PV Phases Available PV is ready and available to be consumed. Bound The PV has been bound to a claim. Released The binding PVC has been deleted, and the PV is pending reclamation. Failed An error has been encountered.
  • 113. StorageClass pv: pvc-9df65c6e-1a69-11e8-ae10-080027a3682b uid: 9df65c6e-1a69-11e8-ae10-080027a3682b 1. PVC makes a request of the StorageClass. 2. StorageClass provisions the request through the API with the external storage system. 3. External storage system creates a PV strictly satisfying the PVC request. 4. The provisioned PV is bound to the requesting PVC.
  • 114. Persistent Volumes and Claims Cluster Users Cluster Admins
  • 116. Concepts and Resources Configuration ● ConfigMap ● Secret
  • 117. Configuration Kubernetes has an integrated pattern for decoupling configuration from application or container. This pattern makes use of two Kubernetes components: ConfigMaps and Secrets.
  • 118. ConfigMap ● Externalized data stored within Kubernetes. ● Can be referenced through several different means: ○ environment variable ○ a command line argument (via env var) ○ injected as a file into a volume mount ● Can be created from a manifest, literals, directories, or files directly.
  • 119. ConfigMap data: Contains key-value pairs of ConfigMap contents. apiVersion: v1 kind: ConfigMap metadata: name: manifest-example data: state: Michigan city: Ann Arbor content: | Look at this, its multiline!
  • 120. ConfigMap Example apiVersion: v1 kind: ConfigMap metadata: name: manifest-example data: city: Ann Arbor state: Michigan $ kubectl create configmap literal-example > --from-literal="city=Ann Arbor" --from-literal=state=Michigan configmap "literal-example" created $ cat info/city Ann Arbor $ cat info/state Michigan $ kubectl create configmap file-example --from-file=cm/city --from-file=cm/state configmap "file-example" created All produce a ConfigMap with the same content! $ cat info/city Ann Arbor $ cat info/state Michigan $ kubectl create configmap dir-example --from-file=cm/ configmap "dir-example" created
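All three kubectl commands above produce the same data map. As a sketch of what `--from-file=<dir>` does conceptually, one key per file with the file's content as the value, here is a hypothetical helper (not kubectl's actual code; real kubectl preserves trailing newlines, which this sketch strips for readability):

```python
# Hypothetical helper mimicking `kubectl create configmap --from-file=<dir>`:
# one key per file, value is the file's content. Not kubectl's real code.
import pathlib
import tempfile

def configmap_data_from_dir(path: str) -> dict:
    return {p.name: p.read_text().rstrip("\n")
            for p in pathlib.Path(path).iterdir() if p.is_file()}

# Recreate the cm/ directory from the example above
with tempfile.TemporaryDirectory() as cm:
    (pathlib.Path(cm) / "city").write_text("Ann Arbor\n")
    (pathlib.Path(cm) / "state").write_text("Michigan\n")
    data = configmap_data_from_dir(cm)

print(sorted(data.items()))  # [('city', 'Ann Arbor'), ('state', 'Michigan')]
```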
  • 124. Secret ● Functionally identical to a ConfigMap. ● Stored as base64 encoded content. ● Encrypted at rest within etcd (if configured!). ● Stored on each worker node in tmpfs directory. ● Ideal for username/passwords, certificates or other sensitive information that should not be stored in a container.
  • 125. Secret ● type: There are three different types of secrets within Kubernetes: ○ docker-registry - credentials used to authenticate to a container registry ○ generic/Opaque - literal values from different sources ○ tls - a certificate based secret ● data: Contains key-value pairs of base64 encoded content. apiVersion: v1 kind: Secret metadata: name: manifest-secret type: Opaque data: username: ZXhhbXBsZQ== password: bXlwYXNzd29yZA==
  • 126. Secret Example apiVersion: v1 kind: Secret metadata: name: manifest-example type: Opaque data: username: ZXhhbXBsZQ== password: bXlwYXNzd29yZA== $ kubectl create secret generic literal-secret > --from-literal=username=example > --from-literal=password=mypassword secret "literal-secret" created $ cat secret/username example $ cat secret/password mypassword $ kubectl create secret generic file-secret --from-file=secret/username --from-file=secret/password secret "file-secret" created All produce a Secret with the same content! $ cat secret/username example $ cat secret/password mypassword $ kubectl create secret generic dir-secret --from-file=secret/ secret "dir-secret" created
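Since Secret data is only base64 encoded, not encrypted, the values in the manifest above can be decoded directly:

```python
# Secret data is base64 encoded, not encrypted: decoding the manifest
# values above recovers the original literals.
import base64

print(base64.b64decode("ZXhhbXBsZQ==").decode())      # example
print(base64.b64decode("bXlwYXNzd29yZA==").decode())  # mypassword

# And encoding a literal produces the value you would put in the manifest:
print(base64.b64encode(b"example").decode())          # ZXhhbXBsZQ==
```

This is why access to Secrets (and to etcd) should be restricted, and why encryption at rest matters.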
  • 130. Injecting as Environment Variable apiVersion: batch/v1 kind: Job metadata: name: cm-env-example spec: template: spec: containers: - name: mypod image: alpine:latest command: ["/bin/sh", "-c"] args: ["printenv CITY"] env: - name: CITY valueFrom: configMapKeyRef: name: manifest-example key: city restartPolicy: Never apiVersion: batch/v1 kind: Job metadata: name: secret-env-example spec: template: spec: containers: - name: mypod image: alpine:latest command: ["/bin/sh", "-c"] args: ["printenv USERNAME"] env: - name: USERNAME valueFrom: secretKeyRef: name: manifest-example key: username restartPolicy: Never
  • 132. Injecting in a Command apiVersion: batch/v1 kind: Job metadata: name: cm-cmd-example spec: template: spec: containers: - name: mypod image: alpine:latest command: ["/bin/sh", "-c"] args: ["echo Hello ${CITY}!"] env: - name: CITY valueFrom: configMapKeyRef: name: manifest-example key: city restartPolicy: Never apiVersion: batch/v1 kind: Job metadata: name: secret-cmd-example spec: template: spec: containers: - name: mypod image: alpine:latest command: ["/bin/sh", "-c"] args: ["echo Hello ${USERNAME}!"] env: - name: USERNAME valueFrom: secretKeyRef: name: manifest-example key: username restartPolicy: Never
  • 134. Injecting as a Volume apiVersion: batch/v1 kind: Job metadata: name: cm-vol-example spec: template: spec: containers: - name: mypod image: alpine:latest command: ["/bin/sh", "-c"] args: ["cat /myconfig/city"] volumeMounts: - name: config-volume mountPath: /myconfig restartPolicy: Never volumes: - name: config-volume configMap: name: manifest-example apiVersion: batch/v1 kind: Job metadata: name: secret-vol-example spec: template: spec: containers: - name: mypod image: alpine:latest command: ["/bin/sh", "-c"] args: ["cat /mysecret/username"] volumeMounts: - name: secret-volume mountPath: /mysecret restartPolicy: Never volumes: - name: secret-volume secret: secretName: manifest-example
  • 136. Concepts and Resources Metrics and Monitoring ● Metrics server ● HPA (horizontal pod autoscaler) ● Prometheus ● Grafana (dashboards) ● Fluentd (log shipping)
  • 137. Metrics API Server ● The Metrics Server collects metrics such as CPU and memory for each pod and node from the Summary API, exposed by the Kubelet on each node. ● The Metrics Server is registered with the main API server through the Kubernetes aggregation layer, which was introduced in Kubernetes 1.7.
  • 138. HPA
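The HPA's core scaling rule, as documented in the Kubernetes docs, is desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal sketch of just that arithmetic (the real controller adds tolerances, stabilization windows, and min/max bounds):

```python
# Minimal sketch of the HPA scaling formula:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# The real controller also applies tolerances and min/max replica bounds.
import math

def desired_replicas(current: int, current_metric: float, target_metric: float) -> int:
    return math.ceil(current * current_metric / target_metric)

print(desired_replicas(3, 200, 100))  # CPU at 2x target -> 6 replicas
print(desired_replicas(6, 50, 100))   # CPU at half target -> 3 replicas
```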
  • 145. Links ● Free Kubernetes Courses https://www.edx.org/ ● Interactive Kubernetes Tutorials https://www.katacoda.com/courses/kubernetes ● Learn Kubernetes the Hard Way https://github.com/kelseyhightower/kubernetes-the-hard-way ● Official Kubernetes Youtube Channel https://www.youtube.com/c/KubernetesCommunity ● Official CNCF Youtube Channel https://www.youtube.com/c/cloudnativefdn ● Track to becoming a CKA/CKAD (Certified Kubernetes Administrator/Application Developer) https://www.cncf.io/certification/expert/ ● Awesome Kubernetes https://www.gitbook.com/book/ramitsurana/awesome-kubernetes/details
  • 146. Questions? - by Joe Beda (Gluecon 2017) This presentation is licensed under a Creative Commons Attribution 4.0 International License. See https://creativecommons.org/licenses/by/4.0/ for more details.

Editor's Notes

  1. The first question always asked. There is also the abbreviation K8s: K, eight letters, s.
  2. There’s a phrase called Google-scale. Kubernetes was developed out of a need to scale large container applications across Google-scale infrastructure. Borg is the man behind the curtain managing everything in Google. Kubernetes is loosely coupled, meaning that all the components have little knowledge of each other and function independently. This makes them easy to replace and integrate with a wide variety of systems.
  3. A lot of key components are batteries-included
  4. The instance will have a new IP and likely be on a different host, but Kubernetes gives the primitives for us to ‘not care’ about that.
  5. Very active project. 42k stars on GitHub, 1800+ contributors to the main core repo, most discussed repo on GitHub, massive Slack team with 50k+ users.
  6. Also one of the largest open source projects based on commits and PRs.
  7. There are a few small things we should mention It’s kind of a chicken and egg problem. But we need to talk about these before the stuff that supports and extends it
  8. While this may seem like a technical definition, in practice it can be viewed more simply: A service is an internal load balancer to your pod(s). You have 3 HTTP pods scheduled? Create a service, reference the pods, and (internally) it will load balance across the three.
  9. Services are persistent objects used to reference ephemeral resources
  10. Get right into the meat of the internals.
  11. The complete picture
  12. kube-apiserver: Gatekeeper for everything in Kubernetes. EVERYTHING interacts with Kubernetes through the apiserver. etcd: Distributed storage backend for Kubernetes; the apiserver is the only thing that talks to it. kube-controller-manager: The home of the core controllers. kube-scheduler: handles placement.
  13. Provides the forward-facing REST interface into k8s itself. Everything, ALL components, interact with each other through the api-server. Handles authn, authz, request validation, mutation and admission control, and serves as a generic front end to the backing datastore.
  14. The backing datastore: an extremely durable and highly available key-value store. Originally developed by CoreOS, now Red Hat; it is in the process of being donated to the CNCF.
  15. provides HA via raft requires quorum of systems Typically aim for 1,3,5,7 control plane servers
  16. It’s the director behind the scenes. The thing that says “hey, I need a few more pods spun up.” Does NOT handle scheduling; it just decides what the desired state of the cluster should look like. e.g. receives a request for a deployment, produces a replicaset, then produces pods.
  17. scheduler decides which nodes should run which pods updates pod with a node assignment, nodes poll checking which pods have their matching assignment takes into account variety of reqs, affinity, anti-affinity, hw resources etc possible to actually run more than one scheduler example: kube-batch is a gang scheduler based off LSF/Symphony from IBM
  19. Kubelet Agent running on every node, including the control plane Kube-proxy The network ‘plumber’ for Kubernetes services Enables in-cluster load-balancing and service discovery Container Runtime Engine The containerizer itself - typically docker
  20. The single host daemon required for being part of a Kubernetes cluster. Can read pod manifests from several different locations. Workers: poll the kube-apiserver looking for what they should run. Masters: run the master services as static manifests found locally on the host.
  21. kube-proxy is the plumber. Creates the rules on the host to map and expose services. Uses a combination of ipvs and iptables to manage networking/load balancing. ipvs is more performant and opens the door to a wider feature set (port ranges, better lb rules, etc.).
  22. Kubernetes functions with multiple different containerizers Interacts with them through the CRI - container runtime interface CRI creates a ‘shim’ to talk between kubelet and the container runtime cri-o is a cool one that allows you to run any oci compatible image/runtime in Kubernetes kata is a super lightweight KVM wrapper
  23. The complete picture
  24. Networking is where people tend to get confused, they think of it like Docker The backing cluster network is a plugin, does not work just out of the box
  25. The complete picture
  26. Reminder to myself: Don’t expand on things on THIS SLIDE, use the next ones Kubernetes adheres to these fundamental rules for networking
  31. The complete picture
  32. The complete picture
  33. Discuss sidecar pattern This is a good example of a pod having multiple containers
  36. Pods are groups of containers into a single manageable unit Example: app container + a container within a pod that sends logs to another service As such the containers share a network namespace Pods are given a cluster unique IP for the duration of its lifecycle. Think dhcp, a Pod will be given a static lease for as long as it’s around
  37. Both pod-to-service and external-to-service connectivity relies on kube-proxy For pod-to-service - kubernetes creates a cluster-wide IP that can map to n-number of pods think like a dns name, or an LB to point to containers backing your service External Connectivity relies on an external entity, a cloud provider or some other system in conjunction with kube-proxy to map out-of-cluster access to cluster services
  38. Ns,pod: Jeff Labels,Selectors: Bob Service intro: Jeff
  39. Think like a scope or even a subdomain This is a very similar concept to namespaces in programming. In fact it is the same, just for application architecture.
  41. Essentially a Pod without a name No need to supply apiVersion or Kind
  42. Selectors allow you to create rules based on your labels NO ‘hard links’ to things, everything is done through labels and selectors
  43. Not ephemeral. Repeat repeat repeat.
  44. This in-turn acts as a simple round-robin load balancer among the Pods targeted by the selector. More options coming
  45. clusterIP,LB - Bob Nodeport,External - Jeff
  46. ClusterIP is an internal LB for your application
  47. The Pod on host C requests the service Hits host iptables and it load-balances the connection between the endpoints residing on Hosts A, B
  49. NodePort behaves just like ClusterIP, except it also exposes the service on a (random or specified) port on every Node in your cluster.
  50. User can hit any host in cluster on nodeport IP and get to service Does introduce extra hop if hitting a host without instance of the pod
  51. Continue to get abstracted and built on top of one another. The LoadBalancer service extends NodePort turns it into a highly-available externally consumable resource. -Must- Work with some external system to provide cluster ingress
  52. Gets external IP from provider There are settings to remove nodes that are not running an instance of the pod. To remove extra hop.
  54. Allows you to point your configs and such to a static entry internally that you can update out of band later
  57. rs,ds,j/cronjob - jeff dep,sts - bob
  58. ReplicaSets are “dumb” They strive to ensure the number of pods you want running will stay running
  59. You define the number of pods you want running with the replicas field. You then define the selector the ReplicaSet will use to look for Pods and ensure they’re running.
  60. Selector and labels within the pod template match This may seem redundant, but there are edge-case instances where you might not always want them to match. A little redundancy here also grants us a very flexible and powerful tool. Pods have generated name based off replicaset name + random 5 character string
  61. The real way we manage pods Deployments manage replicasets which then manage pods Rare that you manage replicasets directly because they are ‘dumb’ Offer update control and rollback functionality, gives blue/green deployment Does this through hashing the pod template and storing that value in a label, the pod template hash Used in replicaset name Added to the selector and pod labels automatically gives us a unique selector that strictly targets a revision of a pod
  62. Since we now have rollback functionality, we have to know how many previous versions we want to keep. strategy describes the method used to update the deployment. Recreate is pretty self-explanatory: nuke it and recreate, not graceful. RollingUpdate is what we normally want. maxSurge == how many ADDITIONAL replicas we want to spin up while updating. maxUnavailable == how many may be unavailable during the update process. Let’s say you’re resource constrained and can only support 3 replicas of the pod at any one time. We could set maxSurge to 0 and maxUnavailable to 1. This would cycle through updating 1 at a time without spinning up additional pods.
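The constrained rolling update described above can be sketched as a manifest; the Deployment name and image are illustrative, but the strategy fields are the ones the notes discuss:

```yaml
# Hypothetical Deployment: resource-constrained rolling update.
# maxSurge: 0 means never exceed 3 pods; maxUnavailable: 1 cycles
# through the pods one at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # example name
spec:
  replicas: 3
  revisionHistoryLimit: 5     # how many old ReplicaSets to keep for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0             # no additional pods during the update
      maxUnavailable: 1       # at most one pod down at a time
  selector:
    matchLabels:
      app: web                # matches the pod template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # example image
```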
  65. We add a new label, change the image, or otherwise change the pod template in some way. The deployment controller creates a new ReplicaSet with 0 instances to start.
  66. It then scales up the new ReplicaSet based on our maxSurge value
  67. It will then begin to kill off the previous revision by scaling down the old ReplicaSet and scaling up the new one.
  68. Continues this pattern based on maxSurge and maxUnavailable
  69. Update is complete
  70. Once the previous revision is scaled down to 0, its ReplicaSet will remain around, but with 0 replicas. How many stay around is based on our revision history limit.
  71. Had to span 2 'pages' as it's rather big. Don't worry, we will go over this stuff in more detail. If you want to look at this more closely, go to: workloads/manifests/sts-example.yaml
  72. DaemonSets schedule pods across all nodes matching certain criteria. They work best for services like log shipping and health monitoring. The criteria can be anything from a specific OS or architecture to your own cluster-specific labels.
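A minimal DaemonSet along these lines might look as follows; the name, image, and node label are illustrative assumptions:

```yaml
# Hypothetical DaemonSet: runs one log-shipping pod on every node
# that matches the nodeSelector criteria.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      nodeSelector:
        kubernetes.io/os: linux      # scheduling criterion (OS label)
      containers:
      - name: shipper
        image: fluent/fluent-bit:2.2 # example log-shipping image
```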
  73. Tailored to apps that must persist or maintain state, namely databases. Generally these systems care a bit more about their identity: their hostname, their IP, etc. Instead of that random string on the end of a pod name, they get a predictable value, following the convention of StatefulSet name + ordinal index. The ordinal index correlates to the replica and starts at 0, just like a normal array, e.g. mysql-0. This naming convention carries over to network identity and storage.
  74. Had to span 2 'pages' as it's rather big. Don't worry, we will go over this stuff in more detail. If you want to look at this more closely, go to: workloads/manifests/sts-example.yaml
  75. Just like Deployments and DaemonSets, StatefulSets can keep track of revisions. Be careful rolling back, as some applications will change their file/schema structure and rolling back may cause more problems. serviceName is the first unique aspect of StatefulSets: it maps to the name of a Service you create along with the StatefulSet that helps provide pod network identity. This is a headless Service, or a Service without a ClusterIP: no NAT'ing, no load balancing. It will contain a list of the pod endpoints and provides a mechanism for unique DNS names.
  76. Follows the pattern <StatefulSet name>-<ordinal>.<service name>.<namespace>.svc.cluster.local. Note: clusterIP is set to None.
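A headless Service of this shape might look like the sketch below; the name and port assume a hypothetical StatefulSet called mysql:

```yaml
# Hypothetical headless Service paired with a StatefulSet named "mysql".
# clusterIP: None means no virtual IP and no load balancing; DNS resolves
# to the individual pod endpoints, giving each pod a stable name such as
# mysql-0.mysql.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None      # headless: no ClusterIP allocated
  selector:
    app: mysql         # targets the StatefulSet's pods
  ports:
  - port: 3306
```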
  77. We can query the service name directly, and it will return multiple records containing all our pods.
  78. ...or we can query a pod directly.
  79. Note successfulJobsHistoryLimit and failedJobsHistoryLimit. Oh hey, a jobTemplate.
  82. Bob - v,sc Jeff - pv,pvc
  83. Storage is one of those things that tends to confuse people, with all the various levels of abstraction. There are 4 'types' of storage: Volumes, Persistent Volumes, Persistent Volume Claims, and Storage Classes.
  84. PV’s and PVCs work great together, but still require some manual provisioning under the hood. What if we could do all that dynamically? Well, you can with StorageClasses Act as an abstraction on top of a external storage resource that has dynamic provisioning capability usually the cloud providers, but others work like ceph
  85. A StorageClass is really defined by its provisioner. Each provisioner has a slew of different parameters tied to the specific storage system it's talking to. Lastly is the reclaimPolicy, which tells the external storage system what to do when the associated PVC is deleted.
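These three pieces fit together as in the sketch below; the class name, provisioner, and parameters are illustrative (they vary entirely by backend):

```yaml
# Hypothetical StorageClass. The provisioner determines which
# parameters are valid; these are examples for the in-tree EBS driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: superfast
provisioner: kubernetes.io/aws-ebs   # example provisioner
parameters:
  type: gp2                          # provisioner-specific parameter
reclaimPolicy: Delete                # remove backing storage when the PVC is deleted
```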
  86. A list of current StorageClasses. More are supported through CSI plugins.
  87. A Volume is defined in the Pod spec and tied to its lifecycle. This means it survives a Pod restart, but what happens to it when the Pod is deleted is up to the volume type. Volumes can actually be attached to more than one container within the Pod; it is very common to share storage between containers within the Pod: a web server, pre-processed data, socket files, logs.
  88. The current list of in-tree volume types supported, with more being added through the Container Storage Interface, essentially a storage plugin system supported by multiple container orchestration engines (k8s and Mesos). The ones in blue support persisting beyond a pod's lifecycle, but generally are not mounted directly except through a Persistent Volume Claim. emptyDir is common for scratch space between containers; configMap, secret, downward API, and projected are all used for injecting information stored within Kubernetes directly into the Pod.
  89. There are 2 major components to volumes in a Pod spec: volumes is on the same 'level' as containers within the Pod spec; volumeMounts are part of each container's definition.
  90. volumes is a list of every volume type you want to attach to the Pod. Each volume will have its own volume-type-specific parameters and a name. Using the name is how we reference mounting that storage within a container.
  91. volumeMounts is an attribute within each container in your Pod. They reference the volume name and supply the mountPath. There may be other available options depending on the volume type, e.g. mounting read-only.
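Putting the two components together, a Pod sharing scratch space between containers might look like this; names and images are illustrative:

```yaml
# Pod sketch: "volumes" sits at the spec level, "volumeMounts" inside
# each container. Both containers mount the same emptyDir by name.
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch
spec:
  volumes:
  - name: scratch
    emptyDir: {}              # scratch space shared within the Pod
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hi > /data/out && sleep 3600"]
    volumeMounts:
    - name: scratch           # references the volume by name
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
      readOnly: true          # mount options can differ per container
```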
  92. PVs are k8s objects that represent underlying storage. They don't belong to any namespace; they are a global resource. As such, users aren't usually the ones creating PVs; that's left up to admins. The way a user consumes a PV is by creating a PVC.
  93. Unlike a PV, a PVC is based within a namespace (with the pod) It is a one-to-one mapping of a user claim to a global storage offer. A user can define specific requirements for their PVC so it gets matched with a specific PV.
  94. For a PV, you define the capacity, whether you want a filesystem or a block device, and the access mode. A word on accessModes: RWO (ReadWriteOnce) means only a single pod will be able to mount this (through a PVC). ROX (ReadOnlyMany) means many pods can mount it, but none can write. RWX (ReadWriteMany) means, well, many pods can mount and write.
  95. Additionally, you need to specify your reclaim policy: if a PVC goes away, should the PV clean itself up, or do you want manual intervention? Manual intervention is certainly safer, but everyone's use case may vary. You can define your storageClass here. For example, if this PV attaches to super fast storage, you might call the storageClass "superfast", and then ONLY PVCs that target it can attach to it. Finally, you define mount options. These vary depending on your volume type, but think of them like the same options you'd put in fstab or a mount command. Just recognize this is an array.
  96. As with the PV, you define your accessModes for your PVC. A PV and a PVC match through their accessModes. Likewise, you request how much storage your PVC will require. Finally, you can specify what className you require. A lot of this can be arbitrary and highly dependent on your environment.
  97. This example defines a PV with 2Gi of space and a label. The PV is using the hostPath volume type, and thus has a label of "type: hostpath". The PVC is looking for 1Gi of storage and for a PV with a label of "type: hostpath". Next slide: match on labels.
  98. They will match and bind because the labels match, the access modes match, and the PVC storage capacity is <= PV storage capacity
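A pair matching those criteria could be sketched like this; the object names and host path are illustrative, but the capacities, label, and access modes follow the example above:

```yaml
# Sketch of the PV/PVC pair described above: a 2Gi hostPath PV labeled
# "type: hostpath", and a 1Gi PVC that selects it by label.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data          # example host directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hostpath
spec:
  accessModes:
  - ReadWriteOnce            # matches the PV's access mode
  resources:
    requests:
      storage: 1Gi           # <= PV capacity, so it can bind
  selector:
    matchLabels:
      type: hostpath         # matches the PV's label
```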
  99. If you get or describe a PV, you will see that they have several different states. The diagram has a pretty good explanation but it warrants going over. Available - The PV is ready and available Bound - The PV has been bound to a PVC and is not available. Released - The PVC no longer exists. The PV is in a state that (depending on reclaimPolicy) requires manual intervention Failed - For some reason, Kubernetes could not reclaim the PV from the stale or deleted PVC
  100. A PVC is created with a storage request and supplies the StorageClass 'standard'. The StorageClass 'standard' is configured with some information to connect and interact with the API of an external storage provider. The external storage provider creates a PV that strictly satisfies the PVC request; really, that means it's the same size as the request. The PV will be named pvc-<uid of the pvc> and is then bound to the requesting PVC.
  101. PVCs can be named the same to make things consistent but point to different storage classes In this example we have a dev and prod namespace. These PVCs are named the same, but reside within different namespaces and request different classes of storage.
  102. There are instances in your application where you want to store runtime-specific information: things like connection info, SSL certs, or certain flags that you will inevitably change over time. Kubernetes takes that into account and has a solution with ConfigMaps and Secrets. ConfigMaps and Secrets are kind of like Volumes, as you'll see.
  103. ConfigMap data is stored outside of the container, in the cluster itself (in etcd). They are insanely versatile: you can inject them as environment variables at runtime, as a command arg, or as a file or folder via a volume mount. They can also be created in a multitude of ways.
  104. This is a pretty basic example of a configmap. It has three different keys and values.
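A basic ConfigMap along those lines might look like this; the name, keys, and values are illustrative:

```yaml
# Hypothetical ConfigMap with three keys and values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  city: Ann Arbor
  state: Michigan
  greeting: hello
```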
  105. As mentioned, there are a lot of different ways to create a ConfigMap, and all yield the same result.
  106. You can use kubectl and simply pass it strings for everything
  107. You can pass it a folder, and it will be smart enough to map the files to keys within the ConfigMap.
  108. Or you can pass the files individually.
  109. Secrets are for the most part functionally identical to ConfigMaps, with a few small key differences: they are stored as base64-encoded content, encrypted at rest if so configured, great for user/pass combinations or certs, and can be created the same way as ConfigMaps.
  110. Looks very similar to a ConfigMap, but now has a type: docker-registry - used by the container runtime to pull images from private registries; generic - just unstructured data; tls - pub/priv keys that can be used both inside a container and outside by k8s itself.
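A generic Secret of that shape could be sketched as follows; the name and key are illustrative, and note the values under data must be base64-encoded:

```yaml
# Hypothetical generic Secret. Values under "data" are base64 encoded;
# YWRtaW4= is the output of: echo -n admin | base64
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque             # how a "generic" secret appears in the API
data:
  username: YWRtaW4=
```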
  111. Same as ConfigMaps except you now have to tell it what kind of secret (we use generic here)
  112. From literals
  113. From a directory
  114. From individual files
  115. On the left is a ConfigMap, on the right a Secret. K8s lets you pass all sorts of information into a container, environment variables being one of them. This uses 'valueFrom' to specify that the value comes from a ConfigMap or Secret; give it the ConfigMap/Secret name and the key you want to query.
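The valueFrom pattern looks like the container fragment below; the variable names and the referenced app-config/app-secret objects and keys are hypothetical:

```yaml
# Container env fragment: one variable sourced from a ConfigMap key,
# one from a Secret key.
env:
- name: CITY
  valueFrom:
    configMapKeyRef:
      name: app-config   # hypothetical ConfigMap name
      key: city
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: app-secret   # hypothetical Secret name
      key: username
```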
  116. Can then access it just as you would any other env var
  117. Also means it can be used in the command when launching a container This is the same thing from the last example
  118. Now instead of printing them off, we just use normal shell expansion to use them
  119. The last method is actually mounting them as a volume: we specify the volume type as a configMap or secret and give it the name.
  120. Within the container itself, we reference it as we would any other volume: we specify the name and give it a path.
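Combined, mounting a ConfigMap as files could be sketched like this; the Pod, ConfigMap name, and mount path are illustrative. Each key in the ConfigMap becomes a file under the mountPath whose content is the value:

```yaml
# Pod sketch: a ConfigMap mounted as a volume at /etc/config.
apiVersion: v1
kind: Pod
metadata:
  name: config-reader
spec:
  volumes:
  - name: config
    configMap:
      name: app-config       # hypothetical ConfigMap
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config           # references the volume by name
      mountPath: /etc/config # keys appear here as individual files
```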
  121. Ns,pod: Jeff Labels,Selectors: Bob Service intro: Jeff
  122. Heapster is being deprecated but can be found in the minikube addons. The metrics API is taking over, also available in minikube as an addon; it is used by the Horizontal Pod Autoscaler, kube-dashboard, kubectl, etc.
  124. The complete picture