Let’s unbox Rancher 2.0
<v2.0.0>
1
LINE Corporation
Verda2 Yuki Nishiwaki
1. Rancher 2.0 Overview
2. Deep dive Rancher Server
2.1. Rancher API
2.2. Rancher Controllers
2.3. Controller/Context
3. Dependent Library
3.1. Norman Framework
3.2. Kontainer-Engine
3.3. RKE
2
Agenda
Casts in Rancher 2.0
rancher
server
Node1 Node2 Node3
rancher
node-agent
rancher
node-agent
rancher
node-agent
rancher
cluster-agent
1. Rancher Server
2. Rancher Cluster Agent
3. Rancher Node Agent
Parent Kubernetes
Child Kubernetes
deployed by Rancher
Child Kubernetes
deployed by Rancher
Parent k8s: k8s working with rancher
Child k8s: k8s deployed by rancher
3
1. Rancher 2.0 Overview
About Rancher Server
➢ Provides the user with a GUI/API
➢ Rancher Server depends on a Kubernetes cluster (one must be deployed
before Rancher Server starts)
○ If you try to run Rancher Server without Kubernetes, Rancher Server automatically tries to
build an all-in-one Kubernetes on the same host
○ All data related to Rancher is stored in Kubernetes as CRDs
○ Rancher runs custom controllers to deploy/maintain multiple Kubernetes clusters
➢ Whenever Rancher Server needs to talk to a deployed Kubernetes, it uses the
Rancher Cluster/Node Agent as a TCP proxy server over websocket
➢ Running multiple Rancher Servers is not supported
○ An HA configuration is available, but it just runs one Rancher Server on a multi-node
Kubernetes environment behind a load balancer
5
1. Rancher 2.0 Overview
About Rancher Server Implementation
➢ One binary covers all of the following features
○ Rancher API
○ Various controllers
○ gRPC server of Kontainer-Engine
○ GUI of Rancher
➢ Depends on several other Rancher libraries/middleware
○ Norman
■ Used as a framework by the API and controller implementations
○ Kontainer Engine
■ Used by controllers to deploy/update k8s clusters in various environments
like GKE, EKS, or any server with RKE
○ RKE
■ Used by Kontainer Engine and the API (rkenodeconfigserver) to deploy/update
k8s clusters on servers
6
1. Rancher 2.0 Overview
What Server does?
Server
API Controllers
CRD
Kind: Cluster
Node1 Node2 Node3
rancher
node-agent
rancher
node-agent
rancher
node-agent
rancher
cluster-agent
Child Kubernetes
deployed by Rancher
Child Kubernetes
deployed by Rancher
CRD
Kind: Node
All data stored
as CRD in k8s
Watch CRD
Deploy
Monitor Cluster/Sync Data
Call the docker/k8s API via websocket if needed.
Never access the docker/k8s API directly from Rancher Server
Websocket session
Point 2 Point 3
Point 4
Point 5
Point 1
Provide API
7
1. Rancher 2.0 Overview
What Server does?
Server
API Controllers
CRD
Kind: Cluster
Node1 Node2 Node3
rancher
node-agent
rancher
node-agent
rancher
node-agent
rancher
cluster-agent
Child Kubernetes
deployed by Rancher
Child Kubernetes
deployed by Rancher
CRD
Kind: Node
Provide unified access to multiple k8s cluster
Point 6
8
1. Rancher 2.0 Overview
Casts in Rancher 2.0
rancher
server
Node1 Node2 Node3
rancher
node-agent
rancher
node-agent
rancher
node-agent
rancher
cluster-agent
1. Rancher Server
2. Rancher Cluster Agent
3. Rancher Node Agent
Parent Kubernetes
Child Kubernetes
deployed by Rancher
Child Kubernetes
deployed by Rancher
9
1. Rancher 2.0 Overview
About Rancher Agent
There are 2 types of agent running on the Kubernetes clusters deployed by Rancher

Rancher Node Agent
➢ Every node needs to run one
➢ Periodically calls the /v3/connect/config
API and checks whether the node needs to run
any container or create any file
➢ Provides a TCP proxy via websocket
(/v3/connect)

Rancher Cluster Agent
➢ Each cluster needs to run 1 agent
➢ Provides a TCP proxy via websocket
(/v3/connect)

Both use the same binary and switch agent type via an
environment variable (CATTLE_CLUSTER)
10
1. Rancher 2.0 Overview
What Agent does?
Node Agent
Node A
Cluster Agent
Child Kubernetes
Node Agent
Node B
Parent Kubernetes
Server
Dialer API
(pkg/dialer)
RkeNodeConfig API
(pkg/rkenodeconfigserver)
Controllers
websocket session
(/v3/connect)
/v3/connect/config
Use session
For access
(k8s, docker)
The Rancher Agent basically establishes a websocket session to provide a TCP
proxy and just checks the NodeConfig periodically. Almost all configuration is
done/triggered by controllers through the websocket
Point 2
Establish websocket session
Point 1 Provide TCP Proxy
via websocket
Point 3
Periodically check whether any file needs to be
created or any container needs to run
11
1. Rancher 2.0 Overview
Rancher 2.0 overview summary
Almost all logic lives in Rancher Server; the Agent just sits in the deployed k8s
as a TCP proxy server for Rancher Server to use
➢ Rancher Server
○ All data for Rancher is stored as CRDs in Kubernetes (Rancher's resources are translated into CRDs)
○ Rancher's API is a kind of wrapper around the Kubernetes API
○ Rancher has various controllers that watch CRD resources in the parent k8s to deploy k8s clusters
(Management Controllers)
○ Rancher has various controllers that watch resources, including CRDs, in the parent k8s to inject
data into the deployed k8s (User Controllers)
➢ Rancher Agent
○ Establishes a websocket to provide a TCP proxy
■ Used when Rancher Server wants to talk to the child Kubernetes
■ Used when a user wants to call the Kubernetes API
○ Periodically checks whether the node needs to create any file or run any container
12
1. Rancher 2.0 Overview
Rancher 2.0 overview summary
Almost all logic lives in Rancher Server; the Agent just sits in the deployed k8s
as a TCP proxy server for Rancher Server
● Rancher Server
a. All data for Rancher is stored as CRDs in Kubernetes (Rancher's resources are translated into CRDs)
b. Rancher's API is a kind of proxy to the Kubernetes API
c. Rancher has various controllers that watch CRD resources in the parent k8s to deploy k8s clusters
(Management Controllers)
d. Rancher has various controllers that watch CRD resources in the parent k8s to inject data
into the deployed k8s (User Controllers)
e. Uses websocket sessions to access the deployed nodes or k8s clusters
● Rancher Agent
a. Establishes a websocket to provide a TCP proxy
b. Periodically checks whether the node needs to create any file or run any container
If we want to know more about how Rancher maintains
Kubernetes clusters, it is enough to look at just the Rancher Server,
because the Agent only provides a proxy.
13
1. Rancher 2.0 Overview
1. Rancher 2.0 Overview
2. Deep dive Rancher Server
2.1. Rancher API
2.2. Rancher Controllers
2.3. Controller/Context
3. Dependent Library
3.1. Norman Framework
3.2. Kontainer-Engine
3.3. RKE
14
Agenda
2.1. Rancher API
Server
API Controllers
CRD
Kind: Cluster
CRD
Kind: Node
Node1 Node2 Node3
rancher
node-agent
rancher
node-agent
rancher
node-agent
rancher
cluster-agent
Child Kubernetes
deployed by Rancher
All data stored
as CRD in k8s
Point 2
Point 1
Provide API
15
2. Deep dive Rancher Server
Rancher API Overview
➢ All data used/created by Rancher is stored as Kubernetes resources
○ Which Kubernetes cluster is used depends on the API path
■ Parent Kubernetes
■ Child Kubernetes
➢ Anything you can do via the Rancher API (Management API) can also be done by
calling the k8s API directly
➢ The Rancher API allows you to create almost all resource types in k8s, not only CRDs
○ Requests are proxied to k8s after some manipulation, such as adding annotations or labels
➢ The API can be classified into 5 types (this is not an official classification)
16
2. Deep dive Rancher Server 2.1. Rancher API
5 types of API
Server
Controllers
API
Parent Kubernetes
➢ The API can be classified into 5 types
➢ Some APIs are only for the agent
○ API for user
■ Management
■ Auth
■ K8s Proxy
○ API for agent
■ Dialer
■ RKE Node Config
Auth API
Management API
K8s Proxy API
Dialer API
RKE Node Config API
Main
/v3-public
/v3/token
/v3/
/k8s/clusters
/v3/connect
/v3/connect/register
/v3/connect/config Agent
User
17
2. Deep dive Rancher Server 2.1. Rancher API
Management API
Server
Controllers
API
Parent Kubernetes
Auth API
Management API
K8s Proxy API
Dialer API
RKE Node Config API
Main
/v3/
Child Kubernetes
deployed by Rancher
Create/Update/Get
Resource
Create/Update/Get
Resource
POST
/v3/cluster
POST
/v3/project/
<cluster-id>:<project-id>/pods
CRD
Cluster
PodAgent
depending on Path
Use TCP Proxy
Cluster Agent provide
18
2. Deep dive Rancher Server 2.1. Rancher API
Management API
➢ This is the main API for Rancher
These resources are created by the
Management API
19
2. Deep dive Rancher Server 2.1. Rancher API
Management API
➢ Provides CRUD for almost all CRDs (like cluster) and k8s resources (like pod)
➢ Uses the Norman framework
○ Schema definitions can be seen at the following paths
■ types/apis/management.cattle.io/v3/schema/schema.go
■ types/apis/cluster.cattle.io/v3/schema/schema.go
■ types/apis/project.cattle.io/v3/schema/schema.go
➢ Depending on the resource type, the Management API uses the proper data store
○ Parent Kubernetes for CRDs like Cluster
○ Child Kubernetes for Kubernetes core resources like Pod
➢ It does not create the actual resources or do provisioning
○ The API just creates the Cluster CRD in the parent Kubernetes
○ Provisioning is the responsibility of the controllers
20
2. Deep dive Rancher Server 2.1. Rancher API
Auth API
Server
Controllers
API
Parent Kubernetes
Management API
K8s Proxy API
Dialer API
RKE Node Config API
Main
CRD
User
Auth API
/v3-public
/v3/token
Authenticate with User CRD
resource for Rancher API
Get Token
21
2. Deep dive Rancher Server 2.1. Rancher API
K8s Proxy API
Server
API
Parent Kubernetes
Management API
Dialer API
RKE Node Config API
Main
Child Kubernetes
deployed by Rancher
CRD
Token
Auth API
Authenticate with User CRD
resource for Rancher API
K8s Proxy API
Controllers
Websocket
Sessions
Agent
Call Child K8s API via TCP Proxy via Websocket
GET /k8s/clusters/<cluster>
/api/v1/componentstatuses
/k8s/clusters
GET
/api/v1/componentstatuses
22
2. Deep dive Rancher Server 2.1. Rancher API
K8s Proxy API
➢ All requests to the Kubernetes clusters deployed by Rancher are *authenticated* by this API
➢ If authentication succeeds, the request is proxied to k8s via the websocket session to
the Cluster Agent
➢ Uses the Impersonate-User and Impersonate-Group HTTP headers to propagate user
information to k8s, and always uses the same ServiceAccount token created
when the cluster was deployed
➢ Authorization is done by the deployed k8s, not the K8s Proxy
○ Rancher's controllers inject roles, clusterroles, and their bindings into the deployed
k8s according to the values of roletemplate (CRD) resources
23
2. Deep dive Rancher Server 2.1. Rancher API
K8s Proxy Implementation
Node Agent
Node A ….
/k8s/clusters/<id of clusterA>
Cluster Agent
Cluster A
As you can see here, user access to k8s
goes through the websocket session to
the Cluster Agent
24
2. Deep dive Rancher Server 2.1. Rancher API
We always use the same ServiceAccount, which is
created for each cluster.
The Impersonate-User and Impersonate-Group HTTP
headers are used to tell the k8s cluster who the user is
Dialer API
Server
API
Parent Kubernetes
Management API
RKE Node Config API
Main
Child Kubernetes
deployed by Rancher
Auth API
K8s Proxy API
Controllers
Websocket
Sessions
Agent
Dialer API
/v3/connect
/v3/connect/register
wss://<rancher-server>/v3/connect
CRD
ClusterRegisterToken
Start Provide
TCP Proxy via websocket
Check which cluster
Does agent belong to
Add websocket session for “K8s Proxy” and
Controllers to use TCP Proxy
25
2. Deep dive Rancher Server 2.1. Rancher API
Dialer API
➢ This is the most important API from the Agent's perspective
➢ The Dialer API (TunnelServer) provides the websocket endpoint and maintains
sessions for the controllers in Rancher
➢ Every Cluster/Node Agent establishes 1 websocket session and provides a TCP proxy
➢ The websocket sessions established here are used by the controllers and the K8s
Proxy API
○ If an agent fails to establish the websocket session, the controllers in Rancher cannot do
anything
○ Keep in mind that Rancher will also fail to proxy k8s access, because the K8s Proxy API
uses the websocket session to the Cluster Agent
26
2. Deep dive Rancher Server 2.1. Rancher API
Dialer API Implementation
Rancher
Controllers
Agent
wss://<rancher-server>/v3/connect
Node A
Provide
TCP Proxy
Interface to
- lookup proper websocket session
- start/maintain connection over websocket
TCP 127.0.0.1:443
Rancher
K8s proxy API
Various components of Rancher Server use websocket session to
access docker/k8s running on the target node
Cluster A
27
2. Deep dive Rancher Server 2.1. Rancher API
RKE Node Config API
Server
API
Parent Kubernetes
Management API
Main
Child Kubernetes
deployed by Rancher
Auth API
K8s Proxy API
Controllers
Agent
Dialer API
RKE Node Config API/v3/connect/config
CRD
Cluster
RKE
library
Check Config
Generate
NodeConfig
According to NodeConfig
- Create File
- Create container via docker
28
2. Deep dive Rancher Server 2.1. Rancher API
RKE Node Config API
➢ This API is only for Node Agents on k8s clusters deployed by RKE
○ If the Node Agent is running on GKE or EKS, this API always returns HTTP status code 404
➢ This API returns a NodeConfig object which includes the following
○ Files to create
○ Processes to run (containers)
○ Certificates to use
➢ The NodeConfig object is generated from the rancherKubernetesEngineConfig
attribute of the Cluster object
29
2. Deep dive Rancher Server 2.1. Rancher API
RKE Node Config API Implementation
Node Agent
Node A
/v3/connect/config
Create files/Run containers
based on the NodeConfig
Node
Config
● What process need to run
● What file need to create
● What certificate need to use
Get NodeConfig for Node A
rke generate
NodeConfig based on
rancherKubernetesEngineConfig which
is one of the attributes in Cluster CRD
Cluster A 30
2. Deep dive Rancher Server 2.1. Rancher API
1. Rancher 2.0 Overview
2. Deep dive Rancher Server
2.1. Rancher API
2.2. Rancher Controllers
2.3. Controller/Context
3. Dependent Library
3.1. Norman Framework
3.2. Kontainer-Engine
3.3. RKE
31
Agenda
2.2. Rancher Controllers
Server
API Controllers
CRD
Kind: Cluster
Node1 Node2 Node3
rancher
node-agent
rancher
node-agent
rancher
node-agent
rancher
cluster-agent
Child Kubernetes
deployed by Rancher
CRD
Kind: Node
Watch CRD
Deploy
Monitor Cluster/Sync Data
Call the docker/k8s API via websocket if needed.
Never access the docker/k8s API directly from Rancher Server
Websocket session
Point 3
Point 4
Point 5
32
2. Deep dive Rancher Server
2.2.1 Rancher Controllers Overview
➢ The Rancher API just creates CRD resources in k8s
➢ Actual provisioning/configuration is done by controllers when they detect
a change in k8s
➢ Rancher controllers watch resources in the child k8s, not only in the parent k8s
➢ Rancher runs many controllers, more than 40 altogether
➢ The controller implementations actively use the Norman framework
33
2. Deep dive Rancher Server 2.2. Rancher Controllers
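The common pattern all of these controllers share can be sketched as a toy event loop. Real controllers use Norman/client-go informers rather than a plain channel; this is only the shape of the idea:

```go
package main

import "fmt"

// Toy version of the watch-then-handle pattern every Rancher controller
// follows: events about a resource arrive on a channel (standing in for
// an informer), and registered handlers run for each key.
type Handler func(key string) error

type Controller struct {
	handlers []Handler
}

func (c *Controller) AddHandler(h Handler) { c.handlers = append(c.handlers, h) }

func (c *Controller) Run(events <-chan string) {
	for key := range events {
		for _, h := range c.handlers {
			if err := h(key); err != nil {
				// A real controller would requeue the key with backoff.
				fmt.Println("requeue", key, "after error:", err)
			}
		}
	}
}

func main() {
	c := &Controller{}
	c.AddHandler(func(key string) error {
		fmt.Println("cluster-provisioner-controller saw", key)
		return nil
	})
	events := make(chan string, 1)
	events <- "c-abc12"
	close(events)
	c.Run(events)
}
```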
API Controllers
Management
Controllers
Cluster(User)
Controllers
Workload Controllers
4 types of Controllers
Server
API
Controllers
Parent Kubernetes
Create
Resource
Watch
Resource
➢ Rancher controllers can be classified
into 4 groups
➢ Each group has its own trigger to start
➢ Triggered when the Server starts
○ API Controllers
○ Management Controllers
➢ Triggered when a new cluster is detected
○ Cluster(User) Controllers
○ Workload Controllers
34
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
API Controllers
Management
Controllers
Cluster(User)
Controllers
Workload Controllers
API Controllers
Server
API
Controllers
Parent Kubernetes
Create
Resource
Watch
Resource
Configure
➢ Watch CRD resources related to API
server configuration
○ settings
○ dynamicschemas
○ nodedrivers
➢ Configure the API server according to
resource changes
35
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
API Controllers
Management
Controllers
Cluster(User)
Controllers
Workload Controllers
Management Controllers
Server
API
Controllers
Parent Kubernetes
Create
Resource
Watch
Resource
Provisioning/Update Cluster
Start Cluster(User),
Workload Controllers
Child Kubernetes
deployed by Rancher
➢ Watch Cluster/Node related CRDs
➢ Provision/update the cluster according
to resource changes
➢ After provisioning, start the Cluster(User)
and Workload Controllers to begin data
sync and monitoring
36
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
API Controllers
Management
Controllers
Cluster(User)
Controllers
Workload Controllers
Cluster(User) Controllers
Child Kubernetes
deployed by Rancher
Server
Controllers
Parent Kubernetes
Create
Resource
Watch
ResourceCreate
Resource
Watch
Resource
Update/Create CRD
According to Child K8s
Update/Create
Resource including Pod
According to Parent K8s CRD
37
Cluster CRD
Secret
Alerts CRD
Status
Spec
Node
For updating CRD in Parent K8s
Resource Sync between Parent and Child K8s
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
API Controllers
Management
Controllers
Cluster(User)
Controllers
Workload Controllers
Workload Controllers
Child Kubernetes
deployed by Rancher
Server
API
Controllers
Parent Kubernetes
Create
Resource
Watch
ResourceCreate
Resource
Watch
Resource
Simple custom controllers that extend k8s
➢ Watch only resources in the child k8s
➢ Create/update/delete related
resources
➢ These controllers are more about
enhancing k8s features themselves
38
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
2.2.2. Important Management Controllers
Controllers
39
API Controllers
Management
Controllers
Cluster(User)
Controllers
Workload Controllers
Server
API
Create
Resource
Watch
Resource
Parent Kubernetes
2. Deep dive Rancher Server 2.2. Rancher Controllers
Cluster Controller (pkg/controllers/management)
Overview
➢ Deploys the actual Kubernetes cluster via Kontainer-Engine
➢ After the child Kubernetes is deployed, does the following
○ Makes sure cluster-agent/node-agent run on the child Kubernetes
○ Makes sure the Cluster(User) Controllers start against the child Kubernetes
➢ If the cluster's attributes have changed, updates the cluster via
Kontainer-Engine
➢ Updates the cluster status based on Node (nodes.management.cattle.io)
information, which is synced with core/v1 Node information
40
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
Cluster Controller Implement (pkg/controllers/management)
Parent k8s
Server
Cluster Controller
(one of management controllers)
handlers
lifecycles
cluster-provisioner-controller
cluster-agent-controller
cluster-scoped-gc
cluster-deploy
cluster-stats
CRD
Cluster A
Informer
Child k8s
Cluster A
watch
Execute
deploy
Node Agent Cluster Agent
deploy
Cluster(User) Controllers
Alerts ingress ...
Run Cluster Controllers for Cluster A
CRD
Node A
CRD
Node B
Update Cluster Collect status
41
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
Cluster Controller Implement (pkg/controllers/management)
➢ [lifecycle] cluster-provisioner-controller (clusterprovisioner/provisioner.go)
○ Create: Initializes the cluster's condition object and calls the Create RPC of the proper Kontainer-Engine driver (rke, gke, ...)
○ Update: Calls the Update RPC of the proper Kontainer-Engine driver if the RancherKubernetesEngineConfig (generated by
translating each driver's config) has changed from before
○ Remove: Calls the Remove RPC of the proper Kontainer-Engine driver
➢ [lifecycle] cluster-agent-controller (usercontrollers/controller.go)
○ Update: Re/starts all controllers related to UserContext and UserOnlyContext
○ Remove: Stops all controllers related to UserContext and UserOnlyContext
➢ [lifecycle] cluster-scoped-gc (clustergc/cluster_scoped_gc.go)
○ Remove: Removes the cluster-name finalizer from objects the cluster depends on (roletemplate, project, ...)
➢ [handler] cluster-deploy (clusterdeploy/clusterdeploy.go)
○ Responsible for running cluster-agent/node-agent on the child kubernetes
➢ [handler] cluster-stats (clusterstats/statsaggregator.go)
○ Updates the cluster status by collecting each machine's status, like the number of active pods and memory consumed
These handlers are triggered by watching clusters.management.cattle.io (CRD) resources
42
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
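The Create/Updated/Remove split used by the lifecycle handlers above can be sketched as follows. The interface shape mirrors the idea behind Norman's object lifecycle; the types here are stubs, not Rancher's code:

```go
package main

import "fmt"

// Stub standing in for the Cluster CRD object.
type Cluster struct {
	Name    string
	Deleted bool
}

// Lifecycle is the shape a handler like cluster-provisioner-controller
// implements: Create on first sight, Updated on change, Remove on delete.
type Lifecycle interface {
	Create(obj *Cluster) error
	Updated(obj *Cluster) error
	Remove(obj *Cluster) error
}

type provisioner struct{}

func (p *provisioner) Create(o *Cluster) error  { fmt.Println("provision", o.Name); return nil }
func (p *provisioner) Updated(o *Cluster) error { fmt.Println("reconcile", o.Name); return nil }
func (p *provisioner) Remove(o *Cluster) error  { fmt.Println("tear down", o.Name); return nil }

// dispatch decides which hook fires, roughly how the framework does:
// deletion wins, first sight means Create, anything else is Updated.
func dispatch(l Lifecycle, seen map[string]bool, o *Cluster) error {
	switch {
	case o.Deleted:
		return l.Remove(o)
	case !seen[o.Name]:
		seen[o.Name] = true
		return l.Create(o)
	default:
		return l.Updated(o)
	}
}

func main() {
	seen := map[string]bool{}
	p := &provisioner{}
	dispatch(p, seen, &Cluster{Name: "c-abc12"})                // provision
	dispatch(p, seen, &Cluster{Name: "c-abc12"})                // reconcile
	dispatch(p, seen, &Cluster{Name: "c-abc12", Deleted: true}) // tear down
}
```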
Node Controller (pkg/controllers/management)
Overview
➢ Deploys a VM via the docker-machine command if needed
➢ Runs the rancher node-agent after the VM is deployed
➢ The node-agent establishes a websocket session and registers its own node
to the cluster
○ This triggers the Cluster Controller to provision the Kubernetes-related processes
43
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
Node Controller Implement (pkg/controllers/management)
Parent k8s
CRD
Node A
CRD
Node B
Server
Node Controller
(one of management controllers)
handlers
lifecycles
node-controller
cluster-provisioner-controller
cluster-stats
nodepool-provisioner
Informer
watch
Execute
VM
Node Agent
Managements
Controllers
Cluster
Controller
NodePool
Controller
Just trigger handlers
Run Node Agent
Create VM If
doesn’t exist
docker-machine
trigger handlers
Create VM
44
Call wss://<server>/v3/connect/register
To register node into specific cluster
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
Node Controller Implement (pkg/controllers/management)
➢ [lifecycle] node-controller (node/controller.go)
○ Create: Initializes the nodeConfig object and stores it as a secret resource in k8s
○ Update: If the Node doesn't have the condition "Provisioned is True", tries to create a new VM on a public/private
cloud via the docker-machine command
○ Remove: Just deletes the nodeConfig (secret) in k8s
➢ [handler] cluster-provisioner-controller (clusterprovisioner/provisioner.go)
○ Enqueue job to Cluster Controller
➢ [handler] cluster-stats (clusterstats/statsaggregator.go)
○ Enqueue job to Cluster Controller
➢ [handler] nodepool-provisioner (nodepool/nodepool.go)
○ Enqueue job to NodePool Controller
These handlers are triggered by watching nodes.management.cattle.io (CRD) resources
45
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
Node Pool Controller (pkg/controllers/management)
Overview
➢ This controller allows users to create multiple nodes as a group
➢ If we specify a group of 3 nodes, this controller automatically creates a
Node CRD for each, and the Node Controller does the actual provisioning as usual
46
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
Node Pool Controller Implement (pkg/controllers/management)
Parent k8s
CRD(Kind: Node)
Node A
CRD(Kind NodePool)
Nodes For Cluster A
CRD
Node B
Server
Nodes For Cluster A
- Node A
- Role: Etcd
- Node B
- Role: Control
- Node C
- Role: Worker
Note:
Just image about
what nodepool defined
NodePool Controller
lifecycles
nodepool-provisionerInformer
CRD
Node C
Create If missing
Check
all Nodes exist
Watch
Execute
Confirmed exist
47
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
Node Pool Controller Implement (pkg/controllers/management)
➢ [lifecycle] nodepool-provisioner (nodepool/nodepool.go)
○ Update: Checks whether all Node CRDs in the nodepool have been created; if not, creates the
missing Node CRDs in k8s
○ Remove: Deletes the Node CRDs described by the deleted nodepool
This handler is triggered by watching nodepools.management.cattle.io (CRD) resources
48
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
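The Update hook above can be sketched as a small reconcile function. The pool fields and the node naming scheme are invented for illustration:

```go
package main

import "fmt"

// Toy version of nodepool-provisioner's Updated hook: compare the pool's
// desired quantity with the existing node CRDs and create the missing ones.
type NodePool struct {
	Prefix   string
	Quantity int
}

// reconcile returns the names of the Node CRDs that still need creating;
// the real controller would POST each one to the parent k8s.
func reconcile(pool NodePool, existing map[string]bool) []string {
	var created []string
	for i := 1; i <= pool.Quantity; i++ {
		name := fmt.Sprintf("%s%d", pool.Prefix, i)
		if !existing[name] {
			created = append(created, name)
		}
	}
	return created
}

func main() {
	pool := NodePool{Prefix: "worker-", Quantity: 3}
	existing := map[string]bool{"worker-1": true}
	fmt.Println(reconcile(pool, existing)) // prints: [worker-2 worker-3]
}
```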
2.2.3. Important User Controllers
49
API Controllers
Management
Controllers
Cluster(User)
Controllers
Workload Controllers
Server
API
Controllers
Parent Kubernetes
Create
Resource
Watch
Resource
2. Deep dive Rancher Server 2.2. Rancher Controllers
Alerts Controller (pkg/controllers/user/alerts)
Overview
➢ Users can define alerts against the child k8s to check
○ Events, resource status, node mem/cpu, ...
➢ Users can define notifiers like slack, email, webhook
○ The notifier definitions are used when an alert fires
➢ Prometheus Alertmanager is deployed in the child k8s to handle sending
notifications
○ Any other notification system can easily be supported, as long as Alertmanager supports it
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Alerts Controller Implement (pkg/controllers/user/alerts)
Parent k8s
CRD
ClusterAlerts
CRD
Node
Server
ClusterAlerts Controller
handlers
cluster-config-syncer
cluster-alert-deployer
Informer
watch
Execute
Watchers
EventWatcherNodeWatcher
ProjectAlerts
Controller
Child k8s
alertmanager
Config.yaml
notification.tmpl
secret
mount
Pod
Maintain config files
According to ClusterAlerts CRD
Deploy alertmanager if need
….
Check Node violate
ClusterAlerts
Check Event
violate ClusterAlerts
Send Notify
Send Alert via email, slack
Same as
ClusterAlerts
51
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Alerts Controller Implement(1/2) (pkg/controllers/user/alerts)
➢ Watch clusteralerts.management.cattle.io resource @Parent K8s
○ [handler] cluster-alert-deployer
■ Make sure alert manager(prom/alertmanager/) is running with alert-manager
helper(rancher/alertmanager-helper) on Child k8s when notifier and alerts
resources are created
● The alertmanager-helper just watches the config and reloads alertmanager via its API
● Alertmanager reads its config file (a secret resource, via a secret mount) and
is responsible for delivering alerts to the notifiers (slack, webhook, ...)
○ [handler] cluster-config-syncer
■ Maintain config file on secret resource according to the change of ClusterAlerts
➢ Watch projectalerts.management.cattle.io resource @Parent K8s
○ [handler] project-alert-deployer
■ Same as “cluster-alerts-deployer”
○ [handler] project-alert-syncer
■ Same as “cluster-config-syncer” 52
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Alerts Controller Implement(2/2) (pkg/controllers/user/alerts)
➢ Watch notifiers.management.cattle.io resource @Parent K8s
○ [handler] notifier-config-syncer
■ Same as “cluster-config-syncer”
➢ Runs alert evaluation threads (called watchers) for Events/Pods/Nodes/...
○ One thread (watcher) per resource type
○ Periodically evaluates the conditions the alerts describe against the current status; if the current
status violates an alert's condition, calls http://<alert-manager service ip>/api/alerts via the
websocket session to the cluster-agent to let alertmanager know a new alert has fired
53
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
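One tick of such a watcher can be sketched like this. The rule and stat shapes are invented for illustration; the real watcher would then POST the fired alerts to alertmanager through the cluster-agent tunnel:

```go
package main

import "fmt"

// Rule is a stand-in for one alert condition, here a node memory threshold.
type Rule struct {
	Name         string
	MemThreshold float64 // fraction of memory used that fires the alert
}

// NodeStat is a stand-in for the current status the watcher collected.
type NodeStat struct {
	Node    string
	MemUsed float64
}

// evaluate is one watcher tick: compare every rule against every node and
// return the alerts that fired.
func evaluate(rules []Rule, stats []NodeStat) []string {
	var fired []string
	for _, r := range rules {
		for _, s := range stats {
			if s.MemUsed > r.MemThreshold {
				fired = append(fired, fmt.Sprintf("%s on %s", r.Name, s.Node))
			}
		}
	}
	return fired
}

func main() {
	rules := []Rule{{Name: "high-mem", MemThreshold: 0.9}}
	stats := []NodeStat{{Node: "node1", MemUsed: 0.95}, {Node: "node2", MemUsed: 0.4}}
	fmt.Println(evaluate(rules, stats)) // prints: [high-mem on node1]
}
```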
NodeEndpointsController (pkg/controllers/user/endpoints)
Overview
➢ In Kubernetes, it's not easy to tell on the spot which addresses, ports, and protocols are
exposed outside the cluster
➢ The NodeEndpointsController checks all resources possibly exposed to the outside
and stores all exposed address/port/protocol information in an annotation
$kubectl get nodes rancher003 -o=custom-columns=NAME:.metadata.annotations
NAME
map[field.cattle.io/publicEndpoints:[
{"nodeName":"local:machine-z6qww","addresses":["rancher003"],"port":30080,"protocol":"TCP",
"serviceName":"cattle-system:cattle-service","allNodes":true},
{"nodeName":"local:machine-z6qww","addresses":["rancher003"],"port":30443,"protocol":"TCP",
"serviceName":"cattle-system:cattle-service","allNodes":true}
]
54
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
NodeEndpointsController Implement
(pkg/controllers/user/endpoints)
Parent k8s
CRD(Kind: Node)
Node A
Server core.v1.Node Controller
handlers
nodeEndpointControllerInformerWatch
Execute
Child k8s
Core.v1.Node
Node A
Service A
Lookup Node Name
Corresponding to core.v1.node Node A
Get All Service exposed outside
Update field.cattle.io/publicEndpoints
to store all exposed endpoint
55
1.
2.
3.
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
NodeEndpointsController Implement
(pkg/controllers/user/endpoints)
➢ Watch core.v1.nodes resources @ child k8s
○ [handler] nodesEndpointController
■ Checks all endpoints exposed outside the child k8s cluster via NodePort, LB services, or
hostPort pods, and stores these endpoints in the "field.cattle.io/publicEndpoints"
annotation on the core/v1 Node resource in the child k8s
56
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
EventSyncer (pkg/controllers/user/eventssyncer)
Overview
➢ Watches Event resources @child k8s and stores all events in the
parent k8s as ClusterEvent CRDs
○ It doesn't update event status/messages; it always creates new events
➢ ClusterEvent CRDs are deleted 24 hours after creation by
/pkg/controllers/management/clusterevents/clustereventscleanup.go
➢ This controller is currently not enabled by default because of a scaling issue
○ https://github.com/rancher/rancher/issues/11771
○ If you want to enable it, it's better to modify the logic to store events in an external database
57
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
EventSyncer Implement (pkg/controllers/user/eventssyncer)
Parent k8s
CRD
ClusterEvents
Server
core.v1.Event Controller
handlers
events-syncerInformer
Execute
Child k8s Event
Watch
Translate event into
clusterevents and Create
clustereventscleanup
(pkg/controllers/management/)
Goroutine
Watch
Delete clusterevent after 24
hours passed from create
58
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
EventSyncer Implement (pkg/controllers/user/eventssyncer)
➢ Watches core.Event resources @ child k8s
○ [handler] events-syncer
■ Creates/updates the ClusterEvents corresponding to event objects in the child k8s.
These ClusterEvents CRD resources are deleted after 24 hours by
pkg/controllers/management/clusterevents/clustereventscleanup.go
○ This controller is not enabled by default because of a scaling issue
■ https://github.com/rancher/rancher/issues/11771
59
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
HealthSyncer (pkg/controllers/user/healthsyncer)
Overview
➢ Starts a thread to periodically check the component status of the child k8s
○ Calls <child k8s endpoint>/api/v1/componentstatuses periodically
○ Stores the component status in Status.ComponentStatuses of the v3.Cluster object @parent k8s
○ If this controller fails to get the component status from the child k8s, it stops all UserControllers
and periodically checks whether the cluster has come back alive
60
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
HealthSyncer Implement (pkg/controllers/user/healthsyncer)
Parent k8s
CRD
Cluster A
Server
healthSyncer
Goroutine
Child k8s
Cluster(User) Controllers
ClusterAlert Controller Notifier Controller ...
GET
/api/v1/componentstatuses
Stop if failed to
get status
Re-start
If status got alive
Update
Status.ComponentStatuses
61
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Cluster(Project)Logging Controller
(pkg/controller/user/logging/)
Overview
➢ Users can specify where all container logs should be sent
➢ The logs are sent by fluentd, which is deployed by this controller
➢ If the user chooses the embedded Elasticsearch, Elasticsearch is deployed
onto the child k8s as well
62
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Cluster(Project)Logging Controller Implement
(pkg/controller/user/logging/)
[Diagram: the ClusterLogging Controller watches the ClusterLogging CRD on the Parent k8s via an informer and executes the cluster-logging-controller lifecycle. On the Child k8s it deploys a fluentd DaemonSet and updates the cluster.conf and project.conf ConfigMaps mounted into it; the DaemonSet also mounts /var/lib/docker/containers/, /var/log/containers/, /var/log/pods and /var/lib/rancher/rke/log via HostPath and sends the logs out (the log destination itself is out of scope). The ProjectLogging Controller is almost the same as the ClusterLogging Controller.]
63
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Cluster(Project)Logging Controller Implement
(pkg/controller/user/logging/)
➢ Watch clusterloggings.cattle.io resource @Parent K8s
○ [lifecycle] cluster-logging-controller
■ Create: Deploy fluentd daemon set and initial config
● Create 2 ConfigMaps @Child K8s: one to store cluster.conf, the other to
store project.conf.
● Deploy a daemonset @Child K8s using the rancher/fluentd docker image. This
daemonset has the following hostpath volumes
○ /var/lib/docker/containers (created by docker)
○ /var/log/containers, /var/log/pods (created by k8s)
○ /var/lib/rancher/rke/log (created by rke)
■ Updated: Update ConfigMap (cluster.conf) according to spec of clusterlogging resource.
Configuration template can be seen in
pkg/controllers/user/logging/generator/cluster_template.go
■ Remove: Delete the namespace including all logging related resources
The ProjectLogging Controller
has very similar behaviour
64
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
User Node Remove (pkg/controller/user/noderemove/)
Overview
➢ This controller is in charge of deleting core.v1.Node when
nodes.management.cattle.io resource is deleted
Implementation
➢ Watch nodes.management.cattle.io @Parent K8s
○ [lifecycle] user-node-remove
■ Remove: delete the core.Node resource @Child K8s corresponding to the Node CRD
65
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
NodeSyncer (pkg/controller/user/nodesyncer/)
Overview
➢ This controller is in charge of updating the nodes.management.cattle.io CRD
resource so that it has the same status as the core.v1.Node
core.v1.Node @Child K8s is translated into the nodes.management.cattle.io CRD @Parent K8s:
Spec -> Spec.InternalNodeSpec
Status -> Status.InternalNodeStatus
Annotations -> Status.NodeAnnotation
Labels -> Status.NodeLabels
Name -> Status.Nodename
66
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
NodeSyncer Implement (pkg/controller/user/nodesyncer/)
[Diagram: the core.v1.Node Controller watches core.v1.Node @Child k8s via an informer and executes the nodeSyncer handler; the management.cattle.io.v3.Node Controller watches the Node CRD @Parent k8s and executes the machinesSyncer and machineLabelSyncer handlers, which get information (Spec, Annotations and Labels) from the Child node, store it into the Node CRD, and push desired labels/annotations back to the Child node.]
67
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
NodeSyncer Implement(1/2) (pkg/controller/user/nodesyncer/)
➢ Watch core.Node @Child K8s
○ [handler] nodeSyncer
■ Just enqueue job to NodeController
➢ Watch nodes.management.cattle.io @Parent K8s
○ [handler] machinesSyncer
■ Check all core.Node resources @Child K8s. If a nodes.management.cattle.io
resource corresponding to a core.Node is missing @Parent K8s, create a
new nodes.management.cattle.io. If a nodes.management.cattle.io resource
corresponds to no core.Node, try to delete that nodes.management.cattle.io resource
■ Update nodes.management.cattle.io resource
● Spec (core.Node) -> Spec.InternalNodeSpec (nodes.management.cattle.io)
● Status (core.Node) -> Status.InternalNodeStatus (nodes.management.cattle.io)
● Calculate requested resource by pod -> Status.Request (nodes.manageme…)
● Annotations (core.Node) -> Status.NodeAnnotation (nodes.manageme…
● Labels (core.Node) -> Status.NodeLabels (node.manageme…
● Name (core.Node) -> Status.NodeName (node.manageme… 68
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
NodeSyncer Implement(2/2) (pkg/controller/user/nodesyncer/)
➢ Watch nodes.management.cattle.io @Parent K8s
○ [handler] machinesLabelSyncer
■ If the nodes.management.cattle.io resource @Parent K8s has Spec.DesiredNodeLabels
or Spec.DesiredNodeAnnotation, make sure the core.Node @Child K8s has
exactly the same annotations and labels
69
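The machinesLabelSyncer behaviour above can be modelled in a few lines of Go. This is a sketch on plain maps, not Rancher's code — `node` and `syncDesired` are hypothetical names; the real handler works on core/v1 Node objects through a k8s client.

```go
package main

import "fmt"

// node models the mutable metadata of a core/v1 Node on the child cluster.
type node struct {
	Labels      map[string]string
	Annotations map[string]string
}

// syncDesired models the machinesLabelSyncer handler: if the Node CRD on
// the parent cluster carries desired labels/annotations, force the child
// core/v1 Node to match exactly. Returns true if anything changed.
func syncDesired(n *node, desiredLabels, desiredAnn map[string]string) bool {
	changed := false
	if desiredLabels != nil && !equal(n.Labels, desiredLabels) {
		n.Labels = copyMap(desiredLabels)
		changed = true
	}
	if desiredAnn != nil && !equal(n.Annotations, desiredAnn) {
		n.Annotations = copyMap(desiredAnn)
		changed = true
	}
	return changed
}

func equal(a, b map[string]string) bool {
	if len(a) != len(b) {
		return false
	}
	for k, v := range a {
		if b[k] != v {
			return false
		}
	}
	return true
}

func copyMap(m map[string]string) map[string]string {
	out := make(map[string]string, len(m))
	for k, v := range m {
		out[k] = v
	}
	return out
}

func main() {
	n := &node{Labels: map[string]string{"old": "x"}}
	changed := syncDesired(n, map[string]string{"role": "worker"}, nil)
	fmt.Println(changed, n.Labels["role"]) // true worker
}
```

Returning "changed" matters in the real controller too: it only issues an Update call against the child API server when something actually differs.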
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
RBAC Controllers (pkg/controller/user/rbac)
RoleTemplate
Overview
➢ This controller allows users to create a roletemplate, which corresponds
to a clusterrole in K8s
➢ A roletemplate can be assigned to a User in Rancher, and this information
propagates to all clusters so that every cluster has the same RBAC information
70
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
RBAC Controllers (pkg/controller/user/rbac)
[Screenshot: adding a Cluster Member = assigning a specific roletemplate to a user —
you choose the roletemplate and the user to assign it to.]
71
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
RBAC Controllers Implement (pkg/controller/user/rbac)
[Diagram: the RoleTemplate Controller watches the RoleTemplate CRD @Parent k8s via an informer and executes the cluster-roletemplate-sync lifecycle; the ClusterRoleTemplateBinding Controller watches the ClusterRoleTemplateBinding CRD and executes the cluster-crtb-sync lifecycle. They check whether a binding resource refers to a RoleTemplate and then create (if needed) or update the corresponding ClusterRole and ClusterRoleBindings @Child k8s.]
72
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
RBAC: RoleTemplate Controller Implement
(pkg/controller/user/rbac)
➢ Watch roletemplate@Parent K8s
○ [lifecycle] cluster-roletemplate-sync
■ Update:
● Check if there is a projectroletemplatebinding referring to the roletemplate
● Check if there is a clusterroletemplatebinding referring to the roletemplate
● If there is, make sure the roletemplate's rules are the same as the clusterrole
corresponding to the roletemplate @Child K8s
■ Remove:
● Remove the clusterrole @Child K8s corresponding to the deleted roletemplate
73
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
RBAC: ClusterRoleTemplateBinding Controller
Implement (pkg/controller/user/rbac)
➢ Watch clusterroletemplatebindings.management.cattle.io @Parent K8s
○ [lifecycle] cluster-crtb-sync
■ Create: Make sure the following
● there is a clusterrole @Child K8s corresponding to the roletemplate the
clusterroletemplatebinding refers to
● there are clusterrolebindings @Child K8s corresponding to the
clusterroletemplatebinding
■ Update: same as Create
■ Remove: make sure the clusterrolebindings corresponding to the
clusterroletemplatebinding are deleted @Child K8s
74
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Secret Controller (pkg/controller/user/secret)
Overview
➢ Thanks to this controller, a user can create a secret resource that applies to all namespaces
➢ If the user creates a secret for all namespaces, this controller watches the
namespace resource @Child K8s and creates the secret in each new namespace
75
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Secret Controller Implement (pkg/controller/user/secret)
[Diagram: a Namespace Controller watches Namespaces @Child k8s; its handler checks whether the namespace has the field.cattle.io/projectId annotation, gets all secrets in the project and creates them in the namespace. A Secret Controller watches Secrets @Parent k8s; its secretsController handler checks whether the secret has the field.cattle.io/projectId annotation and creates it on the Child k8s.]
76
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
Secret Controller Implement (pkg/controller/user/secret)
➢ Watch core.Namespaces @Child K8s
○ [handler] secretsController
■ If the namespace has the annotation (field.cattle.io/projectId), create all secrets under the
<project-name> namespace @Parent K8s in the Child K8s
➢ Watch core.Secret @Parent K8s
○ [lifecycle] secretsController
■ Create: if the secret has the annotation (field.cattle.io/projectId), create the secret in the
specified namespace in the Child K8s
■ Update: same as Create
■ Remove: delete the secret in the Child K8s as well
77
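The annotation check at the heart of the handlers above can be sketched as follows. This is a minimal model, assuming plain maps instead of real Secret objects; `propagate` is a hypothetical name for the copy step the secretsController performs against the Child k8s.

```go
package main

import "fmt"

// projectIDAnnotation is the annotation the controller looks for,
// as described on the slide.
const projectIDAnnotation = "field.cattle.io/projectId"

type secret struct {
	Name        string
	Annotations map[string]string
}

// propagate models the secretsController handler on the parent cluster:
// a secret is copied into the child cluster only when it carries the
// field.cattle.io/projectId annotation.
func propagate(s secret, childSecrets map[string]secret) bool {
	if _, ok := s.Annotations[projectIDAnnotation]; !ok {
		return false // not a project-scoped secret: ignore it
	}
	childSecrets[s.Name] = s
	return true
}

func main() {
	child := map[string]secret{}
	tagged := secret{Name: "db-cred", Annotations: map[string]string{projectIDAnnotation: "p-abc"}}
	plain := secret{Name: "other"}
	fmt.Println(propagate(tagged, child), propagate(plain, child)) // true false
}
```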
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
2.2.4. Important Workload Controllers
78
[Diagram: the four controller groups — API Controllers, Management Controllers, Cluster(User) Controllers and Workload Controllers — run inside the Server, creating and watching resources on the Parent Kubernetes.]
2. Deep dive Rancher Server 2.2. Rancher Controllers
ExternalIPServiceController (pkg/controllers/user/externalservice)
Overview
➢ A Headless Service (ClusterIP: None) is usually used to balance traffic
between Pods matched by a selector
➢ This controller extends Headless Services to support external IPs, which
have to be maintained manually in vanilla k8s
An Endpoints object having all the IP addresses as a
subset is created 79
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
ExternalIPServiceController Implement
(pkg/controllers/user/externalservice)
[Diagram: a Service Controller watches core.v1.Service @Child k8s via an informer and executes the externalIPServiceController handler, which checks the "field.cattle.io/ipAddresses" annotation and updates the Endpoints subset so that it has all the IP addresses described in the annotation.]
80
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
ExternalIPServiceController Implement
(pkg/controllers/user/externalservice)
➢ Watch core.v1.Service resource @ Child k8s
○ [handler] externalIpServiceController
■ Check the “field.cattle.io/ipAddresses” annotation on the service resource and create an
Endpoints object having the IP addresses described in the annotation
81
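The annotation-to-Endpoints step above can be sketched like this. It is a sketch under an assumption: the annotation value is treated here as a JSON array of IP strings, and `endpointsFromService` is a hypothetical name — the real handler builds a core/v1 Endpoints object through a k8s client.

```go
package main

import (
	"encoding/json"
	"fmt"
)

const ipAddressesAnnotation = "field.cattle.io/ipAddresses"

// endpointsFromService models the externalIpServiceController handler:
// read the ipAddresses annotation from a headless service and turn it
// into the address list for a single Endpoints subset. The annotation is
// assumed here to hold a JSON array of IP strings.
func endpointsFromService(annotations map[string]string) ([]string, error) {
	raw, ok := annotations[ipAddressesAnnotation]
	if !ok {
		return nil, nil // not managed by this controller
	}
	var ips []string
	if err := json.Unmarshal([]byte(raw), &ips); err != nil {
		return nil, err
	}
	return ips, nil
}

func main() {
	ann := map[string]string{ipAddressesAnnotation: `["10.0.0.5","10.0.0.6"]`}
	ips, err := endpointsFromService(ann)
	fmt.Println(ips, err) // [10.0.0.5 10.0.0.6] <nil>
}
```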
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
Rancher Ingress Controller (pkg/controllers/user/ingress)
Overview
➢ A Kubernetes Ingress rule allows you to specify only a Service as a backend
➢ But Rancher allows you to use Deployments/Pods as a backend for an Ingress rule
○ by letting this controller automatically create a Service for those Deployments/Pods
A Workload is actually a Pod, Deployment, ...
82
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
Rancher Ingress Controller Implement
(pkg/controllers/user/ingress)
[Diagram: an Ingress Controller watches Ingress resources @Child k8s. Its ingressWorkloadController handler checks, via the "field.cattle.io.ingress/state" annotation, whether the Ingress was created by Rancher (the annotation includes info about the target workload) and creates a NodePort Service with a workload_ingress_*** selector. The Workload Controller (pkg/controllers/user/workload/workload_common.go) watches Deployments/Pods through handlers such as syncDeployments and syncReplicasets and adds the workload label for the ingress selector; an ingressEndpointController handler gets the info needed to identify the pods.]
83
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
Rancher Ingress Controller Implement
(pkg/controllers/user/ingress)
➢ Watch extensions.ingress resources @ Child K8s
○ [handler] ingressWorkloadController
■ Create a Service for the deployment (seen as a workload in Rancher) specified by the ingress
resource
● Only ingress resources created via the Rancher API
(/v3/project/<cluster-id>/<cluster-id>:<project-name>/ingress) are the target
resources this controller deals with
● We can tell whether an ingress resource was created by the Rancher API via the annotation
“field.cattle.io.ingress/state”
84
2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
2.2.5. Controllers We didn’t cover this time
Management Controllers:
* pkg/controllers/management/auth
* pkg/controllers/management/catalog
* pkg/controllers/management/compose
* pkg/controllers/management/nodedriver
* pkg/controllers/management/podsecuritypolicy
Cluster(User) Controllers:
* pkg/controllers/user/helm
* pkg/controllers/user/namespacecompose
* pkg/controllers/user/networkpolicy
* pkg/controllers/user/nslabel
* pkg/controllers/user/pipeline
* pkg/controllers/user/usercompose
Workload Controllers:
* pkg/controllers/user/dnsrecord
* pkg/controllers/user/targetworkloadservice
* pkg/controllers/user/workload
API Controllers:
* pkg/api/controllers/dynamicschema
* pkg/api/controllers/settings
* pkg/api/controllers/whitelistproxy
85
2. Deep dive Rancher Server 2.2. Rancher Controllers
1. Rancher 2.0 Overview
2. Deep dive Rancher Server
2.1. Rancher API
2.2. Rancher Controllers
2.3. Controller/Context
3. Dependent Library
3.1. Norman Framework
3.2. Kontainer-Engine
3.3. RKE
86
Agenda
Context represents Controller groups
[Diagram: Contexts — ScaledContext and ManagementContext each hold credentials for the parent k8s; UserContext and UserOnlyContext each hold credentials for the child k8s.]
➢ Each Context has a group of controllers to run
➢ Each Context has a kubernetes config for a certain
environment
○ ScaledContext for the Parent k8s
○ ManagementContext for the Parent k8s
○ UserContext for the Child k8s and Parent k8s
○ UserOnlyContext for the Child k8s
➢ UserContext and UserOnlyContext are created for
each Child k8s
➢ All contexts except for UserOnlyContext have a
dialer to the “Node/Cluster Agent’s TCP Proxy Server”
and use it to access the Child k8s
87
2. Deep dive Rancher Server 2.3. Controller/Context
Context/Controllers Relation
➢ API Controllers — ScaledContext (clientconfig for parent): they watch API-related CRDs in the
parent k8s and act on them, e.g. replacing certificates. They are started when the rancher
server starts. All controllers can be seen in pkg/api/server/server.go.
➢ Management Controllers — ManagementContext (clientconfig for parent): they watch deployed
cluster/node related CRDs in the parent k8s and act on them, e.g. actually provisioning a
cluster or adding a node. They are started when the rancher server starts.
All controllers can be seen in pkg/controllers/management/controller.go.
➢ Cluster (User) Controllers — UserContext (clientconfig for parent and child): after a cluster is
deployed, these controllers are responsible for monitoring it and syncing data for the cluster
and its nodes; they watch many objects in both the parent and the child k8s. They are
re/started when a cluster change is detected by the userController (one of the management
controllers), which means there is an active controller set for each cluster.
All controllers can be seen in pkg/controllers/user/controllers.go.
➢ Workload Controllers — UserOnlyContext (clientconfig for child): they just watch resources in
the child k8s. They don't require access to the parent k8s and aim at extending what already
exists in the child k8s. The trigger to start them is the same as for the Cluster (User)
controllers. All controllers can be seen in pkg/controllers/user/controllers.go.
88
2. Deep dive Rancher Server 2.3. Controller/Context
How/When does Rancher start Controllers?
[Diagram: Execute Start is called on both the ScaledContext (Context) and the
ManagementContext (Context), each holding a clientconfig for the parent k8s — is it the
same? See vendor/github.com/rancher/types/config/context.go]
➢ A Context has the responsibility to run Controllers
➢ Context.Start is the function that starts the controllers the context is in charge of
89
2. Deep dive Rancher Server 2.3. Controller/Context
Register Concept
[Diagram: the ScaledContext and the ManagementContext each hold a clientconfig for the
parent k8s but different ManagementClient instances (A and B), so their controller instances
differ. Register pushes controllers (nodeController, userController, ...) onto the client's
starters list; Execute Start then starts only the controllers in starters.]
90
2. Deep dive Rancher Server 2.3. Controller/Context
Flow to start Controllers
Actual Example: app/app.go
Generate Context -> Register Controllers -> Context.Start (start the controllers)
➢ A context is always created when starting controllers in Rancher
➢ The management and scaled controllers are started when the
rancher server starts; see the code in app/app.go
91
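The Generate → Register → Start flow can be sketched as follows. This is a minimal model of the pattern, not Rancher's types: `starter` and `context` here are hypothetical stand-ins for Norman's starter list and for ScaledContext/ManagementContext.

```go
package main

import "fmt"

// starter is anything whose sync loop can be kicked off, mirroring how a
// Norman controller lands on a context's starters list the first time a
// client's Controller() method is called.
type starter struct {
	name    string
	started bool
}

// context models Rancher's Context pattern: Register pushes controllers
// onto the starters list, Start launches every registered controller once.
type context struct{ starters []*starter }

func (c *context) Register(name string) *starter {
	s := &starter{name: name}
	c.starters = append(c.starters, s)
	return s
}

func (c *context) Start() {
	for _, s := range c.starters {
		if !s.started { // idempotent: already-started controllers are skipped
			s.started = true
			fmt.Println("started", s.name)
		}
	}
}

func main() {
	ctx := &context{}
	ctx.Register("nodeController")
	ctx.Register("userController")
	ctx.Start()
}
```

The key point the slides make survives in the sketch: nothing runs at Register time; Context.Start is the single place where the whole registered group comes alive.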
2. Deep dive Rancher Server 2.3. Controller/Context
1. Rancher 2.0 Overview
2. Deep dive Rancher Server
2.1. Rancher API
2.2. Rancher Controllers
2.3. Controller/Context
3. Dependent Library
3.1. Norman Framework
3.2. Kontainer-Engine
3.3. RKE
92
Agenda
3.1. Norman (https://github.com/rancher/norman)
➢ Rancher actively uses this framework
➢ Provides an API framework that works with Kubernetes as backend storage
○ All data is stored in k8s as CRD resources (technically we can configure other storage as well)
➢ Provides an easy way to build Controllers on Kubernetes (a wrapper around client-go)
[Diagram: an app built on Norman has an API part and a Controller part. The API part creates resources as CRDs (Resource A, Resource B) in k8s and returns whether the app's resource information was stored; the Controller part watches the changes and does whatever you want asynchronously.]
93
3. Dependent Library
3.1.1. Norman API Part
[Diagram: same picture as before, highlighting the API part — it creates resources as CRDs and returns whether the resource information was stored in k8s.]
94
3. Dependent Library 3.1 Norman Framework
API Server
➢ This is a simple API server providing a ServeHTTP function
➢ The API server needs API Schemas (a Norman concept) holding the actual logic
➢ We can register multiple API Schemas
➢ The API server looks up the proper schema from the requested URL and HTTP method
95
3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
➢ Each API schema can define
○ how/where the data needs to be stored
■ Rancher uses CRDs in k8s as a datastore
○ the actual logic for each action
(CREATE, DELETE, UPDATE, LIST)
○ which HTTP methods this schema can accept
API Schema
https://github.com/rancher/norman/blob/master/types/types.go
96
3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
API Schema Generation
➢ A single struct representing a Rancher resource can be used for…
○ generating the API Schema used by norman.APIServer (norman/api/server.go)
■ https://github.com/rancher/norman/blob/aecae32b4ae6b73b9945cdedef5a5b0dafa11973/types/r
eflection.go#L75
○ Generating CRD definition (technically from API Schema)
■ https://github.com/rancher/norman/blob/aecae32b4ae6b73b9945cdedef5a5b0dafa11973/store/c
rd/init.go#L122
[Diagram: rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/cluster_types.go generates both the API Schema for Cluster and (technically from the API Schema) the CRD for Cluster; the CRD is used as the store.]
97
3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
How does the APIServer look up a schema?
[Diagram: ServeHTTP on the Norman API Server handling e.g. POST /v3/cluster:
1. choose the correct schema — match Schema.Version.Path (/v3, /v3/project, ...) and the
type name from reflect.TypeOf(<type A>).Name() (cluster, node, ...);
2. check whether the method is allowed (CollectionMethods: GET POST);
3. execute the proper handler (CreateHandler, DeleteHandler, ...) according to the HTTP
method and URL parameters.]
98
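The three-step lookup above can be sketched in Go. This is a toy model of the dispatch logic, not Norman's actual types — `apiServer`, `schema.Handlers` and the two-level map are simplifications I introduce for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// schema is a stripped-down stand-in for a Norman API Schema: the set of
// allowed collection methods plus a handler per method.
type schema struct {
	CollectionMethods []string
	Handlers          map[string]func() string
}

type apiServer struct {
	// schemas indexed by version path ("/v3") then by type name ("cluster").
	schemas map[string]map[string]*schema
}

// serve models the lookup ServeHTTP does: pick the schema from the URL,
// check the method is allowed, then run the matching handler.
func (s *apiServer) serve(method, url string) (string, error) {
	parts := strings.SplitN(strings.TrimPrefix(url, "/"), "/", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("bad url %q", url)
	}
	byType, ok := s.schemas["/"+parts[0]]
	if !ok {
		return "", fmt.Errorf("unknown version %q", parts[0])
	}
	sch, ok := byType[parts[1]]
	if !ok {
		return "", fmt.Errorf("no schema for %q", parts[1])
	}
	allowed := false
	for _, m := range sch.CollectionMethods {
		if m == method {
			allowed = true
		}
	}
	if !allowed {
		return "", fmt.Errorf("method %s not allowed", method)
	}
	return sch.Handlers[method](), nil
}

func main() {
	srv := &apiServer{schemas: map[string]map[string]*schema{
		"/v3": {"cluster": {
			CollectionMethods: []string{"GET", "POST"},
			Handlers: map[string]func() string{
				"POST": func() string { return "CreateHandler" },
				"GET":  func() string { return "ListHandler" },
			},
		}},
	}}
	out, err := srv.serve("POST", "/v3/cluster")
	fmt.Println(out, err) // CreateHandler <nil>
}
```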
3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
Rancher uses Norman in the
Management API
The management API uses the norman
framework, so it is difficult to know how it
is implemented without
understanding Norman.
One API Server instance has multiple
schemas, which can have different data
stores:
- use the parent k8s
(managementstored)
- use the child k8s
(userstored)
All custom handlers other than the
default handlers Norman defines are
stored in a single directory.
For Norman's handlers, please see the
explanation pages about Norman.
99
3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
3.1.2. Norman Controller Part
[Diagram: same picture as before, highlighting the Controller part — it watches the changes to the CRD resources and does whatever you want asynchronously.]
100
3. Dependent Library 3.1 Norman Framework
The concepts around Controller
➢ Generic Controller
○ Implements a basic Controller with client-go
○ Allows us to build a custom Controller by just
■ passing a K8s client for a specific resource to the controller
■ passing the function (handler) to the controller; this handler is executed when a
specific etcd key changes
● do whatever you want in it
➢ 2 Types of Handler
○ Normal Handler
○ LifeCycle Handler
101
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
Generic Controller
102
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
Generic Controller Internal
[Diagram: the GenericController (norman/controller/generic_controller.go) uses a cache.SharedIndexInformer (k8s.io/client-go/tools/cache) with a cache.ListWatch, watching specific keys with the Client. When a key is added/updated/deleted, it is put on a workqueue.RateLimitingInterface (k8s.io/client-go/util/workqueue), and all registered, named handlers are evaluated with that key. You can make your own controller by defining the Client and the handlers.]
103
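A minimal model of that wiring, assuming nothing from client-go: the informer is replaced by direct `enqueue` calls and the rate-limited workqueue by a plain slice. The names `genericController`, `enqueue` and `processAll` are mine, not Norman's.

```go
package main

import "fmt"

// handler mirrors Norman's handler signature: it receives only the
// changed key (namespace/name) and returns an error.
type handler struct {
	name string
	fn   func(key string) error
}

// genericController is a toy GenericController: an informer would push
// changed keys onto a workqueue; here enqueue and processAll are called
// directly. Every registered handler sees every key.
type genericController struct {
	queue    []string
	handlers []handler
}

func (g *genericController) AddHandler(name string, fn func(string) error) {
	g.handlers = append(g.handlers, handler{name, fn})
}

func (g *genericController) enqueue(key string) { g.queue = append(g.queue, key) }

func (g *genericController) processAll() {
	for _, key := range g.queue {
		for _, h := range g.handlers {
			if err := h.fn(key); err != nil {
				// the real controller would re-enqueue with rate limiting
				fmt.Printf("handler %s failed for %s: %v\n", h.name, key, err)
			}
		}
	}
	g.queue = nil
}

func main() {
	var seen []string
	gc := &genericController{}
	gc.AddHandler("demo-handler", func(key string) error {
		seen = append(seen, key)
		return nil
	})
	gc.enqueue("default/node-1")
	gc.processAll()
	fmt.Println(seen) // [default/node-1]
}
```

Note the design choice the diagram implies: handlers get a key, not an event, so every handler must be written to re-derive the current state from the key — which is exactly what the Normal Handler slides discuss next.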
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
How Rancher Use: Generic Controller
vendors/github.com/rancher/
types/apis/management.cattle.io/v3/zz_generated_node_controller.go
type nodeController struct {
controller.GenericController
}
~~omit~~ Each Controller embeds the GenericController struct and overrides a few functions (e.g.
AddHandler)
This Controller is created the first time the nodeClient.Controller method is executed,
e.g. “management.Management.Clusters("").Controller()” (see
pkg/controllers/management/clusterprovisioner/provisioner.go). Rancher usually
names the function that generates the Controller and pushes it onto the starter list
“Register”
This is started by Context.Start (see app/app.go in rancher repository) 104
3. Dependent Library 3.1 Norman Framework
<apiGroup>/<version>/<resource_name>
3.1.2 Controller Part
How Rancher uses it:
prepare an XXXController for each resource type
[Diagram: for each <apiGroup>/<version>/<resource_name> there is an XXXController
(nodeController, clusterController, ...), each consisting of Norman's GenericController plus
handlers, generated in
vendors/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_XXX_controller.go.
The code under pkg/controllers/management and pkg/controllers/user just adds handlers to
the controller to watch one resource type.]
105
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
How Rancher uses it:
prepare an XXXController for each resource type
[Diagram: same as the previous slide — just add a Handler or LifeCycle to the controller
watching one resource type.]
There is a directory that seems to define controllers (pkg/controllers/) in the rancher
repository, but the code under this directory just adds handlers or lifecycles to the proper
Controller to watch one resource type.
Some handlers/lifecycles have controller-ish names like cluster-provisioner-controller, but
that is actually just the name of the handler. Keep it in mind.
106
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
How Rancher Use: Override AddHandler function
vendors/github.com/rancher/
types/apis/management.cattle.io/v3/zz_generated_node_controller.go
<apiGroup>/<version>/<resource_name>
func (c *nodeController) AddHandler(name string, handler NodeHandlerFunc) {
c.GenericController.AddHandler(name, func(key string) error {
obj, exists, err := c.Informer().GetStore().GetByKey(key)
~~omit~~
return handler(key, obj.(*Node))
})
}
The function is overridden so as to pass the object data itself, as well as the key, to the
handler function. That's why every handler registered by Rancher can get the object, not
just the etcd key.
The outer function is actually
called by the Generic
Controller (Norman)
The inner handler is the actual
function Rancher would
register
107
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
Normal Handler
108
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
What’s Normal Handler
➢ The handler function is executed by the GenericController when a change is detected
in Kubernetes
➢ The handler function needs to take one argument
(the key name of the data that changed in etcd)
○ Rancher overrides AddHandler so that the handler function gets 2 arguments (key name,
object data)
➢ The registered handlers are executed when any of the following events happens
○ a monitored key is “Created”
○ a monitored key is “Deleted”
○ a monitored key is “Updated”
=> We cannot know which kind of event triggered the handler:
was it Created, Deleted or Updated? 109
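That limitation can be made concrete with a sketch of the AddHandler override: the handler only gets the key, so all it can do is look the key up in the informer store — present means created-or-updated, absent means deleted. `store` and `wrap` are hypothetical names; the real code uses the SharedIndexInformer's GetStore().GetByKey.

```go
package main

import "fmt"

// store models the informer cache the overridden AddHandler consults:
// given a key, it either returns the object or reports it missing.
type store map[string]string

// wrap mirrors Rancher's AddHandler override: look the key up in the
// informer store and hand the object (nil for deletions) to the handler.
// The handler still cannot distinguish "created" from "updated" — it only
// sees the current state.
func wrap(s store, h func(key string, obj *string) error) func(string) error {
	return func(key string) error {
		if obj, ok := s[key]; ok {
			return h(key, &obj)
		}
		return h(key, nil) // key no longer in the store: it was deleted
	}
}

func main() {
	s := store{"default/node-1": "node-1-spec"}
	fn := wrap(s, func(key string, obj *string) error {
		if obj == nil {
			fmt.Println(key, "deleted")
		} else {
			fmt.Println(key, "present:", *obj)
		}
		return nil
	})
	fn("default/node-1") // default/node-1 present: node-1-spec
	fn("default/node-2") // default/node-2 deleted
}
```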
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
How Rancher Use: Register Handler
pkg/controllers/management/clusterprovisioner/provisioner.go
func Register(management *config.ManagementContext) {
~~ omit ~~
mngNodes := management.Management.Nodes("")
mngNodes.AddHandler("cluster-provisioner-controller", p.machineChanged)
~~ omit ~~
}
~~ omit ~~
func (p *Provisioner) machineChanged(key string, machine *v3.Node) error {
Register the “p.machineChanged” function as a handler under the name “cluster-provisioner-controller”
Thanks to the AddHandler override, the callback function gets the object itself, not just the key
110
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
LifeCycle Handler
111
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
What’s LifeCycle Handler
➢ A kind of framework for handler functions of a Controller
➢ You need to implement the following functions on a Lifecycle struct
○ Create, Remove, Updated
➢ The Generic Controller expects the Lifecycle struct to be wrapped by an
ObjectLifecycleAdapter (norman/lifecycle/object.go) and the developer to
register the ObjectLifecycleAdapter.sync function as a handler
➢ Whether an object has already been created (initialized) is judged by an extra
annotation (lifecycle.cattle.io/create.<lifecycle name>)
[Diagram: the Lifecycle struct (Create/Updated/Removed) is wrapped by an ObjectLifecycleAdapter; its sync function is registered as a normal handler on the GenericController, which executes it and dispatches to the lifecycle methods.]
112
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
Inside ObjectLifecycleAdapter.sync function
func (o *objectLifecycleAdapter) sync(key string, obj runtime.Object) error {
~~ omit ~~
metadata, err := meta.Accessor(obj)
~~ omit ~~
if cont, err := o.finalize(metadata, obj); err != nil || !cont {
return err
}
if cont, err := o.create(metadata, obj); err != nil || !cont {
return err
}
copyObj := obj.DeepCopyObject()
newObj, err := o.lifecycle.Updated(copyObj)
if newObj != nil {
o.update(metadata.GetName(), obj, newObj)
}
return err
}
Get Metadata from object
Check whether this object is deleted.
If deleted, call o.lifecycle.Remove,
finalize and return
Check whether the object has already been created via the annotation
“lifecycle.cattle.io/create.<lifecycle name>” == true.
If not, add the finalizer “controller.cattle.io/<lifecycle name>”,
call o.lifecycle.Create and then return
Otherwise simply execute o.lifecycle.Updated
vendors/github.com/rancher/norman/lifecycle/object.go
113
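The dispatch in the sync function above boils down to a small state machine, which can be modelled without any k8s types. This is a sketch: `obj.deleted` stands in for a DeletionTimestamp check, and finalizer bookkeeping is omitted.

```go
package main

import "fmt"

// obj models the metadata the adapter inspects: annotations plus a
// deletion marker (a real object would carry a DeletionTimestamp).
type obj struct {
	annotations map[string]string
	deleted     bool
}

// createdAnnotation follows the lifecycle.cattle.io/create.<name> pattern
// described on the slides; "demo-lifecycle" is a made-up lifecycle name.
const createdAnnotation = "lifecycle.cattle.io/create.demo-lifecycle"

// sync models objectLifecycleAdapter.sync: deleted objects go to Remove,
// objects without the "created" annotation go to Create (and get marked),
// everything else goes to Updated.
func sync(o *obj, created, updated, removed func()) {
	if o.deleted {
		removed()
		return
	}
	if o.annotations[createdAnnotation] != "true" {
		created()
		o.annotations[createdAnnotation] = "true" // mark as initialized
		return
	}
	updated()
}

func main() {
	var calls []string
	rec := func(name string) func() { return func() { calls = append(calls, name) } }
	o := &obj{annotations: map[string]string{}}
	sync(o, rec("Create"), rec("Updated"), rec("Removed")) // first sync: Create
	sync(o, rec("Create"), rec("Updated"), rec("Removed")) // annotated now: Updated
	o.deleted = true
	sync(o, rec("Create"), rec("Updated"), rec("Removed")) // Removed
	fmt.Println(calls) // [Create Updated Removed]
}
```

This also explains why Updated fires on every sync after initialization: the adapter has no record of what changed, only whether the object has been created yet.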
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
How Rancher Use: Register a LifeCycle (AddLifecycle)
func Register(management *config.ManagementContext) {
p := &Provisioner{~~ omit ~~}
p.Clusters.AddLifecycle("cluster-provisioner-controller", p)
}
func (p *Provisioner) Remove(cluster *v3.Cluster) (*v3.Cluster, error) {
~~ omit ~~}
func (p *Provisioner) Updated(cluster *v3.Cluster) (*v3.Cluster, error) {
~~ omit ~~}
func (p *Provisioner) Create(cluster *v3.Cluster) (*v3.Cluster, error) {
~~ omit ~~}
pkg/controllers/management/clusterprovisioner/provisioner.go
Add the Lifecycle object as a handler
The Remove function is executed
only when the object is deleted
The Updated function is executed
on every sync after the object has been created
The Create function is executed
only when the object is first created 114
3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
1. Rancher 2.0 Overview
2. Deep dive Rancher Server
2.1. Rancher API
2.2. Rancher Controllers
2.3. Controller/Context
3. Dependent Library
3.1. Norman Framework
3.2. Kontainer-Engine
3.3. RKE
115
Agenda
3.2. Kontainer-Engine
(https://github.com/rancher/kontainer-engine)
Coming Soon...
116
1. Rancher 2.0 Overview
2. Deep dive Rancher Server
2.1. Rancher API
2.2. Rancher Controllers
2.3. Controller/Context
3. Dependent Library
3.1. Norman Framework
3.2. Kontainer-Engine
3.3. RKE
117
Agenda
3.3. RKE (https://github.com/rancher/rke)
Coming Soon...
118
Concern about Rancher 2.0
In the context of backend for Verda k8s as a Service
119
The single Rancher Server binary has a ton of features
➢ It's tough to operate
➢ We want to use only the features we need and trace the changelog
➢ Performance tuning is difficult
○ We cannot run multiple instances of a specific feature, even if that feature would
allow multiple instances
■ e.g. run 5 processes for the K8s Proxy API
■ e.g. run 10 processes for the rkenodeconfig API, and so on
=> We will separate some features from the server binary and run them separately
=> We will disable unneeded Controllers
120
Scalability problem in managing deployed k8s clusters
➢ The current design won't work with Active-Active HA
○ There is no scheduling logic in Rancher
■ Currently only 1 Rancher Server can manage all of the k8s clusters
■ An extra layer is needed in front of the actual cluster management to scale
● like the relation between the Pod scheduler and the kubelet
○ Controllers depend on the websocket session to the Rancher Server
■ Even if we redesign the Rancher Server with a scheduling concept and Active-Active,
we still need to think about how a scheduled controller talks to the Agent
=> We will put a simple scheduling concept in front of the Rancher Server
121
The K8s Proxy API depends on the websocket to the Agent
➢ Even if the Rancher Server can access the deployed k8s, it currently proxies
k8s API requests via the websocket session to the Cluster Agent
➢ This design prevents us from running multiple K8s Proxies, even if we
succeed in separating this process from the Rancher Server binary
=> The load of the K8s Proxy API is the most difficult to predict,
because it really depends on how users use the k8s API
=> We will separate this feature from the server binary and run multiple processes,
modifying the current approach of using the websocket session
122
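If each K8s Proxy process could reach the child cluster's API server directly instead of tunneling through the Cluster Agent's websocket session, a proxy instance would be a plain stateless reverse proxy, and any number of them could run behind a load balancer. A minimal sketch of that modified design; the address is hypothetical, and a real deployment would also attach the cluster's client certificates or a service-account token:

```go
package main

import (
	"fmt"
	"net/http/httputil"
	"net/url"
)

// newK8sProxy builds a reverse proxy that forwards requests straight to a
// child cluster's API server, with no websocket tunnel in between.
func newK8sProxy(apiServer string) (*httputil.ReverseProxy, error) {
	target, err := url.Parse(apiServer)
	if err != nil {
		return nil, err
	}
	// A real proxy would set Transport with the cluster's TLS client
	// credentials here before serving.
	return httputil.NewSingleHostReverseProxy(target), nil
}

func main() {
	proxy, err := newK8sProxy("https://child-cluster.example.com:6443")
	if err != nil {
		panic(err)
	}
	// A deployment would serve this behind a load balancer, e.g.:
	//   http.ListenAndServe(":8080", proxy)
	fmt.Printf("reverse proxy constructed: %T\n", proxy)
}
```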
How I feel overall about Rancher 2.0
➢ There is little documentation; reading the code is the only way to understand it well
➢ I can see some effort to make the Rancher Server easy to deploy
○ Fewer preconditions to run the Rancher Server
■ The Rancher Server depends on k8s but doesn’t define it as a precondition (it automatically
detects the missing k8s and deploys an all-in-one k8s to work with)
○ Fewer preconditions to deploy k8s
■ Even for environments behind NAT
○ But this effort prevents it from scaling
➢ If we need scalability, such as maintaining 1000 clusters, we need additional
consideration
○ Separate the features from one massive binary, and run multiple instances.
○ Use a datastore other than Kubernetes for some resources.
○ Ask yourself whether you need all the features; if not, disable them after understanding the impact
123
How I feel overall about Rancher 2.0
➢ There are some interesting controllers, such as alert, logging, eventsync...
➢ You need to know the Norman Framework if you want to understand Rancher in depth
➢ It is easy to modify or extend Rancher’s behaviour thanks to the Norman Framework
○ Change the datastore for a specific API resource thanks to the Norman API Schema
○ Add a custom controller thanks to the Norman Generic Controller
124
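The generic-controller idea mentioned above boils down to registering named handlers that get invoked whenever a watched object changes. The toy sketch below illustrates that pattern only; the types and method names are hypothetical stand-ins, not Norman's real API.

```go
package main

import "fmt"

// HandlerFunc processes one changed object, identified by its key.
type HandlerFunc func(key string) error

// GenericController fans each object-change event out to all registered
// handlers, the way a Norman-style generic controller does.
type GenericController struct {
	name     string
	handlers []HandlerFunc
}

func (c *GenericController) AddHandler(h HandlerFunc) {
	c.handlers = append(c.handlers, h)
}

// Enqueue simulates an object-change event being processed; a real
// controller would pop keys from a workqueue fed by a watch.
func (c *GenericController) Enqueue(key string) error {
	for _, h := range c.handlers {
		if err := h(key); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	ctrl := &GenericController{name: "cluster-controller"}
	ctrl.AddHandler(func(key string) error {
		fmt.Println("sync", key)
		return nil
	})
	ctrl.Enqueue("c-abc12")
}
```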
Enhancement Plan for Rancher 2.0
125
[Diagram: Phase 1 — start the service without touching anything; then watch the performance and consider separating/scaling individual components (Server, K8s Proxy, XXX API). Phase 2 — if we cannot scale the Rancher Server any more, we will add one more cluster.]
Rancher Scalability Improvement
[Diagram: improvement points (Points 1–4) — Scheduling in front of the servers; use another datastore for some data; enhanced monitoring; separated K8s Proxy and Custom API instances.]
126
Appendix: Code Structure
I straightened out my understanding as a diagram.
It is available at https://github.com/ukinau/rancher-analyse
127
Mais conteúdo relacionado

Mais procurados

Kubernetes Concepts And Architecture Powerpoint Presentation Slides
Kubernetes Concepts And Architecture Powerpoint Presentation SlidesKubernetes Concepts And Architecture Powerpoint Presentation Slides
Kubernetes Concepts And Architecture Powerpoint Presentation SlidesSlideTeam
 
Kubernetes Introduction
Kubernetes IntroductionKubernetes Introduction
Kubernetes IntroductionPeng Xiao
 
Docker Swarm for Beginner
Docker Swarm for BeginnerDocker Swarm for Beginner
Docker Swarm for BeginnerShahzad Masud
 
Kubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory GuideKubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory GuideBytemark
 
The Kubernetes Operator Pattern - ContainerConf Nov 2017
The Kubernetes Operator Pattern - ContainerConf Nov 2017The Kubernetes Operator Pattern - ContainerConf Nov 2017
The Kubernetes Operator Pattern - ContainerConf Nov 2017Jakob Karalus
 
Kubernetes internals (Kubernetes 해부하기)
Kubernetes internals (Kubernetes 해부하기)Kubernetes internals (Kubernetes 해부하기)
Kubernetes internals (Kubernetes 해부하기)DongHyeon Kim
 
Introduction to the Container Network Interface (CNI)
Introduction to the Container Network Interface (CNI)Introduction to the Container Network Interface (CNI)
Introduction to the Container Network Interface (CNI)Weaveworks
 
Rancher MasterClass - Avoiding-configuration-drift.pptx
Rancher  MasterClass - Avoiding-configuration-drift.pptxRancher  MasterClass - Avoiding-configuration-drift.pptx
Rancher MasterClass - Avoiding-configuration-drift.pptxLibbySchulze
 
Deploy Application on Kubernetes
Deploy Application on KubernetesDeploy Application on Kubernetes
Deploy Application on KubernetesOpsta
 
Kubernetes - introduction
Kubernetes - introductionKubernetes - introduction
Kubernetes - introductionSparkbit
 
Kubernetes Networking with Cilium - Deep Dive
Kubernetes Networking with Cilium - Deep DiveKubernetes Networking with Cilium - Deep Dive
Kubernetes Networking with Cilium - Deep DiveMichal Rostecki
 
Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...
Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...
Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...Edureka!
 
왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요
왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요
왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요Jo Hoon
 
Kubernetes - A Comprehensive Overview
Kubernetes - A Comprehensive OverviewKubernetes - A Comprehensive Overview
Kubernetes - A Comprehensive OverviewBob Killen
 
Getting Started with Kubernetes
Getting Started with Kubernetes Getting Started with Kubernetes
Getting Started with Kubernetes VMware Tanzu
 
Kubernetes
KubernetesKubernetes
KubernetesHenry He
 
Kubernetes a comprehensive overview
Kubernetes   a comprehensive overviewKubernetes   a comprehensive overview
Kubernetes a comprehensive overviewGabriel Carro
 

Mais procurados (20)

Kubernetes Concepts And Architecture Powerpoint Presentation Slides
Kubernetes Concepts And Architecture Powerpoint Presentation SlidesKubernetes Concepts And Architecture Powerpoint Presentation Slides
Kubernetes Concepts And Architecture Powerpoint Presentation Slides
 
Kubernetes Introduction
Kubernetes IntroductionKubernetes Introduction
Kubernetes Introduction
 
Docker Swarm for Beginner
Docker Swarm for BeginnerDocker Swarm for Beginner
Docker Swarm for Beginner
 
Kubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory GuideKubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory Guide
 
Kubernetes 101
Kubernetes 101Kubernetes 101
Kubernetes 101
 
The Kubernetes Operator Pattern - ContainerConf Nov 2017
The Kubernetes Operator Pattern - ContainerConf Nov 2017The Kubernetes Operator Pattern - ContainerConf Nov 2017
The Kubernetes Operator Pattern - ContainerConf Nov 2017
 
Kubernetes internals (Kubernetes 해부하기)
Kubernetes internals (Kubernetes 해부하기)Kubernetes internals (Kubernetes 해부하기)
Kubernetes internals (Kubernetes 해부하기)
 
Introduction to the Container Network Interface (CNI)
Introduction to the Container Network Interface (CNI)Introduction to the Container Network Interface (CNI)
Introduction to the Container Network Interface (CNI)
 
Rancher MasterClass - Avoiding-configuration-drift.pptx
Rancher  MasterClass - Avoiding-configuration-drift.pptxRancher  MasterClass - Avoiding-configuration-drift.pptx
Rancher MasterClass - Avoiding-configuration-drift.pptx
 
Deploy Application on Kubernetes
Deploy Application on KubernetesDeploy Application on Kubernetes
Deploy Application on Kubernetes
 
Introduction of kubernetes rancher
Introduction of kubernetes rancherIntroduction of kubernetes rancher
Introduction of kubernetes rancher
 
Kubernetes - introduction
Kubernetes - introductionKubernetes - introduction
Kubernetes - introduction
 
Kubernetes Networking with Cilium - Deep Dive
Kubernetes Networking with Cilium - Deep DiveKubernetes Networking with Cilium - Deep Dive
Kubernetes Networking with Cilium - Deep Dive
 
Introduction to kubernetes
Introduction to kubernetesIntroduction to kubernetes
Introduction to kubernetes
 
Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...
Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...
Kubernetes Architecture | Understanding Kubernetes Components | Kubernetes Tu...
 
왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요
왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요
왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요
 
Kubernetes - A Comprehensive Overview
Kubernetes - A Comprehensive OverviewKubernetes - A Comprehensive Overview
Kubernetes - A Comprehensive Overview
 
Getting Started with Kubernetes
Getting Started with Kubernetes Getting Started with Kubernetes
Getting Started with Kubernetes
 
Kubernetes
KubernetesKubernetes
Kubernetes
 
Kubernetes a comprehensive overview
Kubernetes   a comprehensive overviewKubernetes   a comprehensive overview
Kubernetes a comprehensive overview
 

Semelhante a Let’s unbox Rancher 2.0 <v2.0.0>

Kubernetes #1 intro
Kubernetes #1   introKubernetes #1   intro
Kubernetes #1 introTerry Cho
 
Lessons learned and challenges faced while running Kubernetes at Scale
Lessons learned and challenges faced while running Kubernetes at ScaleLessons learned and challenges faced while running Kubernetes at Scale
Lessons learned and challenges faced while running Kubernetes at ScaleSidhartha Mani
 
LINE's Private Cloud - Meet Cloud Native World
LINE's Private Cloud - Meet Cloud Native WorldLINE's Private Cloud - Meet Cloud Native World
LINE's Private Cloud - Meet Cloud Native WorldLINE Corporation
 
Production ready tooling for microservices on kubernetes
Production ready tooling for microservices on kubernetesProduction ready tooling for microservices on kubernetes
Production ready tooling for microservices on kubernetesChandresh Pancholi
 
Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...
Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...
Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...Ajeet Singh Raina
 
CN Asturias - Stateful application for kubernetes
CN Asturias -  Stateful application for kubernetes CN Asturias -  Stateful application for kubernetes
CN Asturias - Stateful application for kubernetes Cédrick Lunven
 
Overview of OpenDaylight Container Orchestration Engine Integration
Overview of OpenDaylight Container Orchestration Engine IntegrationOverview of OpenDaylight Container Orchestration Engine Integration
Overview of OpenDaylight Container Orchestration Engine IntegrationMichelle Holley
 
How to install and use Kubernetes
How to install and use KubernetesHow to install and use Kubernetes
How to install and use KubernetesLuke Marsden
 
Container world hybridnetworking_rev2
Container world hybridnetworking_rev2Container world hybridnetworking_rev2
Container world hybridnetworking_rev2Prem Sankar Gopannan
 
DockerCon 2022 - From legacy to Kubernetes, securely & quickly
DockerCon 2022 - From legacy to Kubernetes, securely & quicklyDockerCon 2022 - From legacy to Kubernetes, securely & quickly
DockerCon 2022 - From legacy to Kubernetes, securely & quicklyEric Smalling
 
Apache Spark on K8s and HDFS Security
Apache Spark on K8s and HDFS SecurityApache Spark on K8s and HDFS Security
Apache Spark on K8s and HDFS SecurityDatabricks
 
Docker Enterprise Workshop - Technical
Docker Enterprise Workshop - TechnicalDocker Enterprise Workshop - Technical
Docker Enterprise Workshop - TechnicalPatrick Chanezon
 
How to Install and Use Kubernetes by Weaveworks
How to Install and Use Kubernetes by Weaveworks How to Install and Use Kubernetes by Weaveworks
How to Install and Use Kubernetes by Weaveworks Weaveworks
 
Deploying WSO2 Middleware on Kubernetes
Deploying WSO2 Middleware on KubernetesDeploying WSO2 Middleware on Kubernetes
Deploying WSO2 Middleware on KubernetesImesh Gunaratne
 
Kubernetes overview 101
Kubernetes overview 101Kubernetes overview 101
Kubernetes overview 101Boskey Savla
 
Orchestrating Microservices with Kubernetes
Orchestrating Microservices with Kubernetes Orchestrating Microservices with Kubernetes
Orchestrating Microservices with Kubernetes Weaveworks
 
Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014
Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014
Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014brendandburns
 

Semelhante a Let’s unbox Rancher 2.0 <v2.0.0> (20)

Kubernetes #1 intro
Kubernetes #1   introKubernetes #1   intro
Kubernetes #1 intro
 
Introduction of k8s rancher
Introduction of k8s rancherIntroduction of k8s rancher
Introduction of k8s rancher
 
Lessons learned and challenges faced while running Kubernetes at Scale
Lessons learned and challenges faced while running Kubernetes at ScaleLessons learned and challenges faced while running Kubernetes at Scale
Lessons learned and challenges faced while running Kubernetes at Scale
 
LINE's Private Cloud - Meet Cloud Native World
LINE's Private Cloud - Meet Cloud Native WorldLINE's Private Cloud - Meet Cloud Native World
LINE's Private Cloud - Meet Cloud Native World
 
Cloud Native SDN
Cloud Native SDNCloud Native SDN
Cloud Native SDN
 
Production ready tooling for microservices on kubernetes
Production ready tooling for microservices on kubernetesProduction ready tooling for microservices on kubernetes
Production ready tooling for microservices on kubernetes
 
Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...
Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...
Collabnix Online Webinar - Demystifying Docker & Kubernetes Networking by Bal...
 
CN Asturias - Stateful application for kubernetes
CN Asturias -  Stateful application for kubernetes CN Asturias -  Stateful application for kubernetes
CN Asturias - Stateful application for kubernetes
 
Overview of OpenDaylight Container Orchestration Engine Integration
Overview of OpenDaylight Container Orchestration Engine IntegrationOverview of OpenDaylight Container Orchestration Engine Integration
Overview of OpenDaylight Container Orchestration Engine Integration
 
How to install and use Kubernetes
How to install and use KubernetesHow to install and use Kubernetes
How to install and use Kubernetes
 
Container world hybridnetworking_rev2
Container world hybridnetworking_rev2Container world hybridnetworking_rev2
Container world hybridnetworking_rev2
 
DockerCon 2022 - From legacy to Kubernetes, securely & quickly
DockerCon 2022 - From legacy to Kubernetes, securely & quicklyDockerCon 2022 - From legacy to Kubernetes, securely & quickly
DockerCon 2022 - From legacy to Kubernetes, securely & quickly
 
Apache Spark on K8s and HDFS Security
Apache Spark on K8s and HDFS SecurityApache Spark on K8s and HDFS Security
Apache Spark on K8s and HDFS Security
 
Docker Enterprise Workshop - Technical
Docker Enterprise Workshop - TechnicalDocker Enterprise Workshop - Technical
Docker Enterprise Workshop - Technical
 
How to Install and Use Kubernetes by Weaveworks
How to Install and Use Kubernetes by Weaveworks How to Install and Use Kubernetes by Weaveworks
How to Install and Use Kubernetes by Weaveworks
 
Deploying WSO2 Middleware on Kubernetes
Deploying WSO2 Middleware on KubernetesDeploying WSO2 Middleware on Kubernetes
Deploying WSO2 Middleware on Kubernetes
 
Kubernetes overview 101
Kubernetes overview 101Kubernetes overview 101
Kubernetes overview 101
 
Orchestrating Microservices with Kubernetes
Orchestrating Microservices with Kubernetes Orchestrating Microservices with Kubernetes
Orchestrating Microservices with Kubernetes
 
Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014
Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014
Containers, Clusters and Kubernetes - Brendan Burns - Defrag 2014
 
Demystfying container-networking
Demystfying container-networkingDemystfying container-networking
Demystfying container-networking
 

Mais de LINE Corporation

JJUG CCC 2018 Fall 懇親会LT
JJUG CCC 2018 Fall 懇親会LTJJUG CCC 2018 Fall 懇親会LT
JJUG CCC 2018 Fall 懇親会LTLINE Corporation
 
Reduce dependency on Rx with Kotlin Coroutines
Reduce dependency on Rx with Kotlin CoroutinesReduce dependency on Rx with Kotlin Coroutines
Reduce dependency on Rx with Kotlin CoroutinesLINE Corporation
 
Kotlin/NativeでAndroidのNativeメソッドを実装してみた
Kotlin/NativeでAndroidのNativeメソッドを実装してみたKotlin/NativeでAndroidのNativeメソッドを実装してみた
Kotlin/NativeでAndroidのNativeメソッドを実装してみたLINE Corporation
 
Use Kotlin scripts and Clova SDK to build your Clova extension
Use Kotlin scripts and Clova SDK to build your Clova extensionUse Kotlin scripts and Clova SDK to build your Clova extension
Use Kotlin scripts and Clova SDK to build your Clova extensionLINE Corporation
 
The Magic of LINE 購物 Testing
The Magic of LINE 購物 TestingThe Magic of LINE 購物 Testing
The Magic of LINE 購物 TestingLINE Corporation
 
UI Automation Test with JUnit5
UI Automation Test with JUnit5UI Automation Test with JUnit5
UI Automation Test with JUnit5LINE Corporation
 
Feature Detection for UI Testing
Feature Detection for UI TestingFeature Detection for UI Testing
Feature Detection for UI TestingLINE Corporation
 
LINE 新星計劃介紹與新創團隊分享
LINE 新星計劃介紹與新創團隊分享LINE 新星計劃介紹與新創團隊分享
LINE 新星計劃介紹與新創團隊分享LINE Corporation
 
​LINE 技術合作夥伴與應用分享
​LINE 技術合作夥伴與應用分享​LINE 技術合作夥伴與應用分享
​LINE 技術合作夥伴與應用分享LINE Corporation
 
LINE 開發者社群經營與技術推廣
LINE 開發者社群經營與技術推廣LINE 開發者社群經營與技術推廣
LINE 開發者社群經營與技術推廣LINE Corporation
 
日本開發者大會短講分享
日本開發者大會短講分享日本開發者大會短講分享
日本開發者大會短講分享LINE Corporation
 
LINE Chatbot - 活動報名報到設計分享
LINE Chatbot - 活動報名報到設計分享LINE Chatbot - 活動報名報到設計分享
LINE Chatbot - 活動報名報到設計分享LINE Corporation
 
在 LINE 私有雲中使用 Managed Kubernetes
在 LINE 私有雲中使用 Managed Kubernetes在 LINE 私有雲中使用 Managed Kubernetes
在 LINE 私有雲中使用 Managed KubernetesLINE Corporation
 
LINE TODAY高效率的敏捷測試開發技巧
LINE TODAY高效率的敏捷測試開發技巧LINE TODAY高效率的敏捷測試開發技巧
LINE TODAY高效率的敏捷測試開發技巧LINE Corporation
 
LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹
LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹
LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹LINE Corporation
 
LINE Things - LINE IoT平台新技術分享
LINE Things - LINE IoT平台新技術分享LINE Things - LINE IoT平台新技術分享
LINE Things - LINE IoT平台新技術分享LINE Corporation
 
LINE Pay - 一卡通支付新體驗
LINE Pay - 一卡通支付新體驗LINE Pay - 一卡通支付新體驗
LINE Pay - 一卡通支付新體驗LINE Corporation
 
LINE Platform API Update - 打造一個更好的Chatbot服務
LINE Platform API Update - 打造一個更好的Chatbot服務LINE Platform API Update - 打造一個更好的Chatbot服務
LINE Platform API Update - 打造一個更好的Chatbot服務LINE Corporation
 
Keynote - ​LINE 的技術策略佈局與跨國產品開發
Keynote - ​LINE 的技術策略佈局與跨國產品開發Keynote - ​LINE 的技術策略佈局與跨國產品開發
Keynote - ​LINE 的技術策略佈局與跨國產品開發LINE Corporation
 

Mais de LINE Corporation (20)

JJUG CCC 2018 Fall 懇親会LT
JJUG CCC 2018 Fall 懇親会LTJJUG CCC 2018 Fall 懇親会LT
JJUG CCC 2018 Fall 懇親会LT
 
Reduce dependency on Rx with Kotlin Coroutines
Reduce dependency on Rx with Kotlin CoroutinesReduce dependency on Rx with Kotlin Coroutines
Reduce dependency on Rx with Kotlin Coroutines
 
Kotlin/NativeでAndroidのNativeメソッドを実装してみた
Kotlin/NativeでAndroidのNativeメソッドを実装してみたKotlin/NativeでAndroidのNativeメソッドを実装してみた
Kotlin/NativeでAndroidのNativeメソッドを実装してみた
 
Use Kotlin scripts and Clova SDK to build your Clova extension
Use Kotlin scripts and Clova SDK to build your Clova extensionUse Kotlin scripts and Clova SDK to build your Clova extension
Use Kotlin scripts and Clova SDK to build your Clova extension
 
The Magic of LINE 購物 Testing
The Magic of LINE 購物 TestingThe Magic of LINE 購物 Testing
The Magic of LINE 購物 Testing
 
GA Test Automation
GA Test AutomationGA Test Automation
GA Test Automation
 
UI Automation Test with JUnit5
UI Automation Test with JUnit5UI Automation Test with JUnit5
UI Automation Test with JUnit5
 
Feature Detection for UI Testing
Feature Detection for UI TestingFeature Detection for UI Testing
Feature Detection for UI Testing
 
LINE 新星計劃介紹與新創團隊分享
LINE 新星計劃介紹與新創團隊分享LINE 新星計劃介紹與新創團隊分享
LINE 新星計劃介紹與新創團隊分享
 
​LINE 技術合作夥伴與應用分享
​LINE 技術合作夥伴與應用分享​LINE 技術合作夥伴與應用分享
​LINE 技術合作夥伴與應用分享
 
LINE 開發者社群經營與技術推廣
LINE 開發者社群經營與技術推廣LINE 開發者社群經營與技術推廣
LINE 開發者社群經營與技術推廣
 
日本開發者大會短講分享
日本開發者大會短講分享日本開發者大會短講分享
日本開發者大會短講分享
 
LINE Chatbot - 活動報名報到設計分享
LINE Chatbot - 活動報名報到設計分享LINE Chatbot - 活動報名報到設計分享
LINE Chatbot - 活動報名報到設計分享
 
在 LINE 私有雲中使用 Managed Kubernetes
在 LINE 私有雲中使用 Managed Kubernetes在 LINE 私有雲中使用 Managed Kubernetes
在 LINE 私有雲中使用 Managed Kubernetes
 
LINE TODAY高效率的敏捷測試開發技巧
LINE TODAY高效率的敏捷測試開發技巧LINE TODAY高效率的敏捷測試開發技巧
LINE TODAY高效率的敏捷測試開發技巧
 
LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹
LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹
LINE 區塊鏈平台及代幣經濟 - LINK Chain及LINK介紹
 
LINE Things - LINE IoT平台新技術分享
LINE Things - LINE IoT平台新技術分享LINE Things - LINE IoT平台新技術分享
LINE Things - LINE IoT平台新技術分享
 
LINE Pay - 一卡通支付新體驗
LINE Pay - 一卡通支付新體驗LINE Pay - 一卡通支付新體驗
LINE Pay - 一卡通支付新體驗
 
LINE Platform API Update - 打造一個更好的Chatbot服務
LINE Platform API Update - 打造一個更好的Chatbot服務LINE Platform API Update - 打造一個更好的Chatbot服務
LINE Platform API Update - 打造一個更好的Chatbot服務
 
Keynote - ​LINE 的技術策略佈局與跨國產品開發
Keynote - ​LINE 的技術策略佈局與跨國產品開發Keynote - ​LINE 的技術策略佈局與跨國產品開發
Keynote - ​LINE 的技術策略佈局與跨國產品開發
 

Último

Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 

Último (20)

Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 

Let’s unbox Rancher 2.0 <v2.0.0>

  • 1. Let’s unbox Rancher 2.0 <v2.0.0> 1 LINE Corporation Verda2 Yuki Nishiwaki
  • 2. 1. Rancher 2.0 Overview 2. Deep dive Rancher Server 2.1. Rancher API 2.2. Rancher Controllers 2.3. Controller/Context 3. Dependent Library 3.1. Norman Framework 3.2. Kontainer-Engine 3.3. RKE 2 Agenda
  • 3. Casts in Rancher 2.0 rancher server Node1 Node2 Node3 rancher node-agent rancher node-agent rancher node-agent rancher cluster-agent1. Rancher Server 2. Rancher Cluster Agent 3. Rancher Node Agent Parent Kubernetes Child Kubernetes deployed by Rancher Child Kubernetes deployed by Rancher Parent k8s: k8s working with rancher Child k8s: k8s deployed by rancher 3 1. Rancher 2.0 Overview
  • 4. Casts in Rancher 2.0 rancher server Node1 Node2 Node3 rancher node-agent rancher node-agent rancher node-agent rancher cluster-agent1. Rancher Server 2. Rancher Cluster Agent 3. Rancher Node Agent Parent Kubernetes Child Kubernetes deployed by Rancher Child Kubernetes deployed by Rancher Parent k8s: k8s working with rancher Child k8s: k8s deployed by rancher 4 1. Rancher 2.0 Overview
  • 5. About Rancher Server ➢ Provide the user with GUI/API ➢ Rancher Server depend on Kubernetes Cluster (Need to be deployed before start to run Rancher Server) ○ If you try to run Rancher Server without Kubernetes, Rancher Server automatically try to build all in one Kubernetes on same host ○ All data related to Rancher will be stored in Kubernetes as a CRD ○ Rancher run custom controllers to deploy/maintain multiple Kubernetes ➢ Everytime Rancher Server need to talk to deployed Kubernetes, Use Rancher Cluster/Node Agent as a TCP Proxy Server via Websocket ➢ Multiple Rancher Server deployment is not available ○ HA Configuration is available but This is just run 1 Rancher Server on multi-node Kubernetes environment behind Loadbalancer 5 1. Rancher 2.0 Overview
  • 6. About Rancher Server Implementation ➢ One binary covers all of the following features ○ Rancher API ○ Various controllers ○ gRPC server of Kontainer-Engine ○ GUI of Rancher ➢ Depends on some other Rancher libraries/middleware ○ Norman ■ Used as a framework by the API and controller implementations ○ Kontainer Engine ■ Used by controllers to deploy/update k8s clusters on various environments like GKE, EKS, or any server with RKE ○ RKE ■ Used by Kontainer Engine and the API (nodeconfigserver) to deploy/update k8s clusters on servers 6 1. Rancher 2.0 Overview
  • 7. What Server does? Server API Controllers CRD Kind: Cluster Node1 Node2 Node3 rancher node-agent rancher node-agent rancher node-agent rancher cluster-agent Child Kubernetes deployed by Rancher Child Kubernetes deployed by Rancher CRD Kind: Node All data stored as CRD in k8s Watch CRD Deploy Monitor Cluster/Sync Data Call docker/k8s API via websocket, If need. Don’t access to docker/k8s api directly from rancher server Websocket session Point 2 Point 3 Point 4 Point 5 Point 1 Provide API 7 1. Rancher 2.0 Overview
  • 8. What Server does? Server API Controllers CRD Kind: Cluster Node1 Node2 Node3 rancher node-agent rancher node-agent rancher node-agent rancher cluster-agent Child Kubernetes deployed by Rancher Child Kubernetes deployed by Rancher CRD Kind: Node Provide unified access to multiple k8s cluster Point 6 8 1. Rancher 2.0 Overview
  • 9. Casts in Rancher 2.0 rancher server Node1 Node2 Node3 rancher node-agent rancher node-agent rancher node-agent rancher cluster-agent1. Rancher Server 2. Rancher Cluster Agent 3. Rancher Node Agent Parent Kubernetes Child Kubernetes deployed by Rancher Child Kubernetes deployed by Rancher 9 1. Rancher 2.0 Overview
  • 10. About Rancher Agent ➢ Every node needs to run one ➢ Periodically calls the /v3/connect/config API and checks if the node needs to run any container or create any file ➢ Provides a TCP proxy via websocket (/v3/connect) Rancher Node Agent Rancher Cluster Agent ➢ Each cluster needs to run 1 agent ➢ Provides a TCP proxy via websocket (/v3/connect) Both use the same binary and switch the agent type by an environment variable (CATTLE_CLUSTER) There are 2 types of agent to run on Kubernetes deployed by Rancher 10 1. Rancher 2.0 Overview
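The one-binary/two-modes pattern above can be sketched as a tiny Go program. This is an illustrative sketch, not the real rancher/agent entrypoint: the function name `agentMode` and the returned mode strings are assumptions; only the `CATTLE_CLUSTER` variable name comes from the slides.

```go
package main

import (
	"fmt"
	"os"
)

// agentMode mimics how a single agent binary could decide whether to run
// as a cluster agent or a node agent based on the CATTLE_CLUSTER
// environment variable (simplified sketch; the real entrypoint does more).
func agentMode(cattleCluster string) string {
	if cattleCluster == "true" {
		return "cluster-agent"
	}
	return "node-agent"
}

func main() {
	fmt.Println(agentMode(os.Getenv("CATTLE_CLUSTER")))
}
```

The design benefit is that one container image serves both roles, and the deployment manifest alone decides which role a pod plays.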
  • 11. What Agent does? Node Agent Node A Cluster Agent Child Kubernetes Node Agent Node B Parent Kubernetes Server Dialer API (pkg/dialer) RkeNodeConfig API (pkg/rkenodeconfigserver) Controllers websocket session (/v3/connect) /v3/connect/config Use session for access (k8s, docker) The Rancher Agent basically establishes a websocket to provide a TCP proxy, and just checks the NodeConfig periodically. Almost all configuration is done/triggered by controllers through the websocket Point 2 Establish websocket session Point 1 Provide TCP Proxy via websocket Point 3 Check periodically if a file or container needs to be created/run 11 1. Rancher 2.0 Overview
  • 12. Rancher 2.0 overview summary Almost all logic is in Rancher Server, and the Agent just sits as a TCP proxy server in the deployed k8s so that Rancher Server can use it ➢ Rancher Server ○ All data for Rancher is stored as CRDs in Kubernetes (translating Rancher’s resources into CRDs) ○ Rancher’s API is a kind of wrapper for the Kubernetes API ○ Rancher has various controllers that watch CRD resources in the parent k8s to deploy k8s (Management Controllers) ○ Rancher has various controllers that watch resources, including CRDs, in the parent k8s to inject some data into the deployed k8s (User Controllers) ➢ Rancher Agent ○ Establishes a websocket to provide a TCP proxy ■ This is used when Rancher Server wants to talk to a child Kubernetes ■ This is used when a user wants to call the Kubernetes API ○ Checks periodically if the node needs to create some file or run some container 12 1. Rancher 2.0 Overview
  • 13. Rancher 2.0 overview summary Almost all logic is in Rancher Server, and the Agent just sits as a TCP proxy server in the deployed k8s for Rancher Server ● Rancher Server a. All data for Rancher is stored as CRDs in Kubernetes (translating Rancher’s resources into CRDs) b. Rancher’s API is a kind of proxy to the Kubernetes API c. Rancher has various controllers that watch CRD resources in the parent k8s to deploy k8s (Management Controllers) d. Rancher has various controllers that watch CRD resources in the parent k8s to inject some data into the deployed k8s (User Controllers) e. Uses websocket sessions to access deployed nodes or k8s clusters ● Rancher Agent a. Establishes a websocket to provide a TCP proxy b. Checks periodically if the node needs to create some file or run some container If we want to know more about how Rancher maintains a Kubernetes cluster, it’s enough to look at just Rancher Server, because the Agent is just there to provide a proxy. 13 1. Rancher 2.0 Overview
  • 14. 1. Rancher 2.0 Overview 2. Deep dive Rancher Server 2.1. Rancher API 2.2. Rancher Controllers 2.3. Controller/Context 3. Dependent Library 3.1. Norman Framework 3.2. Kontainer-Engine 3.3. RKE 14 Agenda
  • 15. 2.1. Rancher API Server API Controllers CRD Kind: Cluster CRD Kind: Node Node1 Node2 Node3 rancher node-agent rancher node-agent rancher node-agent rancher cluster-agent Child Kubernetes deployed by Rancher All data stored as CRD in k8s Point 2 Point 1 Provide API 15 2. Deep dive Rancher Server
  • 16. Rancher API Overview ➢ All data used/created by Rancher is stored as a Kubernetes resource ○ Which Kubernetes is used depends on the API path ■ Parent Kubernetes ■ Child Kubernetes ➢ What you can do in the Rancher API (Management API) can also be done by calling the k8s API directly ➢ The Rancher API allows you to create almost all resource types in k8s, not only CRDs ○ Proxies the request to k8s after some manipulation like adding annotations and labels ➢ The API can be classified into 5 types (it’s not officially classified) 16 2. Deep dive Rancher Server 2.1. Rancher API
  • 17. 5 types of API Server Controllers API Parent Kubernetes ➢ The API can be classified into 5 types ➢ Some APIs are only for the agent ○ API for users ■ Management ■ Auth ■ K8s Proxy ○ API for agents ■ Dialer ■ RKE Node Config Auth API Management API K8s Proxy API Dialer API RKE Node Config API Main /v3-public /v3/token /v3/ /k8s/clusters /v3/connect /v3/connect/register /v3/connect/config Agent User 17 2. Deep dive Rancher Server 2.1. Rancher API
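The path-prefix split above can be expressed as a small classifier. A sketch, assuming only the prefixes listed on the slide; the function name `apiGroup` and the group labels are illustrative, not Rancher's real router. Note that order matters: `/v3/connect/config` must be tested before `/v3/connect`, and both before the catch-all `/v3`.

```go
package main

import (
	"fmt"
	"strings"
)

// apiGroup classifies a request path into one of the five (unofficial)
// Rancher API groups, using the path prefixes from the slide.
func apiGroup(path string) string {
	switch {
	case strings.HasPrefix(path, "/v3-public"), strings.HasPrefix(path, "/v3/token"):
		return "auth"
	case strings.HasPrefix(path, "/k8s/clusters"):
		return "k8s-proxy"
	case strings.HasPrefix(path, "/v3/connect/config"):
		return "rke-node-config"
	case strings.HasPrefix(path, "/v3/connect"):
		return "dialer" // also covers /v3/connect/register
	case strings.HasPrefix(path, "/v3"):
		return "management"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(apiGroup("/v3/clusters")) // management
}
```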
  • 18. Management API Server Controllers API Parent Kubernetes Auth API Management API K8s Proxy API Dialer API RKE Node Config API Main /v3/ Child Kubernetes deployed by Rancher Create/Update/Get Resource Create/Update/Get Resource POST /v3/cluster POST /v3/project/ <cluster-id>:<project-id>/pods CRD Cluster PodAgent depending on Path Use TCP Proxy Cluster Agent provide 18 2. Deep dive Rancher Server 2.1. Rancher API
  • 19. Management API ➢ This is the main API for Rancher These resources are created by the Management API 19 2. Deep dive Rancher Server 2.1. Rancher API
  • 20. Management API ➢ Provides CRUD for almost all CRDs (like cluster) and k8s resources (like pod) ➢ Uses the Norman framework ○ Schema definitions can be seen in the following paths ■ types/apis/management.cattle.io/v3/schema/schema.go ■ types/apis/cluster.cattle.io/v3/schema/schema.go ■ types/apis/project.cattle.io/v3/schema/schema.go ➢ According to the resource type, the Management API uses the proper data store ○ Uses the parent Kubernetes for CRDs like Cluster ○ Uses the child Kubernetes for Kubernetes core resources like Pod ➢ It doesn’t try to create the actual resource or do provisioning ○ This API just creates the cluster CRD in the parent Kubernetes ○ Provisioning is the responsibility of the controllers 20 2. Deep dive Rancher Server 2.1. Rancher API
  • 21. Auth API Server Controllers API Parent Kubernetes Management API K8s Proxy API Dialer API RKE Node Config API Main CRD User Auth API /v3-public /v3/token Authenticate with User CRD resource for Rancher API Get Token 21 2. Deep dive Rancher Server 2.1. Rancher API
  • 22. K8s Proxy API Server API Parent Kubernetes Management API Dialer API RKE Node Config API Main Child Kubernetes deployed by Rancher CRD Token Auth API Authenticate with User CRD resource for Rancher API K8s Proxy API Controllers Websocket Sessions Agent Call Child K8s API via TCP Proxy via Websocket GET /k8s/clusters/<cluster> /api/v1/componentstatuses /k8s/clusters GET /api/v1/componentstatuses 22 2. Deep dive Rancher Server 2.1. Rancher API
  • 23. K8s Proxy API ➢ All requests to Kubernetes deployed by Rancher are *authenticated* by this API ➢ If authentication succeeds, the request is proxied to k8s via the websocket session to the Cluster Agent ➢ Uses the Impersonate-User and Impersonate-Group HTTP headers to propagate user information to k8s, and always uses the same ServiceAccount token that was created when the cluster was deployed ➢ Authorization is done by the deployed k8s, not by the K8s Proxy ○ Although Rancher’s controllers inject role, clusterrole and their binding information into the deployed k8s according to the value of roletemplate (CRD) 23 2. Deep dive Rancher Server 2.1. Rancher API
  • 24. K8s Proxy Implementation Node Agent Node A …. /k8s/clusters/<id of clusterA> Cluster Agent Cluster A As you can see here, user access to k8s is done through the websocket session to the Cluster Agent 24 2. Deep dive Rancher Server 2.1. Rancher API We always use the same service account, which is created for each cluster. The Impersonate-User and Impersonate-Group HTTP headers are used to tell the k8s cluster who the user is
  • 25. Dialer API Server API Parent Kubernetes Management API RKE Node Config API Main Child Kubernetes deployed by Rancher Auth API K8s Proxy API Controllers Websocket Sessions Agent Dialer API /v3/connect /v3/connect/register wss://<rancher-server>/v3/connect CRD ClusterRegisterToken Start Provide TCP Proxy via websocket Check which cluster Does agent belong to Add websocket session for “K8s Proxy” and Controllers to use TCP Proxy 25 2. Deep dive Rancher Server 2.1. Rancher API
  • 26. Dialer API ➢ This is the most important API from the Agent’s perspective ➢ The Dialer API (TunnelServer) provides the websocket endpoint and maintains the sessions for the controllers in Rancher ➢ Every Cluster/Node Agent establishes 1 websocket session and provides a TCP proxy ➢ The websocket sessions established here are used by the controllers and the K8s Proxy API ○ If an agent fails to establish its websocket session, no controller in Rancher can do anything ○ Keep in mind that Rancher also fails to proxy k8s access, because the K8s Proxy API uses the websocket session to the Cluster Agent 26 2. Deep dive Rancher Server 2.1. Rancher API
  • 27. Dialer API Implementation Rancher Controllers Agent wss://<rancher-server>/v3/connect Node A Provide TCP Proxy Interface to - lookup proper websocket session - start/maintain connection over websocket TCP 127.0.0.1:443 Rancher K8s proxy API Various components of Rancher Server use websocket session to access docker/k8s running on the target node Cluster A 27 2. Deep dive Rancher Server 2.1. Rancher API
  • 28. RKE Node Config API Server API Parent Kubernetes Management API Main Child Kubernetes deployed by Rancher Auth API K8s Proxy API Controllers Agent Dialer API RKE Node Config API/v3/connect/config CRD Cluster RKE library Check Config Generate NodeConfig According to NodeConfig - Create File - Create container via docker 28 2. Deep dive Rancher Server 2.1. Rancher API
  • 29. RKE Node Config API ➢ This API is only for Node Agents on k8s deployed by RKE ○ If the Node Agent is running on GKE or EKS, this API always returns HTTP status code 404 ➢ This API returns a NodeConfig object which includes the following ○ Files to create ○ Processes to run (containers) ○ Certificates to use ➢ This NodeConfig object is generated based on one of the attributes in the Cluster object, rancherKubernetesEngineConfig 29 2. Deep dive Rancher Server 2.1. Rancher API
  • 30. RKE Node Config API Implementation Node Agent Node A /v3/connect/config Create file/Run container based NodeConfig Node Config ● What process need to run ● What file need to create ● What certificate need to use Get NodeConfig for Node A rke generate NodeConfig based on rancherKubernetesEngineConfig which is one of the attributes in Cluster CRD Cluster A 30 2. Deep dive Rancher Server 2.1. Rancher API
  • 31. 1. Rancher 2.0 Overview 2. Deep dive Rancher Server 2.1. Rancher API 2.2. Rancher Controllers 2.3. Controller/Context 3. Dependent Library 3.1. Norman Framework 3.2. Kontainer-Engine 3.3. RKE 31 Agenda
  • 32. 2.2. Rancher Controllers Server API Controllers CRD Kind: Cluster Node1 Node2 Node3 rancher node-agent rancher node-agent rancher node-agent rancher cluster-agent Child Kubernetes deployed by Rancher CRD Kind: Node Watch CRD Deploy Monitor Cluster/Sync Data Call docker/k8s API via websocket, If need. Don’t access to docker/k8s api directly from rancher server Websocket session Point 3 Point 4 Point 5 32 2. Deep dive Rancher Server
  • 33. 2.2.1 Rancher Controllers Overview ➢ The Rancher API just creates CRD resources in k8s ➢ Actual provisioning/configuration is done by controllers when they detect a change in k8s ➢ Rancher controllers watch resources @Child K8s, not only resources @Parent K8s ➢ Rancher runs many controllers, altogether more than 40 ➢ The Rancher controller implementation actively uses the Norman framework 33 2. Deep dive Rancher Server 2.2. Rancher Controllers
  • 34. API Controllers Management Controllers Cluster(User) Controllers Workload Controllers 4 types of Controllers Server API Controllers Parent Kubernetes Create Resource Watch Resource ➢ Rancher Controllers can be classified into 4 groups ➢ Each group has its own trigger to start ➢ Triggered when the Server starts ○ API Controllers ○ Management Controllers ➢ Triggered when a new cluster is detected ○ Cluster(User) Controllers ○ Workload Controllers 34 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
  • 35. API Controllers Management Controllers Cluster(User) Controllers Workload Controllers API Controllers Server API Controllers Parent Kubernetes Create Resource Watch Resource Configure ➢ Watch CRD resources related to the API server configuration ○ settings ○ dynamicschemas ○ nodedrivers ➢ Configure the API server according to changes of those resources 35 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
  • 36. API Controllers Management Controllers Cluster(User) Controllers Workload Controllers Management Controllers Server API Controllers Parent Kubernetes Create Resource Watch Resource Provisioning/Update Cluster Start Cluster(User), Workload Controllers Child Kubernetes deployed by Rancher ➢ Watch Cluster/Node related CRDs ➢ Provision/update the cluster according to changes of those resources ➢ After provisioning, start the Cluster(User) and Workload Controllers to begin data sync and monitoring 36 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
  • 37. API Controllers Management Controllers Cluster(User) Controllers Workload Controllers Cluster(User) Controllers Child Kubernetes deployed by Rancher Server Controllers Parent Kubernetes Create Resource Watch ResourceCreate Resource Watch Resource Update/Create CRD According to Child K8s Update/Create Resource including Pod According to Parent K8s CRD 37 Cluster CRD Secret Alerts CRD Status Spec Node For updating CRD in Parent K8s Resource Sync between Parent and Child K8s 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
  • 38. API Controllers Management Controllers Cluster(User) Controllers Workload Controllers Workload Controllers Child Kubernetes deployed by Rancher Server API Controllers Parent Kubernetes Create Resource Watch Resource Create Resource Watch Resource The simple custom controllers to extend k8s ➢ Watch only resources @Child K8s ➢ Create/update/delete related resources ➢ These controllers are more like enhancements of k8s features themselves 38 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.1 Overview
  • 39. 2.2.2. Important Management Controllers Controllers 39 API Controllers Management Controllers Cluster(User) Controllers Workload Controllers Server API Create Resource Watch Resource Parent Kubernetes 2. Deep dive Rancher Server 2.2. Rancher Controllers
  • 40. Cluster Controller (pkg/controllers/management) Overview ➢ Deploys the actual Kubernetes cluster via Kontainer-Engine ➢ After the child Kubernetes is deployed, does the following ○ Makes sure cluster-agent/node-agent run on the child Kubernetes ○ Makes sure the Cluster(User) Controllers start against the child Kubernetes ➢ If the attributes of the cluster have changed, updates the cluster via Kontainer-Engine ➢ Updates the status of the cluster based on the Node’s (nodes.management.cattle.io) information, which is synced with the Node’s (core.v1.nodes) information 40 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
  • 41. Cluster Controller Implement (pkg/controllers/management) Parent k8s Server Cluster Controller (one of management controllers) handlers lifecycles cluster-provisioner-controller cluster-agent-controller cluster-scoped-gc cluster-deploy cluster-stats CRD Cluster A Informer Child k8s Cluster A watch Execute deploy Node Agent Cluster Agent deploy Cluster(User) Controllers Alerts ingress ... Run Cluster Controllers for Cluster A CRD Node A CRD Node B Update Cluster Collect status 41 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
  • 42. Cluster Controller Implement (pkg/controllers/management) ➢ [lifecycle] cluster-provisioner-controller (clusterprovisioner/provisioner.go) ○ Create: Initializes the condition object of the cluster. Calls the Create RPC of the proper driver in Kontainer-Engine (rke, gke…) ○ Update: Calls the Update RPC of the proper driver in Kontainer-Engine if RancherKubernetesEngineConfig (generated by translating each driver config like rancherKubernetesEngineConfig) has changed from before ○ Remove: Calls the Remove RPC of the proper driver in Kontainer-Engine ➢ [lifecycle] cluster-agent-controller (usercontrollers/controller.go) ○ Update: Re/starts all controllers related to UserContext, UserOnlyContext ○ Remove: Stops all controllers related to UserContext, UserOnlyContext ➢ [lifecycle] cluster-scoped-gc (clustergc/cluster_scoped_gc.go) ○ Remove: Removes the cluster-name finalizer from the objects the cluster depends on (roletemplate, project…) ➢ [handler] cluster-deploy (clusterdeploy/clusterdeploy.go) ○ Has the responsibility to run cluster-agent/node-agent on the child Kubernetes ➢ [handler] cluster-stats (clusterstats/statsaggregator.go) ○ Updates the cluster status by collecting every machine’s status, like the number of active pods and the memory consumed Watches clusters.management.cattle.io (CRD) resources and triggers the following 42 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
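The [lifecycle] handlers above all share the same Create/Updated/Remove hook shape. A minimal sketch of that shape, assuming simplified types: the `Lifecycle` interface, `Cluster` struct, and toy `provisioner` here are illustrative, while the real Norman lifecycle interface works on typed CRD objects and the real controller calls Kontainer-Engine RPCs inside the hooks.

```go
package main

import "fmt"

// Cluster is a stand-in for the clusters.management.cattle.io CRD object.
type Cluster struct {
	Name  string
	Phase string
}

// Lifecycle mirrors the Create/Updated/Remove hooks that lifecycle
// handlers such as cluster-provisioner-controller implement.
type Lifecycle interface {
	Create(c *Cluster) error
	Updated(c *Cluster) error
	Remove(c *Cluster) error
}

// provisioner is a toy lifecycle: Create marks the cluster provisioned,
// Remove clears it.
type provisioner struct{}

func (provisioner) Create(c *Cluster) error  { c.Phase = "provisioned"; return nil }
func (provisioner) Updated(c *Cluster) error { return nil }
func (provisioner) Remove(c *Cluster) error  { c.Phase = "removed"; return nil }

func main() {
	var l Lifecycle = provisioner{}
	c := &Cluster{Name: "demo"}
	l.Create(c)
	fmt.Println(c.Phase)
}
```

The split between [lifecycle] and [handler] then becomes clear: lifecycle hooks get distinct create/update/remove callbacks (with finalizer handling), while plain handlers see every change through one function.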
  • 43. Node Controller (pkg/controllers/management) Overview ➢ Deploys a VM via the docker-machine command if needed ➢ Runs the rancher node-agent after the VM is deployed ➢ The rancher node-agent establishes a websocket session and registers its own node to the cluster ○ This triggers the Cluster Controller to provision the Kubernetes-related processes 43 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
  • 44. Node Controller Implement (pkg/controllers/management) Parent k8s CRD Node A CRD Node B Server Node Controller (one of management controllers) handlers lifecycles node-controller cluster-provisioner-controller cluster-stats nodepool-provisioner Informer watch Execute VM Node Agent Management Controllers Cluster Controller NodePool Controller Just trigger handlers Run Node Agent Create VM If doesn’t exist docker-machine trigger handlers Create VM 44 Call wss://<server>/v3/connect/register to register the node into a specific cluster 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
  • 45. Node Controller Implement (pkg/controllers/management) ➢ [lifecycle] node-controller (node/controller.go) ○ Create: Initializes the nodeConfig object and stores it as a secret resource in k8s ○ Update: If the node doesn’t have the condition “Provisioned is True”, tries to create a new VM on a public/private cloud via the docker-machine command ○ Remove: Just deletes the nodeConfig (secret) in k8s ➢ [handler] cluster-provisioner-controller (clusterprovisioner/provisioner.go) ○ Enqueues a job to the Cluster Controller ➢ [handler] cluster-stats (clusterstats/statsaggregator.go) ○ Enqueues a job to the Cluster Controller ➢ [handler] nodepool-provisioner (nodepool/nodepool.go) ○ Enqueues a job to the NodePool Controller Watches nodes.management.cattle.io (CRD) resources and triggers the following 45 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
  • 46. Node Pool Controller (pkg/controllers/management) Overview ➢ This controller allows the user to create multiple nodes as a group ➢ If we specify a group having 3 nodes, this controller automatically creates a Node CRD for each, and the Node Controller does the actual provisioning as usual 46 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
  • 47. Node Pool Controller Implement (pkg/controllers/management) Parent k8s CRD(Kind: Node) Node A CRD(Kind: NodePool) Nodes For Cluster A CRD Node B Server Nodes For Cluster A - Node A - Role: Etcd - Node B - Role: Control - Node C - Role: Worker Note: just an image of what the nodepool defines NodePool Controller lifecycles nodepool-provisioner Informer CRD Node C Create If missing Check all Nodes exist Watch Execute Confirmed exist 47 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
  • 48. Node Pool Controller Implement (pkg/controllers/management) ➢ [lifecycle] nodepool-provisioner (nodepool/nodepool.go) ○ Update: Checks whether all node CRDs in the nodepool have been created; if not, creates the missing node CRDs in k8s ○ Remove: Deletes the node CRDs described in the deleted nodepool in k8s Watches nodepools.management.cattle.io (CRD) resources and triggers the following 48 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.2. Management Controllers
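The nodepool-provisioner Update hook is a classic desired-vs-actual reconciliation. A sketch under simplified assumptions: the function `reconcileNodePool` and its string/map types are illustrative; the real controller compares NodePool specs against node CRD objects.

```go
package main

import "fmt"

// reconcileNodePool compares the desired node names of a pool with the
// node CRDs that already exist, and returns the ones still to be created.
func reconcileNodePool(desired []string, existing map[string]bool) []string {
	var missing []string
	for _, name := range desired {
		if !existing[name] {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	desired := []string{"node-a", "node-b", "node-c"}
	existing := map[string]bool{"node-a": true}
	fmt.Println(reconcileNodePool(desired, existing))
}
```

Creating the missing node CRDs (rather than the VMs directly) is what hands the heavy lifting back to the Node Controller, which provisions each node as usual.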
  • 49. 2.2.3. Important User Controllers 49 API Controllers Management Controllers Cluster(User) Controllers Workload Controllers Server API Controllers Parent Kubernetes Create Resource Watch Resource 2. Deep dive Rancher Server 2.2. Rancher Controllers
  • 50. Alerts Controller (pkg/controllers/user/alerts) Overview ➢ Users can define alerts against the child k8s to check ○ Events, resource status, node mem/cpu… ➢ Users can define notifiers like slack, email, webhook ○ These notifier definitions are used when an alert fires ➢ Prometheus Alertmanager is deployed @Child K8s for the part that sends notifications ○ You can easily support any other notification system, as long as Prometheus supports it 50 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 51. Alerts Controller Implement (pkg/controllers/user/alerts) Parent k8s CRD ClusterAlerts CRD Node Server ClusterAlerts Controller handlers cluster-config-syncer cluster-alert-deployer Informer watch Execute Watchers EventWatcher NodeWatcher ProjectAlerts Controller Child k8s alertmanager Config.yaml notification.tmpl secret mount Pod Maintain config files According to ClusterAlerts CRD Deploy alertmanager if need …. Check Node violate ClusterAlerts Check Event violate ClusterAlerts Send Notify Send Alert via email, slack Same as ClusterAlerts 51 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 52. Alerts Controller Implement(1/2) (pkg/controllers/user/alerts) ➢ Watches the clusteralerts.management.cattle.io resource @Parent K8s ○ [handler] cluster-alert-deployer ■ Makes sure the alertmanager (prom/alertmanager) is running with the alertmanager helper (rancher/alertmanager-helper) on the child k8s when notifier and alert resources are created ● alertmanager-helper just watches the config and reloads alertmanager via its API ● alertmanager refers to the config file, which is a secret resource, via a secret mount, and has the responsibility to send alerts to the notifiers (slack, webhook…) ○ [handler] cluster-config-syncer ■ Maintains the config file in the secret resource according to changes of ClusterAlerts ➢ Watches the projectalerts.management.cattle.io resource @Parent K8s ○ [handler] project-alert-deployer ■ Same as “cluster-alert-deployer” ○ [handler] project-alert-syncer ■ Same as “cluster-config-syncer” 52 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 53. Alerts Controller Implement(2/2) (pkg/controllers/user/alerts) ➢ Watches the notifiers.management.cattle.io resource @Parent K8s ○ [handler] notifier-config-syncer ■ Same as “cluster-config-syncer” ➢ Runs alert evaluation threads (called watchers) for Events/Pod/Node… ○ One thread (watcher) per resource type ○ Periodically evaluates the conditions the alerts describe against the current status; if the current status breaks an alert’s condition, calls http://<alert-manager service ip>/api/alerts via the websocket session to the cluster-agent to let alertmanager know a new alert fired 53 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 54. NodeEndpointsController (pkg/controllers/user/endpoints) Overview ➢ In Kubernetes, it’s not easy to tell on the spot which address, port, and protocol are exposed outside the cluster ➢ This NodeEndpointsController checks all resources possibly exposed to the outside and stores all exposed address/port/protocol information in an annotation $kubectl get nodes rancher003 -o=custom-columns=NAME:.metadata.annotations NAME map[field.cattle.io/publicEndpoints:[ {"nodeName":"local:machine-z6qww","addresses":["rancher003"],"port":30080,"protocol":"TCP", "serviceName":"cattle-system:cattle-service","allNodes":true}, {"nodeName":"local:machine-z6qww","addresses":["rancher003"],"port":30443,"protocol":"TCP", "serviceName":"cattle-system:cattle-service","allNodes":true} ] 54 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 55. NodeEndpointsController Implement (pkg/controllers/user/endpoints) Parent k8s CRD(Kind: Node) Node A Server core.v1.Node Controller handlers nodeEndpointController Informer Watch Execute Child k8s Core.v1.Node Node A Service A Lookup Node Name Corresponding to core.v1.node Node A Get All Service exposed outside Update field.cattle.io/publicEndpoints to store all exposed endpoint 55 1. 2. 3. 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 56. NodeEndpointsController Implement (pkg/controllers/user/endpoints) ➢ Watches the core.v1.nodes resource @Child k8s ○ [handler] nodesEndpointController ■ Checks all endpoints exposed outside the child k8s cluster by a nodePort, an LB service, or a hostPort pod, and stores these endpoints in the “field.cattle.io/publicEndpoints” annotation on the core.v1.nodes resource in the child k8s 56 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 57. EventSyncer (pkg/controllers/user/eventssyncer) Overview ➢ This watches the Event resource @Child K8s and stores all events in the parent K8s as ClusterEvent CRDs ○ It doesn’t update the event status/message and always creates a new event ➢ A ClusterEvent CRD is deleted 24 hours after creation by /pkg/controllers/management/clusterevents/clustereventscleanup.go ➢ This controller is not enabled by default now because of a scaling issue ○ https://github.com/rancher/rancher/issues/11771 ○ If you want to enable it, it’s better to modify the logic to store events in an external database 57 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 58. EventSyncer Implement (pkg/controllers/user/eventssyncer) Parent k8s CRD ClusterEvents Server core.v1.Event Controller handlers events-syncer Informer Execute Child k8s Event Watch Translate event into clusterevents and Create clustereventscleanup (pkg/controllers/management/) Goroutine Watch Delete clusterevent after 24 hours passed from create 58 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 59. EventSyncer Implement (pkg/controllers/user/eventssyncer) ➢ Watches the core.Event resource @Child K8s ○ [handler] events-syncer ■ Creates/updates ClusterEvents corresponding to the event objects in the child k8s. These ClusterEvent CRD resources are deleted after 24 hours by pkg/controllers/management/clusterevents/clustereventscleanup.go ○ This controller is not enabled by default now because of a scaling issue ■ https://github.com/rancher/rancher/issues/11771 59 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 60. HealthSyncer (pkg/controllers/user/healthsyncer) Overview ➢ Starts a thread to check the component status of the child k8s periodically ○ Calls <Child K8s Endpoint>/v1/componentstatuses periodically ○ Stores the component status in Status.ComponentStatuses of the v3.Cluster object @Parent K8s ○ If this controller fails to get the component status from the child k8s, it stops all UserControllers and periodically checks whether the cluster has come back alive 60 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 61. HealthSyncer Implement (pkg/controllers/user/healthsyncer) Parent k8s CRD Cluster A Server healthSyncer Goroutine Child k8s Cluster(User) Controllers ClusterAlert Controller Notifier Controller ... GET /api/v1/componentstatuses Stop if failed to get status Re-start If status got alive Update Status.ComponentStatuses 61 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 62. Cluster(Project)Logging Controller (pkg/controller/user/logging/) Overview ➢ Users can specify where all container logs should be sent ➢ The logs are sent by fluentd, which is deployed by this controller ➢ If the user specifies the embedded Elasticsearch, Elasticsearch is deployed onto the child K8s as well 62 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 63. Cluster(Project)Logging Controller Implement (pkg/controller/user/logging/) Parent k8s CRD ClusterLogging Server Child k8s ClusterLogging Controller lifecycle cluster-logging-controller Informer Execute ProjectLogging Controller Almost same as ClusterLogging Controller Watch Daemonset cluster.conf ConfigMap project.conf ConfigMap /var/lib/docker/containers/ /var/log/containers/ /var/log/pods /var/lib/rancher/rke/log HostPath Mount Mount Mount Deploy Update Out of Scope Send logs 63 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
  • 64. Cluster(Project)Logging Controller Implement (pkg/controller/user/logging/) ➢ Watches the clusterloggings.cattle.io resource @Parent K8s ○ [lifecycle] cluster-logging-controller ■ Create: Deploys the fluentd daemonset and initial config ● Creates 2 ConfigMaps @Child K8s: one to store cluster.conf, the other to store project.conf ● Deploys a daemonset @Child K8s using the rancher/fluentd docker image. This daemonset has the following hostPath volumes ○ /var/lib/docker/containers (created by docker) ○ /var/log/containers, /var/log/pods (created by k8s) ○ /var/lib/rancher/rke/log (created by rke) ■ Updated: Updates the ConfigMap (cluster.conf) according to the spec of the clusterlogging resource. The configuration template can be seen in pkg/controllers/user/logging/generator/cluster_template.go ■ Remove: Deletes the namespace including all logging-related resources The ProjectLogging Controller has very similar behaviour 64 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 65. User Node Remove (pkg/controller/user/noderemove/) Overview ➢ This controller is in charge of deleting the core.v1.Node when the corresponding nodes.management.cattle.io resource is deleted Implementation ➢ Watch nodes.management.cattle.io @Parent K8s ○ [lifecycle] user-node-remove ■ Remove: delete the core.Node resource @Child K8s corresponding to the Node CRD 65 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 66. NodeSyncer (pkg/controller/user/nodesyncer/) Overview ➢ This controller is in charge of updating the nodes.management.cattle.io CRD resource so that it has the same status as the core.v1.Node core.v1.Node@Child K8s Nodes.management.cattle.io CRD@Parent K8s Spec Spec.InternalNodeSpec Status Status.InternalNodeStatus Annotations Status.NodeAnnotation Labels Status.NodeLabels Name Status.Nodename Translate 66 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
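The translation table above can be sketched as a plain field-copy function. A minimal sketch with simplified stand-in structs (the real types are core.v1.Node from k8s.io/api and the Node CRD from rancher/types; the string placeholders for Spec/Status are assumptions):

```go
package main

import "fmt"

// CoreNode is a simplified stand-in for core.v1.Node.
type CoreNode struct {
	Name        string
	Labels      map[string]string
	Annotations map[string]string
	Spec        string // placeholder for core.v1.NodeSpec
	Status      string // placeholder for core.v1.NodeStatus
}

// NodeCRD is a simplified stand-in for nodes.management.cattle.io.
type NodeCRD struct {
	Spec struct {
		InternalNodeSpec string
	}
	Status struct {
		InternalNodeStatus string
		NodeAnnotations    map[string]string
		NodeLabels         map[string]string
		NodeName           string
	}
}

// translate copies the fields exactly as the table on this slide
// describes: Spec -> Spec.InternalNodeSpec, Status -> Status.InternalNodeStatus,
// and Annotations/Labels/Name into the CRD's Status.
func translate(n CoreNode) NodeCRD {
	var crd NodeCRD
	crd.Spec.InternalNodeSpec = n.Spec
	crd.Status.InternalNodeStatus = n.Status
	crd.Status.NodeAnnotations = n.Annotations
	crd.Status.NodeLabels = n.Labels
	crd.Status.NodeName = n.Name
	return crd
}

func main() {
	crd := translate(CoreNode{Name: "node1", Labels: map[string]string{"role": "worker"}})
	fmt.Println(crd.Status.NodeName, crd.Status.NodeLabels["role"])
}
```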
• 67. NodeSyncer Implement (pkg/controller/user/nodesyncer/) Parent k8s CRD Node A Server Child k8s Core.v1 Node A core.v1.Node Controller handlers Informer nodeSyncer management.cattle.io.v3.Node Controller Informer handlers machinesSyncer machinesLabelSyncer Watch Execute Execute Watch Execute Get Information Store information into Node CRD Update Label, Annotation Check Spec. Annotation and Label 67 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 68. NodeSyncer Implement(1/2) (pkg/controller/user/nodesyncer/) ➢ Watch core.Node @Child K8s ○ [handler] nodeSyncer ■ Just enqueue a job to the NodeController ➢ Watch nodes.management.cattle.io @Parent K8s ○ [handler] machinesSyncer ■ Check all core.Node resources @Child K8s. If the nodes.management.cattle.io resource corresponding to a core.Node is missing @Parent K8s, create a new nodes.management.cattle.io. If a nodes.management.cattle.io resource corresponds to no core.Node, try to delete that nodes.management.cattle.io resource ■ Update the nodes.management.cattle.io resource ● Spec (core.Node) -> Spec.InternalNodeSpec (nodes.management.cattle.io) ● Status (core.Node) -> Status.InternalNodeStatus (nodes.management.cattle.io) ● Calculate requested resource by pod -> Status.Request (nodes.manageme…) ● Annotations (core.Node) -> Status.NodeAnnotation (nodes.manageme… ● Labels (core.Node) -> Status.NodeLabels (node.manageme… ● Name (core.Node) -> Status.NodeName (node.manageme… 68 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 69. NodeSyncer Implement(2/2) (pkg/controller/user/nodesyncer/) ➢ Watch nodes.management.cattle.io @Parent K8s ○ [handler] machinesLabelSyncer ■ If a nodes.management.cattle.io resource @Parent K8s has Spec.DesiredNodeLabels or Spec.DesiredNodeAnnotation, try to make sure the core.Node @Child K8s has exactly the same annotations and labels 69 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
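The reconciliation behind machinesLabelSyncer can be pictured as a map merge that reports whether an update of the child Node is needed. A minimal sketch under the assumption that desired labels are overlaid onto the current ones (the real code works on core.v1.Node objects and handles annotations the same way):

```go
package main

import "fmt"

// applyDesired overlays the desired key/values onto the node's current
// label map and reports whether anything changed, i.e. whether the
// controller would need to update the core.Node in the child cluster.
func applyDesired(current, desired map[string]string) (map[string]string, bool) {
	changed := false
	out := map[string]string{}
	for k, v := range current {
		out[k] = v
	}
	for k, v := range desired {
		if out[k] != v {
			out[k] = v
			changed = true
		}
	}
	return out, changed
}

func main() {
	labels, changed := applyDesired(
		map[string]string{"zone": "a"},
		map[string]string{"rancher.io/pool": "workers"},
	)
	fmt.Println(changed, labels["rancher.io/pool"])
}
```

Returning a "changed" flag keeps the handler idempotent: if the child Node already matches, no update call is issued and the handler can run on every resync without churn.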
• 70. RBAC Controllers (pkg/controller/user/rbac) RoleTemplate Overview ➢ This controller allows the user to create a roletemplate, which corresponds to a clusterrole in K8s ➢ A roletemplate can be assigned to a user in Rancher, and this information propagates to all clusters so that every cluster has the same RBAC information 70 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 71. RBAC Controllers (pkg/controller/user/rbac) You can choose a roletemplate Choose the user to assign the roletemplate to Adding a Cluster Member = Assigning a specific roletemplate to a user 71 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 72. RBAC Controllers Implement (pkg/controller/user/rbac) Parent k8s CRD ClusterRoleTemplateBinding Server Child k8s ClusterRole RoleTemplate Controller Informer lifecycle cluster-roletemplate-sync ClusterRoleTemplateBinding Controller lifecycle cluster-crtb-sync Informer CRD RoleTemplate Watch Watch Execute Execute ClusterRoleBindings Check if there is a binding resource referring to the RoleTemplate Update Create if needed Create if needed Get Get Get 72 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 73. RBAC: RoleTemplate Controller Implement (pkg/controller/user/rbac) ➢ Watch roletemplate @Parent K8s ○ [lifecycle] cluster-roletemplate-sync ■ Update: ● Check if there is a projectroletemplatebinding referring to the roletemplate ● Check if there is a clusterroletemplatebinding referring to the roletemplate ● If there is, make sure the roletemplate’s rules are the same as those of the clusterrole corresponding to the roletemplate @Child K8s ■ Remove: ● Remove the clusterrole @Child K8s corresponding to the deleted roletemplate 73 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 74. RBAC: ClusterRoleTemplateBinding Controller Implement (pkg/controller/user/rbac) ➢ Watch clusterroletemplatebindings.management.cattle.io @Parent K8s ○ [lifecycle] cluster-crtb-sync ■ Create: Make sure of the following ● There is a clusterrole @Child K8s corresponding to the roletemplate the clusterroletemplatebinding refers to ● There is a clusterrolebinding @Child K8s corresponding to the clusterroletemplatebinding ■ Update: same as Create ■ Remove: Make sure the clusterrolebinding @Child K8s corresponding to the clusterroletemplatebinding is deleted 74 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
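"Create" and "Update" both reducing to "make sure X exists" is the usual idempotent ensure pattern. A minimal sketch of that convergence logic, with a toy map standing in for the child cluster's API (the store type and key names are invented for illustration):

```go
package main

import "fmt"

// store is a toy stand-in for the child cluster's resources,
// keyed by a "kind/name" string mapped to the desired content.
type store map[string]string

// ensure converges the named resource to the wanted state and reports
// whether it had to create or overwrite anything. Running it again with
// the same arguments is a no-op, which is what makes sharing the logic
// between the Create and Update lifecycle callbacks safe.
func (s store) ensure(name, want string) (changed bool) {
	if got, ok := s[name]; ok && got == want {
		return false // already in the desired state, nothing to do
	}
	s[name] = want // create or overwrite to converge
	return true
}

func main() {
	childK8s := store{}
	// First sync creates the clusterrole for the roletemplate.
	fmt.Println(childK8s.ensure("clusterrole/edit-template", "rules-v1"))
	// Re-running the same sync is a no-op.
	fmt.Println(childK8s.ensure("clusterrole/edit-template", "rules-v1"))
}
```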
• 75. Secret Controller (pkg/controller/user/secret) Overview ➢ The user can create a secret resource for all namespaces thanks to this controller ➢ If the user creates a secret for all namespaces, this controller watches the namespace resource @Child K8s and creates the secret in each new namespace 75 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 76. Secret Controller Implement (pkg/controller/user/secret) Parent k8s Secret Server Child k8s Namespace Namespace Controller handlers Informer SecretsController Secret Controller Informer handlers secretsController Watch Execute Watch Execute Create Check if the secret has the field.cattle.io/projectId annotation Secret Create Get all secrets in the project Check if the namespace has the field.cattle.io/projectId annotation 76 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
• 77. Secret Controller Implement (pkg/controller/user/secret) ➢ Watch core.Namespaces @Child K8s ○ [handler] secretsController ■ If the namespace has the annotation (field.cattle.io/projectId), create all secrets under the <project-name> namespace @Parent K8s into the Child K8s ➢ Watch core.Secret @Parent K8s ○ [lifecycle] secretsController ■ Create: If the secret has the annotation (field.cattle.io/projectId), create the secret in the specified namespace in the Child K8s ■ Update: Same as Create ■ Remove: Delete the secret in the Child K8s as well 77 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.3. User Controllers
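Both watch paths hinge on the same annotation check. A minimal sketch of extracting the project from field.cattle.io/projectId; the "&lt;cluster-id&gt;:&lt;project-name&gt;" value format is an assumption of this sketch, not something the slides state:

```go
package main

import (
	"fmt"
	"strings"
)

// projectFromAnnotations returns the project a namespace or secret
// belongs to, based on the field.cattle.io/projectId annotation.
// The "<cluster-id>:<project-name>" value layout is assumed here.
func projectFromAnnotations(ann map[string]string) (string, bool) {
	v, ok := ann["field.cattle.io/projectId"]
	if !ok {
		return "", false // not managed by the secret controller
	}
	parts := strings.SplitN(v, ":", 2)
	if len(parts) != 2 {
		return "", false
	}
	return parts[1], true
}

func main() {
	project, ok := projectFromAnnotations(map[string]string{
		"field.cattle.io/projectId": "c-abc123:p-xyz789",
	})
	// The controller would now copy the project's secrets into the namespace.
	fmt.Println(ok, project)
}
```

Objects without the annotation are simply skipped, which is how the controller avoids touching namespaces and secrets that Rancher does not manage.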
  • 78. 2.2.4. Important Workload Controllers 78 API Controllers Management Controllers Cluster(User) Controllers Workload Controllers Server API Controllers Parent Kubernetes Create Resource Watch Resource 2. Deep dive Rancher Server 2.2. Rancher Controllers
• 79. ExternalIPServiceController (pkg/controllers/user/externalservice) Overview ➢ A Headless Service (Cluster-IP: None) is usually used to balance traffic between Pods selected by a selector ➢ This controller extends the Headless Service to support external IPs, which in plain k8s have to be maintained manually An Endpoint having all the IP addresses as a subset is created 79 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
• 80. ExternalIPServiceController Implement (pkg/controllers/user/externalservice) Parent k8s Server Child k8s Service Service Controller Informer handlers externalIPServiceController Watch Execute Endpoint Update the subset so as to have all IP addresses described in the annotation “field.cattle.io/ipAddresses” Check the “field.cattle.io/ipAddresses” annotation 80 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
• 81. ExternalIPServiceController Implement (pkg/controllers/user/externalservice) ➢ Watch core.v1.Service resources @Child k8s ○ [handler] externalIPServiceController ■ Check the “field.cattle.io/ipAddresses” annotation on the service resource and create an endpoint object having the IP addresses described in the annotation 81 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
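The handler's core is turning the annotation into the address list for the Endpoint subset. A minimal sketch; the JSON-array value format (e.g. `["10.0.0.1","10.0.0.2"]`) is an assumption for illustration, not confirmed by the slides:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// endpointAddresses parses the field.cattle.io/ipAddresses annotation
// into the address list the controller would put into the Endpoint
// subset. A JSON array value is assumed here.
func endpointAddresses(ann map[string]string) ([]string, error) {
	raw, ok := ann["field.cattle.io/ipAddresses"]
	if !ok {
		return nil, nil // not an external-IP service, nothing to do
	}
	var ips []string
	if err := json.Unmarshal([]byte(raw), &ips); err != nil {
		return nil, err
	}
	return ips, nil
}

func main() {
	ips, err := endpointAddresses(map[string]string{
		"field.cattle.io/ipAddresses": `["10.0.0.1","10.0.0.2"]`,
	})
	fmt.Println(err, ips)
}
```

Since the Endpoint carries the same name as the headless Service, kube-dns/CoreDNS resolves the service name straight to these external addresses.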
• 82. Rancher Ingress Controller (pkg/controllers/user/ingress) Overview ➢ A Kubernetes Ingress rule allows you to specify only a service as a backend ➢ But Rancher allows you to use Deployments/Pods as a backend for an Ingress rule ○ By letting this controller automatically create services for these Deployments/Pods Workload is actually Pod, Deployment... 82 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
• 83. Rancher Ingress Controller Implement (pkg/controllers/user/ingress) Parent k8s Server Child k8s Ingress Ingress Controller Informer handlers ingressWorkloadController Watch Execute Check that the Ingress definition was created by Rancher via the annotation “field.cattle.io.ingress/state”, which includes info about the target workload Service Create Service with NodePort having selector workload_ingress_*** Workload Controller (pkg/controllers/user/workload/workload_common.go) Informer handlers syncDeployments syncReplicasets …. Deployments Pod ingressEndpointController Add workload label for ingress selector Watch Execute Get info to identify pod Execute 83 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
• 84. Rancher Ingress Controller Implement (pkg/controllers/user/ingress) ➢ Watch extensions.ingress resources @Child K8s ○ [handler] ingressWorkloadController ■ Create a service for the deployment (seen as a workload in Rancher) specified by the ingress resource ● Only ingress resources created by the Rancher API (/v3/project/<cluster-id>/<cluster-id>:<project-name>/ingress) are targets this controller deals with ● We can tell whether an ingress resource was created by the Rancher API or not via the annotation “field.cattle.io.ingress/state” 84 2. Deep dive Rancher Server 2.2. Rancher Controllers 2.2.4. Workload Controllers
• 85. 2.2.5. Controllers We didn’t cover this time Management Controllers: * pkg/controllers/management/auth * pkg/controllers/management/catalog * pkg/controllers/management/compose * pkg/controllers/management/nodedriver * pkg/controllers/management/podsecuritypolicy Cluster(User) Controllers: * pkg/controllers/user/helm * pkg/controllers/user/namespacecompose * pkg/controllers/user/networkpolicy * pkg/controllers/user/nslabel * pkg/controllers/user/pipeline * pkg/controllers/user/usercompose Workload Controllers: * pkg/controllers/user/dnsrecord * pkg/controllers/user/targetworkloadservice * pkg/controllers/user/workload API Controllers: * pkg/api/controllers/dynamicschema * pkg/api/controllers/settings * pkg/api/controllers/whitelistproxy 85 2. Deep dive Rancher Server 2.2. Rancher Controllers
  • 86. 1. Rancher 2.0 Overview 2. Deep dive Rancher Server 2.1. Rancher API 2.2. Rancher Controllers 2.3. Controller/Context 3. Dependent Library 3.1. Norman Framework 3.2. Kontainer-Engine 3.3. RKE 86 Agenda
• 87. Context represents Controller groups Contexts ScaledContext ManagementContext UserContext UserOnlyContext Credential For parent Credential For parent Credential For child Credential For child ➢ Each Context has a group of controllers to run ➢ Each Context has a kubernetes config for a certain environment ○ ScaledContext for Parent k8s ○ ManagementContext for Parent k8s ○ UserContext for Child k8s and Parent k8s ○ UserOnlyContext for Child k8s ➢ UserContext and UserOnlyContext are created for each Child k8s ➢ All contexts except for UserOnlyContext have a dialer to the “Node/Cluster Agent’s TCP Proxy Server” and use it to access the Child k8s UserContext UserOnlyContext Credential For child Credential For child 87 2. Deep dive Rancher Server 2.3. Controller/Context
• 88. Context/Controllers Relation Controllers API Controllers Management Controllers Cluster Controllers Workload Controllers Contexts ScaledContext ManagementContext UserContext UserOnlyContext Clientconfig For parent Clientconfig For parent Clientconfig For child Clientconfig For child They watch API-related CRDs in the parent k8s and perform actions such as replacing certificates. They are started when the rancher server starts. All controllers can be seen in pkg/api/server/server.go. They watch deployed cluster/node-related CRDs in the parent k8s and perform actions such as actually provisioning clusters and adding nodes. They are started when the rancher server starts. All controllers can be seen in pkg/controllers/management/controller.go After a cluster is deployed, these controllers have the responsibility to monitor and sync data for the cluster and its nodes. They watch many objects in both the parent and the child k8s cluster. They are re/started when a cluster change is detected by the usercontroller (one of the management controllers), which means there is an active controller set for each cluster. All controllers can be seen in pkg/controllers/user/controllers.go Cluster Controllers (User Controllers) Workload Controllers UserContext UserOnlyContext Clientconfig For child Clientconfig For child They just watch resources in the child k8s. They don’t require access to the parent k8s and aim at extending what already exists in the child k8s. The trigger to start them is the same as for the Cluster (User) controllers. All controllers can be seen in pkg/controllers/user/controllers.go 88 2. Deep dive Rancher Server 2.3. Controller/Context
• 89. ManagementContext(Context) Start How/When does Rancher start controllers? ScaledContext (Context) Clientconfig For parent Start ➢ A Context has the responsibility to run controllers ➢ Context.Start is the function that starts the controllers the context is in charge of Clientconfig For parent Is it the same? vendor/github.com/rancher/types/config/context.go Execute Start Execute Start 89 2. Deep dive Rancher Server 2.3. Controller/Context
  • 90. ManagementClient B ManagementContext(Context) Start Register Concept ScaledContext (Context) Clientconfig For parent Start Clientconfig For parent ManagementClient A starters nodeController … Management Controllers nodeController userController … … Register Start only controllers in starters Execute Start Execute Start …ControllerA different 90 2. Deep dive Rancher Server 2.3. Controller/Context
• 91. Flow to start Controllers Actual Example: app/app.go Generate Context Register Controllers Context Start (Start Controllers) ➢ A context is always created when starting controllers in Rancher ➢ The management and scaled controllers are started when the rancher server starts. See the code below 91 2. Deep dive Rancher Server 2.3. Controller/Context
  • 92. 1. Rancher 2.0 Overview 2. Deep dive Rancher Server 2.1. Rancher API 2.2. Rancher Controllers 2.3. Controller/Context 3. Dependent Library 3.1. Norman Framework 3.2. Kontainer-Engine 3.3. RKE 92 Agenda
• 93. 3.1. Norman (https://github.com/rancher/norman) ➢ Rancher actively uses this framework ➢ Provides an API framework working with Kubernetes as backend storage ○ All data will be stored in k8s as CRD resources (technically we can configure other storage as well) ➢ Provides an easy way to build controllers on Kubernetes (a wrapper for client-go) Something App Norman API Part Controller Part Create Resource as CRD CRD (Test) Resource A CRD (Test) Resource B Watch the changes Return whether the app stored resource information into k8s or not Do whatever you want asynchronously 93 3. Dependent Library
• 94. 3.1.1. Norman API Part Something App Norman API Part Controller Part CRD (Test) Resource A CRD (Test) Resource B Create Resource as CRD Return whether the app stored resource information into k8s or not 94 3. Dependent Library 3.1 Norman Framework
• 95. API Server ➢ This is a simple API server that provides a ServeHTTP function ➢ The API server needs API Schemas (a Norman concept) holding the actual logic ➢ We can register multiple API Schemas ➢ The API server looks up the proper schema from the requested URL and HTTP method 95 3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
• 96. ➢ Each API schema can define ○ How/Where the data needs to be stored ■ Rancher uses CRDs in k8s as the datastore ○ The actual logic for each action (CREATE, DELETE, UPDATE, LIST) ○ Which HTTP methods this schema can accept API Schema https://github.com/rancher/norman/blob/master/types/types.go 96 3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
• 97. API Schema Generation ➢ One struct representing a Rancher resource can be used for… ○ Generating the API Schema used by norman.APIServer (norman/api/server.go) ■ https://github.com/rancher/norman/blob/aecae32b4ae6b73b9945cdedef5a5b0dafa11973/types/reflection.go#L75 ○ Generating the CRD definition (technically from the API Schema) ■ https://github.com/rancher/norman/blob/aecae32b4ae6b73b9945cdedef5a5b0dafa11973/store/crd/init.go#L122 rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/cluster_types.go API Schema For Cluster CRD For Cluster Generate Use as Store 97 3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
• 98. How does the APIServer look up a schema? Norman API Server ServeHTTP /v3/ /v3/project Schema.Version.Path cluster node reflect.TypeOf(<type A>).Name() Node Schema A Schema POST /v3/cluster CreateHandler DeleteHandler ….. Cluster Schema 2. Check if the method is allowed or not 3. Execute the proper Handler according to the HTTP method and URL parameters CollectionMethods: GET POST 1. Choose the correct schema 98 3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
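The three numbered steps in the diagram can be sketched as a tiny dispatcher: pick the schema by resource name, check the method against CollectionMethods, then run the matching handler. This is a simplified sketch; Norman's real Schema type (norman/types) carries many more fields, and `demoServer` and its handlers are invented for illustration:

```go
package main

import (
	"fmt"
	"net/http"
)

// Schema is a toy subset of a Norman API schema.
type Schema struct {
	CollectionMethods []string
	CreateHandler     func() string
	ListHandler       func() string
}

type APIServer struct {
	schemas map[string]Schema // keyed by resource name, e.g. "cluster"
}

func (s *APIServer) dispatch(method, resource string) (string, int) {
	schema, ok := s.schemas[resource] // 1. choose the correct schema
	if !ok {
		return "", http.StatusNotFound
	}
	allowed := false // 2. check if the method is allowed
	for _, m := range schema.CollectionMethods {
		if m == method {
			allowed = true
		}
	}
	if !allowed {
		return "", http.StatusMethodNotAllowed
	}
	switch method { // 3. execute the proper handler
	case http.MethodPost:
		return schema.CreateHandler(), http.StatusCreated
	default:
		return schema.ListHandler(), http.StatusOK
	}
}

// demoServer registers a single "cluster" schema, mirroring the slide.
func demoServer() *APIServer {
	return &APIServer{schemas: map[string]Schema{
		"cluster": {
			CollectionMethods: []string{"GET", "POST"},
			CreateHandler:     func() string { return "cluster created" },
			ListHandler:       func() string { return "cluster list" },
		},
	}}
}

func main() {
	srv := demoServer()
	body, code := srv.dispatch(http.MethodPost, "cluster")
	fmt.Println(code, body)
}
```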
• 99. Rancher Uses Norman in the Management API It uses the norman framework; it is difficult to know how the management API is implemented without understanding Norman One API server instance has multiple schemas which have different data stores - Use parent k8s (managementstored) - Use child k8s (userstored) All custom handlers other than the default handlers Norman defines are stored in this directory. For Norman’s handlers, please see the Norman explanation pages 99 3. Dependent Library 3.1 Norman Framework 3.1.1 API Part
  • 100. 3.1.2. Norman Controller Part Something App Norman API Part Controller Part CRD (Test) Resource A CRD (Test) Resource B Watch the changes Do whatever you want asynchronously 100 3. Dependent Library 3.1 Norman Framework
• 101. The concepts around Controller ➢ Generic Controller ○ Implements a basic controller with client-go ○ Allows us to build a custom controller by just ■ Passing a K8s client for a specific resource to the controller ■ Passing the function (handler) to the controller. This handler is executed when the watched etcd key changes ● Do whatever you want ➢ 2 Types of Handler ○ Normal Handler ○ LifeCycle Handler 101 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 102. Generic Controller ➢ Generic Controller ○ Implements a basic controller with client-go ○ Allows us to build a custom controller by just ■ Passing a K8s client for a specific resource to the controller ■ Passing the function (handler) to the controller. This handler is executed when the watched etcd key changes ● Do whatever you want ➢ 2 Types of Handler ○ Normal Handler ○ LifeCycle Handler 102 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 103. Generic Controller Internal Generic Controller (norman/controller/generic_controller.go) cache.SharedIndexInformer (k8s.io/client-go/tools/cache) cache.ListWatch workqueue.RateLimitingInterface (k8s.io/client-go/util/workqueue) Client handler handler handler handler handlers handler handler handler Updated key Deleted key Added key Watch specific keys by using the Client When added/updated/deleted, evaluate all registered handlers with the key You can make your own controller by defining the Client and handlers Name1 Name2 Name3 103 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
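The fan-out in the diagram can be sketched in a few lines: every key popped from the work queue is handed to all registered handlers, and the handlers are not told whether the key was added, updated, or deleted. A minimal sketch in which a buffered channel stands in for client-go's workqueue and the informer:

```go
package main

import "fmt"

type handler struct {
	name string
	fn   func(key string) error
}

// genericController is a toy version of Norman's GenericController:
// named handlers plus a queue of etcd keys to process.
type genericController struct {
	handlers []handler
	queue    chan string
}

func (g *genericController) AddHandler(name string, fn func(string) error) {
	g.handlers = append(g.handlers, handler{name, fn})
}

// processNextItem mimics one iteration of the controller's worker loop:
// pop a key and evaluate ALL registered handlers with it.
func (g *genericController) processNextItem() {
	key := <-g.queue
	for _, h := range g.handlers {
		if err := h.fn(key); err != nil {
			fmt.Printf("handler %s failed for %s: %v\n", h.name, key, err)
		}
	}
}

func main() {
	c := &genericController{queue: make(chan string, 1)}
	c.AddHandler("logger", func(key string) error {
		fmt.Println("logger saw", key)
		return nil
	})
	c.AddHandler("syncer", func(key string) error {
		fmt.Println("syncer saw", key)
		return nil
	})
	c.queue <- "default/node1" // an informer event would enqueue this key
	c.processNextItem()
}
```

In the real implementation the informer's add/update/delete callbacks are what enqueue the key, and the rate-limiting queue retries keys whose handlers returned an error.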
• 104. How Rancher Uses: Generic Controller vendors/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_node_controller.go <apiGroup>/<version>/<resource_name> type nodeController struct { controller.GenericController } ~~omit~~ Each controller embeds the GenericController struct and overrides a few functions (e.g. AddHandler) This controller is created when the nodeClient.Controller method is executed for the first time, like “management.Management.Clusters("").Controller()” (see pkg/controllers/management/clusterprovisioner/provisioner.go). Rancher usually names the function that generates a controller and pushes it onto the starter list “Register” This is started by Context.Start (see app/app.go in the rancher repository) 104 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 105. How Rancher Uses: Prepare an XXXController for each Resource Type <apiGroup>/<version>/<resource_name> nodeController Generic Controller (Norman) Handlers clusterController Generic Controller (Norman) Handlers XXXController Generic Controller (Norman) Handlers vendors/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_XXX_controller.go ・・・ pkg/controllers/ managements/ user/ XX.go ZZ.go Just add handlers to the controller to watch one resource type 105 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 106. How Rancher Uses: Prepare an XXXController for each Resource Type nodeController Generic Controller (Norman) Handlers clusterController Generic Controller (Norman) Handlers XXXController Generic Controller (Norman) Handlers vendors/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_XXX_controller.go ・・・ pkg/controllers/ managements/ user/ XX.go ZZ.go Just add a Handler or LifeCycle to the controller watching one resource type There is a directory that seems to define controllers (pkg/controllers/) in the rancher repository, but the code under this directory just adds handlers or lifecycles to the proper controller watching one resource type. Some handlers/lifecycles have controller-ish names like cluster-provisioner-controller, but this is actually just the name of a handler. Keep it in mind. 106 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 107. How Rancher Uses: Override the AddHandler function vendors/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_node_controller.go <apiGroup>/<version>/<resource_name> func (c *nodeController) AddHandler(name string, handler NodeHandlerFunc) { c.GenericController.AddHandler(name, func(key string) error { obj, exists, err := c.Informer().GetStore().GetByKey(key) ~~omit~~ return handler(key, obj.(*Node)) }) } The function is overridden so as to pass the object data itself, as well as the key, to the handler function. That’s why every handler registered by Rancher gets the object, not just the etcd key This function is actually called by the Generic Controller (Norman) This handler is the actual function Rancher registers 107 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
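The override pattern itself is easy to isolate: wrap the typed handler in a key-based closure that looks the object up before forwarding. A minimal sketch in which a package-level map stands in for the informer's `GetStore().GetByKey` cache (the `addTypedHandler` helper and `Node` struct are invented for illustration):

```go
package main

import "fmt"

// Node is a stand-in for the typed CRD object.
type Node struct{ Name string }

// store plays the role of the informer cache keyed by "namespace/name".
var store = map[string]*Node{"default/node1": {Name: "node1"}}

// addTypedHandler wraps a typed handler in the key-only signature the
// generic controller understands, looking the object up by key first.
func addTypedHandler(register func(func(string) error), typed func(key string, node *Node) error) {
	register(func(key string) error {
		obj, exists := store[key]
		if !exists {
			return typed(key, nil) // deletions arrive with a nil object
		}
		return typed(key, obj)
	})
}

func main() {
	var keyHandlers []func(string) error
	register := func(f func(string) error) { keyHandlers = append(keyHandlers, f) }

	addTypedHandler(register, func(key string, node *Node) error {
		fmt.Println("got", key, node != nil)
		return nil
	})
	keyHandlers[0]("default/node1") // the generic controller would call this
}
```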
• 108. Normal Handler ➢ Generic Controller ○ Implements a basic controller with client-go ○ Allows us to build a custom controller by just ■ Passing a K8s client for a specific resource to the controller ■ Passing the function (handler) to the controller. This handler is executed when the watched etcd key changes ● Do whatever you want ➢ 2 Types of Handler ○ Normal Handler ○ LifeCycle Handler 108 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 109. What’s a Normal Handler ➢ The handler function is executed by the GenericController when changes are detected in Kubernetes ➢ The handler function needs to have one argument (the key name of the data which changed in etcd) ○ Rancher overrides AddHandler so that the handler function can have 2 arguments (key name, object data) ➢ The registered handlers are executed when the following events happen ○ A monitored key is “Created” ○ A monitored key is “Deleted” ○ A monitored key is “Updated” => We cannot know which kind of event triggered the handler. Was it Created, Deleted or Updated? 109 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 110. How Rancher Uses: Register a Handler pkg/controllers/management/clusterprovisioner/provisioner.go func Register(management *config.ManagementContext) { ~~ omit ~~ mngNodes := management.Management.Nodes("") mngNodes.AddHandler("cluster-provisioner-controller", p.machineChanged) ~~ omit ~~ } ~~ omit ~~ func (p *Provisioner) machineChanged(key string, machine *v3.Node) error { Register the “p.machineChanged” function under the name “cluster-provisioner-controller” as a handler Thanks to the override of AddHandler, the callback function gets the object itself, not just the key 110 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 111. LifeCycle Handler ➢ Generic Controller ○ Implements a basic controller with client-go ○ Allows us to build a custom controller by just ■ Passing a K8s client for a specific resource to the controller ■ Passing the function (handler) to the controller. This handler is executed when the watched etcd key changes ● Do whatever you want ➢ 2 Types of Handler ○ Normal Handler ○ LifeCycle Handler 111 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 112. What’s a LifeCycle Handler ➢ A kind of framework for a controller’s handler functions ➢ You need to implement the following functions on a Lifecycle struct ○ Create, Remove, Updated ➢ The Generic Controller expects the Lifecycle struct to be wrapped by ObjectLifecycleAdapter (norman/lifecycle/object.go) and the developer to register the ObjectLifecycleAdapter.sync function as a handler ➢ Whether an object is already created (initialized) or not is judged by an extra annotation (lifecycle.cattle.io/create.<lifecycle name>) Generic Controller Handlers ObjectLifecycleAdapter sync Lifecycle Create Updated Removed Register sync function as Normal Handler Execute 112 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
• 113. Inside the ObjectLifecycleAdapter.sync function func (o *objectLifecycleAdapter) sync(key string, obj runtime.Object) error { ~~ omit ~~ metadata, err := meta.Accessor(obj) ~~ omit ~~ if cont, err := o.finalize(metadata, obj); err != nil || !cont { return err } if cont, err := o.create(metadata, obj); err != nil || !cont { return err } copyObj := obj.DeepCopyObject() newObj, err := o.lifecycle.Updated(copyObj) if newObj != nil { o.update(metadata.GetName(), obj, newObj) } return err } Get the metadata from the object Check whether this object is deleted. If deleted, call o.lifecycle.Remove, finalize and return Check whether the object is already created via the annotation “lifecycle.cattle.io/create.<lifecycle name>” == true. If not, add the finalizer “controller.cattle.io/<lifecycle name>”, call o.lifecycle.Create and then return Simply execute o.lifecycle.Updated vendors/github.com/rancher/norman/lifecycle/object.go 113 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
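The dispatch order described by the annotations (Remove if deleted, Create on the first sighting as tracked by the annotation, Updated otherwise) can be sketched with toy types. A simplified sketch, assuming a boolean deletion flag in place of the real deletion timestamp and leaving out the finalizer bookkeeping the real adapter also performs:

```go
package main

import "fmt"

type object struct {
	annotations map[string]string
	deleted     bool // stands in for a non-nil deletion timestamp
}

// lifecycle records which callback ran, mimicking Create/Updated/Remove.
type lifecycle struct{ events []string }

func (l *lifecycle) Create(o *object)  { l.events = append(l.events, "create") }
func (l *lifecycle) Updated(o *object) { l.events = append(l.events, "updated") }
func (l *lifecycle) Remove(o *object)  { l.events = append(l.events, "remove") }

// sync mirrors objectLifecycleAdapter.sync's decision order.
func sync(name string, o *object, lc *lifecycle) {
	if o.deleted { // finalize path
		lc.Remove(o)
		return
	}
	createdKey := "lifecycle.cattle.io/create." + name
	if o.annotations[createdKey] != "true" { // first time we see this object
		o.annotations[createdKey] = "true" // the real code also adds a finalizer
		lc.Create(o)
		return
	}
	lc.Updated(o)
}

func main() {
	lc := &lifecycle{}
	obj := &object{annotations: map[string]string{}}
	sync("cluster-provisioner-controller", obj, lc) // -> Create
	sync("cluster-provisioner-controller", obj, lc) // -> Updated
	obj.deleted = true
	sync("cluster-provisioner-controller", obj, lc) // -> Remove
	fmt.Println(lc.events)
}
```

This is how Norman recovers Create/Updated/Remove semantics on top of a Normal Handler, which by itself cannot tell which event fired.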
• 114. How Rancher Uses: Register a LifeCycle func Register(management *config.ManagementContext) { p := &Provisioner{~~ omit ~~} p.Clusters.AddLifecycle("cluster-provisioner-controller", p) } func (p *Provisioner) Remove(cluster *v3.Cluster) (*v3.Cluster, error) { ~~ omit ~~} func (p *Provisioner) Updated(cluster *v3.Cluster) (*v3.Cluster, error) { ~~ omit ~~} func (p *Provisioner) Create(cluster *v3.Cluster) (*v3.Cluster, error) { ~~ omit ~~} pkg/controllers/management/clusterprovisioner/provisioner.go Add a Lifecycle object as a handler The Remove function is executed only when the object is deleted The Updated function is executed only when the object is updated The Create function is executed only when the object is created 114 3. Dependent Library 3.1 Norman Framework 3.1.2 Controller Part
  • 115. 1. Rancher 2.0 Overview 2. Deep dive Rancher Server 2.1. Rancher API 2.2. Rancher Controllers 2.3. Controller/Context 3. Dependent Library 3.1. Norman Framework 3.2. Kontainer-Engine 3.3. RKE 115 Agenda
  • 117. 1. Rancher 2.0 Overview 2. Deep dive Rancher Server 2.1. Rancher API 2.2. Rancher Controllers 2.3. Controller/Context 3. Dependent Library 3.1. Norman Framework 3.2. Kontainer-Engine 3.3. RKE 117 Agenda
• 119. Concerns about Rancher 2.0 In the context of a backend for Verda K8s as a Service 119
• 120. One Rancher Server binary has a ton of features ➢ It’s tough to operate ➢ We want to use only the features we need and trace their changelog ➢ Performance tuning is difficult ○ We cannot run multiple instances for a specific feature, even if that feature would allow us to run multiple instances ■ E.g. run 5 processes for the K8s Proxy API ■ E.g. run 10 processes for the rkenodeconfig API, and so on => We will separate some features from the server binary and run them separately => We will disable unneeded controllers 120
• 121. Scalability problem in managing deployed k8s ➢ The current design won’t work with Active-Active HA ○ There is no scheduling logic in Rancher ■ Currently only 1 Rancher Server can manage all of the k8s clusters ■ An extra layer is needed in front of the actual cluster management to scale ● Like the relation between the Pod scheduler and the kubelet ○ Controllers depend on the websocket session to the Rancher Server ■ Even if we could redesign the Rancher Server with a scheduling concept and Active-Active, we would still need to think about how a scheduled controller talks to the Agent => We will put a simple scheduling concept in front of the Rancher Server 121
• 122. The K8s Proxy API depends on the websocket to the Agent ➢ Even if the Rancher Server can access the deployed k8s directly, it currently proxies k8s API requests via the websocket session to the Cluster Agent ➢ This design prevents us from running multiple K8s Proxies, even if we succeed in separating this process from the Rancher Server binary => The load of the K8s Proxy API is the most difficult to predict, because it really depends on how the user uses the k8s API => We will separate this feature from the server binary and run multiple processes by modifying the current mechanism that uses the websocket session 122
• 123. How I feel overall about Rancher 2.0 ➢ There is little documentation, and reading the code is the only way to know it well ➢ I can see some effort to make the Rancher Server easy to deploy ○ Few preconditions to run the Rancher Server ■ The Rancher Server depends on k8s but doesn’t define it as a precondition (it automatically detects the missing k8s and deploys an all-in-one k8s to work with) ○ Few preconditions to deploy k8s ■ Even for environments behind NAT ○ But this effort prevents scaling ➢ If we need scalability like maintaining 1000 clusters, we need additional consideration ○ Separate features from the one massive binary; run multiple instances ○ Use a data store other than Kubernetes for some resources ○ Ask yourself whether you need all the features, and if not, disable them after understanding the impact 123
• 124. How I feel overall about Rancher 2.0 ➢ There are some interesting controllers like alert, logging, eventsync... ➢ You need to know the Norman Framework if you want to know Rancher better ➢ It is easy to modify/extend Rancher’s behaviour thanks to the Norman Framework ○ Change the datastore for a specific API resource thanks to the Norman API Schema ○ Add a custom controller thanks to the Norman Generic Controller 124
  • 125. Enhancement Plan for Rancher 2.0 125
• 126. Server Server K8s Proxy K8s Proxy XXX API After starting the service, watch the performance and consider separating/scaling Without touching anything If we cannot scale the Rancher Server anymore, we will add one more cluster Phase 1 Phase 2 Rancher Scalability Improvement Scheduling Other Datastore Use another datastore for some data Extra Monitoring Enhance monitoring Point 2 Point 1 Point 3 Point 4 126 Custom API Custom API
• 127. Appendix: A rough map of the code structure I organized my understanding as a diagram. It’s available at (https://github.com/ukinau/rancher-analyse) 127