Introduction to containers, k8s, Microservices & Cloud Native

  1. Intro to Containers, k8s, MicroServices, Cloud Native
  @terrywang
  Initial version: Dec 2018 · Last refreshed: Jan 2020
  2. Topics
  ● Container / Containerization
  ● Docker (company, products, innovations: container runtime, container image, registry, Compose, Swarm, etc.)
  ● Kubernetes (k8s) / Container Orchestration
  ● k8s relationship with Gravity and Runtime Fabric
  ● MicroServices
  ● Cloud Native (apps & infra)
  3. Containers are not a new technology
  ● 1980s: chroot
  ● 1990s: jail
  ● Early 2000s: FreeBSD jail
  ● 2004: Solaris Zones
  ● 2008: Linux Containers (LXC): cgroups + namespaces
  ● Early 2010s: Docker rising
  ● Late 2010s: k8s
  NOTE: Early versions of Docker used LXC as the container execution driver (later dropped in Docker v1.10).
  4. Container vs Virtual Machine: container advantages
  ● Resource efficiency: process-level isolation and use of the container host's kernel is more efficient than emulating an entire server (as a VM).
  ● Portability: all the dependencies for an app are bundled into the container, so it can easily be moved between environments.
  ● Continuous Deployment & Testing: the ability to have consistent environments and flexibility with patching has made containers (Docker) a great choice for teams that adopt a DevOps approach to software delivery.
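The "all dependencies bundled" point can be sketched with a minimal Dockerfile. This is an illustrative example, not from the deck: the base image, file names, and entrypoint are assumptions.

```dockerfile
# Illustrative Dockerfile: the runtime, the dependencies, and the app
# itself are all baked into one portable image.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # dependencies bundled in
COPY . .
CMD ["python", "app.py"]       # same entrypoint in every environment
```

Because the image carries everything the app needs, the same artifact can run unchanged on a laptop, a CI runner, or a production cluster.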
  5. Facts
  ● Containers aren't real! (@jessfraz)
  ● Containers alone do not provide the full picture; container orchestration does!
  ● The Docker container runtime is only 1 of the options available (others: containerd, CRI-O/runc, rkt)
  ● Containers on Linux are an assembly of Linux kernel features:
    - cgroups (control groups): limit, account for, and isolate the resource usage (CPU, memory, disk, network, etc.) of a collection of processes
    - namespaces: process isolation (UTS, Mount, User, Network, IPC, Cgroup, etc.), plus security features (SELinux, AppArmor, Seccomp) and capabilities
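That "containers are an assembly of kernel features" claim is easy to see for yourself: on any Linux host, the kernel exposes each process's namespace memberships under /proc. A tiny sketch (Linux-only):

```python
import os

# Every Linux process belongs to a set of kernel namespaces; /proc/self/ns
# lists them. These are the same building blocks container runtimes use.
namespaces = sorted(os.listdir("/proc/self/ns"))
print(namespaces)  # typically includes 'cgroup', 'ipc', 'mnt', 'net', 'pid', 'user', 'uts'
```

Running the same check inside a container shows different namespace IDs (the symlink targets differ), which is exactly the "isolation" the slide describes.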
  6. Why is k8s hard?
  The shift (sh*t!) from a traditional physical (bare metal) and/or virtual machine (Vagrant) based development environment & workflow to containerized & Cloud Native infrastructure is NOT a smooth, gentle reform; it is a transformation that spans networking, storage, scheduling, operating systems, distributed-system principles, and more.
  The knowledge stack and skill matrix require both depth and breadth.
  Knowledge gaps: the Linux kernel, networking, storage, security, distributed systems, etc. are NOT covered by the Docker or k8s documentation.
  Typical questions:
  1. Why 1 process per container?
  2. Why can't k8s pods/services use static IPs? How to debug?
  3. What is the difference between a k8s StatefulSet and an Operator?
  4. How to use PV vs PVC?
  Good reads (ingress)
  7. Docker made a big impact on containerization, standardizing the following:
  ● Container images (mini OS images, tarballs)
  ● Container registry
  ● Dockerfile (defines a container) / Docker Compose
  Docker (Inc.) products:
  ● Docker Swarm (k8s alternative)
  ● Docker Compose
  ● Docker Enterprise
  Docker Architecture
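To make the Docker Compose bullet concrete, here is a minimal illustrative compose file; the service names and images are examples, not anything from the deck.

```yaml
# Illustrative docker-compose.yml: declares a multi-container app
# (one web service built from a local Dockerfile, one Redis dependency).
version: "3.8"
services:
  web:
    build: .            # built from the Dockerfile in this directory
    ports:
      - "8080:8080"
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```

`docker compose up` then starts both containers on a shared network, which is the single-host precursor to what orchestrators do across a cluster.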
  8. Container Orchestration
  Container orchestrators group hosts together to form a cluster and fulfill the following requirements for applications:
  ● fault tolerance
  ● scalability on demand
  ● optimal resource utilization
  ● service discovery (discover other apps automatically) and intercommunication
  ● accessibility from the external world
  ● zero-downtime deployment (rolling upgrade)
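Several of the requirements above map directly onto a pair of k8s objects. A hedged sketch, with illustrative names and images:

```yaml
# Illustrative manifests: replicas give fault tolerance and scaling,
# the RollingUpdate strategy gives zero-downtime deploys, and the
# Service makes the app discoverable by name inside the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # scalability + fault tolerance
  strategy:
    type: RollingUpdate        # zero-downtime deployment
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web                    # service discovery: resolvable via DNS as "web"
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080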
  9. Kubernetes has won the container orchestration war!
  Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem.
  Kubernetes Design Principles
  ● Scalability: provides horizontal scaling of pods (stateless and stateful) based on CPU utilization. The threshold for CPU usage is configurable, and k8s will automatically start new pods if it is reached. When there are multiple pods for an app, k8s load-balances across them. NOTE: vertical scaling is in the making (a SIG exists); it's not easy ...
  ● High Availability: k8s addresses HA at both the app and infra level. ReplicaSets ensure the desired number of replicas of a stateless pod for a given app are running; StatefulSets do the same for stateful pods. At the infra level, k8s supports various storage backends, adding a reliable, available storage layer to ensure HA of stateful workloads. Also, each of the master components can be configured to replicate across all controller nodes (DaemonSets) to achieve HA.
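The "configurable CPU threshold" mechanism for horizontal scaling is the HorizontalPodAutoscaler. A minimal illustrative sketch (the target Deployment name and numbers are examples):

```yaml
# Illustrative HPA: scales the "web" Deployment between 2 and 10 pods,
# targeting 70% average CPU utilization across the pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # the configurable threshold the slide mentions
```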
  10. Kubernetes Design Principles (continued)
  ● Security: k8s addresses security at multiple levels: cluster, application, and network. API endpoints are secured through TLS, and only authenticated users (either service accounts or regular users) can execute operations on the cluster (via API requests). At the application level, k8s Secrets can store sensitive information per cluster (a virtual cluster if using namespaces, physical otherwise). Network policies for access to pods can be defined in a deployment; a network policy specifies how pods are allowed to communicate with each other and with other network endpoints.
  ● Portability: a cluster can run on any supported OS (any mainstream Linux distro), processor architecture (physical machines or VMs), cloud provider, and container runtime (Docker, containerd, CRI-O/runc). Through the concept of federation, it can also support workloads across hybrid (private and public cloud) or multi-cloud environments. It also supports AZ fault tolerance within a single cloud provider.
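A network policy of the kind described above looks roughly like this; the labels and port are illustrative assumptions:

```yaml
# Illustrative NetworkPolicy: only pods labelled app=frontend may reach
# pods labelled app=backend on TCP 8080; other ingress to those pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that enforcement depends on the CNI plugin in use; a policy is only as real as the network layer that implements it.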
  11. Kubernetes vs Container Runtime (what they do)
  Kubernetes:
  ● Kubernetes API (kube-apiserver)
  ● CSI (storage)
  ● CNI (networking)
  ● health checks
  ● scheduling (placement)
  ● custom resources
  ● container image registry (influenced by the Docker container registry)
  Container Runtime:
  ● pod/container lifecycle (start/stop/delete)
  ● image management (push/pull)
  ● status
  ● container interactions (attach, exec, ports, logs)
  12. Kubernetes Architecture
  13. Master components
  ● etcd cluster
  ● kube-apiserver
  ● kube-controller-manager
  ● kube-scheduler
  ● cloud-controller-manager
  Node (worker) components
  ● kubelet
  ● kube-proxy
  kubectl: CLI to interact with the API Server
  14. Docker container runtime is ONLY an option, not a MUST.
  Container runtimes available:
  ● Docker
  ● containerd
  ● CRI-O/runc
  ● rkt
  k8s CRI implementations:
  ● kubelet → dockershim → dockerd → containerd → runc
  ● kubelet → cri-containerd → containerd
  ● kubelet → cri-o → runc
  15. Scheduler vs Orchestrator The terms scheduler and orchestrator are often used interchangeably. In most cases, the orchestrator is responsible for all resource utilization in a cluster (e.g., storage, network, and CPU). The term is typically used to describe products that do many tasks, such as health checks and cloud automation. Schedulers are a subset of orchestration platforms and are responsible only for picking which processes and services run on each server.
  16. Cloud Native
  Application: at a high level, Cloud Native apps are containerized, segmented into microservices, and designed to be dynamically deployed and efficiently run by orchestration systems like Kubernetes.
  Infrastructure: Cloud Native infrastructure is infrastructure that is hidden behind useful abstractions, controlled by APIs, managed by software, and has the purpose of running applications. Running infrastructure with these traits gives rise to a new pattern for managing that infrastructure in a scalable, efficient way.
  CNCF defines 3 core properties that underpin Cloud Native applications:
  ● Packaging apps into containers: containerisation
  ● Dynamic scheduling of these containers: container orchestration
  ● Software architectures that consist of several smaller, loosely-coupled, and independently deployable services: microservices
  17. Cloud Native Infrastructure - O'Reilly A cloud native application is engineered to run on a platform and is designed for resiliency, agility, operability, and observability. Resiliency embraces failures instead of trying to prevent them; it takes advantage of the dynamic nature of running on a platform. Agility allows for fast deployments and quick iterations. Operability adds control of application life cycles from inside the application instead of relying on external processes and monitors. Observability provides information to answer questions about application state.
  18. What makes an application Cloud Native?
  To effectively deploy, run, and manage Cloud Native apps, the application must implement several Cloud Native best practices. For example, a Cloud Native app should:
  ● Expose a health check endpoint so that container orchestration systems can probe application state and react accordingly
  ● Adopt a microservices-focused architecture: apps are loosely coupled so that each can be scaled and recovered by the orchestration layer
  ● Continuously publish logging and telemetry data, to be stored and analyzed by systems like Elasticsearch and Prometheus for logs and metrics, respectively
  ● Degrade gracefully and handle failure cleanly, so that orchestrators can recover by restarting it or replacing it with a fresh copy
  ● Not require human intervention to start and run
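In k8s, the health check endpoint translates into liveness and readiness probes on the container spec. A hedged fragment of a pod template; the /healthz and /ready paths are illustrative conventions, not a standard:

```yaml
# Illustrative probes: the kubelet polls these endpoints to decide when
# to restart a container (liveness) and when to route traffic to it (readiness).
containers:
  - name: web
    image: example/web:1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```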
  19. Relationship between Runtime Fabric and Kubernetes
  Facts:
  ● Anypoint Runtime Fabric is powered by k8s
  ● k8s is used as the container orchestrator
  ● Gravity is used to package k8s in a specific way (direct upstream)
  ● Gravitational Telekube (Gravity) provides a cluster web UI for operations, management, monitoring, and alerting
  ● Docker CE is the default container runtime
  ● Amazon ECR (Elastic Container Registry) serves as the private registry for the Mule Runtime container images
  ● Most of the container images are based on the latest Ubuntu LTS (18.04 as of end of 2019)
  ● Helm is used as the application package manager for k8s (Runtime Fabric components are packaged as Helm charts; check with helm ls)
  ● Runtime Fabric is a Microservices-ready platform
  ● k8s's API-centric nature is in line with MuleSoft's API-Led approach