
CollabDays 2020 Barcelona - Serverless Kubernetes with KEDA


KEDA is a Kubernetes autoscaler driven by external events, designed to scale serverless workloads. In this talk I showed how to run Azure Functions on Kubernetes and scale them with KEDA, as well as an alternative strategy based on scaling Jobs.

Published in: Software

  2. #netcoreconf Who am I? • Principal Tech Lead @ PlainConcepts BCN • Proud father • Beer drinker • Code monkey, and proud of it • Microsoft MVP since 2012
  3. INDEX • How can I run Azure Functions on Kubernetes? • What is KEDA? • Why KEDA? • Some examples
  4. WHY SERVERLESS ON KUBERNETES? 1. Easier adoption of hybrid / multicloud 2. Less lock-in 3. Single platform to focus on 4. Unified operations with other workloads 5. More h/w control (e.g. GPU-enabled clusters) 6. Run AFs alongside other apps (access to service mesh, custom shared environment, …)
  5. SERVERLESS VS KUBERNETES • Not really a fight • You can run serverless workloads on Kubernetes • There are also some serverless Kubernetes implementations (AKS virtual nodes, EKS Fargate) • So, you can have • Serverless on Kubernetes • A serverless Kubernetes • And serverless on a serverless Kubernetes 
  6. THE FUTURE OF K8S IS SERVERLESS • Serverless container infrastructure already exists (ACI, Fargate, …) • It needs to be orchestrated in some way • The Kubernetes orchestration API is the current de-facto standard • In the near future we will see a mix of nodes and serverless infrastructure orchestrated under the k8s API • The Kubernetes community is aware of this and the API is evolving to support these scenarios https://thenewstack.io/the-future-of-kubernetes-is-serverless/
  7. CAN I RUN AZURE FUNCTIONS IN KUBERNETES? • If you can dockerize them, you can run them in Kubernetes. • func init --docker-only • Let’s see it 
  8. DEPLOYING ON KUBERNETES • You only need a Deployment to run the Azure Function • A Secret to store the secrets (connection strings) • And your AF is up and running! :) • Again: let’s see it 
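The two objects mentioned above could be sketched like this (the names, image, and connection-string key are hypothetical placeholders, not from the talk):

```yaml
# Hypothetical Secret holding the Function's connection strings
apiVersion: v1
kind: Secret
metadata:
  name: my-function-secrets
type: Opaque
stringData:
  AzureWebJobsStorage: "<storage-connection-string>"   # placeholder value
---
# Deployment running the dockerized Azure Function
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-function
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-function
  template:
    metadata:
      labels:
        app: my-function
    spec:
      containers:
        - name: my-function
          image: myregistry.azurecr.io/my-function:latest  # hypothetical image
          envFrom:
            - secretRef:
                name: my-function-secrets  # injects the connection strings as env vars
```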
  9. So, running Azure Functions on Kubernetes is not really the issue… The real issue is… scaling them appropriately
  10. KUBERNETES (POD) AUTOSCALING 101 To auto scale a deployment you need two things: 1. A metric on which to scale (like %CPU) 2. An HPA bound to that metric
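As a minimal sketch of those two pieces, an HPA bound to the CPU metric (the deployment name and thresholds are hypothetical; the `autoscaling/v2` API version assumes a reasonably recent cluster):

```yaml
# Hypothetical HPA scaling a deployment on average CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-function-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-function        # the deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```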
  11. KUBERNETES (POD) AUTOSCALING 101 • The HPA pulls metrics exposed by the metrics server • OOB the metrics server exposes only CPU & Mem • So, OOB you can auto scale an Azure Function based on CPU usage or memory consumption
  12. AUTOSCALING AZURE FUNCTIONS • Usually, using CPU or Mem to scale an AF is not the best strategy • You are focusing on symptoms rather than causes • You should scale based on those causes • pending messages to read • pending records to process • …
  13. So, KEDA is not about running Azure Functions on Kubernetes KEDA is about scaling them Kubernetes Event Driven Autoscaler
  14. WHAT EXACTLY DOES KEDA DO? • KEDA is able to read external metrics… • … exposing them to the metrics server… • … allowing the HPA to scale over those metrics.
  15. WHAT EXACTLY DOES KEDA DO? • KEDA does not auto scale your Azure Functions itself • But it provides everything the HPA needs to auto scale them based on external metrics • Using KEDA you can auto scale your AFs based on the real causes, not the symptoms
  16. HOW DOES KEDA DO ITS JOB? A scaler watches for external triggers (like a new message in a specific queue)
  17. HOW DOES KEDA DO ITS JOB? The trigger updates a metric which is exposed through the metrics server.
  18. HOW DOES KEDA DO ITS JOB? A standard HPA bound to this metric scales the AF deployment if needed
  19. THE KEDA SCALERS • Currently KEDA provides several scalers for different technologies • More scalers are added over time • https://keda.sh/docs/2.0/scalers/
  20. THE SCALEDOBJECT CRD • To “plug” a scaler into Kubernetes we use the ScaledObject CRD provided by KEDA • Each ScaledObject configures one scaler to look for external events
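A minimal ScaledObject sketch, assuming KEDA 2.0 and an Azure Storage queue trigger (the deployment name, queue name, and targets are hypothetical):

```yaml
# Hypothetical ScaledObject: scale the AF deployment on queue length
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-function-scaledobject
spec:
  scaleTargetRef:
    name: my-function          # the Deployment running the Azure Function
  minReplicaCount: 0           # allows scale-to-zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders                      # hypothetical queue
        queueLength: "5"                       # target messages per replica
        connectionFromEnv: AzureWebJobsStorage # env var holding the connection string
```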
  21. AUTO SCALING USING KEDA • So, I have an AF deployed to Kubernetes that I want to auto scale • I need to create a ScaledObject to get the metric on which to scale (like pending messages in an SQS queue) • Then I need an HPA bound to this metric • And the magic will happen! • Let’s see it!
  22. SCALING JOBS • Scaling jobs is an alternative approach to running FaaS-like workloads • Instead of processing N events in a single pod, a new Job (which ends up creating a pod) is scheduled for each event • Once again… let’s see it! 
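With KEDA 2.0 this job-per-event approach can be sketched with the ScaledJob CRD (all names and the image are hypothetical placeholders):

```yaml
# Hypothetical ScaledJob: one Job per batch of queue events instead of a long-lived pod
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: my-function-scaledjob
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: my-function
            image: myregistry.azurecr.io/my-function:latest  # hypothetical image
        restartPolicy: Never   # each Job's pod runs to completion and is not restarted
  maxReplicaCount: 10          # at most 10 Jobs running in parallel
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders                      # hypothetical queue
        connectionFromEnv: AzureWebJobsStorage # env var holding the connection string
```

Because each pod runs to completion, a scale-down never kills in-flight work, which is the point slides 23 and 24 build toward.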
  23. HPA COULD BE THE MOST POWERFUL VILLAIN • Beware of workloads scaled through the HPA • If a scale down is triggered the HPA will just… snap its fingers • A pod can be killed while it is processing!
  24. DEFENDING PODS FROM HPA 1. Using pod lifecycle hooks 1. Ask for “additional” time when Kubernetes wants to kill the pod. 2. Works but is… ugly (the pod could stay in Terminating for a long time) 2. Using jobs 
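Option 1 above could be sketched as a preStop hook plus an extended grace period (the drain script and timings are hypothetical assumptions, not from the talk):

```yaml
# Pod spec fragment: buy time before the HPA's kill lands
spec:
  terminationGracePeriodSeconds: 600   # allow up to 10 minutes to drain
  containers:
    - name: my-function
      image: myregistry.azurecr.io/my-function:latest  # hypothetical image
      lifecycle:
        preStop:
          exec:
            # hypothetical script that blocks until in-flight work finishes;
            # the pod stays in Terminating while it runs — the "ugly" part
            command: ["/bin/sh", "-c", "/wait-for-work-to-finish.sh"]
```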