
Architecting for now & the future with NGINX London April 19

Presentations by Liam Crilly, Owen Garrett and Ed English of NGINX at the 'Architecting for now & the future with NGINX' Lunch and Learn at the Shangri-La Hotel, The Shard, London. The presentations provide tips and insight into how NGINX can help maximize the performance and flexibility of cloud environments by laying the foundational building blocks for cloud-based microservices applications, API management and service mesh initiatives.



1. An Introduction to the NGINX Application Platform - Ed English, 16th April 2019
2. Where It All Began - "... when I started NGINX, I focused on a very specific problem – how to handle more customers per a single server." - Igor Sysoev, NGINX creator and founder
3. In 2002 … a high-performance web server and reverse proxy. MORE INFORMATION AT NGINX.COM
4. 16 years later…
   • 350 million total sites running on NGINX
   • 66.7% of the top 10,000 most visited websites
   • 58% of all instances on Amazon Web Services
   • 1 billion+ pulls: the most pulled image on Docker Hub
   • 78% of all sites using HTTP/2
   • 1 million+ pulls of the NGINX Kubernetes Ingress Controller
5. To master disruption, you must modernize apps and infrastructure. But there's a catch.
6. Infrastructure Shifts Closer to Apps
   • Then: Infrastructure & Ops teams; hardware, scale-up; one infrastructure for every app
   • Now: Application & DevOps teams; software, scale-out; every app gets multiple infrastructures
7. Tools Sprawl Adds Complexity
   • Legacy doesn't go away
   • Hardware doesn't adapt to new apps and cloud
   • Open source doesn't accommodate standardization
8. A Lightweight Approach Combats Complexity
   • PaaS, ESB, & HW LBs: cloud-only, inflexible
   • Containers, Kubernetes: production ready? Not a silver bullet
9. Modernization Success Is An Evolution
   • App type: legacy → modern. App architecture: simple → complex (monolithic → hybrid services → microservices)
   • ↑ Agility, "reusable", E/W performance; ↓ costs, "software-defined", N/S performance; ↑ scale, "refactored", API and K8s traffic
   • ERP, CRM? Mobile app? Digital services? 1. SW load balancer, 2. API gateway, 3. Service mesh
10. Today's App Infrastructure Is Complex
11. With NGINX: Simple, lightweight, modern
12. Today: Dynamic Application Gateway
    • A single, clustered ingress/egress tier in front of apps
    • Optimizes north/south traffic delivery for apps and APIs
    • Combines load balancing, proxying, SSL, caching, WAF (web app firewall), and API management
13. Future: Dynamic Application Infrastructure
    • A single app platform for monoliths and microservices
    • Optimizes east/west app traffic and app serving
    • Combines web server, app servers, Kubernetes Ingress Controller, and service mesh
14. NGINX Application Platform - The industry's only solution that drives 10x simplification and 80% cost savings by combining load balancers, API gateway, and service mesh into a single, modular platform
15. Embraces A Multitude Of Use Cases: reverse proxy, load balancer, WAF, cache, API gateway, Ingress controller, sidecar proxy, web server, app server
16. Let's talk about F5, briefly - Ed English, NGINX
17. The Traditional vs. Modern Divide
18. Different Needs On Either Side
19. NGINX + F5: Bridge DevOps and NetOps
20. NGINX + F5: Complementary Approaches
    • Open source-driven: 375M websites powered worldwide; 66% of the 10,000 busiest sites; 90M downloads per year
    • Enterprise-driven: 25,000 customers worldwide; 49 of the Fortune 50; 10 of the world's top 10 brands
21. NGINX + F5: Better Together
22. Software Load Balancing, across platforms, for Microservice and Hybrid Applications - Owen Garrett, NGINX
23. Software Load Balancing
24. What is the purpose of Load Balancing? Agility, return on investment, customer experience - dev, devops, ops
25. RETURN ON INVESTMENT
    • 80% CAPEX and OPEX savings
    • Consolidation: 10 solutions to 1
    • Software on commodity hardware
    • Free up budget for new projects; fund innovation, not the status quo
26. "Moving to the next generation of F5 hardware was going to cost more than $1M per data center. NGINX Plus gave us 50% more transactions per server, for one-sixth the price. We're now 100% hardware free." - Senior Networking Leader, AppNexus
27. Goal: improve performance, reduce costs, and go "hardware free" to improve agility
    • NGINX Plus performs all load balancing; runs on Dell hardware with 50% more transactions at 83% less cost
    • Deployed by the network team to replace F5 hardware that was too expensive and too slow
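A deployment like the one on slide 27 amounts, at its simplest, to an upstream group of commodity servers behind a single NGINX front end. A minimal sketch in open-source NGINX configuration (the addresses, ports, and tuning values are illustrative assumptions, not taken from the deck):

```nginx
# Software load balancer sketch: spread traffic across three app
# servers on commodity hardware. Addresses are hypothetical.
upstream app_servers {
    least_conn;                          # send each request to the least-busy backend
    server 10.0.0.11:8080 max_fails=3 fail_timeout=10s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=10s;
    server 10.0.0.13:8080 max_fails=3 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Note that active health checks (the `health_check` directive) are an NGINX Plus feature; open-source NGINX relies on the passive `max_fails`/`fail_timeout` marking shown here.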
28. Can software deliver at the scale of hardware?
    • "On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers will help to dispel that." - Adam Langley, Google
    • "We have deployed TLS at a large scale using both hardware and software load balancers. We have found that modern software-based TLS implementations running on commodity CPUs are fast enough to handle heavy HTTPS traffic load without needing to resort to dedicated cryptographic hardware." - Doug Beaver, Facebook
    • "In practical deployment, we found that enabling and prioritizing ECDHE cipher suites caused negligible increase in CPU usage. HTTP keepalives and session resumption mean that most requests do not require a full handshake, so handshake operations do not dominate our CPU usage." - Jacob Hoffman-Andrews, Twitter
34. AGILITY
    • App-centric infrastructure
    • Programmable, composable
    • Automated for DevOps, CI/CD
    • Speed your time-to-market; gain competitive advantage
35. "It used to take us 2 weeks to make a change in our F5 infrastructure. With NGINX, it takes 30 seconds to load the image and 20 seconds to run the Ansible script. Tada! Like magic it's in production." - Software Development Director, Comcast
36. Goal: reduce incident impacts, maximize availability, make changes during business hours
    • NGINX Plus frontends microservices for app routing, load balancing, and security; reduced errors from 0.35% to 0.025%
    • Deployed by an apps team as part of the customer support app stack (18M account loads/month)
37. Software infrastructure responds at app-speed
38. CUSTOMER EXPERIENCE
    • Increase adoption, reduce churn
    • Protect your brand and reputation
    • High-performance app delivery
    • Proven reliability and scalability
    • Security for both legacy and modern apps
39. "We're a nearly 100-year-old insurance company with customers that expect an experience like Google or Facebook. If we don't load the first page in 3 seconds or less, we lose that customer." - DevOps Leader, TIAA-CREF
40. Goal: user response in 1s, completed transaction in 3s, 99.9% availability, 0 failed customer experiences
    • NGINX Plus is an app-level load balancer to improve elasticity and span AWS & Azure
    • Deployed by DevOps in a dedicated digital org as part of a top-down digital transformation initiative
41. Micro-caching with NGINX

        proxy_cache_path /tmp/cache keys_zone=cache:10m levels=1:2
                         inactive=3600s max_size=100m;
        server {
            listen 80;
            proxy_cache cache;
            proxy_cache_valid 200 1s;
            proxy_cache_lock on;
            proxy_cache_use_stale updating;
            # ...
        }

    With `proxy_cache_valid 200 1s`, successful responses are cached for just one second: under load, at most roughly one request per second reaches the backend, while `proxy_cache_lock` collapses concurrent cache misses into a single upstream request and `proxy_cache_use_stale updating` serves the stale copy while the entry is refreshed.
42. AGILITY - RETURN ON INVESTMENT - CUSTOMER EXPERIENCE
43. Software Load Balancing, across platforms
44. "If you can't measure it, you can't manage it"
45. Problem Statement - We saw that people:
    • Want to deliver their apps better
    • Want easy configuration, with a minimal amount of NGINX-specific learning required
    • Want to save time
46. Easy Configuration at Scale
    • Wizard-style interface to configure load balancing with a few clicks
    • Quickly create basic HTTP/S configurations: L7 traffic routing based on URI; SSL key and certificate management; add and remove upstream servers; add advanced configurations, if desired
    • Save time, cost, and effort with push-button deployment of configuration across multiple instances: create one configuration, deploy it across multiple instances
47. Monitor & Analyze Performance - Deep visibility and insights into KPIs (per instance) using an agent:
    • Visualize real-time traffic and system stats; analyze usage & performance trends across 200 metrics
    • Advanced performance metrics: rate, bandwidth, errors, latency, health checks - per server zone or per upstream
    • Transaction metrics: response codes, cache, filtered by URI, host, header, upstream
    • System performance metrics: CPU, disk, memory, load
48. Preemptive Recommendations - Use the built-in configuration analyzer to get:
    • Enhanced performance and security based on learnings from thousands of customers
    • Better SLAs by following built-in best practices
    • Preemptive, actionable recommendations for configuration, security, and SSL status
49. Support for Multi-Cloud Environments
    • NGINX Controller is a Docker package
    • Can be deployed on any public or private cloud
    • Can manage NGINX Plus instances across multiple public and private clouds
50. Software Load Balancing, across platforms, for Microservice and Hybrid Applications
51. Modern Apps Require a Modern Architecture - From monolithic … to dynamic
    • Three-tier, J2EE-style architectures → microservices
    • Complex protocols (HTML, SOAP) → lightweight (REST, JSON)
    • Persistent deployments → containers, VMs, functions
    • Fixed, static infrastructure → infrastructure as code
    • Big-bang releases → continuous delivery
    • Silo'ed teams (Dev, Test, Ops) → DevOps culture
52. In practice
    • Use the "Strangler" approach to extend your monolith with microservices: 1. add small pieces of functionality as microservices; 2. repeat as needed
    • Organize team structure around service ownership
    • Adopt a DevOps mentality: follow the 12-factor app for design and constraints, and cloud-native approaches to deploy and manage
53. Evolution in Action - Existing monolith application: desktop or web client → your existing application
54. Evolution in Action - You have new use cases: new applications are needed; new data sources and business processes are added. How do we add the new use cases without large-scale rewrites?
55. Evolution in Action - Implement the Hybrid/Strangler pattern: 1. Implement connector microservices to provide API abstractions for external dependencies
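At the traffic layer, the strangler pattern is ultimately a routing decision: send the paths that have been carved out to the new services, and everything else to the monolith. A minimal sketch in NGINX configuration (the service names and URI paths are hypothetical examples):

```nginx
# Strangler-pattern routing sketch: one slice of functionality has
# been carved out behind /api/orders/; everything else still hits
# the monolith. Hostnames and paths are illustrative placeholders.
upstream monolith   { server legacy-app:8080; }
upstream orders_svc { server orders-service:8080; }

server {
    listen 80;

    # The new microservice handles this slice of functionality
    location /api/orders/ {
        proxy_pass http://orders_svc;
    }

    # Everything not yet migrated falls through to the monolith
    location / {
        proxy_pass http://monolith;
    }
}
```

As more functionality migrates, more `location` blocks steer traffic away from the monolith until it can be retired.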
56. Evolution in Action - 2. Implement business-logic microservices for each business process
57. Evolution in Action - 3. Implement presentation-layer microservices that are accessed externally
58. Evolution in Action - 4. Use NGINX Ingress Controller for external-internal connectivity
59. Evolution in Action - 5. Use NGINX Router Mesh (service mesh) for internal connectivity
60. Evolution - Successful Hybrid/Strangler implementation
61. Operating a distributed application is hard
    • Static, predictable monolith: fast, reliable function calls; local debugging; local profiling; calendared, big-bang upgrades; 'integration hell' contained in dev
    • Dynamic, distributed application: slow, unreliable API calls; distributed fault finding; distributed tracing; in-place dynamic updates; 'continuous integration' live in prod
    • More things can go wrong, it's harder to find the faults, and everything happens live
62. What is a service mesh? A service mesh is an invisible, autonomous, L7 routing layer for distributed, multi-service applications. It provides scalability, security, and observability for these applications, and enables operational use cases. Most commonly implemented as a 'sidecar proxy'. Implementations: Istio/Envoy, Consul Connect, Linkerd2, NGINX/nginMesh, … and many others to follow
63. Why do I need a service mesh?
    • In most cases, you do not need a service mesh (at least, not yet)
    • Your applications will go through a maturity journey: 1. pre- or early-production applications, mature 'mode 1' applications; 2. single, simple, business-critical production applications; 3. multiple complex, distributed applications - this is where you may need a service mesh
64. Maturity Journey – Step 1: Simple Ingress router, Kubernetes networking (pre- and early-production applications, established apps)
    • Many production applications start and finish here
    • Rely on Kubernetes for DNS-based service discovery, scaling and reconfiguration, kube-proxy-based load balancing, health checks, and network policies for access control
    • Use a third-party Ingress router
65. Maturity Journey – Step 2: Ingress router, per-service load balancer, router-mesh load balancer (more complex, business-critical applications)
    • Enhance applications with Prometheus metrics, OpenTracing tracers, and mTLS or SPIFFE SSL
    • Use per-service proxies for specific services; use a central router-mesh proxy load balancer
    • Most production apps running in containers over the last ~3 years have taken this approach
66. But… this approach gets expensive to manage - "The operational complexity and cost of developing bespoke libraries across languages, frameworks, and runtimes is prohibitive for most organizations, especially those with heterogeneous applications and polyglot programming languages." - IDC Market Perspective: Vendors Stake Out Positions in Emerging Istio Service Mesh Landscape
67. Service Mesh Goal: deal with it without changing the app - The infrastructure (the "service mesh") must alleviate these problems without any changes made to the app.
    • Environmental requirements: transparent to the app; non-invasive, easy to add or remove; supports hybrid environments; headless or GUI
    • Functional requirements: mTLS for encryption and auth; observability; tracing; traffic control
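The mTLS requirement above can be satisfied entirely at the proxy layer, with the application unchanged and speaking plain HTTP locally. A minimal sketch of sidecar-style mutual TLS in NGINX configuration (all certificate paths, ports, and service names are illustrative assumptions):

```nginx
# Service-to-service mTLS at the proxy, transparent to the app.
# Paths and hostnames are hypothetical placeholders.

# Inbound side: require a client certificate signed by the mesh CA
server {
    listen 443 ssl;
    ssl_certificate         /etc/mesh/certs/service-a.crt;
    ssl_certificate_key     /etc/mesh/certs/service-a.key;
    ssl_client_certificate  /etc/mesh/certs/mesh-ca.crt;
    ssl_verify_client       on;              # mutual TLS: reject unauthenticated peers

    location / {
        proxy_pass http://127.0.0.1:8080;    # the local app, plain HTTP
    }
}

# Outbound side: present our certificate when calling another service
server {
    listen 127.0.0.1:8081;                   # app talks plain HTTP to its sidecar
    location / {
        proxy_pass https://service-b.internal;
        proxy_ssl_certificate          /etc/mesh/certs/service-a.crt;
        proxy_ssl_certificate_key      /etc/mesh/certs/service-a.key;
        proxy_ssl_trusted_certificate  /etc/mesh/certs/mesh-ca.crt;
        proxy_ssl_verify               on;   # verify the peer's mesh certificate
    }
}
```

A mesh control plane automates exactly this: issuing the certificates, rotating them, and templating the proxy configuration for every service.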
68. Maturity Journey – Step 3: every container has an embedded proxy (multiple interdependent, heterogeneous applications)
    • Embed a proxy into every container; the proxy intercepts all traffic and applies advanced functionality
    • The proxy implements L7 policies; requires a comprehensive control plane
    • A service mesh provides standard functionality and services in an invisible, universal fashion
69. Find the balance - cost to operate vs. complexity, interdependencies, and speed of change: from a single simple app (using native Kubernetes and other services) to many complex, interdependent apps (using a service mesh). As service meshes mature, their cost will go down.
70. Software Load Balancing, across platforms, for Microservice and Hybrid Applications
71. Deployment Patterns for API Gateways - Liam Crilly, Lunch & Learn, London, April 2019
72. API Management (policy management; analytics & monitoring; developer documentation) vs. API Gateway (authenticator; request router; rate limiter; exception handler)
73. NGINX Application Platform - The industry's only solution that drives 10x simplification and 80% cost savings by combining load balancers, API gateway, and service mesh into a single, modular platform
74. Photo by AussieActive on Unsplash
75. Photo by Cris Saur on Unsplash
76. 83% of all hits are classified as API traffic (JSON/XML) (source: Akamai State of the Internet, Feb 2019); 40% of NGINX deployments are as an API gateway (source: NGINX User Survey 2017, 2018)
77. Why care? Latency & response time; indiscriminate network hops; expensive layer-7 payload inspection; enforced scaling dimensions
78. API Gateway Essential Functions: TLS termination; client authentication; fine-grained access control; request routing; rate limiting; load balancing; service discovery of backends; request/response manipulation
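Several of those essential functions can be sketched in a single NGINX configuration. The following is a minimal illustration, not a production setup; the backend names, certificate paths, and auth service are hypothetical, and `auth_request` assumes NGINX is built with the `ngx_http_auth_request_module`:

```nginx
# API gateway sketch: TLS termination, request routing, rate
# limiting, and client authentication via an auth subrequest.
limit_req_zone $binary_remote_addr zone=api_rl:10m rate=10r/s;

upstream api_a { server api-a:8080; }
upstream api_b { server api-b:8080; }

server {
    listen 443 ssl;                                # TLS termination
    ssl_certificate     /etc/ssl/api.example.com.crt;
    ssl_certificate_key /etc/ssl/api.example.com.key;

    location /api/a/ {
        limit_req   zone=api_rl burst=20 nodelay;  # rate limiting
        auth_request /_validate;                   # client authentication
        proxy_pass  http://api_a;                  # routing + load balancing
    }

    location /api/b/ {
        limit_req   zone=api_rl burst=20 nodelay;
        auth_request /_validate;
        proxy_pass  http://api_b;
    }

    # Internal subrequest to a (hypothetical) auth service:
    # a 2xx response allows the request, 401/403 rejects it.
    location = /_validate {
        internal;
        proxy_pass http://auth-service:8080/validate;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}
```

The same building blocks rearrange into each of the deployment patterns that follow: all in one edge tier, split across two tiers, or pushed down into per-team microgateways.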
79. Edge Gateway (fronting API A, API B, API C)
    • TLS termination; client authentication; authorization; request routing; rate limiting; load balancing; request/response manipulation
80. Edge Gateway (fronting APIs A–C, with backend services D–H behind them)
    • All of the above, plus façade routing
81. Two-Tier Gateway
    • Security gateway: TLS termination; client authentication; centralized logging; tracing injection
    • Routing gateway: authorization; service discovery; load balancing
82. Microgateway (DevOps team-owned)
    • Load balancing; service discovery; authentication per API; TLS termination; routing; rate limiting
83. Adapt to your environment - the essential functions (TLS termination; client authentication; fine-grained access control; request routing; rate limiting; load balancing; service discovery of backends; request/response manipulation) map onto Conway's Law: "organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations."
84. Microgateway
    • Service discovery integration; obtain authentication credentials; everything else!
85. Sidecar Gateway
    • Outbound load balancing; service discovery integration; authentication; authorization?
    • Edge/security gateway: TLS termination; client authentication; centralized logging; tracing injection
86. Service Mesh (Kubernetes cluster)
    • Service mesh control plane; Ingress/edge gateway; all DevOps teams
87. Two-Tier Gateway - Bottleneck?
88. Bottleneck?
89. Deployment Pattern Options
    • Edge gateway: + monoliths with centralized governance; - frequent changes, DevOps team-owned microservices
    • Two-tier gateway: + flexibility, independent scaling of functions; - distributed control
    • Microgateway: + DevOps teams, high-frequency updates; - hard to achieve consistency, authorization minefield
    • Sidecar gateway: + policy-based E/W, strict authentication requirements; - control plane complexity
90. liam@nginx.com | @liamcrilly - fin
