On-demand recording: https://www.nginx.com/resources/webinars/istio-move-to-microservices-service-mesh/
About the webinar
NGINX is widely known, used, and trusted for a variety of purposes. NGINX works as a reliable, high-performance web server, reverse proxy server, and load balancer. NGINX is also a widely used microservices hub, an Ingress controller for Kubernetes, and a sidecar proxy in the Istio service mesh.
In this webinar, we’ll describe the move to microservices, the crucial role that NGINX has already played, and a range of architectural options that organizations have for their microservices apps, including three progressively complex models in the NGINX Microservices Reference Architecture. We’ll then introduce the emergence of Kubernetes as a container orchestration framework, the use of service mesh architectures, and the design of Istio. We’ll finish by showing how NGINX Open Source and NGINX Plus can be used as the sidecar proxy in an Istio service mesh, bringing greater reliability and capability to your service mesh application.
NGINX, Istio, and the Move to Microservices and Service Mesh
2. NGINX, Istio, and the Move to Microservices and Service Mesh
How NGINX is emerging as a microservices hub, Kubernetes Ingress controller, and sidecar proxy
Speaker: Rob Whiteley
May 9, 2018
3. Who are we?
Rob Whiteley
Chief Marketing Officer
Formerly:
• VP Marketing, Hedvig, a leader in
software-defined storage
• Riverbed VP, Forrester Research VP
Floyd Smith
Director, Content Marketing
Formerly:
• Apple, Alta Vista, Google, startups
• Author of multiple books on technology,
including web, marketing, usability
6. Introducing NGINX
• NGINX OSS released 2003
• NGINX Plus first released in 2013
• NGINX, Inc. is VC-backed by leading investors in
enterprise software
• Offices in SF, Sunnyvale, Singapore, Cork,
Cambridge, & Moscow
• 1,500+ commercial customers
• 200+ employees
7. >50% of the top 100,000 busiest websites run NGINX.
Source: W3Techs Web Technology Survey
12. Cloud-Native Companies are Disrupting
76% of CEOs are concerned about new entrants disrupting their business.
CEO Outlook, KPMG, June 2016
Disrupted industries: video rental, hotel & travel, taxi & car rental, banking & payments, automotive & dealerships, bookstores & publishing.
13. Transition to Modern App Architectures
From Legacy ... to Modern:
• A giant piece of software → Microservices
• Silo'ed teams (Dev, Test, Ops) → DevOps culture
• Big-bang releases → Continuous delivery
• Persistent deployments → VMs, containers, functions
• Fixed, static infrastructure → Infrastructure-as-code
• Complex protocols (XML, SOAP) → Lightweight, programmable (REST, JSON)
14. Four Patterns are Inextricably Linked
• DevOps – a union of people, process, and products, collaborating to deliver high-quality software to the end user.
• Continuous Delivery – an automated pipeline for shipping small batches of software from development to production in minutes.
• Microservices – an architectural approach that develops an application as a collection of small, loosely coupled services communicating over well-defined APIs.
• Containers – low-overhead, highly portable packaging; the ideal compute vehicle for deploying individual microservices.
15. Roadmap to Digital Transformation
1. Prepare – remove hardware dependencies.
2. Package – adopt containers and enable continuous deployment.
3. Re-platform – move to a microservices architecture.
Together, these steps lead to digital transformation.
17. What are Microservices?
"An approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API."
Martin Fowler and James Lewis, "Microservices"
18. Microservices Benefits
1. Speed – deploy small footprint microservices that are faster to install,
debug, update & rollback independently of each other.
2. Tech Choice – ensure each microservice gets the technology stack
that best fits the purpose of that service.
3. Scalability – scale each microservice up and down independently
without negatively impacting the app.
4. Resiliency – detect and remediate microservices failures with no
impact to the overall health of the app.
5. Fault Tolerance – isolate the impact of common failures such as
memory leaks, out of bound access, etc.
19. What is a Service Mesh?
An infrastructure layer that makes inter-service communication fast, reliable, and configurable.
L7 networking for containers and microservices.
20. Service Mesh Benefits
1. Reliability – improve microservices-ready message handling.
2. Consistency – get consistent operational experience with legacy
apps.
3. Flexibility – tackle multi-cloud networking in modern applications.
4. Visibility – gain insight into the black box of microservices.
5. SLAs – get better operations and policy enforcement.
6. Repeatability – simplify microservices implementations.
7. Velocity – drive faster time-to-market with new services.
8. Security – protect interservice communication.
21. Evolution of the Microservices Stack
• Container wars – microservice packaging is standard. Docker won. (container portability)
• Orchestration wars – microservice automation is standard. Kubernetes won. (container scheduling)
• Proxy wars – the microservices service mesh is not standard. Winner TBD; Istio is a leading contender. (L7 interconnectivity)
22. Evolution of NGINX MRA (Basic → Advanced)
• Proxy Model – service-centric control; <10 services
• Router Mesh – app-centric control; dozens of services
• Service Proxy – multi-app granular control; hundreds of services
• Service Mesh – multi-app granular control; hundreds+ of services
23. Services Proxy - Ingress Controller
Ingress Controller
• Session Persistence
• JWT (+)
• Real Time Stats (+)
• SSL Termination
• Path-based Rules
• URI Rewrites
• Service Discovery
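Several of the Ingress controller features above (SSL termination, path-based rules) map directly onto a standard Kubernetes Ingress resource. A minimal sketch, assuming a hypothetical `webapp-svc` service and `app-tls` TLS secret; field names follow the current `networking.k8s.io/v1` API, which postdates this 2018 deck:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress          # hypothetical name
spec:
  ingressClassName: nginx       # route through the NGINX Ingress controller
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls         # SSL termination at the Ingress controller
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api              # path-based rule
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80
```

Plus-only features in the list, such as JWT validation and real-time stats, are typically enabled through controller-specific annotations or custom resources rather than the core Ingress spec.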
24. Services Proxy – Sidecar Proxy
NGINX sidecar: a service mesh proxy deployed alongside the service in a Pod.
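The sidecar pattern described above amounts to a second container in the same Pod. A minimal sketch, assuming a hypothetical `catalog` service and a ConfigMap holding the proxy configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: catalog-service         # hypothetical service name
  labels:
    app: catalog
spec:
  containers:
  - name: app                   # the microservice itself
    image: example/catalog:1.0  # hypothetical image
    ports:
    - containerPort: 8080
  - name: nginx-sidecar         # proxy deployed alongside the service
    image: nginx:stable
    ports:
    - containerPort: 80
    volumeMounts:
    - name: proxy-config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: proxy-config
    configMap:
      name: catalog-proxy-conf  # hypothetical ConfigMap with the proxy rules
```

Because both containers share the Pod's network namespace, the sidecar can intercept the service's traffic on localhost without the application knowing the proxy exists.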
25. Service Mesh – Istio Control Plane
Istio provides the control plane; the sidecar proxies form the service mesh data plane.
26. NGINX as a Sidecar Proxy
Why use NGINX as a Sidecar Proxy?
• Battle tested, reliable & high performance proxy
• Operational consistency with your legacy app delivery
• Powerful configuration directives (650+)
• Highly programmable (Lua, NGINX JavaScript)
• Strong community backing – many third-party modules
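The programmability point above refers to scripting NGINX behavior in Lua or NGINX JavaScript (njs). A minimal njs sketch, assuming the `ngx_http_js_module` is installed and a hypothetical `hello.js` file sits next to `nginx.conf`; directive names follow current njs releases and may differ in older versions:

```nginx
# nginx.conf (fragment)
load_module modules/ngx_http_js_module.so;

http {
    js_import main from hello.js;   # load the script below

    server {
        listen 8080;
        location /hello {
            js_content main.greet;  # hand the request to JavaScript
        }
    }
}

# hello.js – contents of the imported script:
#
#   function greet(r) {
#       // r is the njs request object; return a plain-text response
#       r.return(200, "Hello from njs\n");
#   }
#   export default { greet };
```

The same mechanism can rewrite headers, compute routing decisions, or validate tokens in the request path, which is what makes NGINX attractive as a programmable sidecar.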
29. NGINX Application Platform
A suite of technologies to develop and deliver digital experiences that span from legacy, monolithic apps to modern microservices.
31. Before: Complex App Infrastructure
Diagram: monolithic apps sit behind a CDN, WAF, L4 and L7 load balancers, an API gateway, and reverse proxies in front of web and app servers; microservices sit behind a Kubernetes Ingress controller, each with its own sidecar proxy, coordinated by an Istio control plane. Every tier carries its own management solution. The legend distinguishes control plane, data plane, and server/app components.
33. 9-Step Journey to Microservices
1. Identify "bounded contexts"
2. Define internal APIs
3. Reorganize data store
4. Design inter-service communication
5. Add service discovery
6. Integrate app load balancing
7. Attach an API Gateway
8. Integrate services mesh
9. Deploy microservices to production
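Step 7 above, attaching an API gateway, is a natural fit for NGINX. A minimal sketch of path-based routing to two hypothetical services (`users.internal`, `orders.internal`) behind one public TLS endpoint; hostnames, ports, and certificate paths are placeholders:

```nginx
# Hypothetical API gateway: one public entry point routing by path
upstream users_service  { server users.internal:8080; }
upstream orders_service { server orders.internal:8080; }

server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    # Each public path prefix maps to one internal microservice
    location /users/  { proxy_pass http://users_service; }
    location /orders/ { proxy_pass http://orders_service; }
}
```

Centralizing TLS, routing, and (with NGINX Plus) authentication at this layer lets the services behind it stay small and protocol-simple.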
Three general methods of managing traffic in a microservices environment:
• A user connects to NGINX Plus; internal services connect to each other through centralized instances of NGINX Plus.
• A user connects to NGINX Plus; internal services connect through their own local NGINX Plus instances, which route traffic to other NGINX Plus instances.
NGINX recommends the Fabric Model for its unique advantages:
• Service discovery – automatic, built-in support for service discovery with DNS SRV records, resolving both IP address and port number.
• Communication between services – keepalive, persistent SSL connections between microservices, providing encryption at the transmission layer.
• Load balancing between services – load balancing at the container level.
• Reliability and updates – active health checks and caching, supporting the circuit breaker pattern.
Not ready to break up your monolith? Here is how we can help you start moving to microservices today:
• A pair of HA NGINX Plus instances, which sits in front of all three models
• A Professional Services package for expert guidance and direction
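The Fabric Model advantages above correspond to a handful of NGINX Plus directives. A minimal sketch, assuming a hypothetical internal DNS/service registry at 10.0.0.2 and a `catalog` service registered with SRV records; `service=... resolve` and `health_check` are NGINX Plus-only features:

```nginx
resolver 10.0.0.2 valid=10s;        # hypothetical internal DNS / service registry

upstream catalog {
    zone catalog 64k;               # shared memory for runtime state
    # SRV lookup resolves both the IP address AND the port of each container
    server catalog.service.example service=http resolve;
    keepalive 32;                   # persistent connections between services
}

server {
    listen 443 ssl;

    location / {
        proxy_pass https://catalog; # SSL between microservices
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
        health_check interval=5s fails=2 passes=2;  # active health checks
    }
}
```

Active health checks plus caching are what let a local proxy short-circuit requests to a failing peer, which is the essence of the circuit breaker pattern mentioned above.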
Help developers and IT operators transition to microservices:
• Works with the Istio control plane (not exclusively)
• NGINX-centric, battle-tested infrastructure
• Strong third-party community modules
• High performance and reliability
• Flexible configuration and a large number of use cases
Key:
API GW – API Gateway
CDN – Content delivery network
K8s IC – Kubernetes Ingress controller
L4 LB – Layer 4 load balancer
L7 LB – Layer 7 load balancer
Mgr – Management solution
MS – Microservice
RP – Reverse proxy
SP – Sidecar proxy
WAF – Web application firewall
A lot of companies we talk to would love to move to microservices but are held back by complexity and layers of infrastructure acquired over years of doing business. A company we talked to recently had an architecture similar to this, with a CDN, WAF, multiple load balancers, and an API gateway all before hitting any application server or microservice. Each point solution was provided by a different vendor, with multiple management solutions needed. For these companies, the infrastructure has become so fragile that they fear making any change. This is frustrating for the business, which needs to move with agility to stay competitive.
When we talk to these companies, they’re surprised by how much they can simplify their infrastructure by consolidating common functions onto NGINX Plus.
With the NGINX Application Platform, we can collapse ten disparate functions into a single product suite: web server, load balancer, reverse proxy, content cache, application server, web application firewall (WAF), API gateway, Kubernetes Ingress controller, sidecar proxy, and service mesh controller. And with NGINX Controller we provide a central point of monitoring and management. Using fewer point solutions helps reduce both cost and complexity, enabling our customers to start moving to microservices.