Donnie Prakoso, Technology Evangelist, ASEAN, AWS.
Container technology provides unparalleled improvements in the efficiency and agility of packaging and deploying applications. Containers offer VM-like isolation with process-like efficiency, and are therefore becoming the de facto method for deploying microservices. However, running containerized services at scale has required operations teams to handle complex, dynamically changing infrastructure requirements, or run the risk of under- or over-provisioning infrastructure. Sounds like going back to the days before the cloud? In this session, learn how AWS services for containers take the pain out of managing infrastructure, along with best practices for developing new services rapidly while running them at scale.
3. What are Microservices?
Microservices are an architectural and organizational
approach to software development in which software is
composed of small, independent services that
communicate over well-defined APIs. These services are
owned by small, self-contained teams.
Whitepaper: http://bit.ly/2A0qGdt - Running Containerized Microservices on AWS
8. Containers Are a Perfect Fit!
• Any app, any language
• The image is the version: test and deploy the same artifact
• Stateless servers decrease change risk
• Self-contained services
• Simple to model services
• CI/CD pipelines
11. Production Workloads on AWS
• AWS VPC networking mode
• Advanced task placement
• Deep integration with the AWS platform
• ECS CLI
• Global footprint
• Powerful scheduling engines
• Auto Scaling
• CloudWatch metrics
• Load balancers
• Linux & Windows
13. 63% of Kubernetes workloads run on Amazon Web Services today.
- CNCF 2017 Survey
https://www.cncf.io/blog/2017/06/28/survey-shows-kubernetes-leading-orchestration-platform/
https://www.cncf.io/blog/2017/12/06/cloud-native-technologies-scaling-production-applications/
14. Elastic Container Service for Kubernetes (EKS)
28. Automatic Service Scaling
Amazon ECS services publish metrics to Amazon CloudWatch; scaling policies evaluate those metrics and the Auto Scaling ECS service adds or removes ECS tasks behind an Application Load Balancer.
(Diagram: tasks A, B, and C spread across Availability Zone A and Availability Zone B.)
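The scaling loop on this slide can be sketched as a small target-tracking calculation. This is an illustrative model, not the actual Application Auto Scaling API: the function name, thresholds, and task limits are assumptions for the example.

```python
# Illustrative target-tracking scaling, as in the slide: CloudWatch
# publishes a metric, a scaling policy compares it to a target value,
# and ECS adds or removes tasks. Not an actual AWS API.
import math

def desired_task_count(current_tasks: int, metric_value: float,
                       target_value: float, min_tasks: int = 1,
                       max_tasks: int = 10) -> int:
    """Scale the task count proportionally to metric/target, clamped."""
    if current_tasks == 0:
        return min_tasks
    desired = math.ceil(current_tasks * metric_value / target_value)
    return max(min_tasks, min(max_tasks, desired))

# CPU at 75% against a 50% target with 4 tasks -> scale out to 6 tasks.
print(desired_task_count(current_tasks=4, metric_value=75.0, target_value=50.0))
```

The real service evaluates CloudWatch alarms with cooldowns; this sketch only shows the proportional add/remove decision.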
35. Developers need to connect microservices
DEV: build apps that invoke other services by name.
OPS: ensure that each service name resolves to the correct IP/port.
36. What is Service Discovery?
"Where is Service X?" A friendly name maps to IP + port, e.g., app: {10.0.4.5:8080, 10.0.4.6:8080}.
A service registry is a database populated with information on how to dispatch requests to microservice instances.
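The registry definition above can be made concrete with a toy in-memory version. This is a minimal sketch, not any real registry: production systems (Route 53, Consul, etcd) add health checks, TTLs, and replication.

```python
# Toy in-memory service registry: a friendly name maps to the list of
# instance endpoints (IP, port) that can serve requests.
import random

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # name -> list of (ip, port)

    def register(self, name, ip, port):
        self._services.setdefault(name, []).append((ip, port))

    def deregister(self, name, ip, port):
        self._services.get(name, []).remove((ip, port))

    def resolve(self, name):
        """Return one registered endpoint for the service (random pick)."""
        instances = self._services.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return random.choice(instances)

registry = ServiceRegistry()
registry.register("app", "10.0.4.5", 8080)
registry.register("app", "10.0.4.6", 8080)
ip, port = registry.resolve("app")  # one of the two registered endpoints
```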
37. Why is it non-trivial?
Service discovery is dynamic by design:
• Number of containers & instances
• Auto-assigned IP addresses & ports
• Placement, scheduling, scaling
• Deployments and upgrades
• Health and connectivity
38. (Diagram: a client must ask Service A instance 1 and Service B instance 1: "How do I contact you? What's your IP + port?")
39. (Diagram: with multiple instances of Service A and Service B, the client must also ask: "Which one is available?")
41. (Diagram: a new deployment adds Service C instances 1 and 2 alongside the existing Service A and Service B instances, and the client must discover those too.)
42. Current patterns require install, setup, and management
Load balancers, key-value stores, service meshes, and service registries.
44. Can We Make It Simpler?
• Predictable names for services
• Auto-updated with the latest healthy IP and port
• Managed: no overhead of installation or monitoring
• High availability, high scale
• Extensible: flexible boundaries for auto discovery
46. DEV: you build apps where services are invoked by name, and the name resolves to an IP/port dynamically.
OPS: you turn on service discovery during deployment/service creation.
48. Route 53 provides the Service Registry
Route 53 provides APIs to create:
• A namespace
• A CNAME per service (auto naming)
• A records per task IP
• SRV records per task IP + port
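The record layout this slide describes can be sketched as a simple transformation from a task list to records. The record shapes below are simplified illustrations, not the actual Route 53 API types; the function name is an assumption for the example.

```python
# Sketch of the records described above: for each running task, the
# registry holds an A record (IP) and an SRV record (IP + port) under
# the service name in the namespace. Shapes are illustrative only.
def build_records(namespace, service, tasks):
    """tasks: list of (ip, port) tuples for the service's running tasks."""
    fqdn = f"{service}.{namespace}"
    a_records = [{"name": fqdn, "type": "A", "value": ip}
                 for ip, _ in tasks]
    # SRV value format "priority weight port target" follows RFC 2782.
    srv_records = [{"name": fqdn, "type": "SRV",
                    "value": f"1 1 {port} {ip}"}
                   for ip, port in tasks]
    return a_records + srv_records

records = build_records("myapp.local", "web",
                        [("10.0.4.5", 8080), ("10.0.3.6", 8080)])
# -> 2 A records and 2 SRV records for web.myapp.local
```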
49. ECS schedules & places service endpoints
The ECS scheduler updates the registry on:
• Service scaling
• Task registrations
• Task de-registrations
• Task health
• Scheduling/placement changes
• ECS instance changes
ECS maintains the latest state of the dynamic environment in the service registry.
(Diagram: cluster "myapp" with app and web tasks across AZ 1 and AZ 2.)
50. ECS updates service endpoints in Route 53
Cluster "myapp" runs app and web tasks across AZ 1 and AZ 2; ECS writes each task's endpoint into the namespace myapp.local:
• web.myapp.local (CNAME): 10.0.4.5:8080, 10.0.3.6:8080
• app.myapp.local (CNAME): 10.0.6.5:8080, 10.0.8.6:8080
51. Services connect to the latest endpoints via DNS
The app service resolves web.myapp.local against the DNS server for cluster "myapp" (AZ 1 and AZ 2):
>dig web.myapp.local
> 10.0.4.5:8080
The registry holds web.myapp.local (CNAME): 10.0.4.5:8080, 10.0.3.6:8080.
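The client side of this lookup can be sketched with SRV records, since those carry the port as well as the host. The stubbed answer list stands in for a real DNS query (e.g. dnspython against the VPC resolver); function names are assumptions, and the record format "priority weight port target" follows RFC 2782.

```python
# Sketch of a client resolving a service name to SRV records and
# picking an endpoint. The answers list is a stub for a real DNS query.
def parse_srv(record: str):
    priority, weight, port, target = record.split()
    return int(priority), int(weight), int(port), target

def pick_endpoint(srv_records):
    """Choose the lowest-priority (most preferred) SRV record."""
    best = min(parse_srv(r) for r in srv_records)
    _, _, port, target = best
    return target, port

# Stub of what a lookup of web.myapp.local might return:
answers = ["1 1 8080 10.0.4.5", "2 1 8080 10.0.3.6"]
host, port = pick_endpoint(answers)  # ("10.0.4.5", 8080)
```

A fuller client would honor SRV weights for load spreading; this sketch only shows priority selection.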
52. Benefits of this approach
• Managed: just turn it on
• Highly available: tied to Route 53 availability and scale
• Extensible: public APIs that can be used across AWS
• Works across clusters, accounts, and AZs
• Works across AWS services
53. Enables these use cases
1. Blue/green deployments: myapp.staging.local, myapp.prod.local; private IP; abstract cluster details
2. Internal microservices: web.myapp.local; expose private IP
3. External microservices: web.myapp.mycompany.com; expose a public IP or ELB EIP; network + container health checks
4. Across ECS & EKS: Service1.myapp.ecs, Service2.myapp.eks
5. Across ECS, AWS & on-prem: Service1.myapp.ecs, Service2.myapp.ec2, Service3.myapp.onprem
6. Expose to a service mesh: Service1.myapp.local, Service2.myapp.local