Cloud Native Night October 2016, Mainz: Talk by Simon Bäumler (Technical Chief Designer at QAware).
Join our Meetup: www.meetup.com/cloud-native-night
Abstract: This talk takes a practice-oriented approach to examining microservice-oriented architecture. It shows two real systems: one built from scratch as a microservice architecture, the other migrated from a monolithic system to a microservice architecture.
Using these two systems as examples, the pitfalls, advantages, and lessons learned from microservice-oriented architectures are discussed.
While both systems use the Java stack, including Spring Boot and Spring Cloud, many topics are kept general and will be of interest to all developers.
Microservices @ Work - A Practice Report of Developing Microservices
1. Microservices @ Work
A practice report of developing microservices
Mainz, 25.10.2016
Simon Bäumler
2. 2
Agenda
1. Messaging-Backbone: Migrating towards microservices
2. System-Asset-Scanner: Developing a microservice oriented architecture from scratch
3. Lessons learned
3. Migration of a messaging backbone towards microservice
oriented technologies
3
■ Message backend with groupware functions:
■Messaging
■Contacts
■Calendar
■File Store
■etc
■ System runs in a cluster of 12 servers
■ Codebase about 75k LoC (Java)
■ System handles more than 4k requests per second (that is >10 billion per month)!
4. The system before migration was monolithic. Parallel development in different teams was difficult
4
5. The technology stack before migration
5
■ Java
■ Spring-Dependency-Injection
■ OSGI for dynamic loading of modules
■ Camel for Message Routing
■ Karaf as Runtime Container / Server
Problems with this Architecture:
■ Domains could not be developed independently
■ Camel was not really used
■ Dynamic swapping of modules with OSGI was not used
■ OSGI + Camel added a significant amount of technical overhead
■ New features required testing of all domains, even if only the functionality of one domain was changed.
6. The new technology stack used
6
■ Java with Spring-Boot
■Bootstrap framework
■Allows fast setup of a microservice
■Easy to integrate common functionality like metrics, logging, etc.
■ Spring-MVC to implement REST services
■Services are defined by annotations
■Easy to integrate with Spring-Boot
■ API documentation: Swagger
■Generates HTML documentation from Spring-MVC annotations
■HTML documentation also provides test calls to the REST services
■ Build framework: Gradle
■Includes dependency management
■Maven archetypes are used for quick setup of a new microservice
■ Execution of Services: Supervisor
7. The system after migration allowed independent development
and deployment of different domains
7
8. Design for reliability is important for a high-load system
8
■ Circuit breaker (e.g. Netflix Hystrix)
■Backend-Integration
■Service-2-Service Communication
■ API-Management
■Authentication & authorization with AppId/AppSecrets
■Rate-Limiting / Throttling
■ Monitoring/restarting of processes
■Supervisor
■Securing evidence for later diagnosis is crucial
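The circuit-breaker pattern named above can be illustrated with a minimal hand-rolled sketch in plain Java. Class, method, and threshold names below are illustrative only; a production implementation like Netflix Hystrix additionally offers a half-open recovery state, reset timeouts, and thread-pool isolation.

```java
import java.util.function.Supplier;

// Minimal circuit breaker: after too many consecutive failures the
// circuit "opens" and calls fail fast instead of hitting the backend.
public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    // Run the call through the breaker; return the fallback when open or on failure.
    public <T> T call(Supplier<T> backendCall, T fallback) {
        if (isOpen()) {
            return fallback; // fail fast, do not burden the struggling backend
        }
        try {
            T result = backendCall.get();
            consecutiveFailures = 0; // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }
}
```

The fallback keeps callers responsive while the backend recovers; failing fast is what protects a high-load system from cascading outages.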
9. Design for diagnosability: The "magic" diagnosis triangle
answers the challenges in the diagnosis of distributed systems.
9
Diagnosis of distributed systems rests on three corners: Metrics (Prometheus), Traces, and Logs, brought together in the Spring Boot Admin UI.
10. Summary: The system is quite mature in this state with few open
issues
10
■ Migration took over a year
■ New Architecture was deployed in production a year ago
■ Main effort drivers were:
■Framework evaluation
■Proof of concept building
■Coordination with operations
■Solving technical details
■ Current task: Improve monitoring and metrics
■Traces: Zipkin
■Metrics: Prometheus
■ The system is stable and the architecture is sustainable
11. Agenda
1. Messaging-Backbone: Migrating towards microservices
2. System-Asset-Scanner: Developing a microservice oriented architecture from scratch
3. Lessons learned
11
12. A system from scratch: System-Asset-Scanner (SAS)
Collecting reports from datacenter servers
12
■ Core idea
■Servers send collected data to SAS
■Data is extracted and transformed to reports
■Extraction can be quite complex, e.g. looking up external databases, using external services, etc
■Reports and assets are stored in different databases
■ Separation of services is part of the security concept
■ Flexibility is also a key feature
■Planned to run in different environments
■Custom data extractors are used in various environments
■Only a fraction of all features is used in each environment
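The pluggable-extractor idea described above can be sketched as a small pipeline where each environment registers only the extractors it needs. The interface and class names below are illustrative, not the real SAS API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Raw server data flows through a list of registered extractors,
// each contributing one entry to the resulting report.
public class ReportPipeline {

    // Illustrative extractor contract (hypothetical, not the SAS code).
    public interface Extractor {
        String extract(Map<String, String> rawData);
    }

    private final List<Extractor> extractors = new ArrayList<>();

    // Environments register only the extractors they actually need.
    public void register(Extractor extractor) {
        extractors.add(extractor);
    }

    // Run every registered extractor over the incoming raw data.
    public List<String> buildReport(Map<String, String> rawData) {
        List<String> report = new ArrayList<>();
        for (Extractor e : extractors) {
            report.add(e.extract(rawData));
        }
        return report;
    }
}
```

Keeping extraction behind a narrow interface is what makes it possible to ship custom extractors (even ones written in Go by a third party, behind a service boundary) without touching the core.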
14. The technology stack makes heavy use of the Spring Cloud stack
14
■ Java with Spring-Boot
■ Bootstrap framework
■ Allows fast setup of a microservice
■ Easy to integrate common functionality like metrics, logging, etc.
■ Spring-MVC to implement REST services
■ Services are defined by annotations
■ Easy to integrate with Spring-Boot
■ API documentation: Swagger
■ Generates HTML documentation from Spring-MVC annotations
■ HTML documentation also provides test calls to the REST services
■ Backend-Client: Netflix Feign
■ REST-Client
■ Client is also created from Spring-MVC annotations
■ Build-Framework: Maven
■ Includes dependency management
■ Maven-Archetypes are used for quick-setup of a new
Microservice
■ Docker for Test-Environments
■ Using Docker in Production is a long-term goal
■ CI-Build with Jenkins
■ Go language is used by a 3rd party to implement
some data extractors
16. 16
Inflow of data and requests:
■Usually not constant (low tide, high tide)
■Unexpected variation may occur (flood, drought)
Processing of data and requests:
■Maximum rate that can be processed without problems
■Rate at which the system is damaged
17. 17
The back-pressure valve in the water analogy:
1. Reports how much maximum flow is currently possible
2. Adjusts the valve so that the actual maximum flow rate is not exceeded
3. Excess inflow is dammed up in a big reservoir
18. In SAS, the queues are responsible for implementing the back-pressure principle
18
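The queue-as-valve idea from the slides above can be sketched with a bounded queue whose non-blocking offer fails fast when the system is at its limit, so the caller can slow down or reject the request (e.g. with HTTP 429). Class and method names are illustrative, not the real SAS code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Back pressure via a bounded queue: producers may only enqueue while
// capacity is left; beyond that, submit() fails and the caller must
// throttle itself instead of flooding the downstream services.
public class InputQueue {
    private final BlockingQueue<String> queue;

    public InputQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Returns false instead of blocking when the system is at its limit.
    public boolean submit(String report) {
        return queue.offer(report);
    }

    // Consumers drain at the rate they can actually process.
    public String poll() {
        return queue.poll();
    }

    public int size() {
        return queue.size();
    }
}
```

The queue capacity plays the role of the dam lake: it absorbs tides, while the rejected offers are the valve that keeps the flood from reaching the damage rate.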
19. Status: The next major step is to integrate more cloud features
to simplify operation
19
■ Currently we have 16 different microservices
■ Codebase size is about 36k LoC
■ The system went into production a year ago; no severe problems yet
■ Development of new features is still continuing
■ The architecture can still be improved in several aspects
■Improve resilience of the architecture (e.g. by adding service-discovery, cloud-config, circuit-breakers…)
■At the beginning of development we decided to use a single codebase to speed up development; decouple the versioning/codebase of the services to deploy single services independently
■Improve metrics and monitoring
20. Agenda
1. Messaging-Backbone: Migrating towards microservices
2. System-Asset-Scanner: Developing a microservice oriented architecture from scratch
3. Lessons learned
20
21. The Spring-Cloud framework is a stable
platform for projects this size
21
■ Spring Cloud provides an opinionated
framework for microservice and cloud
features
■ When using the Spring Cloud components,
you automatically reach a high level in the
Cloud Native Maturity Model
■ Almost all features are optional, but easy to
use
■ Quality is production ready
■ API documentation is generated by Swagger
from source code
Source: pivotal.io
22. Module structure of a service: We always create a client module
with the API
22
package sas.service.a.api;
public interface ServiceAPI {
    @RequestMapping(value = "service/path/{id}",
                    produces = MediaType.APPLICATION_JSON_VALUE,
                    method = RequestMethod.GET)
    ResultDTO restServiceMethod(@PathVariable("id") String id);
}

package sas.service.a.app;
@RestController
public class ServiceController implements ServiceAPI {
    @Override
    public ResultDTO restServiceMethod(@PathVariable("id") String id) {
        // implement the service here
        …
    }
}

package sas.service.a.client;
@FeignClient(url = "${services.serviceurl}")
public interface ServiceClient extends ServiceAPI {
    // no implementation is needed, as Netflix Feign takes care of that
}
Runnable code and
configuration can
be created by a
Maven Archetype
23. The Job-DSL plugin is trivial, yet the advantages are significant
23
■ The plugin generates the "config.xml" of the Jenkins jobs from the Groovy scripts
■ Best practice: Use a simple "seed" job to configure all other jobs with the Job-DSL plugin
■ The description of the CI builds is stored in the SCM (like the description of the build itself, e.g. with the Maven POM)
■ Restoring or cloning CI jobs is a matter of seconds
■ Build configurations are versioned in the SCM
24. CI-as-Code with the Jenkins Job-DSL plugin
24
job('SAS/SAS-INPUT-QUEUE-BUILD') {
// additional description of the job
description('SAS Input Queue Maven build')
// configure jdk
jdk('jdk-1.8-docker-node')
// git configuration and trigger
scm {
git {
branch('origin/master')
remote {
url('https://www.qaware.de/git/SAS')
credentials('xxx')
}
configure { scm ->
// configure "git" (not "jgit") and fisheye repository browser
scm / gitTool << 'Git'
scm / browser(class: 'hudson.plugins.git.browser.FisheyeGitRepositoryBrowser') {
url('https://www.qaware.de/fisheye/changelog/SAS')
}
// only include current folder
scm / 'extensions' / 'hudson.plugins.git.extensions.impl.PathRestriction' {
'includedRegions'('code/input-queue/.*')
}
}
}
}
triggers {
scm('H/15 * * * *') // every fifteen minutes (e.g. at :07, :22, :37, :52)
}
// configure docker container to execute maven build
wrappers {
buildInDocker {
dockerHostURI('tcp://nio-build-1.intern.qaware.de:4243')
image('10.81.16.196/sas/buildnode')
startCommand('/bin/cat')
}
}
configure { node ->
// configure the network bridge to 'host'
node / buildWrappers
/ 'com.cloudbees.jenkins.plugins.okidocki.DockerBuildWrapper'
/ net << 'host'
}
steps {
// build dependencies
maven {
rootPOM('code/commons/pom.xml')
goals('clean install -Dmaven.test.failure.ignore=true')
}
// build input-queue
maven {
rootPOM('code/input-queue/pom.xml')
goals('clean install -Dmaven.test.failure.ignore=true')
}
}
// post build publishers
publishers {
archiveJunit('**/target/surefire-reports/*.xml')
}
}
If we were to start today, we would use the Jenkins pipeline DSL.
25. Containerize your CI pipeline: More flexibility and throughput of the CI process
25
Pipeline stages: Provisioning of Docker Jenkins nodes (from Docker files) → Compile, Test & Package → Create app packages → Provisioning of Docker app images (from a Docker file) → Run integration tests → Deploy & run in the staging environment
26. A test pyramid with tests of various granularity ensures code
quality and integration
26
■ Unit Tests: The classic unit tests (JUnit, Mockito)
■ Service tests: Test the REST controllers and clients of services (JUnit, Spring MVC Tests, Wiremock)
■ Integration Tests:
■Tests the interaction of multiple deployed containers
(JUnit, Spring MVC Tests)
■Performance Tests with Gatling
■ UI-Tests: Tests basic UI-functionality against a
deployed system (Protractor)
Run all these tests continuously in your build pipeline
and check the results (test errors, test coverage, run
times, resource consumption, etc.)
The test pyramid, bottom to top: Unit tests, Service tests, Integration tests, UI tests.
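The service-test layer described above (stubbing the backend, which Wiremock does in the slides' stack) can be sketched dependency-free with the JDK's built-in HTTP server and client. The class name, URL path, and JSON payload below are illustrative only.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Service-test idea: start an in-process stub of the backend, point the
// client at it, and assert on the response without any real deployment.
public class ServiceTestSketch {

    // The "client under test": a plain HTTP GET against the given base URL.
    public static String callBackend(String baseUrl) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/service/path")).build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    // Boot a stub server on an ephemeral port, run the call, shut it down.
    public static String runStubbedCall() throws IOException, InterruptedException {
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/service/path", exchange -> {
            byte[] body = "{\"result\":\"ok\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        stub.start();
        try {
            return callBackend("http://localhost:" + stub.getAddress().getPort());
        } finally {
            stub.stop(0);
        }
    }
}
```

Wiremock adds request verification, fault injection, and response templating on top of this basic pattern, which is why it is the better fit in a real build pipeline.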
27. In both projects the key was to simplify and automate
development, testing, building and operating the system
27
■ Spring Boot is a solid technology
■ Archetypes can be used to bootstrap a new microservice
■ Diagnosability is much more important than in traditional systems
■ Protect services with intelligent handling of excessive loads
■ The Job-DSL plugin automates maintaining the build pipeline
■ Use a test pyramid to test different layers and stages in the build and deployment process