2. INNOVATION IN
ARCHITECTURE
Organisations have accepted that “cloud” is the de-facto platform of the future, and the benefits
and flexibility it brings have ushered in a renaissance in software architecture. The disposable
infrastructure of cloud has enabled the first “cloud native” architecture, microservices.
Continuous Delivery, a technique that is radically changing how tech-based businesses evolve,
amplifies the impact of cloud as an architecture. We expect architectural innovation to continue,
with trends such as containerisation and software-defined networking providing even more
technical options and capability.
3. MICROSERVICES?
An approach to developing a single application as a suite of small services, each running in its
own process and communicating with lightweight mechanisms, often an HTTP resource API.
These services are built around business capabilities and independently deployable by fully
automated deployment machinery.
Martin Fowler
5. MONOLITHIC
• EASY TO GET UP AND RUNNING
• ONE LANGUAGE OR TECHNOLOGY
• SIMPLE TO DEPLOY
• LONG-TERM TECH COMMITMENT
• NO INDEPENDENT TEAMS
• BLOATED DEV ENVIRONMENTS
• LARGE MEMORY FOOTPRINT
• DIFFICULT TO SCALE
6. Time
• UI EVOLVES FASTER THAN BUSINESS LOGIC
• INITIAL ARCHITECTURE MIGHT NOT MEET NEW DEMANDS
• CHANGES ARE NOT ISOLATED: TIGHT COUPLING
• STARTUP TIMES, COMPLEXITY, AND LINES OF CODE INCREASE
7. MICROSERVICES
An approach to developing a single application as a suite of small services, each running in its
own process and communicating with lightweight mechanisms, often an HTTP resource API.
These services are built around business capabilities and independently deployable by fully
automated deployment machinery.
Martin Fowler
[Diagram: services communicating over HTTP and AMQP]
8. • AUTOMATED DEPLOYMENTS
• MONITORING
• FAILURE
• EVENTUAL CONSISTENCY (BASE VS ACID)
• SERVICE REGISTRATION
• PROLIFERATION OF SERVICES
• LOAD BALANCING
We don't have perfect hardware running perfect apps on a perfect network; what we do have are
buggy apps running on hardware that fails, on networks that disappear.
ADDS COMPLEXITY
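Two of the items above, failure handling and load balancing, can be sketched together in a few lines. This is a toy sketch only; the instance addresses and the `fake_request` helper are invented for illustration:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes calls across service instances, skipping instances that fail."""
    def __init__(self, instances):
        self._pool = cycle(instances)
        self._count = len(instances)

    def call(self, request_fn):
        # Try each instance at most once before giving up.
        last_error = None
        for _ in range(self._count):
            instance = next(self._pool)
            try:
                return request_fn(instance)
            except ConnectionError as exc:
                last_error = exc  # instance unavailable, try the next one
        raise RuntimeError("all instances failed") from last_error

# Hypothetical request function: the second instance is "down".
def fake_request(instance):
    if instance == "10.0.0.2:8080":
        raise ConnectionError("instance down")
    return f"handled by {instance}"

balancer = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080"])
print(balancer.call(fake_request))  # handled by 10.0.0.1:8080
print(balancer.call(fake_request))  # the failed instance is skipped transparently
```

A real deployment would put this logic in a reverse proxy or client library rather than in application code, but the shape of the problem is the same.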
11. CONTAINERISATION
Provides the isolation and management benefits of a virtual machine without the overhead usually
associated with general-purpose virtualisation. In the container model, the guest OS is limited to
being the same as the underlying host OS.
Today we are looking at the May Tech Radar, even though it has been out for a few weeks already.
The radar is released once or twice a year. It lists Techniques, Platforms, Tools, and Languages & Frameworks, and breaks them up into four rings that reflect our position:
Adopt, Trial, Assess and Hold.
Each release also looks at general trends.
The last radar focused on the explosive growth in the DevOps arena, next-generation data platforms, and a developer focus on security-minded tooling.
If you have been anywhere near the internet in the past year, you will have seen articles, presentations, and in some cases flame wars about at least one of these topics. The reason is that this past year has seen massive growth in the use of microservices, the promise of containerisation and the related rise of Docker, and the practices that need to be in place to enable these services, including Continuous Delivery and software-defined networking.
So what are microservices?
There is a general perception that microservices are an answer to many of the problems of other architectural choices, or that it’s just another buzzword to describe something that we are already doing.
But I think it’s actually a convenient term that can be used to encompass a number of new practices that are gaining traction.
Let’s take a step back and have a look at the monolithic architecture. Again, if you have been anywhere near the internet, this is the general impression, and who am I to argue with the internet!
So a monolithic application or system is generally a single-tiered software application in which the user interface and data access code are combined into a single program from a single platform. It may be modularised, but all modules run in the same process and are deployed on one machine.
Generally the idea is that the application is responsible for performing all the steps required to complete a function.
Generally quite simple to start and build; IDEs are good at this. These could be websites or desktop applications, and to a certain extent even some of the ESB implementations have become quite monolithic in nature.
They are generally developed in one core language.
However… these kinds of applications, and even a lot of the ESB implementations that we see, become locked into their technology choices. Upgrading to new language versions, and even trying to implement new technologies, becomes increasingly difficult.
A single code base means that the whole project is opened up in the development environment and things start to get a bit sluggish. Not a pleasant development experience.
Yet there's no reason why you can't make a single monolith with well-defined module boundaries. At least there's no reason in theory; in practice it seems too easy for module boundaries to be breached and for monoliths to get tangled and large.
So back to microservices, here is a possible overview of what a microservices architecture might look like.
Firstly, each service is responsible for its own data. Changes to the database schema of one service should not require a change to another.
Each one manages its own life cycle and is independently deployable, to the extent that when a microservice is deployed, it should do so with all of its requirements. This is why we are seeing embedded servers favoured over application servers: they simplify automated deployment.
Smaller, contained, easy to test and understand
Communication over HTTP or AMQP (Advanced Message Queuing Protocol)
Out of process communication
Services can use different databases and languages
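Those properties can be shown with a toy service. A minimal sketch using only the Python standard library, assuming a hypothetical "orders" service that owns its own data and exposes it to consumers only through an HTTP resource API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# The service's private data store: no other service touches this directly.
ORDERS = {"1": {"id": "1", "status": "shipped"}}

class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        order_id = self.path.rstrip("/").split("/")[-1]
        order = ORDERS.get(order_id)
        if order is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(order).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

# Port 0 asks the OS for a free port; the service runs in its own thread here
# to stand in for "its own process".
server = HTTPServer(("127.0.0.1", 0), OrdersHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer reaches the service only through its HTTP API, never its database.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/orders/1") as resp:
    order = json.loads(resp.read())
print(order)  # {'id': '1', 'status': 'shipped'}
server.shutdown()
```

The consumer neither knows nor cares what language or database sits behind the endpoint, which is exactly the point.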
The idea of “smart endpoints, dumb pipes” is that you should minimise the intelligence in your infrastructure, putting more of an onus on the services to implement their own functional and non-functional capabilities, and on the service consumers to understand how to consume each of the microservices. Tolerant Reader, etc.
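A Tolerant Reader in that spirit pulls out only the fields it needs and ignores everything else, so a producer can add fields without breaking its consumers. A small sketch; the payloads and field names are invented:

```python
import json

def read_order_status(payload: bytes) -> str:
    """Tolerant reader: extract only the field we depend on and ignore the rest."""
    doc = json.loads(payload)
    return doc.get("status", "unknown")  # default rather than fail on a missing field

v1 = b'{"id": "42", "status": "shipped"}'
v2 = b'{"id": "42", "status": "shipped", "carrier": "DHL", "eta": "2015-06-01"}'

print(read_order_status(v1))  # shipped
print(read_order_status(v2))  # shipped -- the new fields are simply ignored
```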
Swagger & APIMan
ESB keeps us locked into a technology
Has similar problems to the monolith approach
XSD and XML… lead to versioning problems… what about Tolerant Reader and Consumer-Driven Contracts?
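A Consumer-Driven Contract can be as simple as the consumer publishing the minimal shape it relies on, which the provider then checks in its own build. A toy sketch, with a hypothetical contract and responses:

```python
# The fields (and types) this consumer actually reads -- nothing more.
CONSUMER_CONTRACT = {"id": str, "status": str}

def satisfies_contract(response: dict, contract: dict) -> bool:
    """True if the provider's response still carries everything the consumer needs."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

provider_response = {"id": "42", "status": "shipped", "carrier": "DHL"}
print(satisfies_contract(provider_response, CONSUMER_CONTRACT))  # True

broken_response = {"id": "42"}  # provider dropped "status" -- a breaking change
print(satisfies_contract(broken_response, CONSUMER_CONTRACT))  # False
```

Tools such as Pact automate this idea; the point is that the provider learns about a breaking change before deployment rather than after.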
ACID - Atomicity, Consistency, Isolation, Durability
BASE - Basically Available, Soft state, Eventual consistency
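The BASE trade-off can be illustrated with two replicas that accept writes independently (staying available) and converge when they later exchange state. A toy last-write-wins sketch; the values and version numbers are invented:

```python
def merge(local: dict, remote: dict) -> dict:
    """Last-write-wins merge: the entry with the larger version number survives."""
    return local if local["version"] >= remote["version"] else remote

replica_a = {"value": "pending", "version": 1}
replica_b = {"value": "shipped", "version": 2}  # a later write landed here first

# Until the replicas exchange state, a reader can see either value (soft state).
replica_a = merge(replica_a, replica_b)
replica_b = merge(replica_b, replica_a)

print(replica_a == replica_b)   # True -- eventually consistent
print(replica_a["value"])       # shipped
```

Contrast this with ACID, where a transaction either commits everywhere at once or not at all; BASE accepts a window of inconsistency in exchange for availability.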
The complexity that drives us to microservices: large teams, multi-tenancy, supporting many user interaction models, allowing different business functions to evolve independently, and scaling. But the biggest factor is that of sheer size - people finding they have a monolith that's too big to modify and deploy.
Microservice envy… avoid the trap of diving headlong into microservices. They require additional overhead. Start small, with one or two services, and grow as the team adjusts and the right level of granularity is found.
In order to enable Microservices, especially when we need to manage a number of them, we need some sort of continuous deployment and a mechanism that is as automated as possible.
Each service should have its own pipeline and each service should pass through the required steps on its journey to production. Failure at any one of the steps in the pipeline should be transparent.
The most difficult part of microservices should be the decisions about granularity and service size, rather than how to deploy them. And if the microservice can pass through our pipelines as an immutable container with all of the required dependencies, we can also be assured that what is deployed in production is in the correct state. No more “works on my machine” excuses, I am afraid.
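A per-service pipeline of the kind described above reduces to an ordered set of stages where a failure transparently names the stage that caused it. A sketch with hypothetical stage names and a hypothetical failing "billing" service:

```python
# Each stage is a callable returning True (pass) or False (fail).
def run_pipeline(service: str, stages: dict) -> str:
    """Run stages in order; stop at the first failure and report it transparently."""
    for name, stage in stages.items():
        if not stage(service):
            return f"{service}: FAILED at '{name}'"
    return f"{service}: deployed"

stages = {
    "build": lambda svc: True,
    "unit tests": lambda svc: True,
    "integration tests": lambda svc: svc != "billing",  # pretend billing breaks here
    "deploy": lambda svc: True,
}

print(run_pipeline("orders", stages))   # orders: deployed
print(run_pipeline("billing", stages))  # billing: FAILED at 'integration tests'
```

Each service gets its own instance of such a pipeline, so one service's failure never blocks another's journey to production.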
Over the past year or so containers have become very fashionable, even though the technology is not new.
What’s new are the standards and ease of deployment. This is where Docker has gained a lot of support and popularity.
But how does something like Docker work? You create a Docker template (a Dockerfile) that declares all of the requirements your container needs. The first time the container is built, it downloads, installs, and does everything that is needed. This creates a snapshot and caches the intermediate layers, so that the next time the container starts up there is no wait time.
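That snapshot-and-cache behaviour can be modelled in a few lines: each build step yields a layer keyed by the hash of everything up to and including that step, so an unchanged prefix of steps is reused rather than re-executed. A toy model of the idea, not how Docker is actually implemented:

```python
import hashlib

cache = {}  # layer key -> cached layer (stands in for Docker's layer store)

def build(steps):
    """Return the list of steps that actually had to run; cached steps are skipped."""
    key, executed = "", []
    for step in steps:
        # The key depends on this step AND all previous ones, so changing an
        # early step invalidates every layer after it.
        key = hashlib.sha256((key + step).encode()).hexdigest()
        if key not in cache:
            executed.append(step)
            cache[key] = f"layer:{key[:8]}"
    return executed

print(build(["FROM ubuntu", "RUN apt-get install -y python"]))  # both steps run
print(build(["FROM ubuntu", "RUN apt-get install -y python"]))  # [] -- fully cached
print(build(["FROM ubuntu", "RUN apt-get install -y ruby"]))    # only the changed step runs
```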
OS Level Virtualisation - smaller footprint. Instead of a full size VM per service we can package only what is required.
Still uses the same base Host server but creates separate sandboxes / containers.
Fast startup, better use of resources
Mark Russinovich, CTO of Microsoft Azure
A service can register itself on the network
Apps or other services simply need to know the name of the service they need, instead of a location (hostname or IP address).
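At its core, such a registry is a name-to-address mapping that services write to and consumers read from. A minimal sketch, with hypothetical service names and addresses; real registries (Consul, etcd, Eureka) add health checks, TTLs, and replication on top:

```python
class ServiceRegistry:
    """Toy service registry: services announce themselves, consumers look up names."""
    def __init__(self):
        self._services = {}

    def register(self, name: str, address: str):
        self._services.setdefault(name, []).append(address)

    def lookup(self, name: str) -> str:
        instances = self._services.get(name)
        if not instances:
            raise LookupError(f"no instances registered for '{name}'")
        return instances[0]  # a real registry would also balance and health-check

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")  # the service announces itself on startup

# The consumer only ever knows the logical name "orders".
print(registry.lookup("orders"))  # 10.0.0.5:8080
```

If the orders service later moves to another host, only the registry entry changes; no consumer configuration needs to be touched.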