A presentation on why (or why not) microservices, why a platform is important, how to break down a monolith, and some of the challenges you'll face (data, transactions, boundaries, etc.). The last section introduces Istio and service meshes. Follow @christianposta on Twitter for updates and more details
2. Christian Posta
Chief Architect, cloud application development
Twitter: @christianposta
Blog: http://blog.christianposta.com
Email: christian@redhat.com
Slides: http://slideshare.net/ceposta
• Author “Microservices for Java developers”
• Committer/contributor lots of open-source
projects
• Worked with a large microservices, web-scale,
unicorn company
• Blogger, speaker about DevOps, integration,
and microservices
5. Rough path of discussions
today
• Microservices: What, Why, When?
• “Cloud-native” with a Platform
• Service decomposition and integration
• Microservice resilience, routing, and control
@christianposta
7. Microservice
A highly distracting word that serves to confuse developers, architects,
and IT leaders into believing that we can actually have a utopian application
architecture.
@christianposta
8. Microservice
A highly distracting word that serves to confuse developers, architects,
and IT leaders into believing that we can actually have a utopian application
architecture.
An architecture optimization that treats the modules of an application
as independently owned and deployed services for the purposes of
increasing an organization’s velocity
@christianposta
9. • Single, self-contained, autonomous
• Isolated and Resilient to faults
• Faster software delivery
• Own their own data
• Easier to understand individually
• Scalability
• Right technology for the problem
• Test individual services
• Individual deployments
Microservices?
@christianposta
10. Why would one implement a system
as microservices?
@christianposta
11. Pain we may feel…
@christianposta
• Making changes in one place negatively affects
unrelated areas
• Low confidence making changes that don’t break
things
• Spend lots of time trying to coordinate work between
team members
• Structure in the application has eroded or is non-existent
• We have no way to quantify how long code merges
will take
12. @christianposta
• Development time is slow simply because the project
is so big (IDE bogs down, running tests is slow, slow
bootstrap time, etc)
• Changes to one module force changes across other
modules
• Difficult to sunset outdated technology
• We’ve built our new applications around old
premises like batch processing
• Application steps on itself at runtime managing
resources, allocations, computations
Pain we may feel…
13. • System complexity
• Operational complexity
• Testing is harder across services
• Security
• Hard to get boundaries right (transactions,
APIs, etc)
• Resource overhead
• Network overhead
• Lack of tooling
Drawbacks to microservices
@christianposta
15. So, do we “microservices” all the way down?
@christianposta
16. Ask a very honest, and critical, question:
Is our application architecture the bottleneck
for being able to go faster?
@christianposta
17. “No”, “Not really”, “Not yet”… then stop
Go find out what is. Improve that. Then come back.
@christianposta
18. DON’T optimize for microservices if…
@christianposta
• You’re building a Minimum Viable Product (MVP), testing a
hypothesis
• You’re building a CRUD application
• Your system isn’t CRUD, but the business logic isn’t very
complicated
• Your system doesn’t have > 10 people all trying to
coordinate to work on it
• Your application doesn’t need to scale
• You deliver packaged software
• You’re building HPC systems
20. We can now assert with confidence that
high IT performance correlates with
strong business performance, helping
to boost productivity, profitability and
market share.
@christianposta
https://puppet.com/resources/whitepaper/2014-state-devops-report
21. High performing IT teams need these
IT capabilities and practices
@christianposta
• Continuous Integration (build from master)
• Continuous Delivery (automated pipelines)
• Safe, reliable delivery mechanisms
• Modern, scalable, resilient application architectures
• Self-service, on-demand infrastructure
• Automated testing
• Metrics, logs, traces, observability
• Feedback loops
• Security as part of the pipeline
36. • Break things into smaller,
understandable models
• Surround a model and its
“context” with a boundary
• Implement the model in code
or get a new model
• Explicitly map between
different contexts
• Model transactional
boundaries as aggregates
Focus on domain models, not data models
@christianposta
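The last bullet, modeling transactional boundaries as aggregates, can be sketched in plain Java. This is a hypothetical `Order` aggregate (the class and its invariants are illustrative, not from the talk): every change goes through the aggregate root, which enforces its invariants as one consistent unit.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical aggregate: the Order is the transactional boundary.
// All mutations go through the root so invariants hold atomically.
public class Order {
    private final List<OrderLine> lines = new ArrayList<>();
    private boolean confirmed = false;

    public void addLine(String sku, int quantity, long priceCents) {
        if (confirmed) {
            throw new IllegalStateException("cannot modify a confirmed order");
        }
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        lines.add(new OrderLine(sku, quantity, priceCents));
    }

    public void confirm() {
        if (lines.isEmpty()) {
            throw new IllegalStateException("cannot confirm an empty order");
        }
        confirmed = true;
    }

    public long totalCents() {
        return lines.stream().mapToLong(l -> l.quantity() * l.priceCents()).sum();
    }

    record OrderLine(String sku, int quantity, long priceCents) {}
}
```

The point of the boundary: anything outside the aggregate (other services, other aggregates) coordinates through events or sagas, not through a shared 2PC transaction.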
37. How do we share information?
• REST, RPC
• Streams/Events(ActiveMQ, JMS, AMQP, STOMP, Kafka,
etc)
• Legacy (SOAP, mainframe, file processing, proprietary)
• Routing, Aggregation, Splitting, Transactions,
Compensations, Filtering, etc.
@christianposta
38. • Small Java library
• 200+ components for integrating systems (bring along only
the ones you use)
• Powerful EIPs (routing, transformation, error handling)
• Distributed-systems Swiss Army knife!
• Declarative DSL
• Embeddable into any JVM (EAP, Karaf, Tomcat, Spring
Boot, Dropwizard, Wildfly Swarm, no container, etc)
Apache Camel
@christianposta
40. public class OrderProcessorRouteBuilder extends RouteBuilder {
  @Override
  public void configure() throws Exception {
    rest().post("/order/socks")
      .description("New Order for pair of socks")
      .consumes("application/json")
      .route()
        .to("activemq:topic:newOrder")
        .log("received new order ${body.orderId}")
        .to("ibatis:storeOrder?statementType=Insert");
  }
}
Camel REST DSL
@christianposta
41. Saga pattern?
@christianposta
https://www.nicolaferraro.me/2018/04/25/saga-pattern-in-apache-camel/
• Multiple steps in an execution that must all complete
successfully or be rolled back/compensated
• Participants register their interactions as well as
callback/compensation mechanisms
• The saga coordinator is stateful
• Upon successful completion of an execution, an optional
“completion” callback is executed
• Upon failure of an execution all of the compensation actions
are executed
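The coordinator described above can be sketched in plain Java (rather than Camel's saga EIP; all names here are hypothetical): participants register an action plus a compensation, and on failure the completed actions are compensated in reverse order.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical, minimal saga coordinator: each participant registers an
// action plus a compensation; on failure, completed actions are
// compensated in reverse (LIFO) order.
public class Saga {
    record Step(Runnable action, Runnable compensation) {}

    private final List<Step> steps = new ArrayList<>();

    public Saga step(Runnable action, Runnable compensation) {
        steps.add(new Step(action, compensation));
        return this;
    }

    /** Returns true if every step completed, false if we compensated. */
    public boolean execute() {
        Deque<Runnable> completed = new ArrayDeque<>();
        for (Step s : steps) {
            try {
                s.action().run();
                completed.push(s.compensation()); // remember how to undo
            } catch (RuntimeException e) {
                while (!completed.isEmpty()) {
                    completed.pop().run(); // undo in reverse order
                }
                return false;
            }
        }
        return true;
    }
}
```

A real saga coordinator (as the bullet notes) must also be stateful and durable, so compensations survive a coordinator crash; this sketch keeps everything in memory for clarity.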
54. As we move to services architectures,
we push the complexity to the space
between our services.
@christianposta
55. Things you must solve for because…
distributed systems
• Service discovery
• Retries
• Timeouts
• Load balancing
• Rate limiting
• Thread bulkheading
• Circuit breaking
56. …continued
• Routing between services (adaptive, zone-aware)
• Deadlines
• Back pressure
• Outlier detection
• Health checking
• Traffic shaping
• Request shadowing
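To make a couple of these concrete: retries and circuit breaking are exactly the kind of logic a service proxy lifts out of the application. A hand-rolled sketch (all names hypothetical; real apps would lean on the mesh or a library like Resilience4j or Hystrix):

```java
import java.util.function.Supplier;

// Hypothetical sketch of client-side resilience logic that a service
// mesh moves out of the application: bounded retries plus a circuit
// breaker that fails fast after too many consecutive failures.
public class ResilientCall {
    private final int maxRetries;
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public ResilientCall(int maxRetries, int failureThreshold) {
        this.maxRetries = maxRetries;
        this.failureThreshold = failureThreshold;
    }

    public boolean circuitOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    public <T> T call(Supplier<T> remote) {
        if (circuitOpen()) {
            throw new IllegalStateException("circuit open: failing fast");
        }
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                T result = remote.get();
                consecutiveFailures = 0; // success closes the breaker
                return result;
            } catch (RuntimeException e) {
                last = e;
                consecutiveFailures++;
            }
        }
        throw last; // retries exhausted
    }
}
```

Every service, in every language, needs this same behavior, which is the argument the next slides make for pushing it into a sidecar proxy.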
61. But I’m using Vert.x!
• vertx-circuit-breaker
• vertx-service-discovery
• vertx-dropwizard-metrics
• vertx-zipkin?
• …..
• ......
@christianposta
62. Screw Java - I’m using NodeJS!
JavaScript is for rookies, I use Go!
But Python is so pretty!
I prefer unreadability… Perl for me!
@christianposta
63. Let’s abstract this functionality into a single
binary and apply it to all services.
• Allow heterogeneous architectures
• Remove application-specific implementations of this
functionality
• Consistently enforce these properties
• Correctly enforce these properties
• Opt-in as well as safety nets
@christianposta
66. Envoy is…
• service proxy
• written in C++, highly parallel, non-blocking
• L3/4 network filter
• out of the box L7 filters
• HTTP 2, including gRPC
• baked in service discovery/health checking
• advanced load balancing
• stats, metrics, tracing
• dynamic configuration through xDS
70. A service mesh is decentralized application-networking
infrastructure between your services that provides
resiliency, security, observability, and routing control.
A service mesh is comprised of a data plane
and control plane.
Time for definitions:
@christianposta
76. • Have self-service infrastructure automation?
• Have self-service application automation?
• Have working CI/CD?
• Have health checking, monitoring,
instrumentation?
• Have logging, distributed tracing?
• Able to release services independently?
• Honoring backward and forward compatibility?
Are you doing microservices?
@christianposta
77. • Download and explore OpenShift
• https://www.openshift.org/minishift/
• Checkout Spring Boot/WildFlySwarm/Vert.x on
OpenShift:
• https://launch.openshift.io
• Reach out to your Red Hat rep to discuss more and/or
get me/my team involved with your initiatives
What next?
78.
79. DEV TRACK - WHAT YOU’LL LEARN
● Introduction to cloud-native applications
● Building microservices with Wildfly Swarm, Spring Boot, and
Eclipse Vert.x
● Increasing developer productivity with containers
● Resilient and fault-tolerant microservices; continuous integration
and continuous delivery
RSVP@
https://red.ht/containers-seattle
80. Thanks!
BTW: Hand drawn diagrams made with Paper by FiftyThree.com @christianposta
Twitter: @christianposta
Blog: http://blog.christianposta.com
Email: christian@redhat.com
Slides: http://slideshare.net/ceposta
Follow up links:
• http://openshift.io
• http://launch.openshift.io
• http://blog.openshift.com
• http://developers.redhat.com/blog
• https://www.redhat.com/en/open-innovation-labs
• https://www.redhat.com/en/technologies/jboss-middleware/3scale
• https://www.redhat.com/en/technologies/jboss-middleware/fuse
Editor’s Notes
One large database!
We should focus on how we design our data models so that they can be sharded and distributed… focus on transactions, etc., not 2PC
Business Agility!!!
Journey …
Understand them
Test them
Change them
Different pace – rate of change is key !!!
Agile business…
Before going too far, we should have a definition of what microservices are. When we talk about microservices, we talk about breaking up complicated, potentially really large systems into smaller components. We break them into smaller components so we can understand them individually, test them individually, scale them, and ultimately change them at a different pace than the rest of the system. You can imagine that having to please every master in a monolithic environment can bog things down and inhibit change, which, as we discussed in the beginning, is the key here. We need to be able to work on systems that can change with the rest of the business as it gets even more competitive and disruptive.
One of the keys to this flexibility and ability to change is to focus on autonomy. Systems should be designed to be more autonomous so that changes don’t affect other downstream systems, faults don’t ripple out into cascading failures, etc. The more dependencies we have (on other systems, protocols, shared libraries, databases, etc.) the harder it can be to make changes. So we talk about services having and owning their own data, choosing the right technology for their function, and consciously enforcing modularity through APIs and contracts.
Autonomy is key here
But autonomy of systems includes autonomy of teams as well. Microservices can be a means to an end for a company serious about investing in the digital experience it provides to customers. It’s not in and of itself the end goal. It’s part of a digital transformation that encompasses all parts of the organization.
Open question to the audience:
Looking for participation
Optimized for speed…. Speed of what?!
Raw Performance?
Speed of competition?
Like agile?
How many in this room practice agile?
We found that companies with high IT performance are twice as likely to exceed their profitability, market share, and productivity goals. Investing in IT initiatives can deliver real returns and give businesses a competitive edge.
"Cloud native" is an adjective that describes the applications, architectures, platforms/infrastructure, and processes, that together make it *economical* to work in a way that allows us to improve our ability to quickly respond to change and reduce unpredictability. This includes things like services architectures, self-service infrastructure, automation, continuous integration/delivery pipelines, observability tools, freedom/responsibility to experiment, teams held to outcomes not output, etc.
Back in 2013 when Docker rocked the technology industry, Google decided it was time to open-source their next-generation successor to Borg, which they named Kubernetes. Today, Kubernetes is a large, open, and rapidly growing community with contributions from Google, Red Hat, CoreOS and many others (including lots of independent individuals!). Kubernetes brings a lot of functionality for running clusters of microservices inside Linux containers at scale. Google has packaged over a decade of experience into Kubernetes, so being able to leverage this knowledge and functionality for our own microservices deployments is game changing. The web-scale companies have been doing this for years and a lot of them (Netflix, Amazon, etc) had to hand build a lot of the primitives that Kubernetes now has baked-in. Kubernetes has a handful of simple primitives that you should understand before we dig into examples. In this chapter, we'll introduce you to these concepts and in the following chapter we'll make use of them for managing a cluster of microservices.
Red Hat OpenShift 3.x is an Apache v2 licensed open-source developer self-service platform (OpenShift Origin: https://github.com/openshift/origin) that has been revamped to use Docker and Kubernetes. OpenShift at one point had its own cluster management and orchestration engine, but with the knowledge, simplicity, and power that Kubernetes brings to the world of container cluster management, it would have been silly to try to re-create yet another one. The broader community is converging around Kubernetes, and Red Hat is all in with Kubernetes.
OpenShift has many features, but one of the most important is that it’s still native Kubernetes under the covers and supports features many enterprises need: role-based access control, out-of-the-box software-defined networking, security, logins, developer builds, and many other things.
Application Architecture
Organizations are moving from monoliths to microservices. Independently deployable, optimized for agility and fast time to market.
How does this relate to Containers?
Containers can help you package & deploy microservices efficiently, compared to using traditional VMs.
Container platforms provide integrated service discovery, orchestration and help manage your microservices deployments at scale.
Containers can also be leveraged for traditional applications (JavaEE, stateful, etc.), so you don’t leave all of your existing apps behind (if you pick the right platform like OpenShift).
Key Bank
Keybank: Approximately 20,000 employees, 5.8 billion in annual revenue,
Primary Goal: Digital channel modernization
The scope of the digital channel modernization project was to migrate a 15-year-old Java web app, developed on a homegrown MVC framework, to a more modern web experience and to create a new mobile web app.
This web app had grown more expensive to maintain and harder to keep within its SLAs.
It was the quintessential monolith app.
Extremely strict and complex change management / release process (there was an Excel spreadsheet with about 70 manual steps), done during the weekend, in hopes that the system was back online for regular business processing by Monday morning.
Based on WebSphere, the mainframe of Java app servers (Donald Ferguson: designing WebSphere to be a mainframe was a mistake)
Were forced to use WebSphere even for digital channel apps
Slow startup time, 5-10 MINUTES
Asked to use Liberty, … told to use Liberty on laptops, but use WAS in production!!!!!!
Asked for upgrades to Java ... Took 6months
Asked for caching tools... None officially supported, but maybe one day we could get one
WTF?!?!! moment
400+ requests, tickets, etc to stand up an environment for developers
Looking at how to best build a cloud platform for internal teams to flip this model around.
OpenShift at Keybank
Launched a “digital channel modernization initiative”
Re-architected WebSphere apps to “fast-moving monoliths” on Tomcat
Built an automated continuous delivery pipeline on OpenShift with Jenkins
Went from quarterly to weekly deployments to production
From exploring Docker and Kubernetes to production apps on OpenShift in <7 months!
More info at:
https://developers.redhat.com/blog/2016/10/27/the-fast-moving-monolith-how-we-sped-up-delivery-from-every-three-months-to-every-week/
WebSphere: http://www.bbc.com/news/business-11944966
What's the biggest technology mistake you ever made - either at work or in your own life?
When I was at IBM, I started a product called Websphere [which helps companies to operate and integrate business applications across multiple computing platforms].
Because I had come from working on big mission-critical systems, I thought it needs to be scalable, reliable, have a single point of control ... I tried to build something like a mainframe, a system that was capable of doing anything, that would be able to do what might be needed in five years.
I call it the endgame fallacy. It was too complex for people to master. I overdesigned it.
Because we were IBM, we survived it, but if we'd been a start-up, we'd have gone to the wall.
A is the book checkout system -- book is a physical copy (second edition, volume I, II, etc all individual books)
B is the book search system – book can be individual works where a composition may be multiple books and volumes I and II are all the same book
C is the checkout reporting engine – a book is what A thinks is a book
D is a recommendation engine – a book isn’t even a book, it’s an “interest” which has a mapping between books
Book recommendations (D)
D also wants to consume messages from A.
But the things we need to do in our service is sufficiently different that we want to change the language.
A and B have a translation and are coordinating.
D is not coordinating and will build an ACL (anti-corruption layer) that will do the translation. And it’s nobody else’s business how this works.
In this case, maybe we have a Book recommendation engine that also reads what gets checked out and whom. Maybe D has some more complicated models it uses for describing recommendations. It wants to use A’s data, but it doesn’t want to conform to A’s domain model. It builds an Anti-Corruption layer to keep its model pure and that can do the translation between its and A’s models.
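The anti-corruption layer D builds could look like this in plain Java (the types and mapping are hypothetical): D consumes A's checkout data but translates A's Book-centric model into its own Interest model at the boundary, so A's semantics never leak into D's domain.

```java
// Hypothetical anti-corruption layer: service D translates A's model
// into its own at the boundary, keeping D's domain model pure.
public class RecommendationAcl {

    // A's model: a book is a physical copy that got checked out.
    record CheckoutEvent(String isbn, String title, String memberId) {}

    // D's model: a book isn't a book, it's an "interest".
    record Interest(String memberId, String topic) {}

    /** Translate A's event into D's language; only the ACL knows both models. */
    public Interest translate(CheckoutEvent event) {
        return new Interest(event.memberId(), topicFor(event.title()));
    }

    private String topicFor(String title) {
        // Illustrative mapping; a real ACL would consult D's own data.
        if (title.toLowerCase().contains("cooking")) return "food";
        return "general";
    }
}
```

The translation logic lives in one place, so if A changes its model, only the ACL changes, not D's domain.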
Why microservices are going to be different, and a bit harder, for enterprises… because of this complexity in domain AND scale...
Domain-driven design discussions…
This concept of defining language, developing models to describe a domain, implementing those models, enforcing assertions, etc all happen within a certain context, and that context is vitally important in software. In common language, we are smart enough to resolve these types of language conflicts within a sentence because of its context. The computer doesn’t have this context. We have to make it explicit. And any context needs to have explicit boundaries.
This model needs to be “useful” ie, it should be able to be implemented. Try to establish a model that’s both useful for discussion with the domain experts and is implementable. There are infinite ways to model/think about something. Balance both masters with the model you choose.
Large complex domains may need multiple models. And really the only way to understand a language and model is within a certain context. That context should have boundaries so it doesn’t bleed or force others to bleed definitions and semantics.
Bounded context: within this space, this is the context of the language. This is what it means and it’s not ambiguous.
Central thing about a model is the language you create to express the problem and solution very crisply. Need clear language and need boundaries.
Anti corruption layers are translations between the different models that may exist in multiple bounded contexts. They keep an internal model consistent and pure without bleeding across the boundaries.
Bounded contexts tend to be “self contained systems” themselves with a complete vertical stack of the software including UI, business logic, data models, and database. They tend to not share databases across multiple models.
Apache Camel is very well suited for integration in a microservices environment. It’s not an ESB and doesn’t presuppose suites of software or servers. It’s a small, lightweight library that can be embedded in your choice of JVM runtime like Spring Boot, Dropwizard, WildFly/Swarm, EAP, Jetty, Tomcat, Karaf, or anything.
Companies have to change. This has always been true, but nowadays it has to happen much faster than before.
Companies like Blockbuster, Borders, Kodak, traditional media, etc. have felt this.
Average life of a human being is about 67 years.
The average life expectancy of a company in the S&P 500 has gone from 75 years in 1940, to 61 years in 1960, to 25 years in 1980, to ~15 years in 2016. By 2027, 75% of S&P 500 companies will be replaced.
http://www.reuters.com/article/idUS206536+13-Feb-2012+BW20120213
According to the report, the 61-year tenure for the average firm in 1958 narrowed to 25 years in 1980—to 18 years in 2011. At the current churn rate, 75% of the S&P 500 will be replaced by 2027.
If your company is not changing, not adapting, it will be done.
Technology is at the forefront of this change.
Apache Camel can enable legacy backends to participate in a REST-based set of services by quickly exposing a REST service interface using its expressive DSL. The REST DSL plugs right into the rest of the Apache Camel DSL, allowing you to quickly expose a REST endpoint that can describe an API as well as integrate with backend services by mediating, routing, transforming, and otherwise changing the shape of data or even the content of a payload with enricher, resequencer, and recipient-list patterns.
… new challenge… Let’s come back to that…
This concept of defining language, developing models to describe a domain, implementing those models, enforcing assertions, etc all happen within a certain context, and that context is vitally important in software. In common language, we are smart enough to resolve these types of language conflicts within a sentence because of its context. The computer doesn’t have this context. We have to make it explicit. And any context needs to have explicit boundaries.
This model needs to be “useful” ie, it should be able to be implemented. Try to establish a model that’s both useful for discussion with the domain experts and is implementable. There are infinite ways to model/think about something. Balance both masters with the model you choose.
Large complex domains may need multiple models. And really the only way to understand a language and model is within a certain context. That context should have boundaries so it doesn’t bleed or force others to bleed definitions and semantics.
Bounded context: within this space, this is the context of the language. This is what it means and it’s not ambiguous.
Central thing about a model is the language you create to express the prblem and solution very crisply. Need clear language and need boundaries.
Anti corruption layers are translations between the different models that may exist in multiple bounded contexts. They keep an internal model consistent and pure without bleeding across the boundaries.
Bounded contexts tend to be “self contained systems” themselves with a complete vertical stack of the software including UI, business logic, data models, and database. They tend to not share databases across multiple models.
This concept of defining language, developing models to describe a domain, implementing those models, enforcing assertions, etc all happen within a certain context, and that context is vitally important in software. In common language, we are smart enough to resolve these types of language conflicts within a sentence because of its context. The computer doesn’t have this context. We have to make it explicit. And any context needs to have explicit boundaries.
This model needs to be “useful” ie, it should be able to be implemented. Try to establish a model that’s both useful for discussion with the domain experts and is implementable. There are infinite ways to model/think about something. Balance both masters with the model you choose.
Large complex domains may need multiple models. And really the only way to understand a language and model is within a certain context. That context should have boundaries so it doesn’t bleed or force others to bleed definitions and semantics.
Bounded context: within this space, this is the context of the language. This is what it means and it’s not ambiguous.
Central thing about a model is the language you create to express the prblem and solution very crisply. Need clear language and need boundaries.
Anti corruption layers are translations between the different models that may exist in multiple bounded contexts. They keep an internal model consistent and pure without bleeding across the boundaries.
Bounded contexts tend to be “self contained systems” themselves with a complete vertical stack of the software including UI, business logic, data models, and database. They tend to not share databases across multiple models.
This concept of defining language, developing models to describe a domain, implementing those models, enforcing assertions, etc all happen within a certain context, and that context is vitally important in software. In common language, we are smart enough to resolve these types of language conflicts within a sentence because of its context. The computer doesn’t have this context. We have to make it explicit. And any context needs to have explicit boundaries.
This model needs to be “useful” ie, it should be able to be implemented. Try to establish a model that’s both useful for discussion with the domain experts and is implementable. There are infinite ways to model/think about something. Balance both masters with the model you choose.
Large complex domains may need multiple models. And really the only way to understand a language and model is within a certain context. That context should have boundaries so it doesn’t bleed or force others to bleed definitions and semantics.
Bounded context: within this space, this is the context of the language. This is what it means and it’s not ambiguous.
Get back to first principles.
Focus on principles, patterns, methodologies.
Tools will help, but you cannot start with tools.