Monoliths, Micro Services and Service Oriented Architectures
Organising chaos is never easy, so it is no surprise that designing good software is hard. In huge systems tackling complex domains, experience counts. The consequences of well-meaning design decisions made early on can surface months down the track, when the only option is to press on. Reducing complexity is a common thread amongst experienced developers.
In the traditional monolith model of large-scale system design, all of the application functionality is rolled up into one huge system. The user interface speaks to the server-side application, which talks to the database. Every system service, from the most basic to the most complex, is handled in a single code base. In a medium-to-large system the complexity can be huge. One change to the login system (for example) means a complete redeploy of the whole system. Scaling the application means adding more servers, each with a deployment of the full code base, served from behind a load balancer. Each deployment runs as a single unit in one process.
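To make “one single unit in one process” concrete, here is a minimal, purely illustrative Java sketch; the module names are invented for the example and stand in for whole packages sharing one code base.

```java
// Illustrative monolith: every capability is compiled, deployed and run as one unit.
public class MonolithApplication {

    // In a real monolith these would be entire packages in the same code base.
    static class LoginModule   { boolean login(String user)  { return !user.isEmpty(); } }
    static class OrderModule   { String place(String order)  { return "placed: " + order; } }
    static class BillingModule { String invoice(String user) { return "invoice for " + user; } }

    public static void main(String[] args) {
        LoginModule login = new LoginModule();
        OrderModule orders = new OrderModule();
        BillingModule billing = new BillingModule();

        // One process handles everything; a change to LoginModule still means
        // rebuilding and redeploying the entire application.
        if (login.login("alice")) {
            System.out.println(orders.place("10 x widget"));
            System.out.println(billing.invoice("alice"));
        }
    }
}
```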
Although this seems simple, the architecture can lead to a fragile system. Design and development must be disciplined, because tight coupling between system components is easy to introduce. When everything shares a common code base, the boundaries between different domain contexts blur and the design can grow complex in unforeseen ways, as developers struggle to build a cohesive system in the face of complex business requirements and the nuances behind them. This usually becomes apparent only after considerable development effort has been spent, when it is difficult to turn back.
Deployments become riskier: if one aspect of the system fails, it can take the whole system out of action. As for scaling, each part of the application is treated equally regardless of its contribution to system load. It is hardly optimal.
One architecture that is popular at the moment is micro-services: an approach in which discrete software applications run in their own processes, and other services and systems connect to and interact with them. It is a smart idea, but the concept is by no means new - service-oriented architectures (SOA) were doing much the same thing well before the talk of micro-services. Moreover, the term “micro” is somewhat of a misnomer, as many of these services are not “micro” at all; they can be large, detailed applications with considerable functionality. That said, they are usually tied closely to a particular domain context, and they stand in stark contrast to the monolith described above.
Whichever way you look at it, the micro-service architecture comes with a number of clear advantages that make it an appealing option for complex systems. Let’s discuss why they can be helpful.
Understanding contextual boundaries
Breaking a domain up into discrete models is a fundamental concept behind the ideas presented by Eric Evans in his book Domain-Driven Design. Creating bounded contexts helps break down the complexity of even the most basic domain, because each context defines a clearer model of one particular aspect of the problem at hand. It creates a clean separation that limits the growth of complexity and interdependency. As a system grows, this is a beautiful thing.
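As a rough illustration of that separation (the domain and names here are invented), the same real-world notion of a “product” can be modelled differently in each bounded context, and neither model leaks into the other:

```java
// Two bounded contexts, each with its own model of "product".
// In the catalogue context a product is about description and price;
// in the shipping context the same product is about weight.
public class BoundedContextsSketch {

    static class CatalogueProduct {           // catalogue context
        final String sku;
        final String description;
        final double listPrice;
        CatalogueProduct(String sku, String description, double listPrice) {
            this.sku = sku; this.description = description; this.listPrice = listPrice;
        }
    }

    static class ShippingProduct {            // shipping context
        final String sku;
        final double weightKg;
        ShippingProduct(String sku, double weightKg) {
            this.sku = sku; this.weightKg = weightKg;
        }
    }

    public static void main(String[] args) {
        // The only shared concept is the identifier; each context evolves independently.
        CatalogueProduct forSale = new CatalogueProduct("SKU-1", "Coffee grinder", 59.95);
        ShippingProduct toShip = new ShippingProduct("SKU-1", 1.8);
        System.out.println(forSale.description + " weighs " + toShip.weightKg + " kg");
    }
}
```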
The Micro Service
Micro-services take this idea one step further by breaking grouped system functionality into clearly isolated applications, to which other applications and services can connect via lightweight protocols such as JSON over HTTP.
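A minimal sketch of such a service, using the JDK’s built-in com.sun.net.httpserver package; the endpoint, port and payload are invented purely for illustration. It runs in its own process and exposes its functionality only as JSON over HTTP.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A tiny stand-alone service: its own process, its own port, JSON over HTTP as the contract.
public class PriceService {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        server.createContext("/price", exchange -> {
            // Hard-coded payload purely for illustration; a real service would look this up.
            byte[] body = "{\"symbol\":\"ABC\",\"bid\":101.2,\"ask\":101.4}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("price service listening on :8081");
    }
}
```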
Running in separate processes, each service is isolated from the others. If one micro-service goes down, the other services can often continue working, allowing the application to recover with minimal disruption.
Containerisation tools such as Docker make this far easier in practice. Simply create your micro-service, Dockerise it and deploy. This is exactly what we have done in the application I am currently working on: we have a number of containers and other services which we bring together to create the overall application. If one thing breaks, we can be reasonably confident that most of the application will continue to work until the broken part comes back online.
Think of this in terms of a crude trading system. The application may be broken up into two services: a price discovery service, which queries the exchange for prices, and an order placement service, which places orders and receives confirmations from the exchange. Two independent services, each running in its own process. If one fails, the other can continue working (at least in this example), as sketched below.
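A rough sketch of that decomposition, with invented class names and behaviour: in a real deployment each responsibility would run as its own process (and its own container), so a failure in one does not bring down the other.

```java
// Two independent responsibilities from the trading example.
// In a real deployment each would run in its own process; here they are plain
// classes so the separation of concerns is easy to see.
public class TradingSketch {

    static class PriceDiscoveryService {
        double latestPrice(String symbol) {
            // Invented failure, standing in for a crashed process or lost connection.
            throw new IllegalStateException("price feed unavailable");
        }
    }

    static class OrderPlacementService {
        String placeLimitOrder(String symbol, double limit, int quantity) {
            return "order accepted: " + quantity + " " + symbol + " @ " + limit;
        }
    }

    public static void main(String[] args) {
        PriceDiscoveryService prices = new PriceDiscoveryService();
        OrderPlacementService orders = new OrderPlacementService();

        try {
            System.out.println("price: " + prices.latestPrice("ABC"));
        } catch (IllegalStateException e) {
            System.out.println("price discovery is down: " + e.getMessage());
        }

        // Order placement keeps working even though price discovery has failed.
        System.out.println(orders.placeLimitOrder("ABC", 101.5, 10));
    }
}
```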
The beauty of this architecture is that it breaks the software down into discrete chunks, making the domain far more manageable. The design is also more forgiving: the contract with each service is clearly defined and its internal state is hidden from the outside world. The only way to interact with the service is via a clear API, so the issues associated with tight coupling are greatly reduced.
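One way to picture that contract (a hypothetical sketch, not a prescribed pattern) is as a small published interface that consumers depend on, while everything behind it stays private to the service; the same idea applies whether the contract is a Java interface or a documented HTTP endpoint.

```java
// The published contract: this is all a consumer ever sees of the service.
interface PriceApi {
    double bestBid(String symbol);
    double bestAsk(String symbol);
}

// The implementation (caching, exchange connectivity, threading, and so on)
// can change freely without touching any consumer, as long as the contract holds.
class ExchangePriceApi implements PriceApi {
    @Override public double bestBid(String symbol) { return 101.2; }  // placeholder values
    @Override public double bestAsk(String symbol) { return 101.4; }
}
```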
Deployments can now be done in isolation. Rather than deploying a monolith, smaller deployments can be planned more frequently and with minimal disruption. This leads to smaller incremental changes, a far better risk profile for the deployment process, and normally a leaner, more streamlined test & QA process.
The Development Benefit
Services are arranged around business functions, with each function viewed as a “pluggable” connected unit. Development becomes very focused on functionality, as the exposed API is what matters; how the service interacts with other components beyond this API is not the overarching concern. Coupling between components is reduced while the internal cohesion of each service improves, and higher cohesion means better reusability.
From the developer’s perspective, this is liberating. With loose coupling between components and discrete boundaries, each service’s code base is smaller and therefore easier to manage, and changes to one component have a more contained impact across the system.
With smaller, more modular components, developers can take more ownership of the code base. They can move forward with changes more confidently, in the knowledge that the changes they make are well isolated.
With loosely coupled services, system rewrites and upgrades are easier.
Any downsides?
Yep! Several. Service calls are now far more expensive. Although a remote call using JSON over HTTP is pretty fast, the network latency involved means it will never be as fast as a native in-process API call. Transaction management and network fault tolerance are also important considerations that are harder to get right.
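Because each call now crosses the network, timeouts and failure handling have to be explicit in the caller. A hedged sketch using java.net.http.HttpClient, pointed at the invented price endpoint from earlier:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// A synchronous call to another service: latency and the possibility of
// failure are now part of the caller's everyday logic.
public class SynchronousPriceClient {

    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofMillis(500))
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8081/price"))
                .timeout(Duration.ofMillis(500))   // an in-process call needs no such budget
                .GET()
                .build();

        long start = System.nanoTime();
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("got " + response.body() + " in " + elapsedMs + " ms");
        } catch (Exception e) {
            // Network faults are a normal outcome, not an exceptional one.
            System.out.println("price service unreachable: " + e.getMessage());
        }
    }
}
```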
As requirements change, the modular nature of micro-services can also make the system inflexible. Moving code between service boundaries and domain contexts can become difficult, and it raises the question of how the model should be shared and evolved.
As the number of micro-services increases, so does the effort of managing the ecosystem. With multiple services, what is the best way to manage system deployment? What is the process for ensuring that your newly deployed micro-service does not break the services that connect to it?
One last thing: synchronous versus asynchronous calls between services need careful thought. As Martin Fowler explains:
Any time you have a number of synchronous calls between services you will encounter the multiplicative effect of downtime. Simply, this is when the downtime of your system becomes the product of the downtimes of the individual components. You face a choice, making your calls asynchronous or managing the downtime. At www.guardian.co.uk they have implemented a simple rule on the new platform - one synchronous call per user request while at Netflix, their platform API redesign has built asynchronicity into the API fabric.
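In Java terms, that asynchronous choice might look something like the following sketch, again against the invented price endpoint: the caller fires the request, carries on with the rest of the work, and falls back to a default quote if the dependency is slow or down, so one component’s downtime does not simply ripple through the whole request.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// An asynchronous call with a fallback: the user request is not held hostage
// by the availability of the downstream service.
public class AsynchronousPriceClient {

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8081/price"))
                .GET()
                .build();

        CompletableFuture<String> price = client
                .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body)
                // Invented default quote used when the service is slow or unreachable.
                .completeOnTimeout("{\"symbol\":\"ABC\",\"bid\":0,\"ask\":0}", 500, TimeUnit.MILLISECONDS)
                .exceptionally(e -> "{\"symbol\":\"ABC\",\"bid\":0,\"ask\":0}");

        // The rest of the request can proceed while the price is fetched in the background.
        System.out.println("rendering page with price: " + price.join());
    }
}
```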
Conclusion
Clearly, this architectural model comes with its challenges despite its many advantages. System design has to be approached with both the advantages and the disadvantages of micro-services in mind. Orchestrating the various aspects of this architecture requires careful planning and a solid understanding of the domain requirements.