How to migrate from a monolith to microservices without bleeding

Microservices architecture has been gaining a lot of attention, especially in the cloud era, and that has made lots of companies start thinking about migrating their monolithic applications to microservices. But moving towards microservices needs a lot of planning and preparation.
In this article I am going to discuss practices that can help you migrate to microservices smoothly and get the most benefit from the move.

What is a monolith? How does it compare to microservices?

A monolith usually refers to a single-codebase application where all parts and modules live in a single repository. You can imagine something like the picture below. You can also look at it from the point of view of deployment: monoliths are generally deployed as a single unit, which has its pros and cons. It is simple to deploy a single piece of software, but it is not flexible enough. If you fix even a small bug, you have to redeploy the whole application again, which can sometimes mean downtime, and believe me, you don't want that!

Monolith

In a microservices architecture, on the other hand, every module provides a specific service, which does not mean it should only do one task; sometimes it is a piece of functionality that logically pursues a specific goal. Each service lives in its own repository and is usually maintained by a specific team or person responsible for its development and maintenance. You may imagine a microservices architecture as in the picture below. Deployment-wise it is also more flexible to deploy an application in separate parts: if something goes wrong, a single service can be rolled back easily and quickly, and patching is easier since every service and function is decoupled from the rest.

Why should you migrate to a microservices architecture?

Microservices can help you by separating the concerns of different teams and their respective services. They also make engineering easier by isolating services from the rest of the system: the services can still communicate, but there is no dependency at the code level, so each one is easier to deploy. For instance, if you have five microservices and have added new features to only two of them, you only need to update and deploy those two and can leave the others intact. In this way deployment becomes potentially easier, faster, and less risky.

Yet there are some benefits to starting off with a monolithic architecture. Since everything, including the relations between modules, is more visible, it can be a good choice for a startup or a small team. With all the contracts in one place it is easier to pass data between modules, communicate, and maintain the code. It is also easier to move parts around when business decisions force a shift in design, approach, or plan in the initial phases. A monolith can help you reduce duplication and develop faster, which in turn reduces time to market as well.

Microservices, on the other hand, due to their decentralized and loosely coupled nature, are much harder to maintain than monoliths when it comes to applying system-wide changes. So, keeping the above in mind, it can be better to start with a monolith, define the module boundaries well, and then migrate to microservices when the system is more mature. That said, there are specific systems for which starting development with a microservices architecture makes sense.

When should you migrate?

Microservices are famous for their scaling capabilities, which is why many people see them as the solution for when the number of users scales up. That is where the first mistake in this process happens.

In fact, you should think about moving towards microservices when your team scales, for two reasons. First, in a larger team, conflicts of interest arise more easily and domains of concern start to overlap more; that is when you can really benefit from a microservices architecture and let each person focus on maintaining the part they are most concerned with. Second, a scaled platform needs a scaled team.

How should you prepare and migrate to microservices?

It is highly beneficial to prepare and plan before the migration. There are some points worth your attention before and during the process, and experience plays an important role here: foreseeing the challenges that might come up along the path helps you avoid unnecessary costs and downtime. Here at Techspire we can help you modernize your legacy application and migrate to a microservices architecture.
Below I am going to point out the topics you should prepare for before and during the migration process.

Start with planning

Always start by planning and setting goals. It is really easy to postpone tasks when there is no deadline, time slot, or goal defined, especially when a relatively large team is involved. There is always more immediate, pressing work than a non-functional change or resolving technical debt, and this is particularly true for refactoring code.
With a clear plan and goals to achieve, product managers and stakeholders can see more clearly where the organization is and where they want it to be. In such a process, stakeholder management is very important if you want to reach the goals you are setting, and it makes monitoring and planning for the required resources much easier. A loose timeline, on the other hand, tends to push important tasks back to an undefined future when the team will supposedly have more time.

The plan you are looking for must cover the technology side (the technology stack to use, tools, platform services, etc.), the organizational side (project management methodology, team organization), essentially tuning the plan for your organization, and a functional analysis to decide what needs to be migrated, what changes should be introduced to the migration process, and in what priority.

While preparing for the migration to a microservices architecture, it is worth mentioning that the initial infrastructure costs are going to be high, since you will be paying for CI/CD solutions and platforms. So take this into account, plan wisely, and keep everyone involved in the planning process informed. With a good plan and the help of an improved release process, it will pay off quickly.

A standard for your organization

Even though microservices architecture is all about decoupling and distribution, it is good practice to have some standardization in place, ideally developed in the early phases. Many common concerns such as logging, health checks, and testing can be designed and implemented in a standardized manner and re-used later by other teams, thus speeding up the development process and reducing duplicate work.
These standard patterns, formats, structures, and tools come in handy when moving developers across teams, since they reduce the time needed for training and onboarding. If a team later decides to move away from the common tooling, the cost of developing and maintaining that fork should also be taken into account, although the initial standard setup should be enough for them to work effectively.
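
As an illustration, here is a minimal sketch of what such a standardized building block could look like: a tiny health-check module that every team could reuse, so all services expose the same endpoint and the same response shape. The module layout, service name, and JSON fields are assumptions made for this example, not an established standard.

```python
# health.py - a hypothetical shared health-check module every service
# in the organization could reuse, so all services expose the same
# endpoint and response format.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE_NAME = "orders"  # assumed name; each service sets its own
START_TIME = time.time()

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps({
            "service": SERVICE_NAME,
            "status": "UP",
            "uptime_seconds": round(time.time() - START_TIME, 1),
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Every service runs the same handler, so monitoring tooling can
    # probe /health on any service and parse the same JSON shape.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```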

Clear contracts

Define a contract for each service. For a service to be responsive and serve its purpose, it needs data from other services, and both the required data and its format should be stated in that service's contract. I already mentioned how easy this is to handle in a monolithic application, where modules effectively share formalized interfaces because all the code lives together. In a microservices architecture it is not that easy: each service requires different data, and everything is now distributed across different repositories. This is one of the ways the engineering of a microservices architecture differs from a monolithic one.
It helps a lot to package those contracts into a library and use it whenever you communicate with a service. That way the contract is easier to maintain and update when necessary, with only one downside: every library used by the different services has to be kept constantly up to date.
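
As a sketch of the idea, the snippet below defines one contract in a hypothetical shared contracts library; the class name, fields, and validation rule are all made up for illustration.

```python
# contracts.py - a hypothetical shared contracts library. Each service
# imports these definitions instead of hand-rolling its own payloads.
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class CreateOrderRequest:
    """Contract for calling the (hypothetical) order service."""
    customer_id: str
    product_id: str
    quantity: int

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "CreateOrderRequest":
        data = json.loads(raw)
        # Fail fast when a caller sends a payload that violates the contract.
        if data.get("quantity", 0) <= 0:
            raise ValueError("quantity must be a positive integer")
        return cls(**data)

# A caller serializes through the shared contract...
wire = CreateOrderRequest("c-42", "p-7", quantity=2).to_json()
# ...and the order service deserializes with the very same class,
# so both sides agree on field names and validation rules.
print(CreateOrderRequest.from_json(wire))
```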

It is all about communication

Communication between services is important and should not be taken out of the equation. Generally speaking, microservices architecture favors asynchronous communication over synchronous communication, because it reduces the load on the underlying infrastructure and makes scaling easier and faster. During a load spike, consumers are not overloaded: messages are put in a queue to be processed, and it is easy to add more instances of message consumers to work through the excess messages in the queue. In a microservices architecture the requesting service and the responding service do not even have to be up and running at the same time; messages are put in the queue, and whenever the responding service (the consumer) is up, it can catch up on them.
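
To make this concrete, here is a minimal in-process sketch of queue-based communication. In a real system the queue would be a broker such as RabbitMQ or Kafka; Python's standard thread-safe queue stands in here so the example is self-contained and runnable.

```python
# A minimal in-process sketch of queue-based, asynchronous communication.
import queue
import threading
import time

orders = queue.Queue()

def producer():
    # The "requesting" service enqueues work and moves on immediately;
    # it never waits for the consumer to be available.
    for i in range(5):
        orders.put({"order_id": i})
        print(f"enqueued order {i}")

def consumer():
    # The "responding" service starts late and still catches up on
    # everything that accumulated in the queue while it was down.
    time.sleep(1)
    while not orders.empty():
        order = orders.get()
        print(f"processed order {order['order_id']}")
        orders.task_done()

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()
```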

There are some types of communication that do not play well with asynchronous calls. For instance, imagine a service responsible for giving real-time feedback while the user is searching for something, in order to come up with recommendations. Such feedback has to be processed synchronously. Converting these calls and putting them in queues not only reduces efficiency in some cases but can also introduce extra complexity and overhead around message processing.

To give you an example of the above, imagine you have an analytics service that is responsible for producing reports. You can use a message queue solution there: if some messages are delayed or the analytics service is down at the moment, it is not the end of the world, since the messages are stored in the queue waiting to be processed whenever your analytics service is back online. But if you have a payment service and a profile service, the communication between them can be crucial to the business, so it is better to use a faster and more reliable communication method, such as request/response, that will not miss a single request those services need to process. Of course, with the benefits come the costs of the required infrastructure and the availability efforts that have to be in place in such scenarios.

What about the transactions?

Now that we have talked about communication between services, it is worth discussing how we should control transactions between services to ensure data consistency. Let's start with an example: imagine an e-commerce application built on a microservices architecture. One of the benefits of microservices is the option to choose different technology stacks for different services: you can use a relational database for one service and a NoSQL solution for another. In other words, we can have a database per service. While this is very flexible, it introduces two main challenges around managing the transactions that occur in many different scenarios. Back to our e-commerce application: when a user orders a product on your website, several services are engaged in the background. You have a service that creates the order, a service that processes the payment, another that updates the inventory, and finally a service that delivers the order. To serve the customer, each of these microservices needs to process a local transaction. What if one of them fails to process its local transaction? Data consistency is broken throughout the whole system, and we don't want that.
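
To make the failure mode concrete, here is a deliberately naive sketch of that order flow with no distributed-transaction handling at all; the service functions are hypothetical stand-ins.

```python
# A naive order flow across hypothetical services. A single local
# failure leaves the system in an inconsistent state.
class PaymentError(Exception):
    pass

def create_order(order_id):
    print(f"order service: order {order_id} created")  # local commit

def charge_payment(order_id):
    raise PaymentError(f"payment service: card declined for {order_id}")

def reserve_inventory(order_id):
    print(f"inventory service: stock reserved for {order_id}")

def place_order(order_id):
    create_order(order_id)       # committed
    charge_payment(order_id)     # fails...
    reserve_inventory(order_id)  # ...never runs

try:
    place_order("o-1001")
except PaymentError as err:
    # The order row is already committed in the order service's database,
    # but no payment exists and no stock was reserved: inconsistent state.
    print(f"failed mid-flow: {err}")
```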

Challenge #1

How do we ensure the correctness of our transactions? Correctness is achieved when a transaction is Atomic, Consistent, Isolated, and Durable. Atomicity ensures that either all of the steps of a transaction complete or none of them do. Consistency ensures that the data remains valid when moving from one state to another. Isolation makes sure that transactions running concurrently yield the same result as if they had run sequentially. And durability ensures that if a system failure happens, committed transactions remain committed.
In distributed transaction scenarios, ensuring these ACID properties is always a high concern, especially when a transaction spans multiple services.
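
Within a single database these guarantees come built in, as the small SQLite sketch below shows for atomicity. The hard part, which the patterns below address, is getting an equivalent guarantee once the two updates live in two different services' databases.

```python
# Atomicity inside one database: both updates commit together or
# neither does. This is exactly the guarantee that is lost once the
# two updates belong to two different services.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
db.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
db.commit()

def transfer(fail_midway: bool):
    # The connection used as a context manager wraps the block in one
    # transaction: commit on success, rollback on any exception.
    with db:
        db.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        if fail_midway:
            raise RuntimeError("crash between the two updates")
        db.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")

try:
    transfer(fail_midway=True)
except RuntimeError:
    pass

# Alice still has 100: atomicity rolled the partial update back.
print(db.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
```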

Challenge #2

You have a service responsible for a specific functionality; it has its own database and works like a charm. But there is another service in your application, with its own database and its own responsibility, that needs to read data owned by the first service in order to do its job. If the first service is in the middle of updating that data, should it give out the new data or the old one?

2PC

The solution to the above challenges could be a pattern called two-phase commit (2PC), which is commonly used to implement distributed transactions. This pattern uses a component called the coordinator, which holds the logic for managing and controlling the transactions. As the name suggests, the pattern consists of two phases: a prepare phase and a commit phase.

Prepare phase: The coordinator asks the participants, which are the corresponding services, whether they are ready to commit the transaction, and each of them answers yes or no.

Commit phase: If all participants answered positively in the prepare phase, the coordinator asks them to commit; if even one participant answered negatively, it asks all of them to roll back their local transactions.
This pattern can thus solve the challenges of distributed transactions, but it comes with some shortcomings of its own.
There is a risk of a single point of failure, since the coordinator alone is responsible for managing the transactions. The pattern is also slow by design, since every node has to wait for the slowest participant's confirmation. And because the coordinator is very chatty and every service depends on it for its transactions, it is very difficult to scale. It is also important to note that this pattern is generally not supported by NoSQL databases, so if your microservices architecture includes one or more NoSQL databases you cannot use it.
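
Here is a minimal sketch of the coordinator logic, with in-process stand-ins for the participating services; a real implementation would of course talk to participants over a network protocol such as XA.

```python
# A minimal two-phase-commit coordinator over in-process "participants".
class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit

    def prepare(self):
        # Phase 1: vote yes/no on whether this node can commit.
        print(f"{self.name}: vote {'yes' if self.can_commit else 'no'}")
        return self.can_commit

    def commit(self):
        print(f"{self.name}: committed")

    def rollback(self):
        print(f"{self.name}: rolled back")

def two_phase_commit(participants):
    # Phase 1 (prepare): collect a vote from every participant.
    votes = [p.prepare() for p in participants]
    if all(votes):
        # Phase 2 (commit): everyone voted yes, so everyone commits.
        for p in participants:
            p.commit()
        return True
    # A single "no" forces a global rollback.
    for p in participants:
        p.rollback()
    return False

two_phase_commit([
    Participant("order-db"),
    Participant("payment-db", can_commit=False),  # vetoes the transaction
    Participant("inventory-db"),
])
```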

Saga pattern

The saga pattern is a transaction management approach that uses a sequence of local transactions, plus compensation transactions to roll back the effects of unsuccessful ones. In the saga pattern a compensation transaction should be idempotent and retryable, so that no manual intervention is needed to manage transactions. The saga execution coordinator is the central component of a saga workflow; it ensures that either the transactions complete successfully or the corresponding compensation transactions are executed.

The SEC (saga execution coordinator) captures every event in the sequence of transactions in a log. In case of failure, it inspects the log to find the affected participants and the order in which the compensation transactions should be executed.
If the SEC itself fails, it can read the log after startup, find out which compensation transactions have completed and which are still waiting to be executed, and then take the required actions based on what it finds in the log.

The saga pattern is usually implemented in one of two ways: choreography or orchestration. Choreography is more suitable for a greenfield situation, preferably with fewer participants per transaction. It requires the microservices to use a specific framework to implement the choreographed saga; you can use Axon Saga, Eclipse MicroProfile LRA, Eventuate Tram Saga, or the Seata framework for this. The SEC can be implemented either as a standalone component or embedded inside a microservice. In this method the flow is successful only if all microservices in the application report their local transaction as completed; in case of failure the failed participant reports to the SEC, and it is the SEC's responsibility to execute the proper compensation transactions.

The orchestration pattern requires a single orchestrator component that manages the transactions taking place. In case of any failure, the orchestrator is responsible for invoking the required compensation transactions, which we also need to define in this pattern. It can be a good choice when we have already implemented some microservices and want to introduce the saga pattern into our architecture. As for frameworks with which you can implement this pattern, you can use Camunda or Apache Camel.
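
As a rough illustration of the orchestrated variant, the sketch below runs a sequence of local steps and unwinds the completed ones with their compensations when a later step fails. All step names are hypothetical, and a real orchestrator would also persist its log, as described above.

```python
# A minimal orchestrated saga: each step pairs a local transaction with
# a compensation, and the orchestrator runs compensations in reverse
# order of completion when a later step fails.
def create_order():   print("order created")
def cancel_order():   print("order cancelled (compensation)")
def charge_payment(): raise RuntimeError("card declined")
def refund_payment(): print("payment refunded (compensation)")
def reserve_stock():  print("stock reserved")
def release_stock():  print("stock released (compensation)")

SAGA = [
    (create_order, cancel_order),
    (charge_payment, refund_payment),
    (reserve_stock, release_stock),
]

def run_saga(steps):
    done = []  # compensations for completed steps
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception as err:
        print(f"step failed: {err}; compensating...")
        # Compensations must be idempotent and retryable; here we just
        # run them once, newest first.
        for compensation in reversed(done):
            compensation()
        return False
    return True

run_saga(SAGA)
```
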
Make sure messages are delivered

You need an efficient routing mechanism between your microservices so that requests can reach their desired destinations efficiently. Here we are talking about load balancers, proxies, a service registry, or a service mesh acting as the router of the microservices architecture, based on the capabilities of your services.
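
As a toy illustration of the registry idea, the sketch below resolves a service name to one of its instances in round-robin fashion. In practice a registry such as Consul or a service mesh plays this role; the service names and addresses here are made up.

```python
# A toy client-side service registry with round-robin selection.
import itertools

REGISTRY = {
    "payment": ["10.0.0.5:8080", "10.0.0.6:8080"],
    "profile": ["10.0.1.9:8080"],
}

# One round-robin cursor per service, so load spreads across instances.
_cursors = {name: itertools.cycle(addrs) for name, addrs in REGISTRY.items()}

def resolve(service: str) -> str:
    """Return the next instance address for the given service name."""
    return next(_cursors[service])

for _ in range(3):
    print("payment ->", resolve("payment"))
```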

Use automation

Earlier I mentioned the simplicity of deployment in a microservices environment and how the changes you make to a single service do not affect the other services when deploying. To get the most benefit out of this, you should also think about automating the process.
The easiest way to deploy an application is to copy the code from the repository to the server and run it with a single command. Yes, that would of course work, but if you have 20 or 50 services to deploy it is not a very efficient way to do it. Instead you can use CI/CD solutions to automate these processes, and you can use Docker containers instead of deploying directly to an AWS or Azure box in the cloud. There are many tools that can help you with this.
Infrastructure as code is an essential part of running microservices. Now that I have talked about automating every process, I have to mention that a modernized architecture requires a modernized infrastructure. There is no use in automating every phase of deploying an application while your infrastructure is not equally agile and flexible. Having your infrastructure as code gives you the flexibility to automate even more, and lets you benefit more from the modernization of your architecture.
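
Just to show the shape of such automation, here is a toy driver that builds and pushes a container image for each service in one loop. In practice your CI/CD system (Jenkins, GitHub Actions, GitLab CI, and so on) runs these steps on every merge; the registry and service names are made up.

```python
# A toy deployment driver: one loop instead of 20-50 manual deployments.
import subprocess

SERVICES = ["orders", "payments", "inventory", "delivery"]
REGISTRY = "registry.example.com"

def deploy(service: str) -> None:
    tag = f"{REGISTRY}/{service}:latest"
    # Build the service's container image from its own subdirectory...
    subprocess.run(["docker", "build", "-t", tag, f"./{service}"], check=True)
    # ...and push it so the cluster can pull the new version.
    subprocess.run(["docker", "push", tag], check=True)

for service in SERVICES:
    deploy(service)
```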

Unified logging

There is a big difference between how you log events and data in a monolithic application versus microservices. You cannot simply log in to the different services and servers and track an incident or a user by an ID, for example. That is why all logging should be unified in a way that gives you a good idea of what is happening, and of what is wrong, whenever you debug or troubleshoot an error. To unify your logs throughout the entire system you can use a framework that adds trace IDs to your requests and stack traces; Jaeger is an example of such a solution. You can use something like the ELK stack or Splunk to collect the logs from the different services and parse and visualize them in a centralized manner.
Generally, when we talk about unified logging we are talking about increasing the visibility of the application and its architecture. As I mentioned earlier, there are lots of transactions happening between services, and tracing them end to end needs its own tooling as well. You can use something like OpenTelemetry: a set of tools, SDKs, and APIs that help you capture metrics, distributed traces, and logs from applications.
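
As a small sketch of the trace-ID idea using only Python's standard logging module, every service below tags its log lines with the ID minted at the edge, so a single query in your log store can reconstruct the request end to end. The service names are hypothetical.

```python
# Unified logging sketch: one trace ID flows through every service's logs.
import logging
import uuid

logging.basicConfig(format="%(asctime)s %(trace_id)s %(name)s %(message)s")

def get_logger(service: str, trace_id: str) -> logging.LoggerAdapter:
    logger = logging.getLogger(service)
    logger.setLevel(logging.INFO)
    # LoggerAdapter injects the trace_id into every record it emits.
    return logging.LoggerAdapter(logger, {"trace_id": trace_id})

# The edge service mints the ID once per request...
trace_id = uuid.uuid4().hex[:8]
get_logger("gateway", trace_id).info("received checkout request")
# ...and every downstream service reuses it instead of minting its own.
get_logger("payment", trace_id).info("charging card")
get_logger("inventory", trace_id).info("reserving stock")
```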

How micro should a service be?

In a microservices architecture each service is responsible for providing one unique service to users or to other services; in fact, one of the basic principles of microservices design is "single responsibility". Sometimes it is really tempting to break a microservice down into even smaller pieces. This can cause a moderately complex system to grow to hundreds of services within the first months of development, making it hard for developers, DevOps engineers, and infrastructure teams to track and maintain them all. A better approach is to think in terms of boundaries and bounded contexts: the resulting services will not be as small, but each is still responsible for only one area of the business domain, and the number of services stays at a more manageable level.

The question I want to answer here is: how small should you make a service? Well, the best way to answer is to see whether the characteristics below apply.

1- Is the part that you are separating from the service going to serve any other services?
2- Can the part that you are separating be updated independently, without needing to be redeployed whenever you update your service?
3- Does the part that you are separating from the service have different business concerns?

If the answer to the questions above is no, then you had better not break the service down any further.

Migration policy

Now that you have planned your way to microservices and designed the basis for the migration, should you convert the whole monolithic application into microservices overnight? The answer is probably no.
One of the biggest benefits of microservices architecture is that you can migrate your system gradually from a monolith to a microservices-based application, reducing the effort and resources needed to maintain a legacy system at the same time. You will initially need to make some changes to your monolith upfront so it can interact well with the microservices, but this effort pays off once you are able to manage the monolith the same way you manage the microservices.

One of the patterns that can be used here is known as the "strangler pattern". With this method you start by developing new features as microservices, alongside converting the parts of the monolith that are easiest to extract. You gradually add more new features as microservices and convert more parts of the monolith, to the point where the whole monolithic application has been converted into a microservices-based application; a toy sketch of the routing side of this follows below.
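
Here is a minimal sketch of a strangler facade: one routing layer in front of everything, sending already-extracted paths to the new microservices and everything else to the monolith. The paths and backend URLs are made up, and the route table grows as more of the monolith is strangled.

```python
# A toy strangler facade: route migrated paths to new services,
# everything else to the legacy monolith.
MIGRATED_PREFIXES = {
    "/orders":  "http://orders-service.internal",
    "/reviews": "http://reviews-service.internal",
}
MONOLITH = "http://legacy-monolith.internal"

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, backend in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return backend       # already strangled out of the monolith
    return MONOLITH              # everything else stays on the monolith

print(route("/orders/42"))   # -> orders microservice
print(route("/checkout"))    # -> still the monolith
```
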
Conclusion

There is no doubt that microservices can introduce another level of complexity. Resiliency, consistency, saga patterns, and asynchronous messaging, along with many other concepts, need to be carefully considered, planned for, and maintained. That being said, it is crucial that the teams and people involved in the migration process have good previous experience, or at least a clear understanding of microservices design and its challenges, with enough time to investigate the topic.
To make sure your migration project is a success, it is important to have a team of microservices professionals by your side with a proven track record of navigating migration projects.

Who am I?!

I am Yavar, and I am a DevOps engineer at Techspire. I have been in the IT industry for 9 years now, and when I am not at my desk I am probably riding a bike through the city of The Hague.

Do you think you have what it takes to work with us? At Techspire we're looking for people who love technology as much as we do and want to push themselves to expand their knowledge. Also, we love a good story, a good laugh, and a few beers.
