As late as the 1990s, most heavily used business applications were running on mainframe computers or powerful network servers. These programs were typically monolithic, with the entire program needing to run for every use case. While this worked at the time, today’s systems demand scalability, resilience and responsiveness that such older models often cannot deliver.
Microservices and event sourcing can radically increase efficiency
Over the last decade, business application usage patterns have rapidly shifted. Today, many users expect nearly instant responsiveness even when accessing resources over networks. At the same time, both rapid scalability and resilience are becoming primary concerns for app developers.
Microservice architecture can address these concerns and others. Microservices are a means by which the functionality of monolithic programs can be broken down into much smaller, well-defined services, upon which an entire application can be built. Microservices each carry out a specific business process and allow for maximum flexibility in terms of design, language and scalability.
Unlike a monolithic program, an application pieced together from microservices can have different parts written in different languages. Development can also be carried out by small teams with well-defined responsibilities, an approach that can significantly reduce or eliminate overlap between the work that multiple teams are doing.
Amazon, a company that frequently uses microservices architecture for its applications, limits the size of its development teams to groups that can be fed by two pizzas. This small-team orientation, when well executed, can eliminate some of the massive complexities that arise within large development projects, allowing teams to stay nimble and focused on readily achievable goals.
Microservice architecture also naturally promotes the loose coupling of application components. Applications built from microservices tend toward component independence and fault tolerance, and they adapt well to asynchronous execution. As a consequence, they tend to be far more resilient to system faults. This turns out to be one of the key benefits of both microservices and event-sourced programming: When done right, such systems can virtually eliminate fatal errors and downtime. Each microservice should have limited scope, well-defined responsibilities and its own data model.
Microservices also naturally promote scalability, both upwards and downwards, as well as fault tolerance. When one service is in high demand, scaling to meet that demand becomes trivial: simply create new instances of the isolated microservice instead of running the entire monolithic program, which may require orders of magnitude more computational resources. Similarly, when a single microservice develops a fault, other microservices can continue working asynchronously. This is usually not possible when the fault develops within a function that is part of a monolithic program.
Event sourcing and microservices create powerful synergies
Using microservice architecture in conjunction with event-sourced programming can have a number of positive effects.
Event-sourced programming, in the simplest terms, refers to a programming method that is based on the history of state transitions rather than on the current state of the world. This style of programming turns out to have a number of compelling benefits, including being a more natural way of thinking about real-world processes. People tend to think about processes in terms of sequential events rather than a snapshot of the state of the world at any given time.
Event-sourced architecture allows for programming in a way that makes intuitive sense not only for the domain expert who is setting the project requirements but also for the programmers themselves, who will spend less time and cognitive effort mapping the domain into an abstract data model. In short, event-sourced programming consists of creating applications whose data model is akin to telling a story. Rather than seeing just the end state of a process, a microservice can review the entire history of that process, allowing for precise targeting of specific events. For example, a microservice could be called that offers a last-minute discount before a customer checks out if there was an event where the customer put an item in the shopping cart and then later removed it. Without event-sourced recording of those transactions, the microservice could not be called for that event because it would not have been recorded at all.
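The shopping-cart scenario can be sketched in a few lines. This is a minimal, illustrative example (the event names and functions are hypothetical, not from any particular framework): current state is derived by replaying the history, and the same history also exposes past transitions, such as an item that was added and later removed, that a state-only model would have discarded.

```python
# Minimal event-sourcing sketch; event names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Event:
    kind: str   # e.g. "item_added", "item_removed"
    item: str


def current_cart(events):
    """Fold the full event history into the current cart state."""
    cart = set()
    for e in events:
        if e.kind == "item_added":
            cart.add(e.item)
        elif e.kind == "item_removed":
            cart.discard(e.item)
    return cart


def removed_items(events):
    """History-only insight: items the customer added, then removed."""
    return {e.item for e in events if e.kind == "item_removed"}


history = [Event("item_added", "headphones"),
           Event("item_added", "charger"),
           Event("item_removed", "headphones")]

assert current_cart(history) == {"charger"}
# A discount service could target this event; a snapshot of the final
# cart state alone would carry no record of the removal.
assert removed_items(history) == {"headphones"}
```

The point is that `removed_items` is computable only because every transition was stored, not just the final cart.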
Event-sourced coding can allow for effective asynchronous messaging between microservices, allowing any microservice to see the entire history of the process in question.
The goal of all this is to work towards a more reactive systems model. Reactive programming is well-suited for the tough demands of today’s web- and cloud-based applications. Reactive systems display the following characteristics:
- They are responsive. Responsive systems serve users quickly under normal conditions, and they are also able to quickly detect and handle problems.
- They are resilient. Reactive systems do not become unresponsive when an element within them encounters a fault. These systems are designed with high degrees of redundancy and effective means of delegating tasks away from degraded or failed components. They also contain faults by isolating any component that has failed.
- They are elastic. Reactive applications are able to quickly scale with user demand. They are able to remain highly responsive under wildly varying workloads, actively managing resource allocation as well as spawning or closing instances of critical components. They are able to do this effectively on commodity hardware.
- They rely heavily on asynchronous messaging. This is the form of inter-component communication that ensures the loosest possible coupling. Event sourcing can help here by ensuring that every microservice that makes up a reactive system always has access to the most up-to-date system state and the complete history of how it got there.
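The loose coupling that asynchronous messaging buys can be illustrated with a toy append-only event log (the class and names below are hypothetical, a simplified stand-in for a real broker). Producers never call consumers directly; each consumer tracks its own read position, so a slow or failed consumer does not block the others, and a newly added consumer can still replay the complete history.

```python
# Toy event log with independently consuming subscribers.
class EventLog:
    def __init__(self):
        self.events = []    # append-only history of all events
        self.offsets = {}   # consumer name -> next index to read

    def publish(self, event):
        self.events.append(event)

    def poll(self, consumer):
        """Return the events this consumer has not yet seen."""
        start = self.offsets.get(consumer, 0)
        batch = self.events[start:]
        self.offsets[consumer] = len(self.events)
        return batch


log = EventLog()
log.publish({"type": "order_placed", "id": 1})
log.publish({"type": "order_shipped", "id": 1})

# The billing consumer reads at its own pace...
assert log.poll("billing") == [{"type": "order_placed", "id": 1},
                               {"type": "order_shipped", "id": 1}]
log.publish({"type": "order_placed", "id": 2})
assert log.poll("billing") == [{"type": "order_placed", "id": 2}]

# ...and a consumer added later can still replay the entire history.
assert len(log.poll("analytics")) == 3
```

A production broker adds durability, partitioning and delivery guarantees, but the coupling model is the same: components share a log, not function calls.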
Should you migrate to a microservices and event-sourced architecture?
Many developers will be curious to know what criteria would indicate that they should consider adopting an event-sourced microservices architecture in the future or even rewrite their existing applications to reflect this paradigm. Ultimately, it depends on a number of factors.
Generally speaking, if your system will be serving a wide breadth of client-specific and varied platforms, it will be difficult to take full advantage of the efficiencies offered by microservices and event-sourced architectures. This is because it may be difficult or impossible to monitor the client platforms’ health and dynamically assign resources to best meet system demands. Quick scaling through spawning or closing microservice instances may also be difficult because the relevant metrics may not be available, and you may not have control over client-side resources.
An even more-important consideration may be the user load of your application during peak times. If your application is not going to be fielding hundreds or thousands of user requests per second, then it is unlikely that you will see significant performance gains from transitioning to a microservices architecture that allows asynchronous calls to independent services. But the higher your peak-time demand gets, the more you stand to gain from the event-sourced microservices model.
Even if the above constraints indicate that migrating to an event-sourced microservices architecture would be the smart move, you will ultimately have to consider whether completely upending your existing code in order to make the shift is worth your while.
Migrating to microservices from a monolith
Migrating to microservices from a monolith may appear to be a nearly impossible task. The massive disruption implied by shifting to a radically new architecture may be too much for decision makers to countenance. That’s why the right approach will often involve not rewriting existing code at all until a planned new feature is going to be rolled out or a serious problem is uncovered.
In effect, the old synchronous architecture can be sidestepped in a piecewise fashion. This can often be accomplished through the creation of an asynchronous façade that will allow existing synchronous functionality to continue normally while tying in new asynchronous functionality alongside it. Eventually, much of the app’s extended functionality as well as rewritten old functionality, which will arise on an as-needed basis, can be structured into a pure asynchronous implementation.
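One hedged way to picture such a façade (the function names here are invented for illustration) is a thin wrapper that leaves the legacy synchronous call untouched while also publishing an event for each call, so new event-driven services can consume the history without modifying the monolith.

```python
# Sketch of an asynchronous facade over legacy synchronous code.
# Names are illustrative; "events" stands in for a real broker.
events = []


def legacy_place_order(order_id):
    """Existing synchronous code path, left unchanged."""
    return f"order {order_id} accepted"


def place_order_facade(order_id):
    result = legacy_place_order(order_id)   # old path still runs normally
    # New path: record the state transition for event-driven consumers.
    events.append({"type": "order_placed", "order_id": order_id})
    return result


assert place_order_facade(42) == "order 42 accepted"
assert events == [{"type": "order_placed", "order_id": 42}]
```

Because callers of the façade see exactly the old behavior, the wrapper can be rolled out piecewise, one endpoint at a time.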
Programmers who have successfully migrated legacy apps to a reactive structure generally recommend following a few key steps when carrying out these transitions. Here are six steps that generally work well when migrating from a monolithic architecture to an event-sourced microservices design.
Carefully determine the primary use cases
One of the hardest steps when moving to a microservices approach is carefully defining the use cases. These should be well defined and narrowly scoped while ideally performing a valuable core task that can potentially be used by many future applications.
The goal is to create microservice modules that are both highly decoupled as well as autonomous. This will give each microservice the capability of storing data in a format and shape that is required for its own use.
Separate internal components by use case and redefine them in terms of microservices
One important design goal of a microservices-based app is to minimize the interactions each service needs to have with other services while also minimizing the need for data transformation. Still, it may be necessary to carry out extensive interservice messaging as well as to choose optimal data structures for the service itself, which may frequently not correspond to the way that data is expressed or stored in the original program.
For instance, a service that only produces JSON output may be best served by a JSON database platform, such as Elasticsearch.
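To make this concrete, here is a small sketch (with invented event and field names) of a reporting service folding shared events into its own read model, a JSON-ready document shaped for its queries, independent of how the original program stored the data.

```python
# A service-local read model built from shared events.
# Event and field names are illustrative.
events = [
    {"type": "order_placed", "order_id": 1, "total": 30},
    {"type": "order_shipped", "order_id": 1},
]


def build_read_model(events):
    """Fold the event stream into per-order documents."""
    orders = {}
    for e in events:
        doc = orders.setdefault(e["order_id"],
                                {"status": "new", "total": 0})
        if e["type"] == "order_placed":
            doc["status"], doc["total"] = "placed", e["total"]
        elif e["type"] == "order_shipped":
            doc["status"] = "shipped"
    return orders


model = build_read_model(events)
assert model[1] == {"status": "shipped", "total": 30}
```

The monolith may have stored orders relationally; this service keeps only the denormalized shape it actually serves.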
Design efficient and universal APIs
The next step will be to facilitate communication between the microservices through the creation of efficient APIs. You will also need to abstract every state transition within the system into events, which will then be stored in event objects.
These events will be read by many different microservices in order to build their internal states. Choosing the right technology for the encoding of these events is a critical step to ensure that processing times are fast and that each event is adequately captured and encoded.
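A common pattern, sketched here with hypothetical names, is to wrap every state transition in a small event envelope carrying a type, a schema version and a payload, then encode it for transport. JSON is used below for readability; a binary format such as Protobuf, Thrift or Avro would be a drop-in replacement for the encode/decode step.

```python
# Hypothetical event envelope for state transitions.
import json
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class EventEnvelope:
    event_type: str   # e.g. "customer_registered"
    version: int      # schema version, to let the API evolve
    payload: dict


def encode(event):
    """Serialize the envelope for the wire (JSON for readability)."""
    return json.dumps(asdict(event), sort_keys=True)


def decode(raw):
    return EventEnvelope(**json.loads(raw))


e = EventEnvelope("customer_registered", 1, {"customer_id": "c-7"})
assert decode(encode(e)) == e   # round-trips losslessly
```

Carrying an explicit version field is one simple way to let consumers handle old and new event shapes side by side as the API evolves.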
There are a number of technologies that are excellent choices. These include Protobuf, Thrift and Avro. All three significantly outperform JSON and XML in serialization times and payload sizes. Protobuf is a well-documented and extensively used solution that Google has deployed for most of its applications. It has also proven adept at allowing APIs to evolve.
Choose an event handler
The entire framework of microservices and event-sourcing is centered around using events to call microservices. Therefore, choosing the right event broker is a key consideration when implementing this architecture.
Many developers are fans of Apache Kafka for projects that require thousands of events each second to be stored and handled. Kafka is capable of both permanently storing events and acting as a message broker. The program divides events into topics, which can then be consumed by any service within the application.
Wisely designing topics is critical
Topics are like a combination of a continuous newsfeed and a searchable history of events that have taken place. Defining the topics in terms of entities like logins, placed orders or graph generation can be an effective way of doing things. This allows modules to get succinct information concerning only the state transitions with which they primarily deal.
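The idea can be sketched with an in-memory stand-in for a broker (topic and field names below are invented for illustration): events are partitioned into entity-centric topics, and each module reads only the topic it cares about.

```python
# Toy topic layout: one append-only list per entity-centric topic.
from collections import defaultdict

topics = defaultdict(list)


def publish(topic, event):
    topics[topic].append(event)


publish("logins", {"user": "ada"})
publish("orders", {"order_id": 1, "status": "placed"})
publish("orders", {"order_id": 1, "status": "shipped"})

# A fraud module subscribes only to "logins"; billing only to "orders".
assert len(topics["logins"]) == 1
assert [e["status"] for e in topics["orders"]] == ["placed", "shipped"]
```

Within a topic, ordering is preserved, so a consumer replaying "orders" sees the placed-then-shipped story in sequence.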
Putting it all together
It is important to test the new design extensively, collecting relevant performance metrics on each use case and testing the event handling.
As a distributed system, programs that use microservices architectures are inherently complex. This means that it is unlikely that you’ll be able to get the migration to microservices correct on the first try. But following the above guidelines should be a good place to start.
Going with an incremental migration will help ease the disruption caused by radically altering the structure of an existing application. It will also help reduce the workload of the development team. It is important to identify the primary use cases and then create well-defined, narrowly scoped services that can carry out those tasks. It is also important to design an API that supports the new architecture as well as the old monolithic program.
This should be a good start towards a successful migration to microservice-based functionality.