Most of the information available on microservices explains why you should use them, but not how to build one. You may be asking: exactly how do you build a particular microservice? As with any new subject, it's important to understand the fundamentals. You'll want to avoid building a new microservice only to have it look like the same old application with a new paint job.
In this article, we provide several design elements that are necessary before your microservices can adequately function in a distributed application architecture.
Sensible microservice functionality partitioning
A common concern among developers who are new to microservices is that they will partition functionality excessively and end up with a cumbersome collection of services. Not to worry: in most designs, overpartitioning is rarely the problem. It's much more common to find too much functionality packed into each microservice. One way to sensibly delineate scope is to partition each service according to logical functionality. If, for example, your monolithic app already has a data lookup function that many other functions call, that lookup function is a very good candidate to be split out as its own service.
In his book Building Microservices, Sam Newman recommends another solid approach: keep the code within a single microservice small enough that a development team could rebuild it within two weeks. Restricting microservice size ensures that your team avoids microservice bloat.
Exposing an API for microservice communication
After partitioning a monolithic application into a collection of cooperating services, it’s important to think next about how each of the services should communicate. This is often done with REST API calls (though other transport mechanisms are available).
For your microservices to work together, each service must reliably send and receive data. An API exposes one or more services at known locations, and each service must define a specific request and response format that client services can depend on. The Twitter REST API is a good example: you provide a search query (or a hashtag), and it returns the matching tweets in JSON format.
You can find the full specification and example response in the Twitter API docs.
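To make the idea concrete, here is a minimal sketch of exposing a search endpoint over HTTP and calling it from a client service. The `/search` path, the `q` parameter, and the response shape are all illustrative assumptions, not any particular vendor's API; only the Python standard library is used.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

# Hypothetical search service: GET /search?q=<query> returns JSON results.
class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/search":
            self.send_error(404)
            return
        query = parse_qs(url.query).get("q", [""])[0]
        # A real service would query a datastore; here the lookup is faked.
        body = json.dumps({"query": query,
                           "results": [f"result for {query}"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), SearchHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A client service calls the API at its known location and parses JSON.
with urlopen(f"http://127.0.0.1:{port}/search?q=python") as resp:
    payload = json.loads(resp.read())
print(payload["results"])
server.shutdown()
```

The key point is the contract: the client only needs the URL and the agreed JSON shape, not any knowledge of the service's internals.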
It's strongly recommended that you delay coding the API until you and your team have taken time to whiteboard ideas and converge on what each specific service should expose. Even then, expect several sessions to fully elaborate an API that adequately exposes each service and handles calls from multiple types of clients.
Let’s compare and contrast so that we can think about this a bit more deeply. To establish communication structures between different processes, many apps place a significant amount of intelligence into the communication mechanism itself. This is the design of an Enterprise Service Bus (ESB), which includes elaborate facilities for message routing, management, transformation, and business rules application.
Microservices developers favor a much different approach: smart endpoints and dumb pipes. Managed messaging systems from the leading cloud vendors, such as Google Cloud Pub/Sub or AWS SQS, and open source alternatives like RabbitMQ or Kafka, can help facilitate this as you build out your microservice architecture. Applications built with microservices aim for a high degree of decoupling while maintaining a high degree of cohesion. In a solid microservice architecture, each service manages its own domain logic and functions like a filter in the classical Unix sense: receive a request, apply the appropriate logic, and generate a response. Choreography happens over simple REST protocols rather than through orchestration by a central tool.
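The smart-endpoints/dumb-pipes split can be sketched in a few lines. Here an in-process `queue.Queue` stands in for the managed message queue (an assumption for illustration; in production this would be SQS, Pub/Sub, RabbitMQ, or Kafka), while all the domain logic lives in the endpoint.

```python
import queue
import threading

# The "dumb pipe": a plain queue that only moves messages around.
# It stands in for a managed broker and contains no business logic.
requests_q = queue.Queue()
responses_q = queue.Queue()

# The "smart endpoint": a worker that owns its domain logic and acts
# like a Unix filter -- receive a request, apply logic, emit a response.
def price_service():
    while True:
        msg = requests_q.get()
        if msg is None:  # shutdown sentinel
            break
        # Domain logic lives here, not in the pipe: apply a 10% discount.
        responses_q.put({"sku": msg["sku"],
                         "price": round(msg["price"] * 0.9, 2)})

worker = threading.Thread(target=price_service)
worker.start()

requests_q.put({"sku": "A1", "price": 100.0})
reply = responses_q.get(timeout=5)
requests_q.put(None)  # tell the worker to stop
worker.join()
print(reply)
```

Contrast this with an ESB design, where the discount rule might be configured inside the bus itself; here the pipe stays interchangeable and the logic stays with the service that owns it.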
In a monolithic application, the components execute in-process. Communication between components is done either through method invocations or function calls. As you consider moving to microservices, it’s important to keep in mind that the biggest challenge in converting a monolith to a microservice design centers around all of the changes that are necessary to the communication patterns.
Efficient traffic management
Many applications have been built, knowingly or not, with bottlenecks: slow-running services with very long response times, services that get overwhelmed with calls, or services that simply lack the processing power to respond adequately. Worst of all is a service that suddenly terminates because of a software or hardware crash.
Good microservice design anticipates these potential problems, and provides a means for calling and callable services to coordinate traffic efficiently and communicate status.
The calling service should always track its calls and be ready to abandon one if the response delay becomes excessive. The design of any target service should include the ability to send an overload response; this back-pressure signal tells the calling service to reduce its load on the target service.
Calling services should also have a means of handling a called service that never responds: anticipate the failure, and continue to serve up useful, though perhaps incomplete, information. Good microservice architecture also includes the ability to spawn and kill new service instances as necessary.
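Serving useful-but-incomplete information is often called graceful degradation. Below is a minimal sketch: a hypothetical recommendations dependency is unreachable, so the caller falls back to a stale default list and flags the response as degraded. All names here are invented for illustration.

```python
# Hypothetical downstream call that is currently failing.
def fetch_recommendations(user_id):
    raise ConnectionError("recommendation service unreachable")

# Stale but usable fallback data, e.g. a precomputed bestseller list.
FALLBACK_RECOMMENDATIONS = ["bestseller-1", "bestseller-2"]

# The caller anticipates the failure and still serves a useful,
# though incomplete, response instead of failing the whole request.
def render_product_page(user_id):
    try:
        recs = fetch_recommendations(user_id)
        degraded = False
    except ConnectionError:
        recs = FALLBACK_RECOMMENDATIONS
        degraded = True
    return {"user": user_id, "recommendations": recs, "degraded": degraded}

page = render_product_page("u42")
print(page)
```

The `degraded` flag lets downstream consumers (or monitoring) know the response is partial, which is usually far better for users than an error page.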
The fluctuations, variations, and erratic traffic of microservice systems ensure that there will be significant churn in the number and type of individual service instances that are available. Managing this volatility is even more challenging when the underlying infrastructure is unreliable: virtual machines and cloud instances can crash, stop responding, or strain under intensive load while performing very little useful work.
Though individual service instances within the cluster behind a microservice may be transient, the overall system must remain available and fully operational for users who continuously require data from the application. This need for continuous operation is quite different from conventional monolithic applications, which often fail outright when their supporting infrastructure experiences a significant failure.
One way to maintain continuity for users when an instance fails mid-session is to offload storage: migrate user-specific data out of the service instances and into a sharable, highly redundant storage system that is accessible from all instances. In the public cloud, AWS S3 and Google Cloud Storage, each with its own availability guarantees, are good examples of this design principle. With this mitigation in place, no single instance crash will interfere with user interaction.
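The pattern reduces to keeping instances stateless. In this sketch a plain dict stands in for the shared, redundant store (an assumption; in production it would be S3, Cloud Storage, or a replicated database), so a replacement instance can pick up a user's session after a crash.

```python
# Shared, highly redundant store -- a dict stands in for S3/Cloud Storage.
shared_store = {}

class ServiceInstance:
    """A stateless instance: user data lives in shared storage, never here."""

    def __init__(self, name):
        self.name = name

    def add_to_cart(self, user_id, item):
        # Read-modify-write against the shared store, not instance memory.
        cart = shared_store.get(user_id, [])
        cart.append(item)
        shared_store[user_id] = cart

    def get_cart(self, user_id):
        return shared_store.get(user_id, [])

a = ServiceInstance("instance-a")
a.add_to_cart("u1", "book")
del a  # instance-a "crashes"

b = ServiceInstance("instance-b")  # a fresh instance resumes seamlessly
print(b.get_cart("u1"))
```

Because no user data lived inside instance-a, its crash cost nothing: any surviving or newly spawned instance can serve the same user from the shared store.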
One especially nice enhancement to storage offloading is to establish a memory-based shared cache between a given service and the storage it uses. The result is faster data access and better application performance. The caching system becomes yet another service within the microservice architecture, and, as with microservices in general, the additional complexity pays off in significant improvements to overall user satisfaction.
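This is commonly implemented as the cache-aside pattern: check the cache first, and fall back to slow storage only on a miss. The sketch below simulates storage latency with a short sleep; in production the cache would typically be a shared service such as Redis or Memcached rather than a local dict (an assumption for illustration).

```python
import time

# Slow backing store, standing in for the offload storage layer.
backing_store = {"user:u1": {"name": "Ada"}}

def slow_read(key):
    time.sleep(0.05)  # simulated storage round-trip latency
    return backing_store.get(key)

# Memory-based shared cache in front of the store (cache-aside pattern).
cache = {}

def cached_read(key):
    if key in cache:          # cache hit: served from memory
        return cache[key]
    value = slow_read(key)    # cache miss: go to storage
    cache[key] = value        # populate the cache for next time
    return value

t0 = time.perf_counter()
cached_read("user:u1")                # first read: miss, pays storage latency
miss_time = time.perf_counter() - t0

t0 = time.perf_counter()
value = cached_read("user:u1")        # second read: hit, served from memory
hit_time = time.perf_counter() - t0
print(value, hit_time < miss_time)
```

A real deployment also needs an invalidation strategy (TTLs or explicit eviction on write) so the cache does not serve stale data after the backing store changes.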
Microservices are worth the additional burden
Microservice architecture is the logical evolutionary response to the serious disadvantages inherent in monolithic applications. It permits far more functional flexibility and significant gains in performance, but such designs are non-trivial. The additional burden is worth bearing for the larger benefits that your development team, and your users, will enjoy.