My Favorite Interservice Communication Patterns for Microservices[1]

Communication between microservices requires choosing a solution that fits the business requirements and the architecture. HTTP-based REST APIs are the most common choice, but they are not the only option, and the trade-offs in complexity, performance, scalability, and so on need to be weighed.

Microservices help us create scalable, efficient architectures, which is why almost all of today's major platforms are built on them. Without microservices, there would be no Netflix, Facebook, or Instagram as we know them.
However, breaking the business logic into smaller units and deploying them in a distributed manner is only the first step. We must also understand how to make services communicate well with each other. Microservices aren't just externally oriented, serving outside customers; often they are also clients of other services in the same system.
So, how do you make two services communicate with each other? The simplest approach is to keep using the APIs already presented to external customers. For example, if our external customer-facing APIs are REST HTTP APIs, internal services can interact through those same APIs.
That's a reasonable design, but let's see whether it can be improved.
Note: Communication relies on agreed-upon protocols, both between microservices and between services and customers. One way to keep those protocols consistent is to share the code that describes them (classes, types, mock data objects, and so on) across the decoupled code bases. Bit[2] is a tool that helps achieve this goal.
Bit version-controls TS/JS modules independently of their source repository, maintains the dependencies between them even when they are published to separate remote hosts, and lets an update to one module trigger continuous integration for all of its dependent modules.
HTTP API
HTTP APIs are, after all, a proven design, so let's start here. An HTTP API essentially means making a service respond to requests the same way it would respond to a browser or a desktop client such as Postman[3].
The HTTP API is based on the client-server (CS) pattern, which means communication can only be initiated by the client. It is also a form of synchronous communication: once the client initiates a request, the exchange does not end until the server returns a response.

Classic client-server microservice communication
Because this mirrors the way we already access the internet, the approach is very popular. HTTP is the backbone of the web, so every programming language supports it in some way, which makes it very accessible.
But this approach is not perfect, so let's analyze it.
Pros
- Easy to implement. The HTTP protocol is not difficult to work with: every major programming language provides native support for it, developers rarely need to worry about how it works internally, and its complexity is hidden and abstracted away by libraries.
- Can be standardized. Layering something like REST on top of HTTP (properly implemented) yields a standard API that lets any client quickly learn how to talk to our business logic.
- Technology agnostic. Because HTTP acts as the data transmission channel between client and server, it is independent of the implementation technology on either end. You can implement the server in Node.js and the client (or another service) in Java or C#, and they can communicate as long as both follow the HTTP protocol.
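As a concrete, if minimal, sketch of this client-server exchange, here is a tiny "orders" endpoint and a second service calling it, using Node's built-in http module and the global fetch available in Node 18+. The port, route, and payload are made up for illustration:

```ts
import { createServer } from "http";

// "Orders" service: exposes one REST-style endpoint.
const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/orders/42") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ id: 42, status: "shipped" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// A second service acting as the client: a plain synchronous request/response.
async function fetchOrder() {
  const res = await fetch("http://localhost:3000/orders/42");
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  console.log(await res.json()); // { id: 42, status: "shipped" }
}

server.listen(3000, fetchOrder);
```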
Cons
- Additional latency. The HTTP protocol includes several steps to ensure that data is delivered correctly, which makes it very reliable, but every extra step adds latency to the communication. Consider a scenario where three or more microservices must exchange data before the last one can finish: A sends data to B so that B can send data to C, and only then can C return a response. On top of each service's processing time, you must also account for the latency of establishing three HTTP channels between them.
- Timeouts. Although you can usually configure the timeout period, by default the client closes the connection if the server takes too long. How long is "too long"? That depends on the configuration and the service at hand, but the limit always exists, and it places an extra constraint on the business logic: it must execute quickly, or it will fail. (A sketch of setting your own timeout follows this list.)
- Failures are hard to handle. Handling server failures is not impossible, but it requires extra infrastructure. By default, the client is not notified when the server goes down; it only finds out when it next tries to reach the server, and by then it's too late. There are mitigations, such as a load balancer or an API gateway, but they all mean extra work on top of the client-server traffic to make it more reliable.
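To illustrate the timeout point, most HTTP clients let you set your own budget instead of relying on the default. With the standard fetch API this is done via an AbortController; the 2-second limit below is an arbitrary example:

```ts
// Abort the request if the upstream service takes longer than `ms` milliseconds.
async function getWithTimeout(url: string, ms = 2000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return await res.json();
  } finally {
    clearTimeout(timer); // always clean up the timer
  }
}
```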
Therefore, if our business logic is fast and reliable and needs to be reached by many different clients, an HTTP API is a good solution. It is especially useful when multiple teams work on different clients and can build against a standard, consistent interface.
If multiple services need to interact heavily with one another, or if the business logic in some of them takes a long time to complete, avoid the HTTP API.
Asynchronous Messaging
This pattern places a message broker between the message producer and the receiver.
This is definitely one of my favorite ways for multiple services to communicate, especially when we need to scale out the platform's processing power.

Asynchronous communication between microservices
This pattern usually requires introducing a message broker, so it adds complexity. However, the benefits go far beyond what that abstraction costs.
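Before weighing the pros and cons, here is a minimal producer/consumer sketch, assuming a RabbitMQ broker and the amqplib package; the queue name and connection URL are placeholders:

```ts
import amqp from "amqplib";

const QUEUE = "order-events"; // placeholder queue name

// Producer: fire-and-forget; it neither knows nor waits for the consumer.
async function publish(event: object) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.sendToQueue(QUEUE, Buffer.from(JSON.stringify(event)), { persistent: true });
  await ch.close();
  await conn.close();
}

// Consumer: the broker pushes messages to it as they arrive.
async function consume() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  await ch.consume(QUEUE, (msg) => {
    if (!msg) return;
    console.log("received:", JSON.parse(msg.content.toString()));
    ch.ack(msg); // acknowledge only after successful processing
  });
}
```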
Pros
- Easy to scale. A major problem with direct client-to-server communication is that for the client to send messages, the server must have spare processing capacity, and that capacity is limited by how much parallel work a single service can do. If clients need to send more data, the service has to scale. Sometimes this can be solved by scaling the infrastructure vertically, with better processors or more memory, but there is always an upper limit. Instead, we can stay on lower-spec infrastructure and run multiple replicas in parallel: the message broker can distribute incoming messages across several destination services, letting replicas receive the same data or different messages as needed.
- Easy to add new services. Creating a new service, subscribing it to the message types it cares about, and wiring it into a workflow are all straightforward. Producers don't need to know about the new service; they only need to know what kinds of messages to send.
- Simple retry mechanism. If delivery fails because a server is down, the message broker can automatically keep retrying for as long as it is configured to, without any special logic on your part.
- Event-driven. Asynchronous messaging helps us create event-driven architectures, one of the most efficient ways for microservices to interact. Rather than blocking a service while it waits for a synchronous response, or worse, having it constantly poll a storage medium, you write the service so that it is notified when the data is ready. While waiting, the service can do other work (such as handling the next incoming request). This architecture gives faster data processing, more efficient use of resources, and a better overall communication experience.
Cons
- Hard to debug. With no explicit data flow, only a promise that messages will be processed as soon as possible, debugging the flow and the path a piece of data takes can be a nightmare. That is why it is common to assign a unique ID to each message when it is received, so its path through the system can be traced in the logs.
- No explicit direct response. Given the asynchronous nature of this pattern, once a request is received from the client, the only immediate response possible is "OK, received; I'll let you know when it's ready" (or a 400 error for an invalid request). The problem is that the client cannot directly access the output of the server-side logic; it has to request it separately. As an alternative, the client can subscribe to the response message type, so it is notified as soon as the response arrives. (A sketch of that reply pattern follows this list.)
- The broker becomes a single point of failure. A badly configured message broker can become the weak point of the architecture. Instead of putting up with an unstable service you wrote yourself, you are now forced to maintain a message broker you may barely know how to operate.
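The reply-queue workaround mentioned above can be sketched with amqplib against RabbitMQ. The queue names are placeholders, and note that the correlation ID doubles as the trace ID from the debugging point:

```ts
import amqp from "amqplib";
import { randomUUID } from "crypto";

// The requester tags each message with a correlation ID, then listens on
// an exclusive reply queue for the matching answer.
async function requestViaBroker(payload: object) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  const { queue: replyQueue } = await ch.assertQueue("", { exclusive: true });
  const correlationId = randomUUID(); // also useful for tracing in logs

  await ch.consume(
    replyQueue,
    (msg) => {
      if (msg?.properties.correlationId === correlationId) {
        console.log("reply:", msg.content.toString());
      }
    },
    { noAck: true }
  );

  ch.sendToQueue("requests", Buffer.from(JSON.stringify(payload)), {
    correlationId,
    replyTo: replyQueue, // the responder publishes its answer here
  });
}
```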
This is definitely an interesting pattern and offers a lot of flexibility. If the producer side generates a large number of messages, having a buffer-like structure between producer and consumer increases the stability of the system.
Processing may still be slow, but with a buffer in between, scaling becomes much easier.
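That scaling claim is worth one more sketch: run several copies of the worker below against the same queue, and a broker such as RabbitMQ will spread the backlog across them. Again this assumes amqplib; prefetch(1) makes each replica take only one message at a time, leaving the rest for the others:

```ts
import amqp from "amqplib";

// Run N copies of this worker; the broker load-balances messages across them.
async function startWorkerReplica() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("order-events", { durable: true });
  await ch.prefetch(1); // one unacknowledged message per replica at a time
  await ch.consume("order-events", (msg) => {
    if (!msg) return;
    // ...do the slow work here...
    ch.ack(msg);
  });
}
```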
Direct socket connection
Sometimes, instead of relying on good old HTTP to send and receive messages, we can take a completely different path and use a faster technique: sockets.

Opening a socket channel for microservice communication
At first glance, socket-based communication looks a lot like the client-server model of HTTP, but a closer look reveals some differences:
- For starters, the protocol is much simpler, which also means much faster. Of course, if you want reliable communication you have to write more code to get it, but the extra latency that HTTP adds is gone here.
- Communication can be initiated by either participant, not just the client. Once the socket channel is opened, it stays open until it is closed. Think of it as an ongoing phone call in which anyone can start talking, not just the person who dialed.
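A minimal sketch of that second point, using Node's built-in net module: once the TCP connection is up, either side can write first. The port is a placeholder, and no message framing is defined here:

```ts
import { createServer, connect } from "net";

// Service A: accepts the connection and can push messages at any time.
const server = createServer((socket) => {
  socket.write("hello from A\n"); // the "server" speaks first here
  socket.on("data", (chunk) => console.log("A got:", chunk.toString()));
});
server.listen(4000);

// Service B: once connected, the channel stays open in both directions.
const socket = connect(4000, "localhost", () => {
  socket.write("hello from B\n");
});
socket.on("data", (chunk) => console.log("B got:", chunk.toString()));
```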
With that said, let's look at the pros and cons of this approach:
Cons
- No real standard. Compared with HTTP, socket-based communication looks a bit chaotic: there are no structural standards such as SOAP or REST, so the implementer has to define the communication structure, which in turn makes creating and integrating new clients harder. If the sockets only need to interconnect your own services, though, you are effectively just implementing a custom protocol.
- Easy to overload the receiver. If one service produces more messages than another can process, the second service can be overwhelmed and crash; that is exactly what the previous pattern solves. Here, the latency between sending and receiving is tiny, which means throughput can be much higher, but it also means the receiving service must be fast enough to handle everything.
Pros
- Lightweight. Implementing basic socket communication takes very little work and setup. It depends on the language, of course, but some, such as Node.js with Socket.io[4], let two services communicate in just a few lines of code (see the sketch after this list).
- Highly optimized communication. Because a channel between the two services stays open for a long time, both parties can react the moment a message arrives. Unlike polling a database for new messages, this is a reactive approach, and there is no faster way to do it.
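As mentioned in the first point, with Socket.io the same idea takes only a few lines. This sketch assumes the socket.io and socket.io-client packages; the port and event names are arbitrary, and both halves are shown in one listing for brevity although they would live in separate services:

```ts
// Server side (socket.io package):
import { Server } from "socket.io";
const io = new Server(4001);
io.on("connection", (socket) => {
  // React to incoming events and push a reply over the same open channel.
  socket.on("job", (payload) => socket.emit("result", { ok: true, payload }));
});

// Client side (socket.io-client package):
import { io as connectTo } from "socket.io-client";
const client = connectTo("http://localhost:4001");
client.emit("job", { id: 1 });
client.on("result", (data) => console.log(data));
```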
Socket-based communication is a very efficient way for services to talk to each other. For example, when deployed as a cluster, Redis uses this method to automatically detect failed nodes and remove them from the cluster. This is possible because the communication is fast and cheap: it adds almost no latency and consumes very few network resources.
This approach works well if you can control the amount of information exchanged between services and don't mind defining your own protocol.
Lightweight events
This pattern mixes the previous two. On the one hand, it gives multiple services a way to communicate asynchronously through a message bus; on the other, it sends only very lightweight payloads over that channel and relies on REST API calls to the corresponding services when additional information must be combined with the payload.

A hybrid of lightweight events and APIs in microservice communication
This mode of communication is handy when we want to keep network traffic as low as possible, or when the message queue has packet-size limits. In those cases it is best to keep each message as small as possible and request extra information only when needed.
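Here is a sketch of the consumer side of the hybrid, reusing the amqplib setup from the earlier examples: the broker carries only a tiny event with an ID, and the full record is fetched over REST only when it is actually needed. The queue name, URL, and fields are placeholders:

```ts
import amqp from "amqplib";

// Events carry only an ID and a type; the full record lives behind a REST API.
async function consumeLightweightEvents() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("events", { durable: true });
  await ch.consume("events", async (msg) => {
    if (!msg) return;
    const event = JSON.parse(msg.content.toString()); // e.g. { type: "order.created", id: 42 }
    if (event.type === "order.created") {
      // Enrich the tiny event with a follow-up REST call only when needed.
      const res = await fetch(`http://orders-service/orders/${event.id}`);
      const order = await res.json();
      // ...process the full order here...
    }
    ch.ack(msg);
  });
}
```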
Pros
- Best of both worlds. Because 80-90% of the data is sent through the buffer-like structure, this approach keeps the advantages of asynchronous communication, and only a small fraction of the traffic has to go through the less efficient but standard API-based channel.
- Optimized for the most common scenario. If we know that in most cases events don't need extra information to be useful, keeping them minimal optimizes network traffic and keeps the demands on the message broker very low.
- Simple buffers. In this approach, the extra details of each event are kept out of the buffer, which removes the coupling that would otherwise come from having to define a schema for those messages. Keeping the buffer "dumb" makes it easier to swap in other systems, especially when migrating or scaling (for example, moving from RabbitMQ to AWS SQS).
Cons
- Potentially too many API requests. If you apply this pattern to an unsuitable use case, you end up with the overhead of all the extra API requests, which adds latency to the responding services, not to mention the additional network traffic of the HTTP requests flying between services. If you find yourself in that situation, consider switching to a fully asynchronous communication model.
- Two communication interfaces. Each service must offer two different means of communication: the asynchronous model required by the message queue, and the API-like interface. Given how different the two are, this can become hard to maintain.
This is a very interesting hybrid pattern, and given the need to mix the two methods, it takes some effort to write the code for it.
It can be a very good network optimization, provided that the requests needing enrichment stay at around 10-20% of the total; otherwise the benefits won't justify the extra code.
The best way for microservices to communicate is the one that gives us what we're after, whether that's performance, reliability, or security. We have to know what we want and then choose the best pattern based on that information.
There is no silver-bullet communication pattern; however much I favor some of them, realistically you still have to find the pattern that fits your current use case.
References:
[1] My Favorite Interservice Communication Patterns for Microservices: https://blog.bitsrc.io/my-favorite-interservice-communication-patterns-for-microservices-d746a6e1d7de
[2] Bit: https://github.com/teambit/bit
[3] Postman: https://www.postman.com/
[4] Socket.io: https://socket.io/