What is a Service Mesh? A breakout-area chat between an Account Manager and a Solutions Engineer

❓ “Who is this new kid in town people are talking about, Service Mesh?” John asked, looking confused.

🤔 “What’s with service mesh? You look anxious, man. What happened?” VJ asked.


John is an Account Manager, and VJ is a Solutions Engineer!

John: Things are changing fast. It used to be those simple applications, and I’d just tell clients to use a number of virtual machines. Now it gets difficult when the engineers talk in client architecture review calls!

VJ: ok 😄

John: What was that? This is serious, man!

VJ: OK, OK, I’ll explain what a service mesh is, but I need to start from the time of those simple applications, as you mentioned.

John: Go ahead, I’m listening!

VJ: Ok, I’ll draw something for you!

[Image: Monolithic vs. microservices architecture]

- The software built using a monolithic approach is self-contained; 
its components are interconnected and interdependent. 

- If developers want to make any changes or updates to a monolithic 
system, they need to build and deploy the entire stack at once. 

- It’s the same thing with scalability: the entire system, not just the 
modules in it, has to be scaled together. 

- With a monolithic architecture it can be difficult to adopt a new 
technology stack, and if you want to use a new platform or 
framework, you’ll have to rewrite the entire solution.

- The microservice software architecture allows a system to be divided 
into a number of smaller, individual and independent services. 

- Each service is flexible, robust, composable and complete. They run as 
autonomous processes and communicate with one another through APIs. 

- Each microservice can be implemented in a different programming 
language on a different platform, and each can be packaged in a 
container, which encapsulates the service for operation on almost 
any infrastructure. 

- Because these containers can be operated in parallel, the existing 
infrastructure is easier to maintain.

VJ: However, when you have independent services or units, there are a few things you need to take care of.

Challenges with the microservices architecture:

- Service discovery
- Load balancing
- Fault tolerance
- Distributed tracing
- Security (mTLS, policies, patches)
- Independent releases
- Service contracts

John: Yeah, I know that. OK, not all of it 😄 but I know some of it. And don’t container-based platforms like Kubernetes solve them?

VJ: Oh, that’s impressive man 😉

Yes, that’s correct. Kubernetes is a very capable platform that has been well proven in production deployments of container applications. It provides a rich networking layer that brings together service discovery, load balancing, health checks, and access control in order to support complex distributed applications.

These capabilities are more than enough for simple applications and for well‑understood, legacy applications that have been containerized. They allow you to deploy applications with confidence, scale them as needed, route around unexpected failures, and implement simple access control.

John: Ok, that makes sense. So, if I’m running a simple application or even a complex application, Kubernetes is more than enough to handle things.

VJ: Partially, yes! It’s when the applications themselves become more complex that the overall burden falls on the application developers and operations team!

[Security, monitoring, and traffic management]

Your code should just focus on business logic, and not the infrastructure part!
Let’s discuss a few of the challenges:

Service discovery:

Say you have now split your monolith into microservices. Each microservice runs on a different virtual machine (VM) with its own application, addressed by an IP address and a port number.

How do you think service discovery will happen? You can’t hardcode those values: in a cloud environment they are not permanent. A simple machine restart or VM failure can change them, so hardcoding is not a workable solution.

Hence, you need something like a service registry to take care of this problem. In simple terms, the service registry is the boss, and every service, when it comes up, has to register itself with the registry.

[Image: Service discovery with a service registry (Eureka)]

In the following example, if the Driver Management service wants to talk to the Passenger Management service, it sends a “lookup” request to the service registry.

But running your own registry comes with drawbacks:

1. The service registry (server) is now a standalone application, 
and needs your attention to manage it, potentially across 
availability zones (AZs).

2. It can become a single point of failure. 

3. You have to add client libraries to all the services and 
write code against them.
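To make the register/lookup flow concrete, here is a minimal sketch of an in-memory service registry in Python. The class and service names are hypothetical; real registries such as Eureka or Consul add health checks, TTLs, and replication across AZs.

```python
import random


class ServiceRegistry:
    """Minimal in-memory service registry (illustrative sketch only)."""

    def __init__(self):
        # service name -> list of (host, port) instances
        self._instances = {}

    def register(self, name, host, port):
        """Called by a service instance when it comes up."""
        self._instances.setdefault(name, []).append((host, port))

    def deregister(self, name, host, port):
        """Called on shutdown (or by a failed health check)."""
        self._instances.get(name, []).remove((host, port))

    def lookup(self, name):
        """Return one live instance, picked at random as naive load balancing."""
        instances = self._instances.get(name)
        if not instances:
            raise LookupError(f"no instances of {name!r} registered")
        return random.choice(instances)


# Each service registers itself on startup...
registry = ServiceRegistry()
registry.register("passenger-management", "10.0.1.12", 8080)
registry.register("passenger-management", "10.0.2.34", 8080)

# ...and Driver Management "looks up" Passenger Management before calling it.
host, port = registry.lookup("passenger-management")
```

In practice you would also heartbeat each instance so that dead VMs drop out of the registry automatically instead of being handed to callers.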

Canary/Rolling deploys:

Rolling deploys:

You want to roll out a new version of your service. In this case, you keep two identical production environments running in parallel, each with a different version (a pattern often called blue-green deployment).

At the right time, you switch traffic from v1.0 to v1.1. If there is an issue after v1.1 goes live, traffic can be routed back to v1.0. The whole time, the services stay up while the update happens behind the scenes.
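On Kubernetes, the “two identical environments, then switch the traffic” idea can be sketched with a Service whose label selector is flipped between the two versions. All names and labels below are hypothetical; this is only one way to wire it up.

```yaml
# Two identical Deployments run in parallel, one labelled
# version: "v1.0" and one labelled version: "v1.1".
apiVersion: v1
kind: Service
metadata:
  name: passenger-management    # hypothetical service name
spec:
  selector:
    app: passenger-management
    version: "v1.0"             # flip to "v1.1" to switch traffic,
                                # and back to "v1.0" to roll back
  ports:
    - port: 80
      targetPort: 8080
```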


Canary deploys:

It works on very similar principles: let’s say we have updated a feature and want to test it with only 5% of the clients.

Jason Skowronski on dev.to explains:

With canary deployment, you deploy a new application code in a small part of the production infrastructure. Once the application is signed off for release, only a few users are routed to it. This minimizes any impact.

With no errors reported, the new version can gradually roll out to the rest of the infrastructure.
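A service mesh such as Istio lets you express that 5% split declaratively instead of in application code. A sketch using Istio’s VirtualService resource, with hypothetical host and subset names (the subsets would be defined in a companion DestinationRule):

```yaml
# Route 95% of traffic to v1.0 and 5% to the v1.1 canary.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: passenger-management     # hypothetical name
spec:
  hosts:
    - passenger-management
  http:
    - route:
        - destination:
            host: passenger-management
            subset: v1-0         # subsets defined in a DestinationRule
          weight: 95
        - destination:
            host: passenger-management
            subset: v1-1
          weight: 5
```

Ramping the rollout is then just a matter of editing the weights, with no change to the services themselves.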

Security:

  • How do you ensure services talk to each other over HTTPS (mutual TLS)? How do you make sure certificate rotation (renewing/replacing certificates, say every 30 days) is taken care of?
  • Network policies/whitelisting
  • Transparent patching
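As a preview of how a mesh answers the HTTPS question: current versions of Istio, for example, can enforce mutual TLS mesh-wide with a single policy, and the mesh then issues and rotates the workload certificates for you. A sketch (the namespace assumes a default Istio install):

```yaml
# Require mTLS for all workloads in the mesh; certificates are
# issued and rotated by the mesh itself, not by application code.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # mesh-wide when applied in the root namespace
spec:
  mtls:
    mode: STRICT
```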
- So, to rephrase: each challenge described above puts a burden 
on the application developers and operations team. 

- Individually, the burdens are light because the solutions are well 
understood, but the weight accumulates. 

- Eventually, organizations running large‑scale, complex applications 
might reach a tipping point where enhancing the application 
service-by-service becomes too difficult to scale.

John: That makes complete sense! So, can I describe a service mesh as:

“Do it for me and reduce the overhead on my application developers”?

VJ: You are fast man! 🚀

In a nutshell, the answer is yes.

At a high level, a service mesh manages communication between the services of a containerized application. It provides features such as traffic routing, load balancing, service discovery, encryption, authentication, and authorization.

Brian “Redbeard” Harrington, principal product manager at Red Hat, describes service mesh as a mash-up of several better-known technologies:

  • A service mesh is a set of software components which act as the “glue” for a set of independent applications. The goal of the mesh is to guarantee secure communications between each application and be able to redirect traffic in the event of failures.

Often the features of a service mesh look like a mash-up between a load balancer, a web application firewall, and an API gateway.


VJ: Let’s say that initially our services (Service A and Service B) talked to each other directly. Now, a program called a sidecar is installed alongside each service; it sits wherever the service sits, whether that’s a container or a VM.

[Image: Sidecar proxies deployed alongside each service]

Service A now talks only to its sidecar, which acts as a proxy for Service A. It is the sidecar’s responsibility to talk to the other services’ sidecars.

All these sidecars are managed by a program called the control plane (think of it as a control tower). It is responsible for sending the sidecars all their instructions about service discovery, network policies, load balancing, etc.

[Image: Sidecars and control plane forming a service mesh]

Hence, the control plane and sidecars take care of all that heavy lifting and offload the burden from your “application code”: that’s the logic behind a service mesh.
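That division of labour can be sketched in a few lines of Python. All names here are hypothetical toys: a real sidecar is a full network proxy like Envoy, and a real control plane pushes far richer configuration.

```python
class ControlPlane:
    """Knows where every service lives and hands that info to sidecars."""

    def __init__(self):
        self.routes = {}  # service name -> that service's sidecar

    def register(self, sidecar):
        self.routes[sidecar.service_name] = sidecar

    def resolve(self, service_name):
        return self.routes[service_name]


class Sidecar:
    """Proxy deployed next to a service; all traffic goes through it."""

    def __init__(self, service_name, handler, control_plane):
        self.service_name = service_name
        self.handler = handler            # the service's business logic
        self.control_plane = control_plane
        control_plane.register(self)      # service discovery, for free

    def call(self, target_service, request):
        # Outbound side: discovery, retries, mTLS, metrics would live here.
        target = self.control_plane.resolve(target_service)
        return target.receive(request)

    def receive(self, request):
        # Inbound side: authn/authz and telemetry would live here.
        return self.handler(request)


# The service code stays pure business logic:
cp = ControlPlane()
a = Sidecar("service-a", lambda req: f"A handled {req}", cp)
b = Sidecar("service-b", lambda req: f"B handled {req}", cp)

# Service A never dials Service B directly; its sidecar does.
print(a.call("service-b", "ride-request"))   # prints "B handled ride-request"
```

Note that neither lambda knows anything about addresses, encryption, or retries; that is exactly the separation the mesh provides.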

Let’s look at some actual implementations of a service mesh. The available projects in the market include:

Istio, Linkerd and Consul

This is roughly what the actual Istio architecture looks like:


John: It was all fine until now, but that architecture! 😕

VJ: (Interrupting)

Nah, don’t get confused; the logic is the same.

Envoy: the proxy project that actually plays the sidecar role we talked about.

Control Plane: split into 3 different services:

Istio-Manager ||  Mixer || Istio-Auth

You can read about the full architecture, and what each of these components does, in the Istio documentation!

Istio’s main features are:
-- Automatic load balancing for HTTP, gRPC, and TCP traffic.

-- Fine-grained control of traffic behavior with rich routing rules.

-- Traffic encryption, service-to-service authentication and strong 
identity assertions.

-- Fleet-wide policy enforcement.

-- In-depth telemetry and reporting.
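As one taste of those “rich routing rules”: current Istio can match on request attributes such as headers, so beta testers get v1.1 while everyone else stays on v1.0. A sketch with hypothetical service, subset, and header names (this VirtualService API replaced the route rules of the early architecture described above):

```yaml
# Send requests carrying a hypothetical "x-beta-tester" header to v1.1,
# and everyone else to v1.0.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: driver-management
spec:
  hosts:
    - driver-management
  http:
    - match:
        - headers:
            x-beta-tester:
              exact: "true"
      route:
        - destination:
            host: driver-management
            subset: v1-1
    - route:                      # default route for all other traffic
        - destination:
            host: driver-management
            subset: v1-0
```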

John: That was a lot of information, man; still, I’m glad I’ve got something out of it 😄.

In a nutshell again:

- Your code should just focus on business logic, and not the infrastructure part!

- Eventually, organizations running large‑scale, complex applications might reach a tipping point where enhancing the application 
service-by-service becomes too difficult to scale; that’s where a 
service mesh can help you!

VJ: Correct, you got it man!

Now, have your coffee and let me go; I’ve an architecture review call with one of our potential clients. 😉

John: Ok, talk to their non-tech folks like this and they would love you!

VJ: 😊 I’ll see. Although, you look happy now! 😄

Note: This was just me storytelling, presenting the topic in a form that’s easier to understand. It is in no way a full implementation guide, nor does it cover how to use a service mesh.

I will continue to share insights in a phased manner.




Written by Sachin Jha
An old-school guy seeking new adventures :) When I’m not building cloud solutions @DigitalOcean, you can find me exploring the mountains. I love spending time outdoors. I’m a photographer and high altitude trekker.