What is a service mesh?

The rise of containerized software environments has left archaic, monolithic application architectures behind. Cloud-native applications running as a collection of containers (individual, self-contained software packages) are now the norm. Platforms and tools like Kubernetes and Docker let developers create apps that work irrespective of device or operating system, vastly reducing time to market and expanding the potential user base. All applications built this way need a reliable method for their disparate components to communicate. That’s where the service mesh comes in.

What is a service mesh?

A service mesh is a layer of the network infrastructure that allows the different services within a network to relay requests to each other. Containers run in isolation from one another, but each is part of a whole, so messages and vital instructions often need to travel between them. Rather than coding a separate send-and-receive function into each container, a service mesh inserts this functionality at the platform level of the software, giving every part of the app the ability to communicate.

What is a service mesh for DevOps teams? The service mesh layer adds transparency and observability for anyone monitoring app development, making it faster to track failures and easier to see where changes and upgrades might be needed. In short, a service mesh creates a way to control and monitor service-to-service traffic and communication.

The service mesh is a relative newcomer in the landscape of software development, although tech ezine The New Stack suggests that the concept originated in the early 2010s with the rise of web-based organizations.

How do service meshes work?

A service mesh consists of two primary aspects: the data plane and the control plane. The data plane is a network of proxies, also called sidecars, that intercept and manage communication between apps and services. The control plane tells those sidecar proxies what they need to do and provides DevOps teams with an interface for managing the service mesh. Security and network management also occur in the control plane.

A sidecar will attach to a microservice, container, or virtual machine (VM), handling all communication instances and providing a single point of access to monitor comms and traffic.
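In toy form, a sidecar is just a wrapper that sits between callers and the service it fronts, so every request can be observed in one place. The sketch below is a minimal illustration, not a real proxy; the `Sidecar` class and the `cart` service are hypothetical names invented for this example.

```python
import time

class Sidecar:
    """Hypothetical sidecar proxy: every call to the service it fronts
    passes through here, giving a single point to observe traffic."""

    def __init__(self, service_name, handler):
        self.service_name = service_name
        self.handler = handler          # the wrapped microservice
        self.requests_seen = 0

    def handle(self, request):
        # Intercept the request, record telemetry, then forward it on.
        self.requests_seen += 1
        start = time.monotonic()
        response = self.handler(request)
        latency = time.monotonic() - start
        print(f"[{self.service_name}] handled request in {latency:.6f}s")
        return response

# The application talks to the sidecar, never to the service directly.
cart_service = Sidecar("cart", lambda req: {"status": "ok", "item": req["item"]})
```

A real sidecar (such as an Envoy proxy) does this interception at the network level, with no change to the service's own code.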

A common alternative to the service mesh is the API gateway. An API gateway can also handle tasks such as protocol translation, but it is less scalable because the gateway must be manually updated whenever a service is added or removed. A service mesh runs in parallel with the services themselves, making it far more scalable and flexible.

What are service meshes used for?

The data plane is essentially the part of the service mesh that does the hard work. The sidecar proxies within the data plane can take on any number of functions, including:

  • Setting up retries
  • Adding mutual TLS (mTLS)
  • Load balancing
  • Fault injection
  • Circuit breaking
  • Authentication
  • Certificate management
  • Observability and tracing
  • Traffic splitting
  • Security
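Two of the functions above, retries and circuit breaking, can be sketched in a few lines of Python. This is a simplified illustration of the ideas rather than how any particular mesh implements them; the names `call_with_retries` and `CircuitBreaker` are hypothetical.

```python
class CircuitOpenError(Exception):
    """Raised when the breaker refuses to forward traffic."""
    pass

def call_with_retries(func, attempts=3):
    """Retry a failing call a fixed number of times before giving up."""
    last_error = None
    for _ in range(attempts):
        try:
            return func()
        except Exception as err:
            last_error = err
    raise last_error

class CircuitBreaker:
    """Stop forwarding traffic after too many consecutive failures,
    so one unhealthy service cannot drag down its callers."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, func):
        if self.failures >= self.threshold:
            raise CircuitOpenError("circuit open: service marked unhealthy")
        try:
            result = func()
            self.failures = 0   # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            raise
```

In a real mesh, these policies live in the sidecar and are configured centrally from the control plane, so no service has to implement them itself.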

By providing secure communication without overloading individual services, the service mesh improves the reliability of software and potentially creates a more flexible release process by helping support troubleshooting and testing.

Service meshes and microservices

What is a service mesh in terms of microservices? Most developers or DevOps teams will have used various methods over the years, but the service mesh is becoming one of the most popular frameworks for microservice architecture thanks to the additional observability it provides.

Microservices have taken over from monolithic software architecture for many reasons, not least that this type of stack is much easier to move into the cloud. Microservice architecture is scalable, easier to deploy across a range of environments, and generally more accessible. That’s not to say microservices development is without its challenges, though. One of the biggest, for most software packages, is effective communication between components.

An online retail store is a great example of various microservices that have to communicate with each other. You have your product catalog, recommendations, customer contact forms, email subscription confirmations, shopping carts, and shipping services, all linked and connected in some way. If you look at the last section, you’ll see many of the ways in which these services need to communicate. If adding something to a shopping cart doesn’t work, or a confirmation email fails to send, the relevant service should retry. Security and authentication ensure only valid requests go to the correct services. Traffic splits protect the overall software environment by preventing any one service from becoming overloaded.
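Traffic splitting, one of the protections mentioned here, can be sketched as weighted routing between backends. This is a toy illustration of the idea, for example canarying a new version of the recommendations service; `make_traffic_splitter`, `recs_v1`, and `recs_v2` are hypothetical names, and the weights are illustrative.

```python
import random

def make_traffic_splitter(routes, seed=None):
    """Route each request to a backend according to configured weights."""
    rng = random.Random(seed)
    backends = list(routes)
    weights = [routes[b] for b in backends]

    def route(request):
        backend = rng.choices(backends, weights=weights, k=1)[0]
        return backend(request)
    return route

recs_v1 = lambda req: "v1"
recs_v2 = lambda req: "v2"

# Send 90% of traffic to the stable version, canary 10% to the new one.
split = make_traffic_splitter({recs_v1: 0.9, recs_v2: 0.1}, seed=7)
```

In a real mesh, the same effect comes from declarative routing rules applied by the control plane, with no routing code in the services themselves.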

If each of the services within the online store had to implement these capabilities itself, coding each individual service would be a much bulkier task for developers. Instead, the service mesh takes this functionality away from the individual microservices, making them leaner and more agile while improving communication across the whole app.

Why do I need a service mesh for observability?

As well as improved flexibility and agility, one of the primary reasons for using a service mesh is to improve observability. Observability is the ability to understand how a system is performing internally by examining its outputs. How observable a system is depends on what access the DevOps or monitoring team has to information from within the system, app, or software package.

Within a microservice architecture, each individual service needs to be observable. These services may be quite different from one another and require different types of monitoring, and some apps comprise hundreds of separate microservices. Monitoring that volume of services separately is neither sustainable nor scalable. A service mesh brings information from all the various services into one single layer, which DevOps teams can access via the control plane of the service mesh.

Using a service mesh means developers can add more services as and when they want without adjusting the coding that manages communication and monitoring. This provides unparalleled observability with complete scalability. No matter how the piece of software grows or changes, the control plane always provides DevOps teams with a way to gain critical insights into the overall health and behavior of individual services.

Service meshes make it simpler to aggregate telemetry, such as how components interact, lag or latency, and distributed tracing. Integrating with other systems or tools that analyze system data should be simpler, too, as data management teams don’t need to connect each microservice individually. They can simply use the control plane of the service mesh to create the connection instances required.
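The aggregation described here can be sketched as a control plane collecting latency reports from every sidecar into one view. This is a minimal illustration of the concept; the `ControlPlane` class and the service names are hypothetical, not part of any real mesh API.

```python
from statistics import mean

class ControlPlane:
    """Aggregate telemetry reported by every sidecar into one view,
    instead of scraping each microservice individually."""

    def __init__(self):
        self.latencies = {}   # service name -> observed latencies (seconds)

    def report(self, service, latency_s):
        self.latencies.setdefault(service, []).append(latency_s)

    def summary(self):
        # One aggregated view across all services in the mesh.
        return {svc: round(mean(vals), 4) for svc, vals in self.latencies.items()}

plane = ControlPlane()
plane.report("cart", 0.012)
plane.report("cart", 0.018)
plane.report("shipping", 0.105)
```

A monitoring or analytics tool then needs only this one integration point, rather than a connection to each microservice.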

Conclusion: the benefits of a service mesh for today’s development landscape

Getting your app to market quickly is more important than ever as the software landscape becomes increasingly crowded. A service mesh takes some of the strain off the DevOps team by increasing situational awareness of the microservice environment, without the workload of coding individual instances of this functionality into each service. An effective service mesh provides complete end-to-end observability, helping teams quickly identify problems and get to the root cause of errors and failures.

The biggest win of the service mesh is that it works independently of the services or containers, meaning new and different types of services can be added at any time without disrupting the effectiveness of the mesh. Developers can create apps using whichever methods they prefer or best suit the project, with the peace of mind that a functional service mesh will provide visibility, transparency, and full observability no matter how the app grows or changes. Essentially, the service mesh is a future-proof layer of software architecture, empowering DevOps teams to focus on what matters: making their apps the best they can be.

At LogicMonitor, we help companies transform what’s next to deliver extraordinary employee and customer experiences. Want to learn more? Let’s chat.