Introduction to Service Mesh and Istio Deployment (Part 1)

After Service Mesh was introduced in 2016, its importance was quickly recognized by companies and reflected in their products by the end of 2017: Linkerd and Envoy joined the CNCF in January and September 2017, respectively, and major companies such as Google and IBM actively contributed to Istio, a Service Mesh project built together with Lyft's Envoy. Given this importance, this article introduces a variety of topics: an introduction to Service Mesh, an introduction to Istio, how to deploy Istio on Kubernetes, an Istio sample test, the relationship between Service Mesh and container security, and finally a comparison of Istio and Linkerd.

Service Mesh

Service Mesh is a technical term that was introduced with Linkerd on September 29, 2016 to address the network and security issues arising from microservices architectures and container services.

Imagine the following three scenarios:

  1. When a developer builds a containerized service and deploys it on a container management tool such as K8S, how do you ensure that each Pod connects to the correct Pod? For example, launching a new version of a service may require different deployment methods to switch from the old version to the new one. In a Canary deployment, some users are connected to the new Pod first, and only after everything checks out are the remaining users migrated over. So how does the service provider ensure that each incoming connection reaches the correct version of the Pod, both during the test phase and after the migration?
  2. Under certain circumstances, how can a specific Pod decide, based on traffic volume, when to provide service and when to interrupt it?
  3. When developing a containerized service, how can developers ensure that each Pod/microservice is able to discover its corresponding Pod/microservice?
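
The Canary split in the first scenario boils down to weighted routing: a small percentage of requests goes to the new version, the rest to the old one. The sketch below is a minimal illustration in Python with invented names; a real Service Mesh expresses this split declaratively in routing rules rather than in application code:

```python
import random

def make_canary_router(weights):
    """Return a router that picks a backend version by weight.

    `weights` maps a version name to its traffic percentage.
    Names ("v1", "v2") are illustrative only.
    """
    versions = list(weights)
    def route():
        # random.choices performs the weighted pick per request
        return random.choices(versions, weights=[weights[v] for v in versions])[0]
    return route

# Send 10% of traffic to the new version, 90% to the old one.
route = make_canary_router({"v1": 90, "v2": 10})
sample = [route() for _ in range(10_000)]
```

Once the new version proves healthy, the operator would shift the weights (e.g. 50/50, then 0/100) without touching the services themselves.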

Before we go into the four possible solutions to the problems above, we should first discuss the problems that traditional architectures encountered under distributed computing. This matters because microservice architecture evolved from monolithic to decentralized systems, which is the current situation.

As shown in Figures 1 and 2 above, distributed architecture allowed communication between different computers. However, as shown in Figure 3 below, program-to-program communication in that distributed architecture had to be implemented in the developer's own code.

As you can see, as the architecture grows, the developer's main effort may go not into the functionality itself but simply into writing correct connection code. Therefore, in the 1980s, TCP/IP (shown in Figure 4) appeared, isolating the flow control of the network. This allowed developers to focus on developing the program itself, greatly reducing the connection problems that had to be debugged.

We can explore the first situation above, which mainly contains two points:

  1. How to connect Pods
  2. How to deal with complex connections

Regarding the first point, TCP/IP functionality can be implemented on K8S through different CNI plugins such as Calico, Canal, Flannel, Romana, or Weave, but complicated connection conditions may require modifying source code or editing YAML files. If the connection conditions of related services need to change frequently, modifying those conditions takes a lot of time, and the more complex the overall service, the more error prone the process becomes. The situation shown in Figure 3 can thus reappear under a microservice architecture.

Besides connection conditions, a microservice architecture places other requirements on the network in order to function, for example the circuit breakers in the second scenario and the service discovery in the third. In general, the network's native capabilities are insufficient for current demand, and much of the flow control depends on the connection code the developer writes. It is difficult for developers to adapt to such a complex and dynamic environment, so, much as with TCP/IP in the past, we need an enhanced TCP/IP-like tool that helps us solve more of the network problems of containerized services. This lets developers focus on functional development while the various network concerns are isolated away. That is the original intention of Service Mesh.

Service Mesh does not only cover the scenarios above; depending on the project (Istio, for example), it also provides functions such as traffic control, a monitoring dashboard, and a graphical topology of the service architecture.

So how is Service Mesh achieved? The basic architecture can be seen in Figure 5. In the K8S example, each Pod is injected with a component that acts like a reverse proxy. All Pod traffic must pass through this proxy, which is called a Sidecar. Its main function is to assist the Service Mesh's Control Plane in meeting other network requirements such as traffic management, circuit breaking, and service discovery. The overall Service Mesh can be seen in Figure 6. All connections between Pods must go through a Sidecar, and the traffic information the Sidecar obtains feeds the Service Mesh's control mechanism, which in turn controls the connections between Pods.
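
The Sidecar pattern above can be sketched in a few lines: every request to the wrapped service passes through the proxy, which reports traffic to a control plane before forwarding. This is an in-process toy with invented names; a real Sidecar such as Envoy proxies at the network layer, outside the application process:

```python
class ControlPlane:
    """Toy control plane that simply collects what sidecars report."""
    def __init__(self):
        self.seen = []

    def record(self, request):
        self.seen.append(request)

class Sidecar:
    """Stand-in for a sidecar proxy: all traffic to the wrapped
    service passes through it, so traffic data can be reported."""
    def __init__(self, service, control_plane):
        self.service = service
        self.control_plane = control_plane

    def call(self, request):
        self.control_plane.record(request)   # report traffic upstream
        return self.service(request)         # forward to the real service

cp = ControlPlane()
pod = Sidecar(lambda req: f"handled {req}", cp)
result = pod.call("GET /api")
```

Because the proxy sits on every path, the control plane sees all traffic without the service itself containing any mesh logic.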

Reference: Pattern: Service Mesh


Istio means sail in Greek, and Kubernetes means helmsman. It is easy to see that the two products are meant to complement each other, and both are backed by Google. Currently the main platform supported by Istio is K8S, but Istio will support other container orchestration tools in the future. This is why this article's K8S Service Mesh tests use Istio. A Service Mesh is mainly composed of two parts:

  1. Control Plane
  2. Sidecar

Hence, Istio is also divided into two parts, a control plane and a data plane, corresponding to the first and second parts respectively. The architecture is shown in Figure 7:

As can be seen in the Istio architecture, corresponding to Figure 5, the Sidecar's functionality is provided by Envoy, while the Control Plane consists of Pilot, Mixer, and Istio-Auth, as described below:


Envoy, as Istio's bottom layer, plays the role of the Sidecar and includes the following functions:

  1. Dynamic service discovery
  2. Load balancing
  3. TLS termination
  4. HTTP/2 & gRPC proxying
  5. Circuit breakers
  6. Health checks
  7. Staged rollouts with %-based traffic split
  8. Fault injection
  9. Rich metrics
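
The circuit breaker from the list above can be sketched as follows: after a threshold of consecutive failures the circuit "opens" and requests are rejected immediately instead of reaching a backend that is already struggling. Envoy configures this declaratively; the class and names below are illustrative only:

```python
class CircuitBreaker:
    """Minimal circuit-breaker sketch: opens after `threshold`
    consecutive failures, then rejects calls outright."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: request rejected")
        try:
            result = fn()
        except Exception:
            self.failures += 1    # count consecutive failures
            raise
        self.failures = 0         # success resets the counter
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise IOError("backend down")

# Two failed calls trip the breaker.
for _ in range(2):
    try:
        breaker.call(flaky)
    except IOError:
        pass
```

A production breaker would also close again after a cool-down period (a "half-open" state), which is omitted here for brevity.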

Envoy is deployed in the same Pod as the service container, so it can capture traffic behavior and other information such as source/destination IP, hostname, domain, etc. For details, refer to the official page. Through the data Envoy obtains, the Control Plane can perform other functions, such as routing, or otherwise assist Envoy.


Mixer receives all traffic and other information from Envoy, and Access Control or Execution Policy is applied to the Pods according to the Mixer configuration set by the service provider. As shown in Figure 8, it also supports multiple backends such as GCP, AWS, Prometheus, Heapster, and so forth.
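
The shape of that policy check can be sketched as follows: the proxy reports request attributes, and a set of rules decides whether the request is allowed. The rule format here is invented for illustration; Mixer's real configuration is declarative:

```python
def check(attributes, rules):
    """Allow a request only if every rule accepts its attributes."""
    return all(rule(attributes) for rule in rules)

# Hypothetical rules: block a source and cap a crude request rate.
rules = [
    lambda a: a.get("source") != "blocked-namespace",
    lambda a: a.get("requests_per_min", 0) <= 100,
]

allowed = check({"source": "frontend", "requests_per_min": 12}, rules)
denied = check({"source": "blocked-namespace"}, rules)
```

The point is that policy lives in the control plane's configuration, evaluated against attributes of every request, not in the services themselves.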


Pilot mainly controls the lifecycle of Envoy instances through the Envoy API it exposes, converting higher-level rules into Envoy's operational model. Using the data obtained by Mixer, combined with the rules set in Mixer, it enables several Envoy functions. Three examples are shown below.

  1. Service discovery
  2. Traffic management
  3. Resiliency (timeouts, retries, circuit breakers, etc.)

Service registration data is used for service discovery, Mixer rules are converted so that Envoy can route traffic intelligently, and the lifecycle of Envoy instances is controlled through the API so that Pods can be conditionally connected or operated.
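
The service discovery side of this can be reduced to a registry: endpoints register under a service name, and consumers look the name up instead of hard-coding addresses. A toy sketch with invented names and addresses, standing in for what Pilot derives from the platform's registration data:

```python
class ServiceRegistry:
    """Toy service registry: maps a service name to its endpoints."""
    def __init__(self):
        self.endpoints = {}

    def register(self, service, address):
        # A Pod announces where it can be reached.
        self.endpoints.setdefault(service, []).append(address)

    def discover(self, service):
        # A consumer asks for all known endpoints of a service.
        return self.endpoints.get(service, [])

registry = ServiceRegistry()
registry.register("backend", "10.0.0.5:8080")
registry.register("backend", "10.0.0.6:8080")
addrs = registry.discover("backend")
```

In the mesh, the Sidecar queries this information on the caller's behalf, so services address each other by name only.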

In addition, Pilot provides a platform adapter that allows various platforms to operate on or modify container/Pod information on the Pilot, such as the Pods' registration information, Ingress resources, etc. on K8S.


Istio-Auth mainly provides Service-to-Service and End-User authentication along with secure connections. For these purposes, Istio-Auth relies on three main components:

  1. Identity
  2. Key Management
  3. Communication Security

Take Figure 10 as an example. It shows a Service-to-Service TLS connection across different environments. There are two services, namely a frontend in the K8S environment and a backend in a virtual machine environment. The frontend's service account is frontend-team, and the backend's service account is backend-team.

To establish a two-way TLS connection, the first step is Identity. Istio-Auth uses the service account to identify the services that need to be connected over TLS. Istio-Auth then delivers the keys/certs to the K8S container through the Istio CA's Key Management. If the VM/bare-metal host doesn't have a key or certificate, a Node Agent must be deployed onto the node to generate them; it then sends a CSR to the Istio CA to issue the certificate. After both sides have the correct credentials, the two-way TLS connection can be established.

Image Reference: Pattern: Service Mesh

Extended Readings:
Introduction to Service Mesh and Istio Deployment (Part 2)
Introduction to Service Mesh and Istio Deployment (Part 3)

Written by 呂威廷, Engineer at 迎棧科技
