This article is a rework of previously written content and is included in the Istio Handbook of the ServiceMesher community. Other chapters are still being compiled.
People who have just heard of Service Mesh and tried Istio may have the following questions:
In this section, we will try to guide you through the internal connections between Kubernetes, the xDS protocol, and Istio Service Mesh. In addition, this section will also introduce the load-balancing mechanisms in Kubernetes, the significance of the xDS protocol for Service Mesh, and why Kubernetes needs Istio.
Adopting a Service Mesh does not mean breaking with Kubernetes; on the contrary, the two go together naturally. The essence of Kubernetes is application lifecycle management through declarative configuration, while the essence of Service Mesh is to provide traffic management, security management, and observability between applications. If you have built a stable microservice platform using Kubernetes, how do you set up load balancing and flow control for calls between services?
The xDS protocol created by Envoy is supported by many open source projects, such as Istio, Linkerd, and MOSN. Envoy's biggest contribution to Service Mesh, and to cloud native in general, is the definition of xDS. Envoy is essentially a proxy, but a modern one that can be configured through APIs, and many different usage scenarios have been derived from it, such as API gateway, sidecar proxy in Service Mesh, and edge proxy.
This section covers the following topics:
If you want to know everything in advance, here are some of the key points from this article:
Service Mesh sits at a layer of abstraction closer to the application than kube-proxy and its supporting components, and provides traffic management, security, and observability between services. The following figure shows the service access relationship between Kubernetes and Service Mesh (one sidecar per pod mode).
Traffic forwarding
Each node of a Kubernetes cluster runs a kube-proxy component, which communicates with the Kubernetes API Server to obtain the service information in the cluster, and then sets iptables rules so that requests to a service are sent directly to a corresponding endpoint (a pod belonging to the same service group).
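As a concrete illustration, a hypothetical ClusterIP Service like the one below (the names are made up for this sketch) is exactly what kube-proxy translates into iptables rules on every node: traffic sent to the Service's virtual IP on port 80 is DNAT-ed to port 8080 on one of the pods matching the selector.

```yaml
# Hypothetical Service: kube-proxy turns this declaration into iptables
# (or IPVS) rules, so requests to the Service's virtual IP:80 are
# forwarded to port 8080 of one of the pods labeled app: my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```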
Service discovery
Istio Service Mesh can use the services in Kubernetes for service registration, and it can also connect to other service discovery systems through the platform adapters of the control plane, and then generate the data-plane configuration (declared using CRDs and stored in etcd). The data plane's transparent proxy is deployed as a sidecar container in each application service pod, and these proxies all need to request the control plane to synchronize their configuration. The proxy is called "transparent" because the application container is completely unaware of it. The kube-proxy component also needs to intercept traffic, but kube-proxy intercepts traffic to and from the Kubernetes node, while the sidecar proxy intercepts traffic to and from the pod. For more information, see Understanding Route Forwarding by the Envoy Sidecar Proxy in Istio Service Mesh.
Disadvantages of Service Mesh
Because many pods run on each Kubernetes node, moving the route forwarding originally done by kube-proxy into each individual pod leads to a large amount of configuration distribution, synchronization, and eventual-consistency problems. And in order to perform fine-grained traffic management, a series of new abstractions is introduced, which further increases the user's learning cost. However, as the technology becomes more widespread, this situation will gradually ease.
Advantages of Service Mesh
kube-proxy settings take effect globally and cannot control each individual service at a fine granularity, whereas Service Mesh uses sidecar proxies to take the control of traffic in Kubernetes out of the service layer, where it can be extended much further.
In a Kubernetes cluster, each node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services. In Kubernetes v1.0, the proxy was implemented entirely in userspace. Kubernetes v1.1 added the iptables proxy mode, but it was not the default operating mode. Since Kubernetes v1.2, the iptables proxy has been used by default. Kubernetes v1.8.0-beta.0 added the ipvs proxy mode. For more about the kube-proxy component, please refer to the Kubernetes documentation on Service, the principles of kube-proxy, and using IPVS to achieve load balancing of Kubernetes ingress traffic.
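The proxy mode mentioned above is selectable through kube-proxy's configuration. As a minimal sketch, a KubeProxyConfiguration fragment like the following (the scheduler choice is just an example) switches a node's kube-proxy to the ipvs mode:

```yaml
# Sketch of a kube-proxy configuration selecting the IPVS proxy mode
# (available since v1.8.0-beta.0); leaving mode empty falls back to
# the default (iptables on Linux).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin scheduling across endpoints
```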
The disadvantages of kube-proxy:
First, if the pod that traffic is forwarded to cannot provide service normally, kube-proxy will not automatically try another pod. This can, of course, be alleviated with liveness probes: each pod gets a health-check mechanism, and when a pod becomes unhealthy, kube-proxy deletes the corresponding forwarding rule. In addition, nodePort-type services cannot add TLS or more sophisticated message routing mechanisms.
Kube-proxy implements load balancing of traffic among the multiple pod instances of a Kubernetes service, but how do you control the traffic between these services at a fine granularity, for example dividing the traffic by percentage between different application versions (which belong to the same service but to different deployments), or doing canary releases and blue-green releases? The Kubernetes community offers a method of doing canary releases with Deployments, which essentially works by modifying the pod labels so that different pods are grouped under the Deployment's Service.
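The Deployment-based canary method can be sketched as follows (all names and images here are hypothetical): both Deployments carry the shared label `app: my-app`, so the Service load-balances across both versions, and the replica counts determine the rough traffic split (9 v1 pods vs 1 v2 pod ≈ 90/10).

```yaml
# Hypothetical canary setup via two Deployments behind one Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 9               # ~90% of traffic
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
spec:
  replicas: 1               # ~10% of traffic (the canary)
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:v2
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app             # matches pods of both versions
  ports:
    - port: 80
      targetPort: 8080
```

Note that the split is only as precise as the replica ratio allows, which is exactly the coarseness that Service Mesh later addresses.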
The kube-proxy described above can only route traffic inside a Kubernetes cluster. The pods of a Kubernetes cluster live in the network created by CNI and cannot be reached directly from outside the cluster, so Kubernetes created the Ingress resource object. Ingress is driven by an Ingress controller located on the Kubernetes edge nodes (there can be one such node or a group of them), which is responsible for managing north-south traffic. Ingress must be paired with an ingress controller, such as the nginx ingress controller or traefik. Ingress only applies to HTTP traffic, and its usage is simple: it can route traffic only by matching a limited set of fields such as service, port, and HTTP path, which makes it unable to route TCP traffic such as MySQL, Redis, and various private RPC protocols. To route north-south traffic directly, you can only use a Service of type LoadBalancer or NodePort; the former requires cloud vendor support and the latter requires additional port management. Some ingress controllers support exposing TCP and UDP services, but only through a Service; Ingress itself does not support this. For example, the nginx ingress controller reads the ports to be exposed for a service from a ConfigMap.
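The contrast can be sketched with two hypothetical resources: an HTTP Ingress that can only match host/path, and the ConfigMap convention the nginx ingress controller uses to expose a raw TCP port (names and namespaces are illustrative):

```yaml
# Hypothetical HTTP Ingress: routing is limited to host/path matching.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
---
# TCP cannot be expressed in the Ingress itself; the nginx ingress
# controller instead reads extra TCP ports from a ConfigMap like this.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3306": "default/mysql:3306"   # expose MySQL through the controller
```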
Istio Gateway functions similarly to a Kubernetes Ingress in that it is responsible for north-south traffic into and out of the cluster. A Gateway describes a load balancer that carries connections at the edge of the mesh. Its specification describes a set of open ports, the protocols used by those ports, the SNI configuration for load balancing, and so on. Gateway is a CRD extension, and it also reuses the capabilities of the sidecar proxy. For detailed configuration, please refer to the official Istio website.
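A minimal Gateway sketch looks like this (the gateway name and host are hypothetical): it opens port 80 for HTTP traffic on the pods of Istio's default ingress gateway, and a VirtualService bound to this Gateway would then decide where that traffic is routed.

```yaml
# Hypothetical Istio Gateway opening port 80 at the mesh edge.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway   # select Istio's default ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "example.com"
```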
You may have seen the following kind of diagram when learning about Service Mesh. Each block represents an instance of a service, such as a pod in Kubernetes (which contains a sidecar proxy). The xDS protocol controls all the traffic in Istio Service Mesh; concretely, it is what links the squares in the figure below.
The xDS protocol was proposed by Envoy. The original xDS protocols in the Envoy v2 API are CDS (Cluster Discovery Service), EDS (Endpoint Discovery Service), LDS (Listener Discovery Service), and RDS (Route Discovery Service). Later, the v3 version added the Scoped Route Discovery Service (SRDS), Virtual Host Discovery Service (VHDS), Secret Discovery Service (SDS), and Runtime Discovery Service (RTDS). See the xDS REST and gRPC protocol documentation for details.
Let’s take a look at the xDS protocol using two services, each with two instances, as an example.
The arrows in the figure above are not the path or route the traffic takes after entering the proxy, nor the actual order; they are an imagined processing order of the xDS interfaces. In fact, there are cross references between the xDS protocols.
Proxies that support the xDS protocol dynamically discover resources by querying files or management servers. In short, these discovery services and their corresponding APIs are collectively called xDS. Envoy obtains resources by subscription, and there are three ways to subscribe:

* File subscription: Envoy watches the file at the configured path and picks up new resource versions as the file changes.
* gRPC streaming subscription: set the ApiConfigSource to point to the cluster address of the corresponding upstream management server.
* REST-JSON polling subscription: a single xDS API can be synchronized by periodically polling a REST endpoint.

For details of these xDS subscription methods, please refer to the xDS protocol analysis. Istio uses gRPC streaming subscriptions to configure the sidecar proxies of all data planes.
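The gRPC streaming method can be sketched with an Envoy bootstrap fragment like the one below (the cluster name and management-server address are hypothetical): the ApiConfigSource, here in its ADS form, points at a statically defined cluster for the management server, and LDS/CDS resources are then delivered over that single gRPC stream.

```yaml
# Sketch of an Envoy bootstrap subscribing to xDS over a gRPC stream (ADS).
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster
  lds_config:
    resource_api_version: V3
    ads: {}
  cds_config:
    resource_api_version: V3
    ads: {}
static_resources:
  clusters:
    - name: xds_cluster        # the management server, defined statically
      type: STRICT_DNS
      connect_timeout: 1s
      http2_protocol_options: {}   # the xDS gRPC stream requires HTTP/2
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: xds-server.example.internal
                      port_value: 15010
```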
The article introduces the overall architecture of the Istio pilot, the generation of proxy configuration, the function of the pilot-discovery module, and the CDS, EDS, and ADS in the xDS protocol. For details on ADS, please refer to the official Envoy documentation .
Finally, summarize the main points about the xDS protocol:
Envoy is the default sidecar in Istio Service Mesh. Istio extends its control plane on the basis of Envoy, in accordance with Envoy's xDS protocol. Before talking about the Envoy xDS protocol, we need to be familiar with Envoy's basic terms. Below is a list of the basic terms and data structures in Envoy; for a detailed introduction to Envoy, please refer to the official Envoy documentation. As for how Envoy works as a forwarding proxy in Service Mesh (not limited to Istio), please refer to Liu Chao of NetEase Cloud's in-depth interpretation of the technical details behind Service Mesh, as well as Sidecar Injection and Traffic Hijacking in the Envoy Sidecar Proxy of Istio Service Mesh; some points from those articles are cited here and the details will not be repeated.
Here are the basic terms in Envoy you should know:
Envoy can set up multiple listeners, and each listener can have its own filter chain. The filters are extensible, which makes it easier for us to manipulate traffic behavior, such as setting up encryption, private RPC, and so on.
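The listener → filter chain structure can be sketched with a static configuration fragment like this (listener and cluster names are hypothetical): the HTTP connection manager network filter parses HTTP on port 8080 and routes every request to a cluster named `local_service`.

```yaml
# Sketch of an Envoy listener with a filter chain containing the
# HTTP connection manager, which in turn runs the router HTTP filter.
static_resources:
  listeners:
    - name: listener_http
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: local_service }
```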
The xDS protocol was proposed by Envoy, which is now the default sidecar proxy in Istio. However, any proxy that implements the xDS protocol can in theory be used as the sidecar proxy in Istio, such as MOSN, the open source proxy from Ant Group.
Istio is a very feature-rich Service Mesh, which includes the following functions:
Istio defines the following CRDs to help users perform traffic management:
* DestinationRule: defines policies that determine how traffic is treated after routing. These policies can specify load-balancing configuration, connection-pool size, and outlier detection (used to identify and evict unhealthy hosts from the load-balancing pool).
* EnvoyFilter: describes filters for the proxy services and can customize the proxy configuration generated by Istio Pilot. This configuration is rarely used by beginners.
* ServiceEntry: adds additional entries to the service registry inside Istio, so that services automatically discovered in the mesh can access and route to these manually added services.

Having read about the Kubernetes kube-proxy component, xDS, and Istio's traffic-management abstractions above, let's now compare the corresponding components/protocols of the three from the traffic-management perspective alone (note that the three are not completely equivalent).
Kubernetes | xDS | Istio Service Mesh |
---|---|---|
Endpoint | Endpoint | - |
Service | Route | VirtualService |
kube-proxy | Route | DestinationRule |
kube-proxy | Listener | EnvoyFilter |
Ingress | Listener | Gateway |
Service | Cluster | ServiceEntry |
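The fine-grained routing that the table attributes to VirtualService and DestinationRule can be sketched as follows (hosts, subsets, and weights are hypothetical): the VirtualService splits traffic 90/10 between two subsets, and the DestinationRule defines those subsets and their load-balancing policy.

```yaml
# Hypothetical Istio traffic split across two versions of one service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: v1
          weight: 90
        - destination:
            host: my-app
            subset: v2
          weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN   # post-routing policy lives here
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

Unlike the replica-ratio canary shown earlier, this split is exact and independent of how many pods each version runs.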
If the objects that Kubernetes manages are pods, then the objects that Service Mesh manages are services. So applying a Service Mesh after using Kubernetes to manage microservices is a perfectly natural thing. And if you don't even want to manage services, you can use a serverless platform such as Knative, but that is a later topic.
The functions of Envoy/MOSN go far beyond traffic forwarding, and the concepts above are just the tip of the iceberg of the new layer of abstraction that Istio adds on top of Kubernetes. This is only the beginning of the book.