This article explains:
- Automatic sidecar injection process in Istio
- The init container startup process in Istio
- Startup process for a Pod with Sidecar auto-injection enabled
The following figure shows the components of a Pod in the Istio data plane after startup.
Figure: Istio data plane Pod internals
Sidecar injection in Istio
Istio provides the following two sidecar injection methods:
- Manual injection using istioctl.
- Automatic injection based on the Kubernetes mutating webhook admission controller.
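For automatic injection, a common way to opt a namespace in (assuming the default `istio-injection` label convention; the namespace name here is hypothetical) is to label it so the mutating webhook injects sidecars into newly created Pods:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                     # hypothetical namespace name
  labels:
    istio-injection: enabled     # tells the webhook to inject sidecars into new Pods here
```

Equivalently, `kubectl label namespace demo istio-injection=enabled` applies the same label to an existing namespace.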
Whether manual or automatic, sidecar injection follows these steps:
- Kubernetes needs to know which Istio cluster the sidecar to be injected will connect to, along with its configuration;
- Kubernetes needs to know the configuration of the sidecar container to be injected, such as the image address, startup parameters, etc.;
- Kubernetes fills in the sidecar's configuration parameters according to the sidecar injection template and the above configuration, and injects the sidecar alongside the application container.
The sidecar can be injected manually using the command below:
istioctl kube-inject -f ${YAML_FILE} | kubectl apply -f -
This command uses Istio's built-in sidecar configuration for injection; please refer to the Istio official website for the detailed configuration. After the injection is complete, you will see that the initContainer and sidecar proxy related configurations have been injected into the original pod template.
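An abridged, illustrative sketch of what the injected pod template roughly looks like (the real output contains many more fields, and the image tags here are assumptions, not exact values):

```yaml
spec:
  initContainers:
  - name: istio-init
    image: istio/proxyv2:1.6.0        # illustrative tag
    args: ["istio-iptables", "-p", "15001", "-z", "15006", "-u", "1337",
           "-m", "REDIRECT", "-i", "*", "-x", "", "-b", "*", "-d", "15090,15020"]
  containers:
  - name: app                          # the original application container
    image: your-app:latest             # placeholder image
  - name: istio-proxy                  # the injected sidecar proxy
    image: istio/proxyv2:1.6.0
```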
Init container
An init container is a specialized container that runs before the application containers start and contains utilities or installation scripts that are not present in the application image.
Multiple init containers can be specified in a Pod. If multiple are specified, they run in sequence: the next init container can run only after the current one has exited successfully. Only when all init containers have run does Kubernetes initialize the Pod and start the application containers.
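The sequential behavior described above can be seen in a minimal, hypothetical Pod spec with two init containers (names and commands are invented for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                # hypothetical example
spec:
  initContainers:
  - name: wait-for-db            # runs first and must exit successfully
    image: busybox
    command: ["sh", "-c", "until nslookup mydb; do sleep 2; done"]
  - name: fetch-config           # runs only after wait-for-db succeeds
    image: busybox
    command: ["sh", "-c", "echo fetching config"]
  containers:
  - name: app                    # starts only after all init containers finish
    image: nginx
```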
Because init containers run in their own Linux namespaces, they have a different view of the filesystem than the application containers. As a result, they can be given access to Secrets that the application containers cannot access.
During Pod startup, init containers are started sequentially after the network and data volumes have been initialized. Each container must exit successfully before the next one starts. If a container fails to start or exits with an error, it is retried according to the policy specified by the Pod's restartPolicy. However, if the Pod's restartPolicy is set to Always, an effective policy of OnFailure is applied when an init container fails.
The Pod does not become Ready until all init containers have succeeded. The ports of init containers are not aggregated in a Service. A Pod that is being initialized is in the Pending state, but has the Initializing condition set to true. Init containers terminate automatically once they have finished running.
For more information about init containers, please refer to Init Containers in the Kubernetes Handbook / Cloud Native Application Architecture Practice Manual.
Init container analysis
The init container injected by Istio into the pod is named istio-init. In the YAML file after Istio injection above, we saw that the container's startup command is:
istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i '*' -x "" -b '*' -d 15090,15020
Let's check the Dockerfile of the container to see how the ENTRYPOINT determines the command executed at startup.
# Preceding content omitted
# The pilot-agent will bootstrap Envoy.
ENTRYPOINT [ "/usr/local/bin/pilot-agent" ]
Although the image's ENTRYPOINT is pilot-agent, the command specified in the injected pod spec overrides it, so the actual entry of the istio-init container is the /usr/local/bin/istio-iptables command line. The source code of this command-line tool is located under tools/istio-iptables in the Istio source repository.
Note: In Istio 1.1, the istio-iptables.sh shell script was still used to configure iptables.
Init container startup entry
The startup entry of the init container is the istio-iptables command line. The usage of the command-line tool is as follows:
$ istio-iptables [flags]
  -p: port to which all outbound TCP traffic will be redirected (the sidecar port, default $ENVOY_PORT = 15001)
  -m: mode for redirecting inbound connections to the sidecar, "REDIRECT" or "TPROXY" (default $ISTIO_INBOUND_INTERCEPTION_MODE)
  -b: comma-separated list of inbound ports whose traffic will be redirected to Envoy (optional). The wildcard "*" redirects all ports; an empty value disables all inbound redirection (default $ISTIO_INBOUND_PORTS)
  -d: comma-separated list of inbound ports to exclude from redirection to the sidecar (optional). The wildcard "*" redirects all inbound traffic (default $ISTIO_LOCAL_EXCLUDE_PORTS)
  -o: comma-separated list of outbound ports to exclude from redirection to Envoy
  -i: comma-separated list of IP ranges in CIDR form to redirect to the sidecar (optional). The wildcard "*" redirects all outbound traffic; an empty list disables all outbound redirection (default $ISTIO_SERVICE_CIDR)
  -x: comma-separated list of IP ranges in CIDR form to exclude from redirection. The wildcard "*" redirects all outbound traffic (default $ISTIO_SERVICE_EXCLUDE_CIDR)
  -k: comma-separated list of virtual interfaces whose inbound traffic (from the VM) will be treated as outbound
  -g: GID of the user for whom redirection is not applied (defaults to the same value as -u)
  -u: UID of the user for whom redirection is not applied; typically this is the UID of the proxy container (default 1337, the UID of istio-proxy)
  -z: port to which all inbound TCP traffic to the pod/VM will be redirected (default $INBOUND_CAPTURE_PORT = 15006)
The parameters passed in above are reassembled into iptables rules. For the detailed usage of this command, see tools/istio-iptables/pkg/cmd/root.go.
The purpose of this container is to let the sidecar proxy intercept all traffic in and out of the pod: all inbound traffic except ports 15090 (Envoy's Prometheus telemetry) and 15020 (pilot-agent's health and telemetry endpoint) is redirected to port 15006 (the sidecar), and outbound traffic from the application container is intercepted and processed by the sidecar (listening on port 15001) before leaving the pod. For port usage in Istio, please refer to the official Istio documentation.
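To make the redirection concrete, here is a hedged sketch, in shell, of the kind of nat-table rules these flags translate into. It only prints the rule strings rather than applying them; the chain names and the exact rule set are simplified assumptions for illustration, not the real implementation (see tools/istio-iptables in the Istio source repository for that):

```shell
#!/bin/sh
# Illustrative only: map the istio-iptables flags onto nat-table rule strings.
PROXY_PORT=15001                # -p: port for redirected outbound TCP traffic
INBOUND_CAPTURE=15006           # -z: port for redirected inbound TCP traffic
PROXY_UID=1337                  # -u: UID exempted from redirection (istio-proxy)
EXCLUDE_INBOUND="15090,15020"   # -d: inbound ports excluded from redirection

# Outbound: traffic from the proxy's own UID is left alone so Envoy's upstream
# connections are not looped back into itself; everything else goes to 15001.
echo "iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner ${PROXY_UID} -j RETURN"
echo "iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports ${PROXY_PORT}"

# Inbound: the excluded ports bypass the sidecar; the rest is redirected
# to the sidecar's inbound capture port.
for port in $(echo "${EXCLUDE_INBOUND}" | tr ',' ' '); do
  echo "iptables -t nat -A ISTIO_INBOUND -p tcp --dport ${port} -j RETURN"
done
echo "iptables -t nat -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports ${INBOUND_CAPTURE}"
```

The owner-UID exemption is the reason the sidecar must run as UID 1337: it is how the rules distinguish the proxy's own traffic from application traffic.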
Command analysis
What this startup command does is:
- Forward all traffic of the application container to port 15006 of the sidecar.
- Run as the istio-proxy user with UID 1337, the user space in which the sidecar runs; this is also the default user of the istio-proxy container, see the runAsUser field in the YAML configuration.
- Use the default REDIRECT mode to redirect traffic.
- Redirect all outbound traffic to the sidecar proxy (via port 15001).
Because the Init container terminates automatically after initialization, we cannot log in to the container to view the iptables rules; however, the results of its initialization are preserved in the application and sidecar containers.
Pod startup process
The startup process of a Pod with sidecar auto-injection enabled is as follows:
- The Init container starts first and injects iptables rules into the Pod for transparent traffic interception.
- Kubernetes then starts the containers in the order they are declared in the Pod spec, but this is non-blocking: there is no guarantee that one container has finished starting before the next one starts.
- When the istio-proxy container starts, pilot-agent becomes the process with PID 1. As the first process in the Linux user space, it is responsible for spawning other processes and reaping zombie processes. pilot-agent generates the Envoy bootstrap configuration and starts the envoy process. The application container starts almost at the same time as the istio-proxy container; to prevent the containers in the Pod from receiving external traffic before they are ready, the readiness probe comes in handy: Kubernetes performs a readiness check on port 15021 of the istio-proxy container, and the kubelet does not route traffic to the Pod until istio-proxy is up.
- After the Pod has started, pilot-agent becomes a daemon process that monitors the other processes in the system. In addition, it provides Envoy with bootstrap configuration, certificates, health checks, configuration hot reloading, identity support, and process lifecycle management.
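The readiness check on port 15021 mentioned above corresponds roughly to a probe like the following on the istio-proxy container (a sketch; the exact values injected vary by Istio version):

```yaml
readinessProbe:
  httpGet:
    path: /healthz/ready       # served by pilot-agent
    port: 15021
  initialDelaySeconds: 1
  periodSeconds: 2
```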
Container startup order issue within the Pod
There is a container startup order problem during Pod startup. Suppose the application container starts first and sends requests to other services while the istio-proxy container has not yet started: those requests will fail, and if the application is not robust enough, this may even cause the application container to crash and the Pod to restart. The solutions for this situation are:
- Modify the application to increase timeouts and retries.
- Increase the startup delay of the process in the application container, for example by adding a sleep.
- Add a postStart configuration to the application container to detect whether the application process has started; only when the detection succeeds will Kubernetes mark the Pod's status as Running.
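The third option could be sketched like this. A postStart hook blocks the container from being considered started until its command succeeds; the health endpoint and polling command here are hypothetical, to be replaced with whatever check fits the application:

```yaml
lifecycle:
  postStart:
    exec:
      command:
      - sh
      - -c
      - "until curl -fsS http://localhost:8080/health; do sleep 1; done"  # hypothetical app health endpoint
```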
Summary
This article leads you to understand the Pod startup process in the Istio data plane, as well as the problems caused by the startup order of the containers in the Pod.
References
This article is reprinted from https://jimmysong.io/blog/istio-pod-process-lifecycle/. This site is for inclusion only; the copyright belongs to the original author.