Kubernetes: A Pod's Life

Translator's note: this small (but capacious!) article by Michael Hausenblas from the OpenShift team at Red Hat appealed to us so much that we added it to our internal Kubernetes knowledge base almost immediately after discovering it. And since the information presented in it will obviously be useful to the wider Russian-speaking IT community, we are pleased to publish this translation.



As you might have guessed, the title of this publication is a reference to the 1998 Pixar animated film "A Bug's Life" (released in Russia as "The Adventures of Flik" or "The Life of an Insect" - translator's note), and indeed: an ant's life and that of a Kubernetes pod have a lot in common. We will take a close look at the full life cycle of a pod from a practical point of view - in particular, the ways in which you can influence its behavior during startup and shutdown, as well as the right approaches to checking the status of the application.

Regardless of whether you created the pod yourself or, better, via a controller such as Deployment , DaemonSet or StatefulSet , the pod can be in one of the following five phases:

  - Pending: the pod has been accepted by the API server, but one or more of its containers has not yet been created or scheduled.
  - Running: the pod is bound to a node and all of its containers have been created; at least one container is running, starting, or restarting.
  - Succeeded: all containers in the pod have terminated successfully and will not be restarted.
  - Failed: all containers in the pod have terminated, and at least one of them terminated with a failure.
  - Unknown: the state of the pod could not be obtained, typically because of a communication error with its node.
When running kubectl get pod , note that the STATUS column can show messages other than these five phases - for example, Init:0/1 or CrashLoopBackOff . This happens because the phase is only part of the pod's overall status. A good way to find out exactly what happened is to run kubectl describe pod/$PODNAME and look at the Events: section at the bottom. It lists the relevant actions: that the container image was pulled, that the pod was scheduled, or that a container is in an "unhealthy" state.
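
If you only need the phase itself, you can extract it directly from the pod's status field; a minimal sketch (the pod name is a placeholder):

 # Print just the .status.phase field of the pod (e.g. Running)
 $ kubectl get pod $PODNAME -o jsonpath='{.status.phase}'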

Now let's take a look at a specific example of a pod's life cycle from beginning to end, as shown in the following diagram:

[Diagram: a timeline of pod lifecycle events, from the init container to the pre-stop hook]

What happened here? The steps are as follows:

  1. This is not shown in the diagram, but before anything else a special infra container (the pause container) is launched; it sets up the namespaces that the other containers then join.
  2. The first user-defined container to start is the init container ; it can be used for initialization tasks.
  3. Then the main container is started, and simultaneously with it the post-start hook; in our case this happens at the 4-second mark. Hooks are defined per container.
  4. Then, at the 7-second mark, the liveness and readiness probes come into play, again per container.
  5. At the 11-second mark, when the pod is terminated, the pre-stop hook is triggered, and the main container is killed after a grace period (see the sketch after this list). Please note that in reality the pod termination process is somewhat more complicated.
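
To make these knobs concrete, here is a sketch of how probes and the grace period look in a more typical container spec; the /healthz path, port 8080, and the timing values are illustrative assumptions (30 seconds is the Kubernetes default grace period):

 spec:
   terminationGracePeriodSeconds: 30  # how long the kubelet waits between SIGTERM and SIGKILL
   containers:
   - name: app
     image: example/app:1.0           # hypothetical image
     livenessProbe:                   # the container is restarted when this fails
       httpGet:
         path: /healthz
         port: 8080
       initialDelaySeconds: 7
       periodSeconds: 10
     readinessProbe:                  # the pod is removed from Service endpoints when this fails
       httpGet:
         path: /healthz
         port: 8080
       initialDelaySeconds: 7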

How did I arrive at the above sequence and its timing? To do this, I used the following Deployment , created specifically to track the order of events (by itself it is not very useful):

 kind: Deployment
 apiVersion: apps/v1beta1
 metadata:
   name: loap
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         app: loap
     spec:
       initContainers:
       - name: init
         image: busybox
         command: ['sh', '-c', 'echo $(date +%s): INIT >> /loap/timing']
         volumeMounts:
         - mountPath: /loap
           name: timing
       containers:
       - name: main
         image: busybox
         command: ['sh', '-c', 'echo $(date +%s): START >> /loap/timing; sleep 10; echo $(date +%s): END >> /loap/timing;']
         volumeMounts:
         - mountPath: /loap
           name: timing
         livenessProbe:
           exec:
             command: ['sh', '-c', 'echo $(date +%s): LIVENESS >> /loap/timing']
         readinessProbe:
           exec:
             command: ['sh', '-c', 'echo $(date +%s): READINESS >> /loap/timing']
         lifecycle:
           postStart:
             exec:
               command: ['sh', '-c', 'echo $(date +%s): POST-START >> /loap/timing']
           preStop:
             exec:
               command: ['sh', '-c', 'echo $(date +%s): PRE-HOOK >> /loap/timing']
       volumes:
       - name: timing
         hostPath:
           path: /tmp/loap
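
To run the experiment, the Deployment above has to be created first; a minimal sketch (the file name loap.yaml is an assumption):

 # Create the Deployment and wait until its pod is up
 $ kubectl create -f loap.yaml
 $ kubectl get pods -l app=loap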

Note that to force the pod to shut down while the main container was still running, I executed the following command:

 $ kubectl scale deployment loap --replicas=0 
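
Once the pod has terminated, the recorded timeline can be read from the hostPath volume on the node that ran the pod; a sketch assuming a single-node Minikube cluster:

 # Each line has the form '<unix timestamp>: EVENT', e.g. '1528714803: INIT' (timestamp illustrative)
 $ minikube ssh -- cat /tmp/loap/timing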

Having seen a specific sequence of events in action, we are now ready to move on to life cycle management practices. They are:


This publication does not cover initializers (some details about them can be found at the end of this material - translator's note). Initializers are a completely new concept introduced in Kubernetes 1.7: they work inside the control plane (the API server) rather than in the context of a kubelet , and can be used to "enrich" pods, for example by injecting sidecar containers or enforcing security policies. PodPresets , which may later be replaced by the more flexible initializer concept, are likewise not considered here.
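
For context, a minimal sketch of what a PodPreset looks like; it is an alpha resource (settings.k8s.io/v1alpha1, which must be enabled in the cluster) that injects settings into pods matching a label selector, and the names and values here are assumptions:

 apiVersion: settings.k8s.io/v1alpha1
 kind: PodPreset
 metadata:
   name: inject-env            # hypothetical name
 spec:
   selector:
     matchLabels:
       app: loap               # applies to pods carrying this label
   env:
   - name: STAGE               # hypothetical variable injected into matching pods
     value: "dev"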


Source: https://habr.com/ru/post/415393/

