Kubernetes workloads
Deploy containerized workloads on Kubernetes

A workload is an application running as a distributed system on a K8s cluster, inside a set of Pods.
- A pod is the smallest deployable unit of computing that can be created and managed in a cluster.
- It is a group of containers that:
  - Share configuration.
  - Share network and storage resources.
  - Share execution context and environment variables.
  - Are co-located: they run on the same cluster node.
  - Are co-scheduled: they run at the same time.
- For simplicity's sake, we will assume that each pod runs a single container, which covers the vast majority of use cases.
- The rationale is that K8s manages a container, together with its configuration, resources and execution context, as a pod.
- Deviating from this policy is only necessary when multiple processes are tightly coupled.
- This rationale also stems from the fact that pods are considered ephemeral, interchangeable and disposable.
- An individual pod is placed by `kube-scheduler`, then started by `kubelet`, and will run until one of the following occurs:
  1. The pod process exits.
  2. The pod object is deleted from the cluster state.
  3. The pod is evicted for lack of resources.
  4. The cluster node fails.
Notes:
- Case 2 can occur, for instance, when the desired `PodTemplate` changes for the current set of pods.
- It is unnecessary, as well as strongly discouraged, to manage pods directly in the vast majority of cases.
- If an individual pod is not significant and the workload it contains has to scale horizontally, it is better to think of workloads as inherently scalable. K8s provides these capabilities through objects that represent sets of pods, such as `Deployment` (a minimal example follows below).
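As a concrete sketch, a minimal single-container `Deployment` might look like the following; the name, label, image and replica count are illustrative placeholders:

```yaml
# A minimal Deployment managing a set of interchangeable single-container pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # the workload scales horizontally by changing this value
  selector:
    matchLabels:
      app: web
  template:                  # the PodTemplate: every replica is a pod built from this spec
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # illustrative image
          ports:
            - containerPort: 80
```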
- Ancillary containers can exist in a pod along with the workload container while still conforming to the above model.
- They are described in the `Pod` or `Deployment` manifest along with the workload container and extend its functionality.
- Init containers are an ordered sequence of containers that have to complete execution and exit with 0 (success) before the next init container starts.
- Once the last declared init container has successfully exited, `kubelet` starts the workload container.
- Init containers offer the following benefits:
  - Block or delay workload container startup until conditions are met (this allows ordered startup of interdependent `Services`).
  - Offload any processing that happens only once at startup from the workload container image to the init container image.
  - Offer a safe environment for any processing that deals with confidential data or unsecured code execution.
- By construction, init containers do not support:
  - Container restart policies: the pod restart policy applies instead.
  - Container probes: health checks are not performed on a pod that has not started yet.

Note: a pod will never be added to the resource pool of a `Service` before its workload container is started.
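A minimal sketch of a pod declaring an init container that blocks workload startup until a dependency answers; the `db` host name, port and images are assumptions made for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    # Must exit with 0 before the workload container is started by kubelet.
    - name: wait-for-db
      image: busybox:1.36
      command: ['sh', '-c', 'until nc -z db 5432; do sleep 2; done']
  containers:
    - name: app
      image: nginx:1.27    # illustrative workload image
```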
- Sidecar containers are init containers that do not need to exit with 0 and keep running even after the workload container starts.
- They are described in the manifest as an `initContainer` that has `restartPolicy` set to `Always`, and they support container probes as well.
- Their lifecycle is independent from the workload or init containers in the same pod.
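A sketch of the same manifest shape for a sidecar, assuming a Kubernetes version where native sidecar support is enabled (1.29 or later); the `log-shipper` container and the shared volume are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    # restartPolicy: Always turns this init container into a sidecar:
    # it keeps running alongside the workload container and may declare probes.
    - name: log-shipper
      image: busybox:1.36
      command: ['sh', '-c', 'tail -F /var/log/app/access.log']
      restartPolicy: Always
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  containers:
    - name: app
      image: nginx:1.27
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
```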
- The lifecycle of a pod involves actions by `kube-scheduler`, `kubelet` and the container runtime, and is stored as a `PodStatus` object.
- For observability purposes, it is represented as a succession of phases:

  | phase     | description                                    |
  |-----------|------------------------------------------------|
  | Pending   | Setup and execution of init containers         |
  | Running   | Workload container has started                 |
  | Succeeded | Workload container exited with 0               |
  | Failed    | Workload container exited with a non-zero code |
  | Unknown   | Failed to read the current pod state           |
- However, the sequence of successive states a pod will be in during its lifecycle is represented as an array of `PodConditions`:

  | pod condition               | usage                                                  |
  |-----------------------------|--------------------------------------------------------|
  | `PodScheduled`              | Pod scheduled to a node by `kube-scheduler`            |
  | `PodReadyToStartContainers` | Pod execution environment and resources set up         |
  | `ContainersReady`           | All pod containers are ready (image downloaded, etc.)  |
  | `Initialized`               | All init containers exited with 0                      |
  | `Ready`                     | Workload container started successfully                |
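For reference, this is roughly the shape of the `status` block reported by `kube-apiserver` for a healthy pod (for example in the YAML output of a pod object); values are illustrative and the exact set of conditions depends on the cluster version:

```yaml
status:
  phase: Running
  conditions:
    - type: PodScheduled
      status: "True"
    - type: PodReadyToStartContainers
      status: "True"
    - type: Initialized
      status: "True"
    - type: ContainersReady
      status: "True"
    - type: Ready
      status: "True"
```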
- K8s handles container failures within pods using a restart policy defined in the `PodSpec`.
- When a workload container exits, its wrapping pod will be restarted or marked for deletion depending on that configuration.
- While `kube-controller-manager` can create new pods as existing pods are deleted, restarting containers in existing pods may provide faster recovery, resource efficiency, and operational simplicity.
- Newly created replacement pods may not be scheduled to the same node as the original failed (or succeeded) pod.
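A minimal sketch showing where the restart policy sits in the `PodSpec`; the pod name, image and command are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-task
spec:
  # Applies to all containers in the pod: Always (default), OnFailure or Never.
  restartPolicy: OnFailure
  containers:
    - name: task
      image: busybox:1.36
      command: ['sh', '-c', 'echo working; exit 0']   # illustrative one-shot command
```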
- `kubelet` performs health checks on pods running on the current node using container probes.
- Different types of probes exist that more or less match the pod's lifecycle phases.
- A probe performs a check on a pod and results in an outcome of `Success`, `Failure` or `Unknown`:

  | check       | action                                                    |
  |-------------|-----------------------------------------------------------|
  | `exec`      | Runs a command inside the container, expects exit code 0  |
  | `grpc`      | Performs a gRPC call, expects status SERVING              |
  | `httpGet`   | Sends an HTTP GET, expects a status in the 200-399 range  |
  | `tcpSocket` | Expects the specified TCP port to be open                 |
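A sketch of a readiness and a liveness probe declared on a single-container pod; the image, port, path and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: nginx:1.27
      ports:
        - containerPort: 80
      # tcpSocket check: the pod is only marked Ready once the port answers.
      readinessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      # httpGet check: repeated failures trigger a container restart
      # according to the pod restart policy.
      livenessProbe:
        httpGet:
          path: /
          port: 80
        periodSeconds: 15
        failureThreshold: 3
```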
- Static pods are pods directly managed by `kubelet` on a specific node, not by the control plane components.
- They exist outside of the decisions made by `kube-controller-manager` and `kube-scheduler`.
- `kubelet` will create mirror pods in the cluster state to provide observability for static pods through `kube-apiserver`.
- The main use case for static pods is to run a self-hosted control plane, notably when using `kubeadm`.
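A sketch of a static pod manifest as it could be placed in the kubelet's `staticPodPath` (commonly `/etc/kubernetes/manifests`); the etcd image tag, flags and host path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
    - name: etcd
      image: registry.k8s.io/etcd:3.5.15-0        # illustrative image tag
      command: ['etcd', '--data-dir=/var/lib/etcd']
      volumeMounts:
        - name: data
          mountPath: /var/lib/etcd
  volumes:
    - name: data
      hostPath:
        path: /var/lib/etcd
        type: DirectoryOrCreate
```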