Kubernetes features

Kubernetes cluster features and components


"Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the desired state."


Table of contents

  1. Cluster features
  2. Cluster architecture
  3. Control plane components
  4. Node components
  5. Addons

Cluster features

Core

  • Declarative specification of the desired cluster state.
  • Updating from current to desired state at a controlled rate.
  • Declarative CPU and RAM allocation to containerized processes.
  • Optimized allocation of each cluster node's resources.
  • Options for automatically mounting different storage systems.
  • DNS name resolution of containers inside the cluster.
  • Advertising of healthy (ready) containers only.
  • Load balancing of traffic to individual containers.
  • Self-healing of containers through health checks, restart policies, etc. (see the manifest sketch after this list).
  • Metrics collection and export.
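
A minimal sketch of what such a declarative specification can look like, with illustrative names, image, and values: a Deployment that declares a desired number of replicas, CPU/RAM requests and limits, and the probes that drive self-healing and readiness-based advertising.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                        # illustrative name
    spec:
      replicas: 3                      # desired state : three identical pods
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25        # illustrative image
              resources:
                requests:              # declarative CPU / RAM allocation
                  cpu: 100m
                  memory: 128Mi
                limits:
                  cpu: 500m
                  memory: 256Mi
              livenessProbe:           # health check backing self-healing
                httpGet:
                  path: /
                  port: 80
              readinessProbe:          # only ready containers are advertised
                httpGet:
                  path: /
                  port: 80
          restartPolicy: Always        # default restart policy

Applying such a manifest (for instance with kubectl apply -f) records the desired state; the control plane then converges the current state towards it at a controlled rate.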

Ancillary

  • Horizontal scaling of workloads, either automatically or on demand from the CLI or a GUI (see the autoscaler sketch after this list).
  • Run CI workloads as well as application workloads.
  • Support for IPv4/IPv6 dual-stack networking.
  • Integration with application-level services through the Open Service Broker API.
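
A sketch of what the automatic option can look like, assuming a Deployment named web (as in the sketch above) and a working metrics pipeline; names and thresholds are illustrative. The CLI equivalent would be kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web                        # illustrative name
    spec:
      scaleTargetRef:                  # workload to scale
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80   # scale out above ~80% average CPU usage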

Cluster architecture

  • A Kubernetes cluster includes a control plane and one or more worker nodes.
  • The control plane plays roughly the same role as the swarm manager in Docker Swarm.
  • It makes global decisions about the cluster and detects and responds to cluster events.
  • The control plane components can run on any machine in the cluster.
  • However, they usually run on dedicated machines that do not run actual workloads (see the taint sketch below).
  • The node components run on every worker node, provide the K8s runtime environment and manage running pods.
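
A sketch of how the "dedicated machine" point above is commonly enforced: control plane nodes carry a taint that keeps regular workloads away (the node name below is illustrative; kubeadm applies such a taint by default).

    apiVersion: v1
    kind: Node
    metadata:
      name: control-plane-1                            # illustrative node name
    spec:
      taints:
        - key: node-role.kubernetes.io/control-plane   # repels regular workloads
          effect: NoSchedule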

K8s differs from Docker Swarm in that the control plane components are not inherently part of the cluster itself, unlike the swarm manager in swarm mode.

(diagram: cluster architecture)


Control plane components

component                  usage
kube-apiserver             Exposes the K8s HTTP API
etcd                       Highly available key-value store for cluster data
kube-scheduler             Handles the placement of pods across nodes
kube-controller-manager    Handles the execution of cluster controllers

  • kube-controller-manager is responsible for :
    • Reading the current cluster state from kube-apiserver.
    • Running controllers that attempt to reconcile the current cluster state with the desired state (see the reconciliation sketch below).
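
A sketch of what that reconciliation looks like for a Deployment (values are illustrative): the controllers compare the declared spec with the observed status and create or delete pods until the two match.

    # Fragment of a Deployment as stored in the cluster:
    spec:
      replicas: 3            # desired state, declared by the user
    status:
      replicas: 2            # current state, observed by the controllers
      availableReplicas: 2
    # The relevant controllers react to the difference by creating one more pod;
    # kube-scheduler then picks a node for it and the kubelet on that node starts it.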

Notes :

  • All controllers follow the "infinite loop" pattern of the replication controller.
  • All controllers are combined into a single binary that executes in a single process.
  • cloud-controller-manager only runs controllers that interact with cloud vendor features and is therefore optional.
  • The control plane components can be made highly available by scaling horizontally like actual workloads.
  • In such a context, replicated kube-scheduler and kube-controller-manager processes will adopt a leader election policy.
  • This guarantees that a single, identified process is responsible for updating the cluster state at any given time in both cases (see the Lease sketch below).
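
In practice, this election is coordinated through Lease objects in the kube-system namespace; a Lease held by the elected kube-scheduler replica looks roughly like the sketch below (holder identity and timestamp are illustrative).

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: kube-scheduler                        # one Lease per elected component
      namespace: kube-system
    spec:
      holderIdentity: control-plane-1_a1b2c3d4    # identity of the current leader
      leaseDurationSeconds: 15                    # replicas take over if not renewed in time
      renewTime: "2024-01-01T00:00:00.000000Z"    # illustrative timestamp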

Node components

component            usage
kubelet              Handles the execution and monitoring of pods on the current node
container-runtime    Handles the execution and lifecycle of individual containers
kube-proxy           Configures cluster-wide network rules at the node level

  • kubelet is responsible for :

    • Reading the set of PodSpecs for the current node from kube-apiserver, following placement decisions by kube-scheduler.
    • Translating PodSpecs into healthy pods running on the node and observing their status (health, resource usage, etc.).
    • Updating the current cluster state regularly through kube-apiserver with the status of those pods (see the pod sketch below).
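
A sketch of that hand-off with illustrative names: once kube-scheduler has made its placement decision, the pod's spec carries the chosen node, and the kubelet on that node runs the pod and reports its status back.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-7d4b9c-abcde          # illustrative pod name
    spec:
      nodeName: worker-1              # filled in by kube-scheduler's placement decision
      containers:
        - name: web
          image: nginx:1.25           # illustrative image
    status:
      phase: Running                  # reported to kube-apiserver by the kubelet
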
  • kube-proxy is responsible for :

    • Reading the cluster's Service definitions from kube-apiserver.
    • Translating Service definitions into a set of layer 3/4 network rules on the current node (see the Service sketch below).
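
A sketch of the Service object kube-proxy consumes (name, labels, and ports are illustrative): a stable virtual IP and port mapped to whichever pods match the selector, which kube-proxy translates into packet-forwarding rules on every node.

    apiVersion: v1
    kind: Service
    metadata:
      name: web                 # illustrative name
    spec:
      type: ClusterIP           # single, cluster-internal virtual IP
      selector:
        app: web                # pods backing this Service
      ports:
        - port: 80              # port exposed on the Service IP
          targetPort: 80        # port on the selected pods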

Notes :

  • Services allow cluster-wide mapping of a set of pods to a single IP address and port.
  • Multiple options are available for container-runtime (for instance, Docker uses containerd).
  • Docker actually ships with containerd, but the other Docker features are not needed when working with K8s.
  • As a result, it is best to eliminate the use of Docker past the image build step.
  • containerd can be installed and configured directly, or another container runtime can be selected for use with K8s.
  • For the record, kubelet can also run in standalone mode, i.e. without interacting with a control plane.

Addons

  • Consistent with its community-driven approach, K8s maintains specifications, not implementations, of critical cluster features.
  • Implementations are provided by third parties as K8s addons and "composed" with core components to form a working cluster.
  • Addons are also referred to as "out-of-tree" implementations as opposed to K8s core features.