Kubernetes Pod Management: Static Pods vs Mirror Pods vs DaemonSets

This tutorial serves as a guide to demystify the different types of Kubernetes pods, in essence, Static Pods vs Mirror Pods vs DaemonSets. In Kubernetes, a pod is the fundamental unit of deployment, representing a logical collection of one or more containers that share resources such as storage volumes, a network namespace, and an IP address within the Kubernetes ecosystem. Pods serve as the basic building blocks for applications and services, encapsulating containers with shared storage and networking, along with a specification for how to run them.
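
For instance, a minimal Pod manifest looks like this (the hello-pod name and nginx image are arbitrary, for illustration only);

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx:1.25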

Static Pods vs Mirror Pods vs DaemonSets

In the Kubernetes ecosystem, what exactly is the difference between Static Pods, Mirror Pods, and DaemonSets?

Kubernetes Static Pods

What is a static pod in Kubernetes?

A static pod is a type of pod in Kubernetes that is managed directly by the kubelet on a specific node, without the involvement of the Kubernetes control plane (kube-apiserver). Nor is it managed by controllers such as Deployments or ReplicaSets.

Static Pods’ configuration manifest files are typically located in a directory watched by the kubelet (e.g., /etc/kubernetes/manifests). The kubelet monitors this directory and starts or stops these pods as necessary. Depending on your container runtime interface, you can check the status of these pods by listing the running containers on the node.

For example, using crictl;

sudo crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
b44d8e7641002       3861cfcd7c04c       23 hours ago        Running             etcd                        0                   ed7b93674545c       etcd-master-02
...
0c0e38d096416       53c535741fb44       25 hours ago        Running             kube-proxy                  2                   607aed66a0eef       kube-proxy-46hnc
0535cad22c9a1       56ce0fd9fb532       27 hours ago        Running             kube-apiserver              1                   e4bdd152fae58       kube-apiserver-master-02
20e4134130f4d       e874818b3caac       27 hours ago        Running             kube-controller-manager     1                   74e6d07084302       kube-controller-manager-master-02
ad611bac53b83       7820c83aa1394       27 hours ago        Running             kube-scheduler              1                   52f0cab4cfeb3       kube-scheduler-master-02

The main Kubernetes control plane components (kube-apiserver, kube-scheduler, etcd, kube-controller-manager) usually run as static pods. In a kubeadm cluster, their YAML manifest files reside under /etc/kubernetes/manifests.

ls -1 /etc/kubernetes/manifests
etcd.yaml
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml

Sample etcd.yaml contents.

sudo cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.122.59:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.122.59:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --experimental-initial-corrupt-check=true
    - --experimental-watch-progress-notify-interval=5s
    - --initial-advertise-peer-urls=https://192.168.122.59:2380
    - --initial-cluster=master-02=https://192.168.122.59:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.122.59:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.122.59:2380
    - --name=master-02
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: registry.k8s.io/etcd:3.5.12-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health?exclude=NOSPACE&serializable=true
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: etcd
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /health?serializable=false
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

The kubelet service may be configured to use a different static pod path. Therefore, to find out the current static pod path, check the value of the staticPodPath parameter in the kubelet configuration file, /var/lib/kubelet/config.yaml.

grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests

Static Pods usually have the hostname of the node on which they are running suffixed to their names.

See example below;

kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS      AGE
...
etcd-master-01                      1/1     Running   1 (27h ago)   3d2h
etcd-master-02                      1/1     Running   0             23h
etcd-master-03                      1/1     Running   0             23h
kube-apiserver-master-01            1/1     Running   1 (27h ago)   3d2h
kube-apiserver-master-02            1/1     Running   1 (27h ago)   3d2h
kube-apiserver-master-03            1/1     Running   1 (27h ago)   3d2h
kube-controller-manager-master-01   1/1     Running   1 (27h ago)   3d2h
kube-controller-manager-master-02   1/1     Running   1 (27h ago)   3d2h
kube-controller-manager-master-03   1/1     Running   1 (27h ago)   3d2h
...
...
kube-scheduler-master-01            1/1     Running   1 (27h ago)   3d2h
kube-scheduler-master-02            1/1     Running   1 (27h ago)   3d2h
kube-scheduler-master-03            1/1     Running   1 (27h ago)   3d2h

For examples of how to create Static Pods, refer to the documentation.
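
As a quick illustration, here is a minimal static pod sketch; the static-web name, nginx image, and port are hypothetical. Saving the file into the kubelet's staticPodPath on a node causes the kubelet to start the pod directly, without any API server involvement;

sudo tee /etc/kubernetes/manifests/static-web.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - name: http
      containerPort: 80
EOF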

Kubernetes Mirror Pods

Mirror Pods are virtual representations of Static Pods. They provide visibility of Static Pods within the Kubernetes API server, making it easy to discover and retrieve information about them using kubectl commands. See the example of listing static pods above.

In essence, Mirror Pods are a byproduct of Static Pods. The kubelet creates a Mirror Pod for every Static Pod manifest it detects. However, Mirror Pods cannot be modified through the API server; any changes made to a Static Pod manifest only affect the actual Static Pod on the node where it is running.
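
A quick way to observe this one-way relationship, using the hypothetical static-web pod from the sketch above running on a node named node-01, is to try deleting its mirror pod through the API server; the kubelet recreates it as long as the manifest file remains on the node;

# Deleting a mirror pod via the API server does not stop the static pod;
# the kubelet recreates the mirror pod almost immediately.
kubectl delete pod static-web-node-01

# The mirror pod reappears shortly afterwards.
kubectl get pod static-web-node-01

# To actually stop the static pod, remove its manifest from the staticPodPath
# directory on the node itself; the kubelet then stops the pod and its mirror
# pod disappears from the API server.
sudo rm /etc/kubernetes/manifests/static-web.yaml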

Kubernetes DaemonSets

A DaemonSet, on the other hand, is not a pod itself, but rather a resource that manages pods. So what is it exactly? A DaemonSet is a Kubernetes resource that ensures that all (or some) nodes run a copy of a specific pod. It automatically adds or removes pods as nodes are added to or removed from the cluster.

Common use cases include;

  • Logging and Monitoring: DaemonSets are perfect for deploying logging agents like Fluentd or monitoring tools like Prometheus Node Exporter on every node in a cluster.
  • Node-Specific Utilities: You can use DaemonSets to deploy utilities specific to each node, such as local volume storage provisioners.

So, what mechanisms do DaemonSets use to ensure that all eligible nodes run a copy of a specific Pod? There are three, combined in the sample manifest after this list;

  • Node Selector: DaemonSets use a node selector to determine which nodes are eligible to run the DaemonSet pod. A node selector is a field in the DaemonSet specification (spec.template.spec.nodeSelector) that specifies a set of key-value pairs. These pairs match labels assigned to nodes. Only nodes that match these labels are considered eligible to run the DaemonSet pod.
  • Node Affinity: In addition to node selectors, DaemonSets can also use node affinity settings (spec.template.spec.affinity.nodeAffinity) to further refine node selection. Node affinity allows DaemonSets to specify more complex rules based on node labels, such as required or preferred nodes for running the DaemonSet pod.
  • Taints and Tolerations: DaemonSets can also utilize Kubernetes’ taints and tolerations mechanism (spec.template.spec.tolerations) to ensure that DaemonSet pods can tolerate specific node conditions (taints). This allows DaemonSets to schedule pods on nodes that have specific taints applied, ensuring flexibility in node selection based on cluster requirements.
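
Putting these together, below is a minimal DaemonSet sketch deploying a hypothetical per-node log agent; the node-log-agent name, labels, and image tag are illustrative;

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      # Only schedule onto Linux nodes, just like kube-proxy below.
      nodeSelector:
        kubernetes.io/os: linux
      # Tolerate the control plane taint so the agent runs there too.
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: agent
        image: fluent/fluentd:v1.16-1
        resources:
          requests:
            cpu: 50m
            memory: 100Mi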

To check available DaemonSets in a Kubernetes cluster, simply run the command;

kubectl get daemonsets --all-namespaces

Sample output;

NAMESPACE       NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-system   calico-node       6         6         6       6            6           kubernetes.io/os=linux   3d2h
calico-system   csi-node-driver   6         6         6       6            6           kubernetes.io/os=linux   3d2h
kube-system     kube-proxy        6         6         6       6            6           kubernetes.io/os=linux   3d2h

To get more details about a DaemonSet;

kubectl describe daemonset kube-proxy -n kube-system
Name:           kube-proxy
Selector:       k8s-app=kube-proxy
Node-Selector:  kubernetes.io/os=linux
Labels:         k8s-app=kube-proxy
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 6
Current Number of Nodes Scheduled: 6
Number of Nodes Scheduled with Up-to-date Pods: 6
Number of Nodes Scheduled with Available Pods: 6
Number of Nodes Misscheduled: 0
Pods Status:  6 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=kube-proxy
  Service Account:  kube-proxy
  Containers:
   kube-proxy:
    Image:      registry.k8s.io/kube-proxy:v1.30.2
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
  Volumes:
   kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
   xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
   lib-modules:
    Type:               HostPath (bare host directory volume)
    Path:               /lib/modules
    HostPathType:       
  Priority Class Name:  system-node-critical
  Node-Selectors:       kubernetes.io/os=linux
  Tolerations:          op=Exists
Events:                 <none>

Read more about DaemonSets on the documentation page.

Conclusion

In summary;

  • DaemonSet: Ensures a specific Pod runs on every node (or a subset) in your cluster. Ideal for deploying ubiquitous services like logging agents or monitoring tools across all nodes.
  • Static Pod: A dedicated pod deployed directly on a specific node, managed by the kubelet without API server involvement.
  • Mirror Pod: A read-only representation of a Static Pod within the Kubernetes API server. Provides visibility of the actual Static Pod running on a specific node, but no management capabilities.
