Setup Kubernetes Cluster on Ubuntu 22.04/20.04


How can you set up a Kubernetes cluster on Ubuntu? In this tutorial, you will learn how to install and set up a Kubernetes cluster on Ubuntu 22.04/Ubuntu 20.04. Kubernetes, according to kubernetes.io, is an open-source, production-grade container orchestration platform. It facilitates automated deployment, scaling and management of containerized applications.

Install Kubernetes Cluster on Ubuntu 22.04/20.04

Kubernetes Cluster Architecture

In this tutorial, we are going to install and set up a Kubernetes cluster with one control plane (master) node and three worker nodes.

A Kubernetes cluster is composed of a master node, which hosts the control plane, and worker nodes, which host the Pods.

Check our guide on a high-level overview of Kubernetes cluster to understand more on this.

Kubernetes Architecture: A High-level Overview of Kubernetes Cluster Components

Below are our node details.

Node      Hostname                    IP Address      vCPUs  RAM (GB)  OS
Master    master.kifarunix-demo.com   192.168.56.161  2      2         Ubuntu 20.04/22.04
Worker 1  wk01.kifarunix-demo.com     192.168.57.62   2      2         Ubuntu 20.04/22.04
Worker 2  wk02.kifarunix-demo.com     192.168.58.53   2      2         Ubuntu 20.04/22.04
Worker 3  wk03.kifarunix-demo.com     192.168.59.48   2      2         Ubuntu 20.04/22.04

Run System Update

To begin with, ensure that your system packages are up-to-date;

apt update

Disable Swap

Running Kubernetes requires that you disable swap.

Check if swap is enabled.

swapon --show
NAME      TYPE SIZE USED PRIO
/swap.img file   2G   0B   -2

If there is no output, then swap is not enabled. If it is enabled as shown in the output above, run the command below to disable it.

swapoff -v /swap.img

Or simply

swapoff -a

To permanently disable swap, comment out or remove the swap line in the /etc/fstab file.

sed -i '/swap/s/^/#/' /etc/fstab

Or simply remove it;

sed -i.bak '/swap.img/d' /etc/fstab

Configure Required Kubernetes Networking

Enable Kernel IP forwarding on Cluster Nodes

In order to permit communication between Pods across different networks, the system should be able to route traffic between them. This can be achieved by enabling IP forwarding. Without IP forwarding, containers won’t be able to communicate with resources outside of their network namespace, which would limit their functionality and utility.

To enable IP forwarding, set the value of net.ipv4.ip_forward to 1.

echo "net.ipv4.ip_forward=1" >>  /etc/sysctl.conf

Apply the changes;

sysctl -p

Load overlay and br_netfilter Kernel Modules on Cluster Nodes

The overlay module provides support for the overlay filesystem. OverlayFS is a type of union filesystem used by container runtimes to layer a container’s root filesystem over the host filesystem.

The br_netfilter module enables packet filtering for traffic crossing Linux bridges, based on criteria such as source and destination IP address, port numbers, and protocol type.

Check if these modules are enabled/loaded;

lsmod | grep -E "overlay|br_netfilter"
br_netfilter           32768  0
bridge                307200  1 br_netfilter
overlay               151552  9

If not loaded, just load them as follows;

echo 'overlay
br_netfilter' > /etc/modules-load.d/kubernetes.conf
modprobe overlay
modprobe br_netfilter

Similarly, enable the Linux kernel’s bridge netfilter to pass bridged traffic to iptables for filtering. This means that packets bridged between network interfaces can be filtered using iptables/ip6tables, just as if they were routed packets.

tee -a /etc/sysctl.conf << 'EOL'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOL

Apply the changes;

sysctl -p
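You can verify that all three kernel parameters are now set; each should report a value of 1.

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1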

Install Container Runtime on Ubuntu 22.04/Ubuntu 20.04

Kubernetes uses a container runtime to run containers in Pods. It supports multiple container runtimes, including Docker Engine, containerd, CRI-O and Mirantis Container Runtime.

Install Containerd Runtime on all Cluster Nodes

In this demo, we will use the containerd runtime. Therefore, you need to install containerd on all nodes, master and workers.

You can install containerd using the official binaries or from the Docker Engine APT repos. We will use the latter in this guide, thus;

apt install apt-transport-https ca-certificates curl \
gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
gpg --dearmor > /etc/apt/trusted.gpg.d/docker.gpg
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -sc) stable" > \
/etc/apt/sources.list.d/docker-ce.list
apt update

Install containerd;

apt install -y containerd.io

The kubelet automatically detects the container runtime present on the node and uses it to run the containers.

Configure Cgroup Driver for ContainerD

Cgroup (control groups) is a Linux kernel feature that allows for the isolation, prioritization, and monitoring of system resources like CPU, memory, and disk I/O for a group of processes. Kubernetes (the kubelet and the container runtime, such as containerd) uses a cgroup driver to interface with control groups in order to manage and set limits on the resources allocated to containers.

Kubernetes supports two cgroup drivers;

  • cgroupfs (control groups filesystem): The cgroup driver the kubelet defaults to when none is configured. It interfaces directly with the cgroup filesystem to manage resources for containers.
  • systemd: This driver delegates cgroup management to systemd, the default init system and service manager on most modern Linux distributions, which itself keeps track of processes using Linux cgroups.

For systems that use systemd as their default init system, it is recommended to use the systemd cgroup driver for Kubernetes instead of cgroupfs.

The default configuration file for containerd is /etc/containerd/config.toml. When containerd is installed from the Docker APT repos, this file is created with only minimal configuration. If installed from the official binaries, the containerd configuration file is not created at all.

Either way, regenerate a full default containerd configuration file by executing the command below (the generated configuration is shown below for reference);

containerd config default > /etc/containerd/config.toml

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.k8s.io/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0

Once you have generated the default config, you need to enable the systemd cgroup driver for runc, containerd's low-level container runtime, by changing the value of SystemdCgroup from false to true.

sed -i '/SystemdCgroup/s/false/true/' /etc/containerd/config.toml
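You can confirm that the change took effect;

grep SystemdCgroup /etc/containerd/config.toml
            SystemdCgroup = true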

Restart containerd so that it picks up the updated configuration, and enable it to run on system boot;

systemctl enable containerd
systemctl restart containerd

Confirm the status;

systemctl status containerd

● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2023-04-29 19:13:46 UTC; 1s ago
       Docs: https://containerd.io
    Process: 4843 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 4844 (containerd)
      Tasks: 9
     Memory: 12.1M
        CPU: 73ms
     CGroup: /system.slice/containerd.service
             └─4844 /usr/bin/containerd

Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.862483393Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.862616586Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.862780610Z" level=info msg="containerd successfully booted in 0.022699s"
Apr 29 19:13:46 master.kifarunix-demo.com systemd[1]: Started containerd container runtime.
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.865481875Z" level=info msg="Start subscribing containerd event"
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.865684317Z" level=info msg="Start recovering state"
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.865869390Z" level=info msg="Start event monitor"
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.865968972Z" level=info msg="Start snapshots syncer"
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.866048742Z" level=info msg="Start cni network conf syncer for default"
Apr 29 19:13:46 master.kifarunix-demo.com containerd[4844]: time="2023-04-29T19:13:46.866153415Z" level=info msg="Start streaming server"

Install Kubernetes on Ubuntu 22.04/Ubuntu 20.04

There are a number of node components, required to provide the Kubernetes runtime environment, that need to be installed on each node. These include:

  • kubelet: runs as an agent on each worker node and ensures that containers are running in a Pod.
  • kubeadm: Bootstraps Kubernetes cluster
  • kubectl: Used to run commands against Kubernetes clusters. 

These components are not available in the default Ubuntu repos. Thus, you need to add the Kubernetes repository in order to install them.

Install Kubernetes Repository GPG Signing Key

Run the command below to install Kubernetes repo GPG key.

apt install gnupg2 -y
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
gpg --dearmor > /etc/apt/trusted.gpg.d/k8s.gpg

Install Kubernetes Repository on Ubuntu 22.04/Ubuntu 20.04

Next, add the Kubernetes repository;

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

Install Kubernetes components on all the nodes

apt update
apt install kubelet kubeadm kubectl -y

You can hold the packages to avoid automatic updates and maintain the same cluster version.

apt-mark hold kubelet kubeadm kubectl
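You can confirm the installed versions and that the packages are on hold;

kubeadm version -o short
kubelet --version
kubectl version --client
apt-mark showhold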

Initialize Kubernetes Cluster on Control Plane using Kubeadm

After the installation of the container runtime as well as the Kubernetes components, it is time to initialize the Kubernetes cluster on the master node. The Kubernetes master is responsible for maintaining the desired state of your cluster.

While bootstrapping a Kubernetes cluster, there are quite a number of options/arguments that you can pass to the kubeadm init command;

kubeadm init <args>

Some of the common arguments/options include;

  • --apiserver-advertise-address: Defines the IP address the API server will advertise it is listening on. If not set, the default network interface is used. An example usage is --apiserver-advertise-address=192.168.56.10.
  • --pod-network-cidr: Specifies the range of IP addresses for the Pod network. If set, the control plane will automatically allocate CIDRs for every node. Use this to define your preferred Pod network range if your network plugin's default range would collide with some of your host networks, e.g. --pod-network-cidr=10.100.0.0/16.
  • --control-plane-endpoint: Specifies the hostname and port that the API server will listen on. This is recommended over the use of --apiserver-advertise-address because it lets you define a shared endpoint, such as a load-balanced DNS name or IP address, which can be reused when you upgrade a single control plane node to a highly available cluster. For example, --control-plane-endpoint=cluster.kifarunix-demo.com:6443.

Since we are running a single control plane node in this guide, with no plans to upgrade to a highly available cluster, we will specify just the IP address of the control plane while bootstrapping our cluster.

Thus, run the command below on the master node to bootstrap the Kubernetes control-plane node.

kubeadm init --apiserver-advertise-address=192.168.56.161 --pod-network-cidr=10.100.0.0/16

The command will start by pre-pulling (kubeadm config images pull) the required container images for a Kubernetes cluster before initializing the cluster.
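If you prefer, you can run that image pull phase separately before initializing the cluster, for example when working over a slow internet connection;

kubeadm config images pull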

Once the initialization is done, you should be able to see an output similar to the one below;


[init] Using Kubernetes version: v1.27.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0429 20:11:35.029953    8238 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
W0429 20:12:54.458889    8238 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.kifarunix-demo.com] and IPs [10.96.0.1 192.168.56.161]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master.kifarunix-demo.com] and IPs [192.168.56.161 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master.kifarunix-demo.com] and IPs [192.168.56.161 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0429 20:14:48.698969    8238 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.002695 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master.kifarunix-demo.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master.kifarunix-demo.com as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: y9yrgx.v3pls4s7atyazot6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.161:6443 --token y9yrgx.v3pls4s7atyazot6 \
	--discovery-token-ca-cert-hash sha256:a407dfc21c579766d188dfc6a800df0a6a7f538aed9c1fd516a2eafbf0afa5a5

As suggested in the output above, you need to run the commands below on the master node to start using your cluster.

Be sure to run the commands as a regular user (recommended) with sudo rights.

Thus, if you are root, switch to a regular user (kifarunix is our regular user; yours may be different).

su - kifarunix

Next, create a Kubernetes cluster directory.

mkdir -p $HOME/.kube

Copy Kubernetes admin configuration file to the cluster directory created above.

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Set the proper ownership for the cluster configuration file.

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the status of the Kubernetes cluster;

kubectl get nodes
NAME                        STATUS     ROLES           AGE     VERSION
master.kifarunix-demo.com   NotReady   control-plane   4m20s   v1.27.1

As you can see, the cluster is not ready yet; the control plane node remains NotReady until a Pod network add-on is deployed.

You can also get the address of the control plane and cluster services;

kubectl cluster-info
Kubernetes control plane is running at https://192.168.56.161:6443
CoreDNS is running at https://192.168.56.161:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Install Pod Network Addon on Master Node

A Pod is a group of one or more related containers in a Kubernetes cluster; they share the same lifecycle, storage and network. For Pods to communicate with one another, you must deploy a Container Network Interface (CNI) based Pod network add-on.

There are multiple Pod network addons that you can choose from. Refer to Addons page for more information.

To deploy a CNI Pod network, run the command below on the master node;

kubectl apply -f [podnetwork].yaml

Where [podnetwork].yaml is the path to your preferred CNI manifest. In this demo, we will use the Calico network plugin.

Install the Calico Pod network add-on operator by running the command below, using the same Calico release (v3.25.1 in this demo) as the custom resources applied later. Execute the command as the user with which you created the Kubernetes cluster.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml

namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

Next, download the custom resources necessary to configure Calico. The default Pod network for the Calico plugin is 192.168.0.0/16. Since we used a custom Pod CIDR above (10.100.0.0/16), download the custom resources file and modify the network to match your custom range.

wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml

By default, the network section of the custom resources file looks like this;

    - blockSize: 26
      cidr: 192.168.0.0/16

Update the network subnet to match your subnet.

sed -i 's/192.168/10.100/' custom-resources.yaml
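Confirm that the CIDR now matches the Pod network used during cluster bootstrapping;

grep cidr custom-resources.yaml
      cidr: 10.100.0.0/16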

Apply the changes

kubectl create -f custom-resources.yaml

Sample output;

installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
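It may take a few minutes for the Calico images to be pulled and the Pods to start. You can watch their progress with;

watch kubectl get pods -n calico-system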

Get Running Pods in the Kubernetes cluster

Once the command completes, you can list the Pods in the namespaces by running the command below;

kubectl get pods --all-namespaces

NAMESPACE         NAME                                                READY   STATUS              RESTARTS       AGE
calico-system     calico-kube-controllers-789dc4c76b-f982l            0/1     ContainerCreating   0              3m47s
calico-system     calico-node-n226l                                   1/1     Running             0              3m47s
calico-system     calico-typha-7cf6b85898-bb8kd                       1/1     Running             0              3m47s
calico-system     csi-node-driver-srgkh                               0/2     ContainerCreating   0              3m47s
kube-system       coredns-5d78c9869d-4gbtm                            0/1     ContainerCreating   0              20m
kube-system       coredns-5d78c9869d-6jp5n                            0/1     ContainerCreating   0              20m
kube-system       etcd-master.kifarunix-demo.com                      1/1     Running             0              20m
kube-system       kube-apiserver-master.kifarunix-demo.com            1/1     Running             0              20m
kube-system       kube-controller-manager-master.kifarunix-demo.com   1/1     Running             1 (102s ago)   20m
kube-system       kube-proxy-plpr7                                    1/1     Running             0              20m
kube-system       kube-scheduler-master.kifarunix-demo.com            1/1     Running             1 (102s ago)   20m
tigera-operator   tigera-operator-549d4f9bdb-24f7s                    0/1     Evicted             0              45s

You can list Pods on specific namespaces;

kubectl get pods -n calico-system

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-789dc4c76b-f982l   1/1     Running   0          5m38s
calico-node-n226l                          1/1     Running   0          5m38s
calico-typha-7cf6b85898-bb8kd              1/1     Running   0          5m38s
csi-node-driver-srgkh                      2/2     Running   0          5m38s

As can be seen, all the Pods in the calico-system namespace are now running.

Open Kubernetes Cluster Ports on Firewall

If a firewall is running on the nodes, then there are some ports that need to be opened;

Control Plane ports;

Protocol  Direction  Port Range  Purpose                  Used By
TCP       Inbound    6443        Kubernetes API server    All
TCP       Inbound    2379-2380   etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250       Kubelet API              Self, Control plane
TCP       Inbound    10259       kube-scheduler           Self
TCP       Inbound    10257       kube-controller-manager  Self

So the ports that should be open and accessible from outside the node are:

  • 6443 – Kubernetes API Server (secure port)
  • 2379-2380 – etcd server client API
  • 10250 – Kubelet API
  • 10259 – kube-scheduler
  • 10257 – kube-controller-manager

In my setup, I am using UFW. Hence, you only need to open the ports below on Master/Control Plane;

for i in 6443 2379:2380 10250 10257 10259; do ufw allow from any to any port $i proto tcp; done

You can restrict access to the API from specific networks/IPs.
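For example, assuming your administration hosts sit on the 192.168.56.0/24 network used in this demo (adjust the subnet to your environment), you could allow the API server port from that network only;

ufw allow from 192.168.56.0/24 to any port 6443 proto tcp comment "Kubernetes API from admin network"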

Worker Nodes;

Protocol  Direction  Port Range   Purpose            Used By
TCP       Inbound    10250        Kubelet API        Self, Control plane
TCP       Inbound    30000-32767  NodePort Services  All

On each Worker node, open the Kubelet API port and, if you will be exposing applications via NodePort services, the NodePort range;

ufw allow from any to any port 10250 proto tcp comment "Open Kubelet API port"
ufw allow from any to any port 30000:32767 proto tcp comment "Open NodePort Services range"

You can restrict access to the Kubelet API from specific networks/IPs.

Add Worker Nodes to Kubernetes Cluster

You can now add the Worker nodes to the Kubernetes cluster using the kubeadm join command as follows.

Before that, ensure that the container runtime is installed, configured and running on each worker node.

Once you have confirmed that, get the cluster join command that was printed during cluster bootstrapping and execute it on each worker node.

Note that this command is displayed after initializing the control plane above and should be executed as the root user.

kubeadm join 192.168.56.161:6443 --token y9yrgx.v3pls4s7atyazot6 \
	--discovery-token-ca-cert-hash sha256:a407dfc21c579766d188dfc6a800df0a6a7f538aed9c1fd516a2eafbf0afa5a5

If you didn't save the Kubernetes cluster join command, you can print it at any time by running the command below on the master/control plane node;

kubeadm token create --print-join-command

Once the command runs, you will get an output similar to below;


[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the Kubernetes control plane (the master node, as the regular user with which you created the cluster), run the command below to verify that the nodes have joined the cluster.

kubectl get nodes
NAME                        STATUS   ROLES           AGE     VERSION
master.kifarunix-demo.com   Ready    control-plane   56m     v1.27.1
wk01.kifarunix-demo.com     Ready    <none>          15m     v1.27.1
wk02.kifarunix-demo.com     Ready    <none>          13m     v1.27.1
wk03.kifarunix-demo.com     Ready    <none>          7m36s   v1.27.1

There are different node statuses;

  1. NotReady: The node has been added to the cluster but is not yet ready to accept workloads.
  2. SchedulingDisabled: The node is not able to receive new workloads because it is marked as unschedulable.
  3. Ready: The node is ready to accept workloads.
  4. OutOfDisk: Indicates that the node is running out of disk space.
  5. MemoryPressure: Indicates that the node is running out of memory.
  6. PIDPressure: Indicates that there are too many processes on the node.
  7. DiskPressure: Indicates that the node is running out of disk space.
  8. NetworkUnavailable: Indicates that the node is not reachable via the network.
  9. Unschedulable: Indicates that the node is not schedulable for new workloads.
  10. ConditionUnknown: Indicates that the node status is unknown due to an error.

The role of the Worker nodes may show up as <none>. This is okay; no role label is assigned to worker nodes by default, and the ROLES column is simply derived from the node-role.kubernetes.io/* labels on each node.

You can, however, set the worker role yourself using the command below;

kubectl label node <worker-node-name> node-role.kubernetes.io/worker=true
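For example, to label the first worker node in this demo and confirm the role afterwards;

kubectl label node wk01.kifarunix-demo.com node-role.kubernetes.io/worker=true
kubectl get nodes wk01.kifarunix-demo.com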

As you can see, we now have a cluster. Run the command below to get cluster information.

kubectl cluster-info
Kubernetes control plane is running at https://192.168.56.161:6443
CoreDNS is running at https://192.168.56.161:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You are now ready to deploy an application on the Kubernetes cluster.
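As a quick smoke test (the deployment name nginx-demo below is just an example, not part of this guide's setup), you could deploy a sample Nginx application, expose it via a NodePort service and check that the Pod lands on one of the worker nodes;

kubectl create deployment nginx-demo --image=nginx
kubectl expose deployment nginx-demo --port=80 --type=NodePort
kubectl get pods,svc -o wide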

Remove Worker Nodes from Cluster

You can gracefully remove a node from Kubernetes cluster as described in the guide below;

Gracefully Remove Worker Node from Kubernetes Cluster

Further Reading

Getting Started with Kubernetes

