Sunday, May 10, 2026

Installing a minimal K0s Kubernetes cluster

I need to get some hands-on experience with Kubernetes (aka K8s). Unfortunately, it has also arrived in my kingdom :-) I have decided to create a minimal K8s cluster: a 3-node cluster hosting the control plane as well as the data plane (aka workers). This minimal consolidated K8s architecture is depicted in the drawing below.

K0s 3-node consolidated Kubernetes cluster
I have VMware Workstation, so I created three Ubuntu servers (minimal installation) within a host-only network, plus one FreeBSD server working as the internet gateway and, eventually, the ingress load balancer.

The three Ubuntu servers dedicated to K0s have the following addresses:

  • 192.168.0.1/24 (k0s1.example.com) - this is my Control Node and Worker Node #1
  • 192.168.0.2/24 (k0s2.example.com) - this is my Control Node and Worker Node #2
  • 192.168.0.3/24 (k0s3.example.com) - this is my Control Node and Worker Node #3

There are two other supporting servers. 

  • 192.168.0.253/24 (nas.example.com) - this is my Network Attached Storage for persistent volumes 
  • 192.168.0.254/24 (r1.example.com) - this is my Network Gateway (aka default router)

First K0s node installation (k0s1) 

K0s is distributed as a single binary, so the installation is very easy. Run the k0s download script to download the latest stable version of k0s and make it executable in /usr/local/bin.

curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh

 dpasek@k0s1:~$ curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh  
 Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.35.3+k0s.0/k0s-v1.35.3+k0s.0-arm64  
 k0s is now executable in /usr/local/bin  
 You can use it to complete the installation of k0s on this node,   
 see https://docs.k0sproject.io/stable/install/ for more information.  
 dpasek@k0s1:~$   

Now install the controller. The --enable-worker flag makes this node run workloads as well, and --no-taints allows regular pods to be scheduled on it ...

sudo k0s install controller --enable-worker --no-taints -e ETCD_UNSUPPORTED_ARCH=arm64
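
The install step only creates the k0scontroller systemd service; it does not start it yet. A quick sanity check that the unit exists (standard systemd tooling; the service is expected to be inactive at this point):

systemctl status k0scontroller --no-pager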

Now we can start k0s as a service ...

sudo k0s start

We can also check the K0s status on a single node (k0s1) ...

 dpasek@k0s1:~$ sudo k0s status  
 Version: v1.35.3+k0s.0  
 Process ID: 10234  
 Role: controller  
 Workloads: true  
 SingleNode: false  
 Kube-api probing successful: true  
 Kube-api probing last error:   
 dpasek@k0s1:~$   

That's it for the first K0s node.

Installation of Kubectl

Kubectl has a few prerequisites we have to install, and it comes from its own repository, so we must do a few steps before the kubectl installation.

Make keyrings directory

sudo mkdir -p /etc/apt/keyrings 

Add Kubernetes signing key

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add repository

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list 

Update software packages 

sudo apt update 
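
After the update, you can verify that the kubectl package is now visible from the new repository (a quick optional check; the candidate version shown will depend on the repository state):

apt-cache policy kubectl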
 
Upgrade software packages

sudo apt upgrade -y 

Install the kubectl binary tool

sudo apt install -y kubectl 
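
A quick sanity check that the standalone kubectl client is installed (the output format varies slightly between kubectl versions):

kubectl version --client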

And now we can get the node status (in the listing below I still use the kubectl embedded in k0s) ...

 dpasek@k0s1:~/.ssh$ sudo k0s kubectl get nodes  
 NAME  STATUS  ROLES      AGE   VERSION  
 k0s1  Ready  control-plane  3h17m  v1.35.3+k0s  
 dpasek@k0s1:~/.ssh$   

We see just a single node, which is expected, as the other nodes will be deployed and joined into the cluster later.

I would recommend installing the kubectl binary on all cluster controller nodes to improve the availability of cluster manageability.

K0s Config File

Let's continue and create the K0s config file.

sudo sh -c 'k0s config create > /etc/k0s/k0s.yaml'
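
If you are curious what was generated, you can peek at the file. Roughly speaking (the exact contents vary by k0s version), it is a ClusterConfig manifest with spec sections such as api, network, and storage:

sudo head -n 20 /etc/k0s/k0s.yaml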

K0s Join Tokens

Prepare a token for joining additional controller nodes into the cluster.

sudo sh -c 'k0s token create --role=controller > /etc/k0s/controller.token' 

Prepare a token for joining additional worker nodes into the cluster.

sudo sh -c 'k0s token create --role=worker > /etc/k0s/worker.token'
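
In this consolidated setup the worker token is not strictly needed, because every node joins as a controller with --enable-worker. Just as a sketch, a dedicated worker-only node would join with the same token-file pattern:

# Hypothetical worker-only join; "k0s install worker" creates the k0sworker service
sudo k0s install worker --token-file /tmp/worker.token
sudo systemctl start k0sworker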

Note: Restricting read/write privileges on the token files to the root user only is a security best practice. I will not do it here, as this is just a lab environment.
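
For completeness, the hardening mentioned above would be a simple chmod:

sudo chmod 600 /etc/k0s/controller.token /etc/k0s/worker.token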

Second K0s node installation (k0s2)

Now, let's install the second K0s node on an already existing Ubuntu server.

curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh 

 dpasek@k0s2:~$ curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh  
 [sudo: authenticate] Password:       
 Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.35.3+k0s.0/k0s-v1.35.3+k0s.0-arm64  
 k0s is now executable in /usr/local/bin  
 You can use it to complete the installation of k0s on this node,   
 see https://docs.k0sproject.io/stable/install/ for more information.  
 dpasek@k0s2:~$   

I used secure copy (scp *.token k0s2:/tmp) to copy the tokens from k0s1 to the new cluster node. Below is a listing from my environment where I copy both tokens from k0s1 into the /tmp folder on k0s2.

 dpasek@k0s1:/etc/k0s$ ls -la  
 total 28  
 drwxr-xr-x  3 root root 4096 May 10 08:34 .  
 drwxr-xr-x 111 root root 4096 May 9 20:22 ..  
 drwxr-xr-x  2 root root 4096 May 9 17:11 containerd.d  
 -rw-r--r--  1 root root 362 May 10 08:10 containerd.toml  
 -rw-rw-rw-  1 root root 1737 May 10 08:33 controller.token  
 -rw-------  1 root root 1541 May 10 06:41 k0s.yaml  
 -rw-rw-rw-  1 root root 1733 May 10 08:34 worker.token  
 dpasek@k0s1:/etc/k0s$ scp *.token k0s2:/tmp  
 dpasek@k0s2's password:   
 controller.token               100% 1737   4.4MB/s  00:00    
 worker.token                   100% 1733   6.3MB/s  00:00    
 dpasek@k0s1:/etc/k0s$   

Install the k0s2 node as a controller and worker together ...

sudo k0s install controller --token-file /tmp/controller.token --enable-worker 

Start the k0scontroller service ...

sudo systemctl start k0scontroller
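
If the node does not join within a minute or so, the service logs are the first place to look (standard systemd journal):

sudo journalctl -u k0scontroller -f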

And now you can check K0s status on node k0s2 ...

 dpasek@k0s2:~$ sudo k0s status  
 Version: v1.35.3+k0s.0  
 Process ID: 4760  
 Role: controller  
 Workloads: true  
 SingleNode: false  
 Kube-api probing successful: true  
 Kube-api probing last error:   
 dpasek@k0s2:~$   

We can see it works as a controller with workloads enabled.

We can also check the Kubernetes cluster status ...

 dpasek@k0s1:/etc/k0s$ sudo k0s kubectl get nodes -o wide  
 NAME  STATUS  ROLES          AGE    VERSION      INTERNAL-IP  EXTERNAL-IP  OS-IMAGE           KERNEL-VERSION    CONTAINER-RUNTIME  
 k0s1  Ready   control-plane  15h    v1.35.3+k0s  192.168.0.1  <none>       Ubuntu 26.04 LTS   7.0.0-15-generic  containerd://1.7.30  
 k0s2  Ready   <none>         5m28s  v1.35.3+k0s  192.168.0.2  <none>       Ubuntu 26.04 LTS   7.0.0-15-generic  containerd://1.7.30  
 dpasek@k0s1:/etc/k0s$   

All is good, except that k0s2 is displayed with the role <none> instead of control-plane. This is just a cosmetic issue, which can be solved by labeling.

On the k0s2 node, use the command ...

sudo k0s kubectl label node k0s2 node-role.kubernetes.io/control-plane=

... and the cosmetic issue should be fixed.

Below is the new listing ...

 dpasek@k0s1:/etc/k0s$ sudo k0s kubectl get nodes -o wide  
 NAME  STATUS  ROLES          AGE  VERSION      INTERNAL-IP  EXTERNAL-IP  OS-IMAGE          KERNEL-VERSION    CONTAINER-RUNTIME  
 k0s1  Ready   control-plane  16h  v1.35.3+k0s  192.168.0.1  <none>       Ubuntu 26.04 LTS  7.0.0-15-generic  containerd://1.7.30  
 k0s2  Ready   control-plane  37m  v1.35.3+k0s  192.168.0.2  <none>       Ubuntu 26.04 LTS  7.0.0-15-generic  containerd://1.7.30  
 dpasek@k0s1:/etc/k0s$  

Another benefit is the possibility to control the K0s cluster also from the second cluster node (k0s2), as is visible in the listing below.

 dpasek@k0s2:~$ sudo k0s kubectl get nodes -o wide  
 NAME  STATUS  ROLES         AGE  VERSION      INTERNAL-IP  EXTERNAL-IP  OS-IMAGE          KERNEL-VERSION    CONTAINER-RUNTIME  
 k0s1  Ready  control-plane  16h  v1.35.3+k0s  192.168.0.1  <none>       Ubuntu 26.04 LTS  7.0.0-15-generic  containerd://1.7.30  
 k0s2  Ready  control-plane  40m  v1.35.3+k0s  192.168.0.2  <none>       Ubuntu 26.04 LTS  7.0.0-15-generic  containerd://1.7.30  
 dpasek@k0s2:~$   

That's it for the second Kubernetes node (k0s2), and we can do the same for the third node.

Third K0s node installation (k0s3)

Now, let's install the last (third) K0s node and join it into the 3-node consolidated Kubernetes cluster. The procedure is exactly the same as on the second cluster node (k0s2); a condensed recap is below.
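
For reference, the condensed k0s3 steps (identical to k0s2; the controller token is again copied from k0s1 to /tmp first):

curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh
sudo k0s install controller --token-file /tmp/controller.token --enable-worker
sudo systemctl start k0scontroller
sudo k0s kubectl label node k0s3 node-role.kubernetes.io/control-plane=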

Using kubectl Without sudo in K0s Kubernetes

By default, many administrators initially use the following command to manage a K0s Kubernetes cluster:

sudo k0s kubectl get nodes -o wide

This works because K0s internally uses its own administrative kubeconfig stored under:

/var/lib/k0s/pki/admin.conf

However, using sudo for every Kubernetes operation is not very convenient.
A better approach is to configure standard user access via the default Kubernetes kubeconfig location.

Create the ~/.kube Directory

mkdir -p ~/.kube

Export the K0s Admin kubeconfig

sudo k0s kubeconfig admin > ~/.kube/config

Set Correct File Permissions

chmod 600 ~/.kube/config

Verify kubectl Access

You can now use kubectl directly without sudo:

kubectl get nodes -o wide

Example output:

 dpasek@k0s3:~$ kubectl get nodes -o wide  
 NAME  STATUS ROLES          AGE    VERSION      INTERNAL-IP  EXTERNAL-IP  OS-IMAGE          KERNEL-VERSION    CONTAINER-RUNTIME  
 k0s1  Ready  control-plane  20h    v1.35.3+k0s  192.168.0.1  <none>       Ubuntu 26.04 LTS  7.0.0-15-generic  containerd://1.7.30  
 k0s2  Ready  control-plane  4h44m  v1.35.3+k0s  192.168.0.2  <none>       Ubuntu 26.04 LTS  7.0.0-15-generic  containerd://1.7.30  
 k0s3  Ready  control-plane  9m55s  v1.35.3+k0s  192.168.0.3  <none>       Ubuntu 26.04 LTS  7.0.0-15-generic  containerd://1.7.30   
 dpasek@k0s3:~$  

How It Works

The standard kubectl binary automatically looks for kubeconfig in:

~/.kube/config

By exporting the K0s administrative kubeconfig into this location, the current Linux user gains direct access to the Kubernetes API server without requiring elevated privileges.
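
If you prefer to keep the kubeconfig elsewhere, kubectl also honors the standard KUBECONFIG environment variable, for example:

export KUBECONFIG=$HOME/.kube/config
kubectl get nodes -o wide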

Recommendation

In small lab or homelab environments, it is practical to configure this on all controller nodes so the Kubernetes cluster can be managed from any control-plane node. This is especially useful in minimal multi-controller K0s deployments such as k0s1.example.com, k0s2.example.com, and k0s3.example.com, where all nodes act as both Kubernetes control-plane nodes and Kubernetes worker nodes.
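
A small loop over SSH can repeat the kubeconfig export on every controller node (just a sketch; it assumes the nodes allow non-interactive sudo):

# Assumes passwordless sudo on the target nodes; otherwise run the commands on each node manually
for h in k0s1 k0s2 k0s3; do
  ssh "$h" 'mkdir -p ~/.kube && sudo k0s kubeconfig admin > ~/.kube/config && chmod 600 ~/.kube/config'
done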
