Deploy K8s Cluster on Hetzner Using Antrea and CRI-O

Abdeldjallil HADJI
4 min read · Sep 25, 2021


In this tutorial we will go through setting up a Kubernetes cluster on Hetzner Cloud servers. The resulting cluster will not be complete (it won't be able to expose an Ingress or a LoadBalancer to serve websites on a public IPv4 address), but it will still support creating persistent volumes backed by Hetzner Block Storage Volumes.

I had the following resources to depend on when writing this post:

- Christian Beneke: https://community.hetzner.com/tutorials/install-kubernetes-cluster

- https://github.com/bkalem/kubernetes/tree/master/k8S_CRI-O_Antrea

Step 1 — Install the `hcloud` CLI

Since I'm on a Mac, I used the following command to install `hcloud`:

`brew install hcloud`

For other operating systems, check the following link: https://github.com/hetznercloud/cli

Configure `hcloud` using environment variables:

You can use the following environment variables to configure `hcloud`:

- `HCLOUD_TOKEN`
- `HCLOUD_CONTEXT`
- `HCLOUD_CONFIG`

You can either set `HCLOUD_TOKEN` using `export HCLOUD_TOKEN=$token`, or create a context using `hcloud context create $context_name`; a prompt will appear asking you to enter the token.

Tokens are created under API TOKENS in the Security section of your Hetzner Cloud project.
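Scripts that wrap `hcloud` can fail with confusing errors when the token is missing. A minimal fail-fast guard (a generic shell pattern, not part of `hcloud` itself — the function name and message are my own) aborts early with a clear message:

```shell
# Abort with a clear message if HCLOUD_TOKEN is unset, before any
# hcloud command runs (":" with ${VAR:?msg} exits the shell if unset).
require_token() {
  : "${HCLOUD_TOKEN:?set HCLOUD_TOKEN (or create a context) before calling hcloud}"
}

HCLOUD_TOKEN="dummy-token-for-illustration"
require_token && echo "token present"
```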

Use `hcloud server list` to list the servers you have on your account.

Step 2 — Create resources (network and nodes)

We first need to create a private network for the nodes using the following commands:

```
hcloud network create --name kubernetes-learning --ip-range 10.96.0.0/16
hcloud network add-subnet kubernetes-learning --network-zone eu-central --type server --ip-range 10.96.1.0/24
```

P.S.: you can use any private range you want.
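If you do pick your own ranges, the subnet passed to `hcloud network add-subnet` must sit inside the network's `--ip-range`. A quick sanity check in pure POSIX shell arithmetic (a sketch, not part of the tutorial's tooling):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# cidr_contains OUTER_CIDR INNER_IP -> succeeds if INNER_IP is inside OUTER_CIDR
cidr_contains() {
  net=${1%/*}; bits=${1#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$net") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

cidr_contains 10.96.0.0/16 10.96.1.0 && echo "10.96.1.0/24 fits inside 10.96.0.0/16"
```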

Next we will create some nodes (a master and two worker nodes). I will be using a CX21 server for the master node and CX11 servers for the workers:

```
hcloud server create --type cx21 --name k8s-learning-master-1 --image ubuntu-20.04 --ssh-key <ssh_key_id> --network <network_id>
hcloud server create --type cx11 --name k8s-learning-worker-1 --image ubuntu-20.04 --ssh-key <ssh_key_id> --network <network_id>
hcloud server create --type cx11 --name k8s-learning-worker-2 --image ubuntu-20.04 --ssh-key <ssh_key_id> --network <network_id>
```

You can grab your `ssh_key_id` and `network_id` using the following commands respectively:

```
hcloud ssh-key list
hcloud network list
```

You will find the `ids` to use.

Now you can SSH into a server through `hcloud server ssh $server_id` (you can also use the server's name).

Use the following commands to update the servers after creation (optional):

```
apt-get update
apt-get dist-upgrade
reboot
```

Step 3 — Install CRI-O and Kubernetes on each node

Install CRI-O:

As mentioned before, we will be using CRI-O as the container runtime (make sure to execute the commands as root):

Execute the following script on all nodes.

```
echo "[TASK1] overlay netfilter ########"
# Create the .conf file to load the modules at bootup
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

echo "[TASK2] sysctl bridge ########"
# Set up required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

echo "[TASK3] install CRI-O ########"
apt-get install -y curl jq tar
wget -O /tmp/crio-get-script.sh https://raw.githubusercontent.com/cri-o/cri-o/master/scripts/get
chmod +x /tmp/crio-get-script.sh
bash /tmp/crio-get-script.sh

echo "[TASK4] start CRI-O ########"
systemctl daemon-reload
systemctl enable --now crio
```

Install Kubernetes:

Execute the following script on all nodes.

```
echo "[TASK1] sysctl bridge ########"
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

echo "[TASK2] install Kubernetes ########"
# (Assumes the Kubernetes apt repository has already been configured on the node.)
apt-get install kubeadm kubelet kubectl -y
mkdir -p /usr/lib/systemd/system/kubelet.service.d
cat <<EOF >> /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--feature-gates='AllAlpha=false,RunAsGroup=true' --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
EOF

echo "[TASK3] use CRI-O as the CRI ########"
export CONTAINER_RUNTIME_ENDPOINT='/var/run/crio/crio.sock'
export CGROUP_DRIVER=systemd

echo "[TASK4] start kubelet ########"
systemctl daemon-reload
systemctl enable --now kubelet
```

Install Open vSwitch to enable pod communication through the overlay network:

Execute the following script on all nodes.

```
apt-get install -y openvswitch-switch
systemctl enable --now openvswitch-switch
```

Now that almost all the needed components are installed, let's initialize our cluster.

Execute the following command only on the master node.

```
kubeadm init --cri-socket=/var/run/crio/crio.sock --pod-network-cidr=10.20.0.0/16 --ignore-preflight-errors=NumCPU
```

The `--cri-socket` argument forces kubeadm to use `crio.sock`, avoiding the `Found multiple CRI sockets` error if any other CRI sockets are detected. The `--ignore-preflight-errors=NumCPU` flag avoids `[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2` when you have only 1 CPU on your master server.
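You can see in advance whether the NumCPU preflight check would trip on a given server. A sketch mirroring kubeadm's documented 2-CPU requirement (this is my own check, not kubeadm's code):

```shell
# Count online CPUs and compare against kubeadm's 2-CPU minimum.
cpus=$(getconf _NPROCESSORS_ONLN)
if [ "$cpus" -lt 2 ]; then
  echo "kubeadm would report [ERROR NumCPU] here ($cpus CPU online)"
else
  echo "NumCPU preflight would pass ($cpus CPUs online)"
fi
```

On a CX11 (1 vCPU) the first branch fires, which is exactly why the flag is needed for single-vCPU masters.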

When the initialization is complete, make sure to save the output command that the workers will use to join the cluster. It has the following format:

```
kubeadm join $IP:6443 --token $token \
  --discovery-token-ca-cert-hash $sha256_hash
```
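The `$sha256_hash` part is simply a SHA-256 digest of the cluster CA's public key in DER form; the pipeline below is the one documented for kubeadm. On the master the input would be `/etc/kubernetes/pki/ca.crt` — here we generate a throwaway certificate so the example is self-contained:

```shell
# Generate a disposable CA cert (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it with SHA-256.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

This is also how you can recompute the hash later if you lost the original `kubeadm init` output.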

Prepare your user to administer the cluster:

You can issue `kubectl` commands directly from the master, but usually the config file is copied to your local machine.

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
```

Step 4 — Install the Antrea Container Network Interface (CNI)

```
kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
```

And voilà, now you can start creating Kubernetes objects.

LinkedIn : Abdeldjallil HADJI
