Change docker cgroup driver to systemd



  detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”


    Our Kubernetes setup will have a three-node control plane and three worker nodes, making it a highly available cluster. The astute reader will point out that all of the VMs run on a single host, so making this cluster HA is somewhat pointless. Up to a point I would agree, but making the cluster HA is still worthwhile for two reasons: first, it enables zero-downtime upgrades of the control plane, and second, it gives us experience building an HA cluster, which is something we would want for production use cases.

    To set up Kubernetes I have decided to go with kubeadm. I know this is cheating a tiny bit versus installing all of the components ourselves, but even though kubeadm does some of the heavy lifting for us, we still need to understand a fair amount about what is going on to get the cluster working. I have added this entry to every machine in our setup. Cilium uses eBPF to supercharge your cluster, and provides the kube-proxy functionality using eBPF instead of iptables.
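The actual hosts entry was not reproduced in the text, so the following is a hypothetical sketch: the IP address is invented, while "skycluster" is the control-plane endpoint name that appears later in the kubeadm join command.

```shell
# Hypothetical entry: map the control-plane endpoint name to the first
# control-plane node (replace 192.168.10.10 with your own address).
echo "192.168.10.10 skycluster" | sudo tee -a /etc/hosts
```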

    I got a warning saying that swap is enabled and kubeadm should be run with swap off. To turn off swap, run sudo swapoff -a. After turning off swap, I ran the kubeadm init command again. This time the command timed out. The error message states that the kubelet cannot be contacted.
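The swapoff step above only lasts until the next reboot. A common way to make it permanent, assuming swap is configured in /etc/fstab, is:

```shell
# Disable swap for the current boot
sudo swapoff -a
# Comment out any swap entries so swap stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```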

    The kubelet is a service that runs on each node in our cluster. Running systemctl status kubelet reveals that the kubelet service is not running. To fix this we have to switch either the kubelet or Docker to use the same cgroup driver. This time we get: Your Kubernetes control-plane has initialized successfully!
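Switching Docker to the systemd cgroup driver can be done via Docker's daemon configuration file, /etc/docker/daemon.json. A minimal sketch:

```shell
# Tell the Docker daemon to use the systemd cgroup driver
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
# Verify the change; this should report "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"
```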

    To be able to contact the cluster from our machine, we need to add the cluster config to our local kube config file. When you install a new Kubernetes cluster, it does not come with a CNI. The CNI plugin is responsible for, among other things, handling the IP addresses of the pods that get created.
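One way to pull the cluster config down to a workstation; the hostname master01 and the local file name are assumptions for illustration:

```shell
# Copy the admin kubeconfig from the first control-plane node
scp master01:/etc/kubernetes/admin.conf ~/.kube/skycluster.conf
# Point kubectl at it and confirm the cluster answers
export KUBECONFIG=~/.kube/skycluster.conf
kubectl get nodes
```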

    As stated earlier, we are going to use Cilium as our CNI of choice so we can supercharge our cluster.
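The original does not show the installation commands; one common approach, sketched here, is the cilium CLI (the download URL follows the pattern documented in the cilium-cli releases):

```shell
# Download and install the cilium CLI, then install Cilium into the cluster
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
sudo tar xzvf cilium-linux-amd64.tar.gz -C /usr/local/bin
cilium install
# Wait until Cilium reports all components healthy
cilium status --wait
```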

    I got the port of the kube API server by describing the kube-apiserver pod with kubectl describe pod kube-apiserver-master. The first control plane node is now fully working! Let's not rest on our laurels; the next job is to set up the two other control plane nodes. The --upload-certs flag stores the control plane certificates inside a Kubernetes secret, meaning the joining control plane nodes can simply download them. Unfortunately, I did not do that, so I had to copy the certificates manually. Then we need to copy the etcd-ca.
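To avoid the manual certificate copy described above, the certs can be uploaded at init time, or re-uploaded afterwards on an existing cluster; both are standard kubeadm commands:

```shell
# At cluster creation: store the control-plane certs in a secret up front
sudo kubeadm init --control-plane-endpoint skycluster --upload-certs

# On an already-initialized cluster: re-upload the certs and print a fresh
# certificate key, which joining nodes pass via --certificate-key
sudo kubeadm init phase upload-certs --upload-certs
```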

    Once those changes were in place, I had to change Docker to use systemd and turn off swap on each node, following the steps we saw earlier: sudo -i; swapoff -a; systemctl restart docker; systemctl daemon-reload. With those changes made, we can now run the kubeadm join command that we were given when kubeadm init succeeded: sudo kubeadm join skycluster --token xxx --discovery-token-ca-cert-hash xxx --control-plane. After running this on both nodes, we see a message saying that each node joined successfully.

    Now when we check the status of the nodes we see: master01 Ready control-plane,master 24h v1. To install, simply download the release for your OS and copy the binary to your path. We now have a fully working Kubernetes cluster ready for action.
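A couple of sanity checks once all six nodes have joined; these are ordinary kubectl commands, not specific to this setup:

```shell
# All nodes should report Ready, with the control-plane role on the masters
kubectl get nodes -o wide
# All kube-system pods, including the Cilium agents, should be Running
kubectl get pods -n kube-system
```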




    If you decide to go with a different Linux distribution, then please check the required installation steps at Install Docker Engine.

    We should keep the kubelet version in step with the control plane; if we do not, there is a risk of a version skew occurring that can lead to unexpected, buggy behaviour.


    However, one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version. For example, a kubelet one minor version older than the API server is supported, but not the other way round. This is because kubeadm and Kubernetes require special attention to upgrade.

    This is required to allow containers to access the host filesystem, which is needed, for example, by pod networks. After starting, the kubelet process will restart every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.

    This crashloop is expected and normal. After you initialize your control plane, the kubelet runs normally. How can you configure it? kubeadm allows you to pass a KubeletConfiguration structure during kubeadm init.

    This KubeletConfiguration can include the cgroupDriver field, which controls the cgroup driver of the kubelet. Although we want systemd in our case and there is no strict need to define it explicitly, below is how you can set it.
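A sketch of passing the KubeletConfiguration to kubeadm init via a config file, following the structure kubeadm documents for its v1beta3 / kubelet v1beta1 config APIs:

```shell
# Write a kubeadm config that pins the kubelet's cgroup driver to systemd
cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# Use it when initializing the first control-plane node
sudo kubeadm init --config kubeadm-config.yaml
```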

