Learn how to install Kubernetes on Rocky Linux 9 with our comprehensive step-by-step guide. Set up a robust container orchestration platform and manage your containerized applications efficiently with this detailed tutorial tailored for seamless integration on your Linux environment.
What is Kubernetes?
Kubernetes (commonly stylized as K8s) is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project.
Kubernetes works with Docker, containerd, and CRI-O. Originally, it interfaced exclusively with the Docker runtime through a “Dockershim”; between November 2020 and April 2022, however, Kubernetes deprecated the shim in favor of interfacing directly with the container runtime through containerd, or replacing Docker with a runtime that is compliant with the Container Runtime Interface (CRI). With the release of v1.24 in May 2022, the Dockershim was removed entirely.
Kubernetes defines a set of building blocks (“primitives”) that collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, memory or custom metrics. Kubernetes is loosely coupled and extensible to meet different workloads. The internal components as well as extensions and containers that run on Kubernetes rely on the Kubernetes API. The platform exerts its control over compute and storage resources by defining resources as Objects, which can then be managed as such.
Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.
The Kubernetes master node handles the Kubernetes control plane of the cluster, managing its workload and directing communication across the system. The Kubernetes control plane consists of various components, each its own process, that can run both on a single master node or on multiple masters supporting high-availability clusters.
What is Kubernetes used for?
Kubernetes provides a robust framework for running distributed systems resiliently, with a focus on operational concerns such as deployment, scaling, and monitoring. Here are some key uses and features of Kubernetes:
- Container Orchestration: Kubernetes automates the deployment, management, scaling, and networking of containers. It manages the lifecycle of containers, ensuring they are deployed in the right configurations and with the necessary resources.
- Scaling and Load Balancing: Kubernetes can scale applications up or down based on demand, ensuring optimal resource utilization. It automatically distributes network traffic to maintain a balanced load across containers, enhancing application performance and reliability.
- Self-Healing: Kubernetes automatically monitors the health of containers and nodes. If a container fails or becomes unresponsive, Kubernetes restarts it, reschedules it, or replaces it as needed, ensuring high availability and minimal downtime.
- Service Discovery and Load Balancing: Kubernetes provides built-in mechanisms for service discovery and load balancing, allowing applications to dynamically discover and communicate with each other. This simplifies the process of managing microservices and their interactions.
- Storage Orchestration: Kubernetes can automatically mount and manage storage volumes, whether they are local storage, network-attached storage (NAS), or cloud-based storage solutions. This enables persistent storage for stateful applications running in containers.
- Automated Rollouts and Rollbacks: Kubernetes can automate the rollout of updates to applications, ensuring that updates are applied in a controlled manner. If something goes wrong, Kubernetes can automatically roll back changes to a previous stable state.
- Configuration Management: Kubernetes allows for the management of application configuration separately from the application code. This enables seamless updates and changes to configurations without altering the application codebase.
- Resource Management: Kubernetes efficiently manages resources such as CPU, memory, and storage across containers and nodes. It ensures that applications have the necessary resources while optimizing the utilization of the underlying infrastructure.
- Extensibility: Kubernetes is highly extensible, supporting custom resources and APIs. Developers can extend Kubernetes functionality using custom controllers, operators, and plugins to meet specific application needs.
- Multi-Cloud and Hybrid Cloud Deployments: Kubernetes is platform-agnostic, allowing for deployment across various environments, including on-premises, public clouds, and hybrid cloud setups. This flexibility enables consistent application management across diverse infrastructure environments.
Overall, Kubernetes is used for orchestrating and managing containerized applications, providing a robust, scalable, and flexible platform that simplifies the complexities of deploying and maintaining distributed systems. It is widely adopted in modern DevOps and cloud-native environments to achieve greater efficiency, reliability, and scalability in application deployment and management.
Environment Specification
We are using a minimal Rocky Linux 9 virtual machine with the following specifications.
- CPU – 3.4 GHz (2 cores)
- Memory – 2 GB
- Storage – 20 GB
- Operating System – Rocky Linux release 9.0 (Blue Onyx)
- Hostname – kubemaster-01.centlinux.com
- IP Address – 192.168.116.131/24
Set Hostname and Name Resolution
Using an SSH client, connect to your Linux server as the root user.
Set a proper FQDN (Fully Qualified Domain Name) for your Kubernetes server, and add an entry to the local DNS resolver (/etc/hosts) for name resolution of that hostname.
# hostnamectl set-hostname kubemaster-01.centlinux.com
# echo 192.168.116.131 kubemaster-01.centlinux.com kubemaster-01 >> /etc/hosts
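Optionally, you can verify the new hostname and name resolution before proceeding. A quick check, using the hostname set above, may look like this:
# hostnamectl status
# getent hosts kubemaster-01.centlinux.com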
Update Rocky Linux Server
Refresh your cache of enabled yum repositories.
# dnf makecache --refresh
Rocky Linux 9 - BaseOS                           85 kB/s | 1.7 MB     00:20
Rocky Linux 9 - AppStream                       193 kB/s | 6.0 MB     00:31
Rocky Linux 9 - Extras                          870  B/s | 2.9 kB     00:03
Metadata cache created.
Execute the following dnf command to update your Rocky Linux server.
# dnf update -y
The above command may update the Linux kernel packages; therefore, reboot your Linux server before moving forward.
# reboot
After the reboot, check the Linux kernel and operating system versions used in this configuration guide.
# cat /etc/rocky-release
Rocky Linux release 9.0 (Blue Onyx)
# uname -r
5.14.0-70.26.1.el9_0.x86_64
Switch SELinux to Permissive Mode
Kubernetes does not provide an SELinux policy, so you can either switch SELinux to permissive mode or manually set the file context of the various Kubernetes files and directories.
For the scope of this article, we recommend setting SELinux to permissive mode. However, if you are configuring a Kubernetes cluster for production, you should identify the relevant files and set their file contexts, or even create your own SELinux policy.
Execute the following commands at the bash prompt to switch SELinux to permissive mode.
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
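To confirm the change, you can query the current SELinux mode; it should now report Permissive.
# getenforce
Permissive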
Load K8s required Kernel Modules
Kubernetes requires the “overlay” and “br_netfilter” kernel modules. Use the following commands to load them now and enable them permanently.
# modprobe overlay
# modprobe br_netfilter
# cat > /etc/modules-load.d/k8s.conf << EOF
> overlay
> br_netfilter
> EOF
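You can verify that both modules are loaded with lsmod, for example:
# lsmod | grep -E 'overlay|br_netfilter'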
Set the following kernel parameters as required by the Kubernetes software.
# cat > /etc/sysctl.d/k8s.conf << EOF
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
Reload the kernel parameter configuration files to apply the above changes.
# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
* Applying /usr/lib/sysctl.d/50-redhat.conf ...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
kernel.yama.ptrace_scope = 0
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
kernel.core_pipe_limit = 16
fs.suid_dumpable = 2
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.ens33.rp_filter = 2
net.ipv4.conf.lo.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.ens33.accept_source_route = 0
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.ens33.promote_secondaries = 1
net.ipv4.conf.lo.promote_secondaries = 1
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
net.core.optmem_max = 81920
kernel.pid_max = 4194304
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.ens33.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
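If you want to double-check only the parameters required by Kubernetes, you can query them directly; each should return 1.
# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables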
Disable Swap in Linux
To disable swap for your current session, use the swapoff command. To disable swap permanently, you have to comment out (#) the respective entry in the /etc/fstab file.
Execute the following commands to do both.
# swapoff -a
# sed -e '/swap/s/^/#/g' -i /etc/fstab
Verify the usage of Swap storage on your Linux server.
# free -m
               total        used        free      shared  buff/cache   available
Mem:            1748         269        1143           8         335        1302
Swap:              0           0           0
Install Containerd on Rocky Linux 9
Containerd is not available in the standard yum repositories; therefore, you need to add the official Docker yum repository to install the container runtime.
# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
Build the yum cache for the Docker yum repository.
# dnf makecache
Docker CE Stable - x86_64                       6.1 kB/s |  12 kB     00:01
Rocky Linux 9 - BaseOS                          440  B/s | 3.6 kB     00:08
Rocky Linux 9 - AppStream                       1.4 kB/s | 3.6 kB     00:02
Rocky Linux 9 - Extras                          1.5 kB/s | 2.9 kB     00:01
Metadata cache created.
Now you can install the containerd runtime by using the dnf command.
# dnf install -y containerd.io
After installation, back up the original containerd configuration file and generate a new one as follows.
# mv /etc/containerd/config.toml /etc/containerd/config.toml.orig
# containerd config default > /etc/containerd/config.toml
Edit the containerd configuration file using the vi text editor.
# vi /etc/containerd/config.toml
Locate the SystemdCgroup parameter under the runc runtime options and set it to true, to enable the systemd cgroup driver for the containerd runtime.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
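If you prefer a non-interactive edit, the same change can be made with a single sed command. This is a small sketch that assumes the default configuration generated above, where SystemdCgroup is initially set to false.
# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml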
Enable and start Containerd service.
# systemctl enable --now containerd.service
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
Check the status of Containerd service for any errors.
# systemctl status containerd.service
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendo>
Active: active (running) since Thu 2022-11-03 12:31:07 CDT; 25s ago
Docs: https://containerd.io
Process: 8454 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SU>
Main PID: 8455 (containerd)
Tasks: 8
Memory: 16.1M
CPU: 172ms
CGroup: /system.slice/containerd.service
└─8455 /usr/bin/containerd
Nov 03 12:31:07 rockylinux9-01.centlinux.com containerd[8455]: time="2022-11-03>
Nov 03 12:31:07 rockylinux9-01.centlinux.com systemd[1]: Started containerd con>
Configure Linux Firewall
Kubernetes uses the following service ports on the master node.
| Port      | Protocol | Purpose                 |
|-----------|----------|-------------------------|
| 6443      | TCP      | Kubernetes API server   |
| 2379-2380 | TCP      | etcd server client API  |
| 10250     | TCP      | Kubelet API             |
| 10251     | TCP      | kube-scheduler          |
| 10252     | TCP      | kube-controller-manager |
Therefore, you need to allow these service ports in Linux firewall.
# firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
success
# firewall-cmd --reload
success
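You can verify that the ports were added by listing the currently allowed ports in the active zone; the output should include the ports opened above.
# firewall-cmd --list-ports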
Install Kubernetes on Rocky Linux 9
To install the container orchestration software, you have to add the official Kubernetes yum repository.
The following command adds the Kubernetes repository to your Linux server.
# cat > /etc/yum.repos.d/k8s.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
> enabled=1
> gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> exclude=kubelet kubeadm kubectl
> EOF
Build the yum cache for Kubernetes yum repository.
# dnf makecache
Docker CE Stable - x86_64                       1.8 kB/s | 3.5 kB     00:01
Kubernetes                                       36 kB/s | 158 kB     00:04
Rocky Linux 9 - BaseOS                          1.3 kB/s | 3.6 kB     00:02
Rocky Linux 9 - AppStream                       1.8 kB/s | 3.6 kB     00:02
Rocky Linux 9 - AppStream                       593 kB/s | 6.0 MB     00:10
Rocky Linux 9 - Extras                          1.5 kB/s | 2.9 kB     00:01
Metadata cache created.
Now you can install the Kubernetes packages by using the dnf command.
# dnf install -y {kubelet,kubeadm,kubectl} --disableexcludes=kubernetes
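After installation, you can optionally confirm the installed versions of the Kubernetes tools.
# kubeadm version
# kubectl version --client
# kubelet --version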
Enable and start kubelet.service. It is the main Kubernetes node service; it waits for further instructions, such as when you initialize the cluster or join a node to it. Until the cluster is initialized with kubeadm init, the service will keep restarting while it waits for a configuration; this is expected.
# systemctl enable --now kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
Verify the status of kubelet.service.
# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor p>
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2022-11-03 13:30:03 CDT; 128ms ago
Docs: https://kubernetes.io/docs/
Main PID: 10967
Tasks: 1 (limit: 10948)
Memory: 15.3M
CPU: 100ms
CGroup: /system.slice/kubelet.service
Nov 03 13:30:03 kubemaster-01.centlinux.com systemd[1]: Started kubelet: The Ku>
Enable Bash Completion for Kubernetes Commands
If you plan to manage your Kubernetes cluster from the Linux CLI, bash completion will be very helpful.
To enable automatic completion of kubectl commands, execute the script provided by the kubectl command itself. Ensure that the bash-completion package is already installed on your Rocky Linux server.
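If bash-completion is not yet installed, you can add it first with dnf, for example:
# dnf install -y bash-completion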
# source <(kubectl completion bash)
# kubectl completion bash > /etc/bash_completion.d/kubectl
Install Flannel Kubernetes CNI Plugin
Kubernetes supports various CNI (Container Network Interface) plugins, such as AWS VPC, Azure CNI, Cilium, Calico, Flannel, and many more.
In this configuration guide, we are using the Flannel CNI plugin. Ensure that this plugin is installed on each K8s node.
Create a directory, download the flannel release archive, and extract the flanneld binary into it.
# mkdir -p /opt/bin
# curl -fsSLo /tmp/flannel-v0.20.1-linux-amd64.tar.gz https://github.com/flannel-io/flannel/releases/download/v0.20.1/flannel-v0.20.1-linux-amd64.tar.gz
# tar -xzf /tmp/flannel-v0.20.1-linux-amd64.tar.gz -C /opt/bin flanneld
Grant execute permission on the flanneld binary to make sure it is executable.
# chmod +x /opt/bin/flanneld
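Optionally, you can confirm that the extracted binary runs on your system; it should print the release version.
# /opt/bin/flanneld --version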
Initialize Kubernetes Control Plane
Execute the following command to download the container images that are required to create the Kubernetes cluster.
# kubeadm config images pull
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.3
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.3
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.3
[config/images] Pulled registry.k8s.io/kube-proxy:v1.25.3
[config/images] Pulled registry.k8s.io/pause:3.8
[config/images] Pulled registry.k8s.io/etcd:3.5.4-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3
After the images have been downloaded successfully, execute the following command to initialize the Kubernetes cluster on the kubemaster-01.centlinux.com server. This node becomes the Kubernetes control plane because it is the first node in the cluster.
# kubeadm init
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster-01.centlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.116.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster-01.centlinux.com localhost] and IPs [192.168.116.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster-01.centlinux.com localhost] and IPs [192.168.116.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 79.030431 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubemaster-01.centlinux.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubemaster-01.centlinux.com as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: pfejvw.wybridg116yi8taa
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.116.131:6443 --token pfejvw.wybridg116yi8taa \
        --discovery-token-ca-cert-hash sha256:9db6c88584ab5c3f7987631514960fed83caa834d9faf5435dfb6c5bcf3fe74b
Note down the kubeadm join command shown at the end of the output; you will need it later to add worker nodes to the cluster.
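If you misplace this output later, you can generate a fresh join command at any time from the control plane node, for example:
# kubeadm token create --print-join-command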
Execute the following command to set the KUBECONFIG variable for all sessions.
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile.d/k8s.sh
Execute the following commands as the user that will be used to manage your Kubernetes cluster.
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
Execute kubectl commands to check the status of your Kubernetes cluster.
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster-01.centlinux.com NotReady control-plane 2m26s v1.25.3
# kubectl cluster-info
Kubernetes control plane is running at https://192.168.116.131:6443
CoreDNS is running at https://192.168.116.131:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
After the Kubernetes control plane has started, run the following command to install the Flannel pod network plugin. It deploys flannel as a DaemonSet, which runs the flanneld binary in a flannel pod on each node.
# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Check the list of running pods on your Kubernetes cluster.
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-79t24 0/1 Init:1/2 0 61s
kube-system coredns-565d847f94-mq59j 0/1 Pending 0 5m27s
kube-system coredns-565d847f94-xngbz 0/1 Pending 0 5m27s
kube-system etcd-kubemaster-01.centlinux.com 1/1 Running 0 5m29s
kube-system kube-apiserver-kubemaster-01.centlinux.com 1/1 Running 0 5m29s
kube-system kube-controller-manager-kubemaster-01.centlinux.com 1/1 Running 0 5m29s
kube-system kube-proxy-fh9v6 1/1 Running 0 5m27s
kube-system kube-scheduler-kubemaster-01.centlinux.com 1/1 Running 0 5m29s
Your Kubernetes master node has been installed successfully.
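Optionally, once the flannel pods are up, you can wait for the node to report Ready and, because this is a single-node setup, remove the control-plane NoSchedule taint if you intend to run workloads on this node. The hostname below is the one used in this guide.
# kubectl wait --for=condition=Ready node/kubemaster-01.centlinux.com --timeout=300s
# kubectl taint nodes kubemaster-01.centlinux.com node-role.kubernetes.io/control-plane:NoSchedule-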
Final Thoughts
Congratulations on successfully learning how to install Kubernetes on Rocky Linux 9! With Kubernetes up and running, you now have a powerful platform for automating the deployment, scaling, and management of your containerized applications. This setup enables you to harness Kubernetes’ orchestration capabilities, keeping your applications resilient, scalable, and efficiently managed. Dive into container orchestration with confidence and take your application deployment to the next level. Happy orchestrating!
To start managing your cluster, you should read The Kubernetes Book: 2022 Edition (PAID LINK) by Nigel Poulton or attend the following online training: Kubernetes and Docker Containers in Practice
Hello there!
Awesome tutorial!!! Just great!!! Just one thing to consider…
before the:
# systemctl enable --now kubelet.service
I had to perform these two commands:
# kubeadm config images pull
# kubeadm init
Only then was I able to start up kubelet.service!
I also changed flannel-v0.20.1-linux-amd64.tar.gz to flannel-v0.21.4-linux-amd64.tar.gz
By the way, I've just followed your tutorial using a fresh Rocky Linux 9.1
You rock! I'll buy you a coffee!! 🙂
Following the guide, no matter what I try, I always get this output when checking the pods in all namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-slqtt 0/1 Error 1 (5s ago) 11s
kube-system coredns-5d78c9869d-jjkjg 0/1 Pending 0 6m17s
kube-system coredns-5d78c9869d-vw42v 0/1 Pending 0 6m17s
kube-system etcd-kubemaster-01-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 1/1 Running 0 6m23s
kube-system kube-apiserver-kubemaster-01-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.com 1/1 Running 0 6m23s
kube-system kube-controller-manager-kubemaster-01xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 1/1 Running 0 6m23s
kube-system kube-proxy-b6mkx 1/1 Running 0 6m17s
kube-system kube-scheduler-kubemaster-01-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 1/1 Running 0 6m23s
Check the error logs for troubleshooting. You can use the following command (specify the pod's namespace as well).
# kubectl logs -n [NAMESPACE] [POD-NAME]
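You can also inspect the pod's events, which often show why it is failing; the namespace and pod name below are placeholders.
# kubectl -n kube-flannel describe pod [POD-NAME]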
You need to specify the pod network CIDR before you install flannel, for example: kubeadm init --pod-network-cidr 10.10.0.0/22
It isn't necessary, unless you want to use a custom CIDR.
[root@kubemaster me]# kubeadm config images pull
failed to pull image "registry.k8s.io/kube-apiserver:v1.28.4": output: time="2023-11-28T08:59:07+11:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
Hi,
1. Check the status of containerd service.
2. Also, if you have disabled_plugins = ["cri"] in /etc/containerd/config.toml, you have to remove it.
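For example, after correcting the configuration you would restart the containerd service and retry the image pull; a minimal sketch:
# vi /etc/containerd/config.toml
# systemctl restart containerd.service
# kubeadm config images pull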
Please let me know if the above workarounds rectify your problem.