
How to setup Kubernetes Cluster on CentOS 7


Learn how to set up a Kubernetes cluster on CentOS 7 with our detailed, step-by-step guide. Follow expert instructions for a successful Kubernetes installation and configuration. #centlinux #linux #k8s

What is Kubernetes?

Kubernetes, or K8s, is an open-source container orchestration system for automated application deployment, management and scaling across clusters of hosts. Kubernetes was initially developed by Google but is now maintained by the Cloud Native Computing Foundation. Kubernetes requires a container runtime, accessed through the Container Runtime Interface (CRI), for orchestration, and it supports several runtimes, including Docker, containerd and cri-o.


In our previous article, we configured a Docker Swarm cluster on CentOS 7 for container orchestration. Now, in this article, we will install a two-node Kubernetes (K8s) cluster with Docker CE on CentOS 7.

This article covers the installation and configuration of a Kubernetes cluster on CentOS 7; it does not address the technical details of Kubernetes architecture and components. Therefore, if you are interested in learning more about Kubernetes, you should join:

Recommended Training: Certified Kubernetes Administrator (CKA) with Practice Tests from Mumshad Mannambeth, KodeKloud Training


System Specification

We have two CentOS 7 virtual machines with the following specifications.

Hostname:            kubemaster-01          kubenode-01
IP Address:          192.168.116.160/24     192.168.116.161/24
Cluster Role:        K8s master             K8s node
CPU:                 3.4 GHz (2 cores) *    3.4 GHz (2 cores) *
Memory:              2 GB                   2 GB
Storage:             40 GB                  40 GB
Operating System:    CentOS 7.6             CentOS 7.6
Docker version:      18.09.5                18.09.5
Kubernetes version:  1.14.1                 1.14.1

* We must have at least 2 cores on each node to install Kubernetes.

Make sure the hostnames are resolvable on all nodes. You can either set up a Private DNS Server or use a Local DNS Resolver for this purpose.
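If you do not want to set up a dedicated DNS server, a minimal alternative (shown here as an example, using the IP addresses from the table above) is to add static entries to /etc/hosts on both nodes:

cat >> /etc/hosts << EOF
192.168.116.160 kubemaster-01.centlinux.com kubemaster-01
192.168.116.161 kubenode-01.centlinux.com kubenode-01
EOF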

Install Docker CE on CentOS 7

We are configuring Docker CE as the Kubernetes CRI (Container Runtime Interface). Other choices for the Kubernetes CRI are containerd, cri-o and frakti.

Connect to the Kubernetes master kubemaster-01.centlinux.com using ssh as the root user.

Install the Docker CE prerequisite packages using the yum command.

yum install -y device-mapper-persistent-data lvm2 yum-utils

Add Docker yum repository as follows:

yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

Output:

Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

Build the yum cache for the Docker repository.

yum makecache fast

Output:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.xeonbd.com
* extras: mirror.xeonbd.com
* updates: mirror.xeonbd.com
base | 3.6 kB 00:00
docker-ce-stable | 3.5 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/2): docker-ce-stable/x86_64/primary_db | 27 kB 00:00
(2/2): docker-ce-stable/x86_64/updateinfo | 55 B 00:01
Metadata Cache Created

Install Docker CE using the yum command.

yum install -y docker-ce

Configure the Docker service for use by Kubernetes.

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
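Before starting Docker, you can optionally validate the JSON syntax of daemon.json; CentOS 7 ships Python 2.7, so the built-in json.tool module is a convenient check:

python -m json.tool /etc/docker/daemon.json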

Enable and start the Docker service.

systemctl enable docker.service
systemctl start docker.service

Docker CE has been installed. Repeat the above steps to install Docker CE on kubenode-01.centlinux.com.
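Optionally, verify on each node that the Docker daemon is active and using the systemd cgroup driver configured above:

systemctl is-active docker.service
docker info | grep -i 'cgroup driver'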

Install Kubernetes on CentOS 7

Set the following kernel parameters as required by Kubernetes.

cat > /etc/sysctl.d/kubernetes.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Load the br_netfilter kernel module and reload the kernel parameter configuration files.

modprobe br_netfilter
sysctl --system

Output:

* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/kubernetes.conf ...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
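Note that modprobe loads the br_netfilter module only for the current boot. To load it automatically after a reboot, you can optionally add a modules-load.d entry:

cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF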

Turn off swap, as required for the Kubernetes installation.

swapoff -a
sed -e '/swap/s/^/#/g' -i /etc/fstab
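You can confirm that swap is now disabled; both commands should report no active swap devices:

swapon -s
free -h | grep -i swap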

Kubernetes uses the following service ports on the master node.

Port        Protocol   Purpose
6443        TCP        Kubernetes API server
2379-2380   TCP        etcd server client API
10250       TCP        Kubelet API
10251       TCP        kube-scheduler
10252       TCP        kube-controller-manager

Allow the Kubernetes service ports on kubemaster-01.centlinux.com in the Linux firewall.

firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
firewall-cmd --reload

Kubernetes uses the following service ports on the worker node.

Port          Protocol   Purpose
10250         TCP        Kubelet API
30000-32767   TCP        NodePort Services

Allow the Kubernetes service ports on kubenode-01.centlinux.com in the Linux firewall.

firewall-cmd --permanent --add-port={10250,30000-32767}/tcp
firewall-cmd --reload
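You can verify the opened ports on either node with:

firewall-cmd --list-ports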

Switch SELinux to permissive mode using the following commands.

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
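Confirm the current SELinux mode; it should now report Permissive:

getenforce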

Add the Kubernetes yum repository as follows.

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Build the yum cache for the Kubernetes repository.

yum makecache fast

Install the Kubernetes packages using the yum command.

yum install -y kubelet kubeadm kubectl

To enable automatic completion of kubectl commands, we have to execute the completion script provided by the kubectl command itself. This requires the bash-completion package.
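If bash-completion is not already present, it can be installed from the standard CentOS repositories:

yum install -y bash-completion

With bash-completion in place, load the kubectl completion script into the current shell: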

source <(kubectl completion bash)

To make it persistent, add the script to the bash completion directory.

kubectl completion bash > /etc/bash_completion.d/kubectl

Kubernetes has been installed. Repeat the above steps to install Kubernetes on kubenode-01.centlinux.com.

Configure Kubelet Service on Master Node

Use the kubeadm command to pull the container images that are required to bootstrap the Kubernetes control plane.

kubeadm config images pull

Output:

[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1

Initialize the Kubernetes control plane (this also configures and activates the kubelet service) as follows:

kubeadm init

Output:

[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster-01.centlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.116.160]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster-01.centlinux.com localhost] and IPs [192.168.116.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster-01.centlinux.com localhost] and IPs [192.168.116.160 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.152638 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node kubemaster-01.centlinux.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster-01.centlinux.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mm20xq.goxx7plwzrx75tv3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.116.160:6443 --token mm20xq.goxx7plwzrx75tv3 \
    --discovery-token-ca-cert-hash sha256:00065886b183ea9cc2e9fbb68ff2a82b52574c2ab5ad8868c4fd6c2feb006d6f

Execute the following commands as suggested in the above output.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable and start the kubelet service.

systemctl enable kubelet.service
systemctl start kubelet.service
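At this point you can confirm that kubectl is able to reach the API server and that the control plane pods are running:

kubectl cluster-info
kubectl get pods -n kube-system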

Add a Node to the Kubernetes Cluster

Check the status of nodes in the Kubernetes cluster.

kubectl get nodes

Output:

NAME                          STATUS     ROLES    AGE   VERSION
kubemaster-01.centlinux.com   NotReady   master   50m   v1.14.1

Add the worker node to the Kubernetes cluster by executing, on kubenode-01.centlinux.com, the join command provided in the kubeadm init output.

kubeadm join 192.168.116.160:6443 --token mm20xq.goxx7plwzrx75tv3 --discovery-token-ca-cert-hash sha256:00065886b183ea9cc2e9fbb68ff2a82b52574c2ab5ad8868c4fd6c2feb006d6f

Output:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If the nodes remain in the NotReady state or you experience network errors, deploy a pod network add-on such as Flannel. Run the following command on the master node; it creates DaemonSets that run Flannel on all nodes.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Output:

podsecuritypolicy.extensions/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged
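You can watch the Flannel and CoreDNS pods start on all nodes with:

kubectl get pods -n kube-system -o wide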

Check the status of nodes in the Kubernetes cluster again.

kubectl get nodes

Output:

NAME                          STATUS   ROLES    AGE   VERSION
kubemaster-01.centlinux.com   Ready    master   45m   v1.14.1
kubenode-01.centlinux.com     Ready    <none>   43m   v1.14.1

We have successfully set up a two-node Kubernetes cluster on CentOS 7.

Frequently Asked Questions (FAQs)

What are the basic requirements for setting up a Kubernetes cluster on CentOS 7?
You’ll need at least two CentOS 7 machines (one master and one worker), 2+ CPU cores per machine, 2GB+ RAM, and stable networking between nodes. Swap must be disabled, and firewall rules configured properly.

Do I need to install any specific tools before setting up a Kubernetes cluster?
Yes, you’ll need kubeadm, kubelet, and kubectl for cluster management, along with docker or another container runtime like containerd.

How do I initialize the Kubernetes master node?
After installing prerequisites, run kubeadm init on the master node. This generates a join command for worker nodes to connect to the cluster.

What networking solution should I use for my Kubernetes cluster?
Kubernetes requires a Container Network Interface (CNI) plugin. Popular choices include Flannel, Calico, or Weave Net for pod networking.

How do I verify if my Kubernetes cluster is working correctly?
Use kubectl get nodes to check node status. All nodes should show “Ready” once the setup is complete. Also, test deploying a sample app to confirm functionality.
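For example, a quick smoke test (assuming the nodes can pull the nginx image from Docker Hub) could look like this:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc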

Final Thoughts

Setting up a Kubernetes cluster on CentOS 7 involves careful preparation, from configuring system prerequisites to installing and initializing Kubernetes components like kubeadm, kubelet, and kubectl. By following these steps, you now have a functional cluster that can scale to meet your application and service deployment needs.

While CentOS 7 remains a stable platform, keep in mind that Kubernetes evolves rapidly, so staying up to date with best practices and security patches is crucial. With your cluster ready, you can now focus on deploying containerized applications, setting up CI/CD pipelines, and optimizing your infrastructure for high availability and performance.

Looking for a Linux server expert? I provide top-tier administration, performance tuning, and security solutions for your Linux systems. Explore my Fiverr profile for details!

Thank you for following along, and best of luck with your Kubernetes cluster setup!
