
How to install Kubernetes offline on CentOS 7


Learn how to install Kubernetes offline on CentOS 7 with our comprehensive guide. Follow step-by-step instructions for a smooth and efficient offline Kubernetes setup. #centlinux #linux #k8s

What is Kubernetes?

Kubernetes (K8s) runs containers on a containerization platform such as Docker or containerd, and it needs a registry from which to pull container images. Docker Hub is the global public registry that normally serves this purpose. However, there are situations where we want to run Kubernetes (K8s) in a private network. In such a situation, we cannot access Docker Hub, so we must either pre-load the required images or configure a private Docker registry for our Kubernetes cluster.

In this article, we will install Kubernetes offline on CentOS 7. We will not configure a private Docker registry here, but you can read our related articles to set one up yourself.


System Specification

We have configured two CentOS 7 virtual machines.

Hostname:           docker-online.example.com   docker-offline.example.com
Operating System:   CentOS 7.6                  CentOS 7.6
Internet:           Yes                         No
Docker Version:     Docker CE 18.09             Docker CE 18.09

Install Docker offline on CentOS 7

We have already written a complete article, Install Docker Offline on CentOS 7, so it is advised that you follow it to install Docker CE on both machines before the Kubernetes (K8s) installation.

Docker CE is also required on docker-online.example.com, because we will use the docker command there to pull the required images.

Connect with docker-offline.example.com using ssh as root user.

After installing Docker CE, we must configure it for use with Kubernetes (K8s).

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

Restart docker.service.

systemctl restart docker.service
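
Optionally, you can verify that Docker is now using the systemd cgroup driver (this quick check is not part of the original steps, but it confirms that daemon.json took effect):

docker info | grep -i cgroup

The output should include Cgroup Driver: systemd.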

Read Also: How to install Kubernetes on Rocky Linux 9

Download Packages/Images for Offline Installation

Connect with docker-online.example.com using ssh as root user.

Add the Kubernetes (K8s) yum repository as follows:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Build yum cache.

yum makecache fast

Output:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 6.7 kB 00:00
* base: mirrors.ges.net.pk
* epel: mirror1.ku.ac.th
* extras: mirrors.ges.net.pk
* updates: mirrors.ges.net.pk
base | 3.6 kB 00:00
docker-ce-nightly | 3.5 kB 00:00
docker-ce-stable | 3.5 kB 00:00
extras | 3.4 kB 00:00
kubernetes/signature | 454 B 00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/signature | 1.4 kB 00:06 !!!
updates | 3.4 kB 00:00
kubernetes/primary | 47 kB 00:03
kubernetes 342/342
Metadata Cache Created

Create a directory to download required Kubernetes (K8s) packages.

mkdir ~/k8s
cd ~/k8s

Download Kubernetes (K8s) packages using yumdownloader.

yumdownloader --resolve kubelet kubeadm kubectl

List downloaded files.

ls

Output:

53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm
548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm
5c6cb3beb5142fa21020e2116824ba77a2d1389a3321601ea53af5ceefe70ad1-kubectl-1.14.1-0.x86_64.rpm
9e1af74c18311f2f6f8460dbd1aa3e02911105bfd455416381e995d8172a0a01-kubeadm-1.14.1-0.x86_64.rpm
conntrack-tools-1.4.4-4.el7.x86_64.rpm
e1e8f430609698d7ec87642179ab57605925cb9aa48d406da97dedfb629bebf2-kubelet-1.14.1-0.x86_64.rpm
libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm

Download the Docker images required by Kubernetes (K8s) for node initialization. Note that these are pulled from the k8s.gcr.io registry rather than Docker Hub.

docker pull k8s.gcr.io/kube-apiserver:v1.14.1
docker pull k8s.gcr.io/kube-controller-manager:v1.14.1
docker pull k8s.gcr.io/kube-scheduler:v1.14.1
docker pull k8s.gcr.io/kube-proxy:v1.14.1
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/etcd:3.3.10
docker pull k8s.gcr.io/coredns:1.3.1

List Docker images.

docker image ls -a

Output:

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.14.1             20a2d7035165        2 weeks ago         82.1MB
k8s.gcr.io/kube-apiserver            v1.14.1             cfaa4ad74c37        2 weeks ago         210MB
k8s.gcr.io/kube-controller-manager   v1.14.1             efb3887b411d        2 weeks ago         158MB
k8s.gcr.io/kube-scheduler            v1.14.1             8931473d5bdb        2 weeks ago         81.6MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        3 months ago        40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        4 months ago        258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        16 months ago       742kB

Export Kubernetes (K8s) related Docker images to individual tar files.

docker save k8s.gcr.io/kube-apiserver:v1.14.1 > ~/k8s/kube-apiserver.tar
docker save k8s.gcr.io/kube-controller-manager:v1.14.1 > ~/k8s/kube-controller-manager.tar
docker save k8s.gcr.io/kube-scheduler:v1.14.1 > ~/k8s/kube-scheduler.tar
docker save k8s.gcr.io/kube-proxy:v1.14.1 > ~/k8s/kube-proxy.tar
docker save k8s.gcr.io/pause:3.1 > ~/k8s/pause.tar
docker save k8s.gcr.io/etcd:3.3.10 > ~/k8s/etcd.tar
docker save k8s.gcr.io/coredns:1.3.1 > ~/k8s/coredns.tar

List tar files.

ls ~/k8s/*.tar

Output:

/root/k8s/coredns.tar                  /root/k8s/kube-proxy.tar
/root/k8s/etcd.tar                     /root/k8s/kube-scheduler.tar
/root/k8s/kube-apiserver.tar           /root/k8s/pause.tar
/root/k8s/kube-controller-manager.tar

Download the Flannel network script.

cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
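
Note that kube-flannel.yml references the flannel container image itself (for example, quay.io/coreos/flannel:v0.11.0-amd64; check the image: fields inside the downloaded file for the exact tag your version uses). The offline machine cannot pull this image during deployment, so it is a good idea to save it to a tar file as well:

docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker save quay.io/coreos/flannel:v0.11.0-amd64 > ~/k8s/flannel.tar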

We have successfully downloaded all required files for Kubernetes (K8s) installation.

Transfer the directory ~/k8s from docker-online to docker-offline.
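
For example, assuming the offline machine is reachable from docker-online over the private network, you can copy the directory with scp:

scp -r ~/k8s root@docker-offline.example.com:~/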

Configure Kubernetes Prerequisites

Connect with docker-offline.example.com using ssh as root user.

Set the kernel parameters required by Kubernetes (K8s).

cat > /etc/sysctl.d/kubernetes.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Reload the kernel parameter configuration files.

modprobe br_netfilter
sysctl --system

Output:

* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/kubernetes.conf ...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...

Turn off swap, as required for the Kubernetes (K8s) installation.

swapoff -a
sed -e '/swap/s/^/#/g' -i /etc/fstab
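
You can confirm that swap is now disabled with the free command; the Swap row should show all zeros:

free -h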

Allow Kubernetes (K8s) service ports in the Linux firewall.

For Master node

firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp

For Worker nodes

firewall-cmd --permanent --add-port={10250,30000-32767}/tcp

Reload firewall configurations.

firewall-cmd --reload
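
Optionally, verify that the required ports are now allowed:

firewall-cmd --list-ports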

Switch SELinux to Permissive mode using the following commands.

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
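
Verify the current SELinux mode; it should now report Permissive:

getenforce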

Install Kubernetes offline on CentOS 7

Now, install the Kubernetes (K8s) packages from the ~/k8s directory using the rpm command.

rpm -ivh --replacefiles --replacepkgs ~/k8s/*.rpm

Output:

warning: 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:socat-1.7.3.2-2.el7 ################################# [ 10%]
2:libnetfilter_queue-1.0.2-2.el7_2 ################################# [ 20%]
3:libnetfilter_cttimeout-1.0.0-6.el################################# [ 30%]
4:libnetfilter_cthelper-1.0.0-9.el7################################# [ 40%]
5:conntrack-tools-1.4.4-4.el7 ################################# [ 50%]
6:kubelet-1.14.1-0 ################################# [ 60%]
7:kubernetes-cni-0.7.5-0 ################################# [ 70%]
8:kubectl-1.14.1-0 ################################# [ 80%]
9:cri-tools-1.12.0-0 ################################# [ 90%]
10:kubeadm-1.14.1-0 ################################# [100%]

Enable bash completion for kubectl.

source <(kubectl completion bash)
kubectl completion bash > /etc/bash_completion.d/kubectl

Import the Docker images from the tar files into the local Docker image store.

docker load < ~/k8s/coredns.tar
docker load < ~/k8s/kube-proxy.tar
docker load < ~/k8s/etcd.tar
docker load < ~/k8s/kube-scheduler.tar
docker load < ~/k8s/kube-apiserver.tar
docker load < ~/k8s/pause.tar
docker load < ~/k8s/kube-controller-manager.tar
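
If you saved the flannel image to a tar file as suggested earlier, load it now as well, then list the images to confirm everything was imported:

docker load < ~/k8s/flannel.tar
docker image ls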

Now we have all the required Docker images in the local image store, so we can configure this CentOS 7 machine either as a Kubernetes (K8s) master or a worker node.

We have not configured any master node yet, so we must initialize this machine as the master node first.

kubeadm init

Output:

I0427 20:36:47.088327   18015 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0427 20:36:47.090078 18015 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [docker-offline.example.com localhost] and IPs [192.168.116.159 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [docker-offline.example.com localhost] and IPs [192.168.116.159 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [docker-offline.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.116.159]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 27.538706 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node docker-offline.example.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node docker-offline.example.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6e4ntu.a5r1md9vuqex4pe8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.116.159:6443 --token 6e4ntu.a5r1md9vuqex4pe8 \
    --discovery-token-ca-cert-hash sha256:19f4d9f6d433cc12addb70e2737c629213777deed28fa5dcc33f9d05d2382d5b
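
Note down the kubeadm join command; it is required to add worker nodes to this cluster. If the bootstrap token expires later (by default, tokens are valid for 24 hours), you can generate a fresh join command on the master node:

kubeadm token create --print-join-command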

Execute the suggested commands.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable and start kubelet.service.

systemctl enable kubelet.service
systemctl start kubelet.service

Deploy the Flannel pod network.

kubectl apply -f ~/k8s/kube-flannel.yml

Output:

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

List nodes in Kubernetes (K8s) cluster.

kubectl get nodes

Output:

NAME                         STATUS     ROLES    AGE    VERSION
docker-offline.example.com   NotReady   master   5m9s   v1.14.1
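
The master node reports NotReady until the Flannel pods are up and running. You can check the pods in the kube-system namespace, then re-check the node status after a minute or two:

kubectl get pods -n kube-system
kubectl get nodes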


Final Thoughts

Installing Kubernetes offline on CentOS 7 allows you to deploy a powerful container orchestration platform even in environments without internet access. In this guide, we covered downloading all the necessary packages and images on an internet-connected machine, transferring them to the offline machine, configuring the prerequisites, and initializing the cluster.

With Kubernetes now installed offline, you have a fully functional and self-contained environment ready for managing containerized applications. To maintain cluster stability and security, regularly update your offline resources, monitor system health, and plan for scaling as your needs grow.


Thank you for following along, and good luck with your Kubernetes installation!


21 responses to “How to install Kubernetes offline on CentOS 7”

  1. Unknown

    I have an issue: after I loaded the flannel image and ran "kubectl apply -f ~/k8s/kube-flannel.yml", the flannel pod status is "CrashLoopBackOff". Can you tell me the reason for this error and how I can fix it? Thank you so much.

  2. Ahmer M

    It looks like we have to manually define the CIDR here.

    Add the following arguments in /etc/kubernetes/manifests/kube-controller-manager.yaml:
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16

    systemctl restart kubelet.service

    Check again.

  3. Unknown

    I checked and those two lines already exist in the file. When I restarted, the error still occurred. When I checked the events, I received the following:
    Normal Scheduled 8m1s default-scheduler Successfully assigned kube-system/kube-flannel-ds-amd64-lxgk9 to master
    Normal Pulled 8m kubelet, master Container image "quay.io/coreos/flannel:v0.11.0-amd64" already present on machine
    Normal Started 8m kubelet, master Started container install-cni
    Normal Created 8m kubelet, master Created container install-cni
    Normal Started 7m11s (x4 over 7m59s) kubelet, master Started container kube-flannel
    Warning BackOff 6m33s (x8 over 7m56s) kubelet, master Back-off restarting failed container

  4. Ahmer M

    Please contact me on Facebook or Linkedin.

  5. Ron

    Whatever conversation you have on LinkedIn / FB should be backported here. People should be able to see what the solution to the issue was.

  6. Ahmer M

    Hi Ron,
    Thanks for the advice.

  7. Unknown

    Hi Ahmer, thank you for your tutorial. It was very useful, but I have the same issue. It is stuck pulling the flannel image.
    Regards,
    Kevin

  8. Ahmer M

    Try the above solution, if problem persists then contact me on Facebook.

  9. Unknown

    Hi Ahmer,
    I could fix it. You need to define a gateway on your NIC; even if you have just one NIC, you still have to define a gateway on it. Then the flannel and DNS services will start correctly.

    Anyway, thank you for your help.
    Regards,
    Kevin

  10. Ahmer M

    Thank you very much for sharing the solution. It will be really helpful for the other readers.

  11. aarav2251@gmail.com

    Hello sir, I want to install Kubernetes 1.16 on RHEL 7 offline. Can you please help me out with this?

  12. Ahmer M

    You can follow the same steps as in the above post to install Kubernetes offline on RHEL 7.

  13. Anonymous

    My Kubernetes version is v1.19.3. I solved it by copying the flannel Docker image to the offline machines and appending the argument "--pod-network-cidr 10.244.0.0/16" to the "kubeadm init" command.

  14. Ahmer M

    Thanks for sharing your experience.

  15. Anonymous

    I had the same issue. I ran kubectl describe pod {flannel-pod} -n kube-system and noticed that the flannel image was missing. I downloaded the image on the offline server using the same instructions used to download the other images. I reran kubectl apply -f flannel.yml and it worked.

  16. Anonymous

    Hi Ahmer,

    Nice article, but the master status is showing as NotReady.
    [root@master docker-image]# kubectl get nodes
    NAME     STATUS     ROLES    AGE   VERSION
    master   NotReady   master   10m   v1.16.1

    Any idea?

  17. Ahmer M

    Execute the following command to drill down to the actual cause of this problem.

    kubectl describe nodes NODE_NAME

  18. Assem

    Hello! Great article! But I have an error: when I run kubeadm init, I get this output: [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns/coredns:v1.8.4: output: Error response from daemon: Get "https://k8s.gcr.io/v2/": dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:50587->[::1]:53: read: connection refused, error: exit status 1

    I figured it is because of the version of coredns, so I tried to pull the correct version: docker pull k8s.gcr.io/coredns:v1.8.4

    Error response from daemon: manifest for k8s.gcr.io/coredns:v1.8.4 not found: manifest unknown: Failed to fetch "v1.8.4" from request "/v2/coredns/manifests/v1.8.4".

    I got this issue. What can I do?

  19. Ahmer M

    Try to download all the Docker images without specifying any versions.

  20. Unknown

    I am getting the following error:
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

  21. Ahmer M

    try "systemctl start docker"
