# Deploying LPC Rancher
This section provides detailed steps for deploying the LPC Rancher environment and configuring Kubernetes using RKE. It covers preparing the VMs, installing the necessary dependencies, and setting up Rancher with high availability to manage Kubernetes clusters efficiently.
## 1. Preparing the Virtual Machine (VM)
- Configure the network interface and activate the network connection.

  ```shell
  vi /etc/sysconfig/network-scripts/ifcfg-ens192
  nmcli c reload ens192
  nmcli c up ens192
  ```
- Disable SELinux, the firewall, and swap.

  ```shell
  sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  setenforce 0
  systemctl stop firewalld
  yum erase firewalld -y 1>/dev/null
  swapoff -a
  sed -i '/swap/d' /etc/fstab
  ```
- Set kernel parameters.

  ```shell
  sed -i '$a net.bridge.bridge-nf-call-iptables=1' /etc/sysctl.d/99-sysctl.conf
  sysctl -p
  ```
- Set the hostname.

  ```shell
  hostnamectl set-hostname rancher-box-server-01
  ```
## 2. Configuring the System
- Add kernel parameters for Rancher and Kubernetes.

  ```shell
  if [[ $(cat /etc/sysctl.conf | grep rancher-kernel | wc -l) -eq 0 ]]; then
  echo "
  ## rancher-kernel ##
  net.bridge.bridge-nf-call-ip6tables=1
  net.bridge.bridge-nf-call-iptables=1
  net.ipv4.ip_forward=1
  net.ipv4.conf.all.forwarding=1
  net.ipv4.neigh.default.gc_thresh1=4096
  net.ipv4.neigh.default.gc_thresh2=6144
  net.ipv4.neigh.default.gc_thresh3=8192
  net.ipv4.neigh.default.gc_interval=60
  net.ipv4.neigh.default.gc_stale_time=120
  kernel.perf_event_paranoid=-1
  # sysctls for k8s node config
  net.ipv4.tcp_slow_start_after_idle=0
  net.core.rmem_max=16777216
  fs.inotify.max_user_watches=524288
  kernel.softlockup_all_cpu_backtrace=1
  kernel.softlockup_panic=0
  kernel.watchdog_thresh=30
  fs.file-max=2097152
  fs.inotify.max_user_instances=8192
  fs.inotify.max_queued_events=16384
  vm.max_map_count=262144
  fs.may_detach_mounts=1
  net.core.netdev_max_backlog=16384
  net.ipv4.tcp_wmem=4096 12582912 16777216
  net.core.wmem_max=16777216
  net.core.somaxconn=32768
  net.ipv4.tcp_max_syn_backlog=8096
  net.ipv4.tcp_rmem=4096 12582912 16777216
  net.ipv6.conf.all.disable_ipv6=1
  net.ipv6.conf.default.disable_ipv6=1
  net.ipv6.conf.lo.disable_ipv6=1
  kernel.yama.ptrace_scope=0
  vm.swappiness=0
  kernel.core_uses_pid=1
  # Do not accept source routing
  net.ipv4.conf.default.accept_source_route=0
  net.ipv4.conf.all.accept_source_route=0
  # Promote secondary addresses when the primary address is removed
  net.ipv4.conf.default.promote_secondaries=1
  net.ipv4.conf.all.promote_secondaries=1
  # Enable hard and soft link protection
  fs.protected_hardlinks=1
  fs.protected_symlinks=1
  # see details in https://help.aliyun.com/knowledge_detail/39428.html
  net.ipv4.conf.all.rp_filter=0
  net.ipv4.conf.default.rp_filter=0
  net.ipv4.conf.default.arp_announce=2
  net.ipv4.conf.lo.arp_announce=2
  net.ipv4.conf.all.arp_announce=2
  # see details in https://help.aliyun.com/knowledge_detail/41334.html
  net.ipv4.tcp_max_tw_buckets=5000
  net.ipv4.tcp_syncookies=1
  net.ipv4.tcp_fin_timeout=30
  net.ipv4.tcp_synack_retries=2
  kernel.sysrq=1
  " >> /etc/sysctl.conf
  sysctl -p
  fi
  ```
- Set file limits, then format and mount the data disk used by Longhorn.

  ```shell
  if [[ $(cat /etc/security/limits.conf | grep nofile | wc -l) -le 1 ]]; then
  cat >> /etc/security/limits.conf <<EOF
  * soft nofile 65535
  * hard nofile 65536
  EOF
  fi

  mkfs.xfs /dev/sdb
  mkdir -p /var/lib/longhorn
  mount /dev/sdb /var/lib/longhorn
  echo "/dev/sdb /var/lib/longhorn xfs defaults 0 0" >> /etc/fstab
  ```
## 3. Deploying the Dependency Packages
- Install Docker and the iSCSI initiator.

  ```shell
  yum install -y yum-utils device-mapper-persistent-data lvm2 1>/dev/null
  yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  yum install -y -q docker-ce-19.03.13-3.el8 docker-ce-cli-19.03.13-3.el8 1>/dev/null
  yum install -y iscsi-initiator-utils 1>/dev/null
  systemctl start iscsid && systemctl enable iscsid && systemctl status iscsid
  ```
- Create the `kubernetes.repo` file, install `kubectl`, and write the Docker daemon configuration.

  ```shell
  cat <<EOF > kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF
  sudo mv kubernetes.repo /etc/yum.repos.d/
  yum clean all
  yum install kubectl -y 1>/dev/null
  mkdir -p ~/.kube

  mkdir -p /etc/docker
  cat <<EOF > /etc/docker/daemon.json
  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "bip": "<ip-address>/24",
    "dns": ["<ip-address>", "<ip-address>", "<ip-address>"],
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m",
      "max-file": "3"
    }
  }
  EOF
  systemctl enable docker
  systemctl daemon-reload
  systemctl restart docker
  systemctl status docker
  systemctl status iscsid
  ```
## 4. Deploying Rancher Using RKE
- Create the `rancher` user on each server node and configure passwordless SSH between the nodes. Then download RKE v1.4.3 and set permissions.

  ```shell
  # Server nodes: <ip-address>, <ip-address>, <ip-address>
  adduser rancher
  passwd rancher
  sudo usermod -aG docker rancher

  # On <ip-address>, as the rancher user:
  su - rancher
  ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
  ssh-copy-id -i .ssh/id_rsa.pub rancher@<ip-address>
  ssh-copy-id -i .ssh/id_rsa.pub rancher@<ip-address>
  ssh-copy-id -i .ssh/id_rsa.pub rancher@<ip-address>

  # Download RKE from https://github.com/rancher/rke/releases/tag/v1.4.3
  # and upload it to /home/rancher/rke/rke_linux-amd64
  mv /home/rancher/rke/rke_linux-amd64 /usr/bin/rke
  chmod +x /usr/bin/rke
  rke --version
  ```
- Create the Rancher cluster configuration file, `rancher-cluster.yml`.

  ```yaml
  nodes:
    - address: <ip-address>
      user: rancher
      role: [controlplane, worker, etcd]
      ssh_key_path: /home/rancher/.ssh/id_rsa
    - address: <ip-address>
      user: rancher
      role: [controlplane, worker, etcd]
      ssh_key_path: /home/rancher/.ssh/id_rsa
    - address: <ip-address>
      user: rancher
      role: [controlplane, worker, etcd]
      ssh_key_path: /home/rancher/.ssh/id_rsa

  services:
    etcd:
      snapshot: true
      creation: 6h
      retention: 24h

  # Required for external TLS termination with
  # ingress-nginx v0.22+
  ingress:
    provider: nginx
    options:
      use-forwarded-headers: "true"
  ```
- Deploy the cluster.

  ```shell
  rke up --config rancher-cluster.yml
  ```
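When `rke up` finishes, RKE writes a kubeconfig file next to the cluster configuration; for a config named `rancher-cluster.yml` this is typically `kube_config_rancher-cluster.yml`. A quick sanity check of the new cluster, assuming `kubectl` is already installed on the node:

```shell
# Point kubectl at the kubeconfig generated by RKE
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml

# All three nodes should report Ready, and system pods should be Running
kubectl get nodes
kubectl get pods --all-namespaces
```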
## 5. Deploying Rancher with Helm
- Install Helm and cert-manager, then deploy Rancher.

  ```shell
  # Resolve the Rancher hostname on all nodes
  ansible all -m shell -a "echo \"<ip-address> rancherui.domain-poc.com\" >> /etc/hosts"

  # Install Helm
  wget https://get.helm.sh/helm-v3.3.1-linux-amd64.tar.gz
  tar xf helm-v3.3.1-linux-amd64.tar.gz
  cp linux-amd64/helm /usr/local/bin/
  helm version

  # https://ranchermanager.docs.rancher.com/v2.6/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster#4-install-cert-manager
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml

  # Add the Jetstack Helm repository
  helm repo add jetstack https://charts.jetstack.io

  # Update your local Helm chart repository cache
  helm repo update

  # Install the cert-manager Helm chart
  helm install cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --create-namespace \
    --version v1.7.1

  # Deploy Rancher v2.6.9 with three replicas
  helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
  helm fetch rancher-stable/rancher --version=2.6.9
  kubectl create namespace cattle-system
  helm install rancher ./rancher-2.6.9.tgz \
    --namespace cattle-system \
    --set hostname=rancherui.domain-poc.com \
    --set replicas=3 \
    --set useBundledSystemChart=true
  ```
- Access the Rancher UI at https://rancherui.domain-poc.com and retrieve the initial admin password.
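For Rancher v2.6, the bootstrap admin password can be read from the `cattle-system` namespace (this is the command given in the Rancher documentation; run it with the kubeconfig generated by RKE):

```shell
# Print the initial admin password generated at install time
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'
```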
## Configuring the Rancher Ingress Controller
To configure the Rancher ingress controller to accept large request headers, follow these steps:
- Log in to Rancher and navigate to your cluster (for example, `rancher-box-pune`).
- Click "Storage" in the left navigation and select "ConfigMaps", then choose the "ingress-nginx" namespace.
- Edit the ConfigMap named "ingress-nginx-controller".
- Add the following parameters to the `data` section:
  - `client-header-buffer-size: 16k`
  - `large-client-header-buffers: 4 16k`
- Save the configuration changes.
This will automatically apply the updated settings to the ingress controller.
Info
You can also update the ConfigMap YAML directly as shown below.
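A minimal sketch of that ConfigMap, assuming a default ingress-nginx installation (the `name` and `namespace` follow the UI steps above; merge these keys into the existing `data` section rather than replacing the live object):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  client-header-buffer-size: "16k"
  large-client-header-buffers: "4 16k"
```

Alternatively, `kubectl -n ingress-nginx edit configmap ingress-nginx-controller` edits the live object directly; the controller picks up ConfigMap changes without a restart.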