Getting Started With Rancher
Rancher is primarily a management and organization platform for Kubernetes clusters at scale. Rancher can not only deploy enterprise Kubernetes on-prem, using physical hardware or VMware's vSphere, but also orchestrate any certified Kubernetes cluster, including Amazon's EKS, Google's GKE, and Microsoft's AKS, all from a unified platform. This holds whether it's a Raspberry Pi cluster sitting on your desk, an RKE cluster running on physical servers in your data center, or even a complete PaaS solution in AWS.
The Rancher server is built on Kubernetes and runs as an application on any certified Kubernetes cluster, and, of course, Rancher is 100% open source with no license keys. The Rancher server provides the primary controller for managing downstream clusters and gives you access to those clusters through a standardized web UI and API. Rancher is primarily deployed on two types of clusters: RKE and K3s. RKE is mainly used in more traditional data center and cloud deployments, while K3s is primarily used in edge and developer laptop deployments.
RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It solves the common frustration of installation complexity with Kubernetes by removing most host dependencies and presenting a stable path for deployment, upgrades, and rollbacks. As long as you can run a supported Docker version, you can deploy and run Kubernetes with RKE.
K3s is a lightweight certified Kubernetes distribution. All duplicate, redundant, and legacy code is removed, and everything needed to run a Kubernetes cluster is baked into a single binary of less than 40MB. This includes etcd, Traefik, and all the Kubernetes components. It is designed to run in resource-constrained environments, remote locations, or inside IoT appliances. K3s has also been built to fully support ARM64 and ARMv7 nodes, so it can even be run on a Raspberry Pi.
Three Linux nodes with the following minimum specs:
You can either follow the Docker installation instructions or use Rancher’s install scripts to install Docker.
Commands:
curl https://releases.rancher.com/install-docker/20.10.sh | sudo bash
From your workstation or management server, download the latest RKE release.
Commands:
cd /tmp
wget https://github.com/rancher/rke/releases/download/v1.2.8/rke_linux-amd64
chmod +x rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
From your workstation or management server, download the latest kubectl release.
Commands:
cd /tmp
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
RKE uses a cluster.yml file to define the nodes in the cluster and the roles each node should have. A node can take on three different roles. The first is the etcd plane, the database for Kubernetes; this role should be deployed in an HA configuration with an odd number of nodes, with a default size of three. A five-member etcd cluster is the largest recommended size because write performance suffers at scale. The second role is the control plane, which hosts the Kubernetes controllers and other related management services; it should be deployed in an HA configuration with a minimum of two nodes.
Note: The control plane doesn’t scale horizontally very well and scales more vertically.
The final role is the worker plane, which hosts your applications and related services. Nodes can support multiple roles, and in the default Rancher configuration, we’ll be building a three-node cluster with all nodes running all roles.
Example cluster.yml file:
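A minimal sketch of such a file, assuming three hypothetical nodes at 192.168.1.101-103 that are reachable over SSH as the ubuntu user, with every node running all three roles:

nodes:
  # Hypothetical addresses and SSH user; adjust for your environment
  - address: 192.168.1.101
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 192.168.1.102
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 192.168.1.103
    user: ubuntu
    role: [controlplane, etcd, worker]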
For more examples, check out the Rancher documentation.
After creating the cluster.yml, we need to run the rke command to build the cluster, as sketched below.
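A minimal sketch, assuming cluster.yml is in the current directory (rke up reads ./cluster.yml by default, so the flag is optional here):

# Provision the Kubernetes cluster described in cluster.yml
rke up --config cluster.yml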
Once rke up completes, RKE will create the file cluster.rkestate; this file contains credentials and the current state of the cluster. RKE will also create the file kube_config_cluster.yml, which kubectl uses to access the cluster. To make access more manageable, we'll want to copy this file to kubectl's config directory.
Commands:
mkdir -p ~/.kube/
cp kube_config_cluster.yml ~/.kube/config

Verify access:
kubectl get nodes
One Linux node with the following minimum specs:
While SSH'd into the K3s node, we'll run the following commands:
sudo su -
curl -sfL https://get.k3s.io | sh -

Verify access:
k3s kubectl get node
Note: For K3s clusters, replace the command "kubectl" with "k3s kubectl".
From your workstation or management server, download the latest Helm release.
Commands:
sudo su -
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
Using the command helm repo add, we'll add the Rancher and Jetstack chart repositories to Helm:
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
Cert-manager will manage the SSL certificates for Rancher:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4
Please see cert-manager's documentation for more details.
We're now going to install Rancher using the default settings and the following commands:
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=rancher.example.com
In single-node mode, DNS is optional, and the node IP/Hostname can be used in place of the Rancher URL.
To provide an HA setup for Rancher, we'll want to create a Layer-4 (TCP mode) or Layer-7 (HTTP mode) load balancer that sits in front of the cluster and forwards traffic on ports 80 and 443 to all nodes. The DNS record for the Rancher URL should point at the load balancer.
For more details, please see Rancher’s documentation.
Downstream clusters in Rancher are RKE/RKE2/K3s clusters that Rancher manages for you. They can also be clusters that were built outside Rancher and then imported. In this example, we'll be building a standard three-node cluster with all nodes running all roles.
Three Linux nodes with the following minimum specs:
You can either follow these Docker installation instructions or use Rancher's install scripts.
Example:
curl https://releases.rancher.com/install-docker/20.10.sh | sudo bash
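Next, from the Rancher UI, we create a new custom cluster, which generates a node registration command. The exact command, including the server URL, token, and CA checksum, is generated by Rancher for your installation; a rough sketch of its shape, with placeholder values and an agent version matching your Rancher release, looks like this:

# Run on each node; the real values come from the Rancher UI
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.5.8 \
  --server https://rancher.example.com \
  --token <node-registration-token> \
  --ca-checksum <ca-checksum> \
  --etcd --controlplane --worker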
We'll want to run that registration command on each node. Once all three nodes have joined successfully, the cluster should be in an active state.
Snapshots of the etcd database can be taken and saved locally or to S3. Etcd backups are used to back up the state of the Kubernetes cluster. This backup includes all the deployments, secrets, and configmaps for the cluster.
Note: This does not back up any application volumes being used in the cluster. You'll need a third-party tool to back up your application data.
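As a quick sketch, a one-off snapshot of an RKE cluster can be taken with the rke CLI from the workstation that holds cluster.yml; the snapshot name below is an arbitrary example:

# Saves a named etcd snapshot on each etcd node
rke etcd snapshot-save --config cluster.yml --name manual-snapshot-001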
Rancher monitoring is powered by Prometheus, Grafana, Alertmanager, the Prometheus Operator, and the Prometheus adapter.
This monitoring stack allows you to monitor the state and health of the cluster and its workloads, define alerts on those metrics, and build custom Grafana dashboards.
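Monitoring is typically enabled from the Rancher UI (Apps & Marketplace). As a rough Helm-based sketch, assuming the rancher-charts repository and the default cattle-monitoring-system namespace:

helm repo add rancher-charts https://charts.rancher.io
helm repo update
kubectl create namespace cattle-monitoring-system
# The CRD chart must be installed before the main chart
helm install rancher-monitoring-crd rancher-charts/rancher-monitoring-crd --namespace cattle-monitoring-system
helm install rancher-monitoring rancher-charts/rancher-monitoring --namespace cattle-monitoring-system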
OPA Gatekeeper constraints are a set of policies that allow or deny particular behaviors in a Kubernetes cluster. A sketch of the kind of policy I usually recommend applying follows:
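As an illustration, assuming the K8sRequiredLabels ConstraintTemplate from the Gatekeeper policy library is already installed, the following hypothetical constraint requires every namespace to carry an owner label:

# Denies creation of any Namespace without an "owner" label
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]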
By default, Kubernetes can be vulnerable to numerous security issues, including privilege escalation, allowing users to gain root access to the Kubernetes host servers. To address this issue, Rancher created a guide with a number of setting changes to lock down a cluster.
Check out these instructions for hardening a production installation of a RKE cluster with Rancher.
To verify the cluster hardening was applied correctly and hasn’t changed, we configure a scheduled scan using this guide.
By default, Rancher clusters have a scheduled backup job that takes an etcd backup every 12 hours. But this only backs up the etcd database, not any volume data. It's also designed to restore a whole cluster by rolling the entire cluster back, rather than restoring individual objects. This is where a third-party tool can be used to take volume- and object-level backups.
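The recurring snapshot behavior is configured in cluster.yml under the etcd service. A sketch, assuming an S3 target; the bucket name, region, endpoint, and credentials are placeholders:

services:
  etcd:
    backup_config:
      interval_hours: 12   # how often to take a snapshot
      retention: 6         # how many snapshots to keep
      s3backupconfig:
        access_key: <access-key>
        secret_key: <secret-key>
        bucket_name: rke-etcd-backups
        region: us-east-1
        endpoint: s3.amazonaws.com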
For more details on the Rancher etcd backup, please see this documentation.
To install a third-party data protection tool, like TrilioVault for example, on a Rancher cluster, we’ll want to follow the official tool install guide.
See example below.
We'll then want to follow the example application guide to deploy a WordPress site with a MySQL database and an attached volume. See here.
Then, to kick off a restore, we'll need to create a restore job, which can target the same cluster or a different cluster (great for a DR plan), following these steps.
This Getting Started With Rancher Refcard provides a step-by-step guide for installing Rancher, addressing standard Day-2 tasks, and making your Kubernetes cluster production-ready.
Source: https://dzone.com/refcardz/getting-started-with-rancher