Provisioning an RKE2 Cluster on Custom VMs
This guide demonstrates how to provision an RKE2 cluster on custom VMs that are created and configured outside Rancher, and then import the cluster into Rancher for centralized management.
Overview
(Figure: Custom VM RKE2 Cluster in Rancher)
For environments where VMs are provisioned independently (e.g., via manual setup or external orchestration tools), RKE2 can be installed directly on the nodes and subsequently imported into Rancher for management.
Prerequisites
VM Requirements
- Operating System: Ubuntu 22.04, CentOS 8, or similar Linux distributions.
- Hardware: Minimum 4 CPUs and 8 GB RAM per node.
- Networking:
- Nodes must be able to communicate with each other on required ports.
- Nodes must be accessible from your local machine or Rancher server.
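The exact port list depends on your CNI and exposed services; as a sketch, assuming the default Canal CNI and the `ufw` firewall, the core RKE2 ports can be opened as follows (run as root on each node; etcd ports are only needed on servers):

```shell
# RKE2 supervisor API: agents register with servers on this port.
ufw allow 9345/tcp
# Kubernetes API server.
ufw allow 6443/tcp
# kubelet metrics (all nodes).
ufw allow 10250/tcp
# etcd client/peer traffic (control-plane nodes only).
ufw allow 2379:2380/tcp
# Canal/Flannel VXLAN overlay (all nodes).
ufw allow 8472/udp
```

If you use a different CNI or firewall (firewalld, security groups), translate these rules accordingly.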
RKE2 Requirements
- RKE2 binaries must be installed on all nodes.
- A shared token for cluster authentication.
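The first server generates a token automatically, but you can also pre-generate one yourself and set it on every node. A minimal sketch, assuming `openssl` is available:

```shell
# Generate a 32-character hex token to share across all nodes.
# (If you skip this, use the auto-generated token from
# /var/lib/rancher/rke2/server/node-token on the first server.)
RKE2_TOKEN=$(openssl rand -hex 16)
echo "$RKE2_TOKEN"
```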
Setting Up RKE2
Step 1: Install RKE2 on Control Plane Nodes
- SSH into each control-plane VM.
- Install RKE2 using the installation script:
curl -sfL https://get.rke2.io | sh -
systemctl enable rke2-server.service
systemctl start rke2-server.service
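Optionally, before starting the service, you can pin the shared token and add extra TLS SANs (so the copied kubeconfig works against the node's external address) in `/etc/rancher/rke2/config.yaml`. The values below are placeholders:

```yaml
# /etc/rancher/rke2/config.yaml on the first server (placeholder values)
token: my-shared-secret      # omit to let RKE2 generate one
tls-san:
  - 192.0.2.10               # IP or DNS name used for remote kubectl access
```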
- Retrieve the cluster token from /var/lib/rancher/rke2/server/node-token on the first control-plane node; this token is required when adding other nodes to the cluster.
- Copy the kubeconfig file to your local machine for kubectl access:
scp root@<control-plane-ip>:/etc/rancher/rke2/rke2.yaml ./kubeconfig.yaml
- Update the server address in the kubeconfig file from the default 127.0.0.1 to the control-plane node's IP.
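The rewrite is a one-line substitution. The example below demonstrates it on a sample `server:` line (192.0.2.10 is a placeholder for your control-plane IP); apply the same `sed` with `-i` to your copied `kubeconfig.yaml`:

```shell
# Replace the loopback API address with the control-plane node's IP.
REWRITTEN=$(echo 'server: https://127.0.0.1:6443' \
  | sed 's#https://127\.0\.0\.1:6443#https://192.0.2.10:6443#')
echo "$REWRITTEN"
# For the real file: sed -i 's#https://127\.0\.0\.1:6443#https://192.0.2.10:6443#' kubeconfig.yaml
```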
Step 2: Install RKE2 on Worker Nodes
- SSH into each worker VM.
- Install RKE2 in agent mode:
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
systemctl enable rke2-agent.service
systemctl start rke2-agent.service
- Configure the worker nodes to join the cluster by editing /etc/rancher/rke2/config.yaml (create the file if it does not exist):
server: https://<control-plane-ip>:9345
token: <cluster-token>
- Restart the RKE2 agent:
systemctl restart rke2-agent.service
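After the restart, you can confirm that the agent is healthy and has registered. These commands run against a live node, so they are shown for reference only:

```shell
# On the worker: check agent health and recent logs.
systemctl status rke2-agent.service --no-pager
journalctl -u rke2-agent.service --no-pager -n 20

# On a control-plane node: the worker should appear shortly.
/var/lib/rancher/rke2/bin/kubectl \
  --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
```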
Importing the Cluster into Rancher
Step 1: Add Cluster in Rancher
- Log in to Rancher.
- Navigate to Cluster Management and click Add Cluster.
- Select Import an Existing Cluster.
Step 2: Generate Import Command
- Provide a name for your cluster (e.g., custom-vm-cluster).
- Copy the generated kubectl command for importing the cluster.
Step 3: Run the Import Command
- SSH into one of the control-plane nodes or a machine with access to the cluster.
- Run the kubectl command to deploy the Rancher agents.
- Verify the cluster’s status in Rancher. Once the agents are deployed, the cluster should appear as Active.
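The import command deploys the Rancher agent into the cattle-system namespace; the cluster will not show as Active until the agent is Running. A quick check, run wherever you have cluster access:

```shell
# cattle-cluster-agent must reach Running before Rancher marks the cluster Active.
kubectl --kubeconfig=./kubeconfig.yaml -n cattle-system get deployments,pods
```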
Testing and Validation
Accessing the Cluster
Use kubectl with the provided kubeconfig file:
kubectl --kubeconfig=./kubeconfig.yaml get nodes
Ensure all control-plane and worker nodes are listed as Ready.
Testing Workloads
Deploy a test workload to validate cluster functionality. Save the following manifest as nginx-test.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
Apply the manifest and confirm the pod reaches Running:
kubectl apply -f nginx-test.yaml
kubectl get pods
Considerations
- Node Health Monitoring: Ensure all nodes have sufficient resources and are accessible from Rancher.
- Backup Configuration: Regularly back up etcd and cluster configurations.
- Security: Use firewalls and VPNs to secure node communication.
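On the backup point above: RKE2 servers take scheduled etcd snapshots by default, stored under /var/lib/rancher/rke2/server/db/snapshots. An on-demand snapshot can be taken with the bundled subcommand on a control-plane node (the snapshot name below is arbitrary):

```shell
# Take a named on-demand etcd snapshot (run as root on a server node).
rke2 etcd-snapshot save --name manual-backup
```

Copy snapshots off the node regularly so a full node loss does not also lose the backups.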