Mitigating CVE-2025-1974 in RKE2 Clusters: A Comprehensive Guide to Updating ingress-nginx
The recent discovery of CVE-2025-1974, a critical vulnerability in the ingress-nginx controller with a CVSS score of 9.8, has sent Kubernetes administrators scrambling to patch their clusters. While my previous post covered general mitigation strategies, RKE2 users face unique challenges due to its specialized deployment methodology. As someone who’s helped numerous RKE2 users tackle this vulnerability, I’ve compiled this guide to address the specific nuances of updating ingress-nginx in RKE2 environments.
Understanding the RKE2 Difference
RKE2 (Rancher Kubernetes Engine 2) handles ingress-nginx differently than standard Kubernetes distributions. There are two key scenarios to consider:
- Default RKE2 ingress-nginx - RKE2 ships with ingress-nginx as a built-in component
- Custom ingress-nginx deployments - Separately deployed via Helm or manifests
Each scenario requires a different mitigation approach, so let’s dive into the specific strategies for each.
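Not sure which scenario applies to you? RKE2 manages its packaged components as HelmChart resources in kube-system, so a quick check (assuming the default resource names and the standard upstream labels) tells you whether the bundled controller is in play:

# Is the RKE2-packaged controller installed?
kubectl get helmchart -n kube-system rke2-ingress-nginx
# Is there a separately deployed controller (commonly in its own namespace)?
kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx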
Scenario 1: Updating the Default RKE2 ingress-nginx Controller
If you’re using the default ingress-nginx that ships with RKE2, your update path depends on how your cluster was deployed - either via Rancher’s UI or using the standalone RKE2 CLI.
For Rancher-Managed RKE2 Clusters
If your RKE2 cluster is managed through Rancher, follow these steps:
Check Current RKE2 Version
Navigate to your cluster in the Rancher UI and note the RKE2 version.
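If you prefer the command line, the kubelet version reported by each node includes the RKE2 revision:

# The VERSION column shows the RKE2 release, e.g. ends in "+rke2r1"
kubectl get nodes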
Verify Vulnerability Status
Check if your RKE2 version includes a vulnerable ingress-nginx version:
# SSH into a control plane node
ssh user@rke2-server

# Check the running ingress-nginx version
# (on many RKE2 releases the default controller runs as a DaemonSet rather than a Deployment;
# if the command below returns nothing, substitute "daemonset" for "deployment")
kubectl get deployment -n kube-system rke2-ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
If the controller image is older than v1.11.5 (on the 1.11.x line) or v1.12.1 (on the 1.12.x line), your cluster is vulnerable.
Update the Cluster
In the Rancher UI:
- Navigate to the cluster
- Click “⋮” > “Edit Config”
- Under “Kubernetes Version,” select a version that includes the patched ingress-nginx
- Choose an RKE2 release that bundles a patched ingress-nginx (v1.11.5 or v1.12.1 and later); the RKE2 release notes list the bundled rke2-ingress-nginx chart version for each release
- Click “Save” to begin the upgrade process
Rancher will orchestrate a rolling upgrade of your cluster nodes.
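You can follow the rolling upgrade from the CLI as well; the VERSION column updates node by node as Rancher works through the cluster:

# Watch node versions change as the upgrade progresses
kubectl get nodes -w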
Verify the Update
After the upgrade completes, verify the new ingress-nginx version:
kubectl get deployment -n kube-system rke2-ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
For Standalone RKE2 Clusters
If you’re running RKE2 without Rancher management, you’ll need to update using the RKE2 CLI:
Check Which Version to Upgrade To
Review the RKE2 release notes to identify a version with the patched ingress-nginx.
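It also helps to record what you are currently running before you start; the rke2 binary on any node prints the installed release:

# Current RKE2 release on this node
rke2 --version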
Update RKE2 on the First Server Node
# Update the RKE2 package (use the channel or INSTALL_RKE2_VERSION that matches your target patched release)
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.28 sh -

# Restart the RKE2 service
systemctl restart rke2-server.service

# Verify the ingress-nginx version after restart
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get deployment -n kube-system rke2-ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
Update Remaining Control Plane Nodes
Repeat the same steps on each control plane node, one at a time. Wait for each node to become ready before proceeding to the next:
# Check node status
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
Update Worker Nodes
Finally, update each worker node:
# On worker nodes:
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.28 sh -
systemctl restart rke2-agent.service
Disable the Default ingress-nginx (Alternative Approach)
If you can’t immediately upgrade RKE2 but need to mitigate the vulnerability, you can disable the default ingress-nginx and deploy a patched version separately:
Create a Custom Config to Disable Default ingress-nginx
# On server nodes
mkdir -p /etc/rancher/rke2

# Add the disable entry to the config file
# (if /etc/rancher/rke2/config.yaml already exists, edit it rather than overwriting it)
cat << EOF > /etc/rancher/rke2/config.yaml
disable: rke2-ingress-nginx
EOF

# Restart RKE2
systemctl restart rke2-server.service
Note that disable is a server-side configuration option, so it only needs to be set on server nodes; agent nodes do not deploy the packaged charts.
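Before deploying a replacement controller (which will typically want the same host ports 80 and 443), it is worth confirming the packaged controller is actually gone:

# Check for the packaged HelmChart and any remaining controller pods
kubectl get helmchart -n kube-system | grep -i ingress
kubectl get pods -n kube-system | grep -i ingress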
Deploy a Patched Version (Continue to the next section)
Scenario 2: Deploying a Custom Patched ingress-nginx
Whether you’ve disabled the default controller or are already using a custom deployment, here’s how to deploy a patched version of ingress-nginx on RKE2:
Using Helm (Recommended)
Add the Helm Repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Install or Upgrade ingress-nginx
# For new installations
# Note: the upstream chart pins an image digest by default; if overriding only the tag causes an
# image pull error, also set controller.image.digest="" (or pin a chart --version that ships v1.12.1)
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.image.tag=v1.12.1 \
  --set controller.admissionWebhooks.enabled=true \
  --set controller.service.type=LoadBalancer

# For upgrades of existing Helm installations
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.image.tag=v1.12.1 \
  --reuse-values
Verify the Deployment
kubectl get deployment -n ingress-nginx ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
RKE2-Specific Configuration Considerations
When deploying ingress-nginx on RKE2, consider these important configuration adjustments:
Set the Publish Service Correctly
RKE2 uses a specific networking setup. Ensure the publish-service flag is correctly set:
controller:
  publishService:
    enabled: true
    pathOverride: ingress-nginx/ingress-nginx-controller
Configure IngressClass
If migrating from the default RKE2 controller, ensure you set up the IngressClass properly:
# For Helm values
controller:
  ingressClassResource:
    name: nginx
    default: true # Only if you want this to be the default
If you deploy from manifests instead of Helm (the chart creates this resource for you), create a matching IngressClass:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true" # Only if you want this to be the default
spec:
  controller: k8s.io/ingress-nginx
Update Existing Ingress Resources
Check if your existing Ingress resources need to be updated:
# Find Ingress resources using the rke2-nginx class
kubectl get ingress --all-namespaces -o json | jq '.items[] | select(.spec.ingressClassName == "rke2-nginx") | .metadata.namespace + "/" + .metadata.name'
Update them to use your new IngressClass:
kubectl patch ingress INGRESS_NAME -n NAMESPACE --type=json -p='[{"op": "replace", "path": "/spec/ingressClassName", "value": "nginx"}]'
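If you have many Ingress resources to move over, the lookup and the patch can be combined into a small loop. This is a sketch that assumes the same rke2-nginx to nginx class rename as above:

# Re-point every Ingress that still references the rke2-nginx class
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[] | select(.spec.ingressClassName == "rke2-nginx") | .metadata.namespace + " " + .metadata.name' \
  | while read -r ns name; do
      kubectl patch ingress "$name" -n "$ns" --type=json \
        -p='[{"op": "replace", "path": "/spec/ingressClassName", "value": "nginx"}]'
    done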
Temporary Mitigation: Disable the Admission Webhook
If you can’t immediately update ingress-nginx but need to mitigate the vulnerability, you can disable the admission webhook component:
For Default RKE2 ingress-nginx
# Remove the validating webhook arguments from the controller.
# Edit the controller and delete the --validating-webhook, --validating-webhook-certificate
# and --validating-webhook-key arguments from the controller container's args list
# (on many RKE2 releases the default controller runs as a DaemonSet; substitute "daemonset" if so):
kubectl edit deployment rke2-ingress-nginx-controller -n kube-system
# Delete the ValidatingWebhookConfiguration (it is cluster-scoped, so no namespace flag is needed)
kubectl delete validatingwebhookconfiguration rke2-ingress-nginx-admission
Keep in mind that RKE2 manages this controller through its bundled Helm chart, so manual edits can be reverted the next time the chart is reconciled (for example, after an RKE2 upgrade). The HelmChartConfig approach shown below is the persistent way to change the packaged controller's settings.
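Here is a minimal sketch of that HelmChartConfig. It assumes the packaged rke2-ingress-nginx chart accepts the upstream chart's controller.admissionWebhooks.enabled value (the RKE2 chart closely tracks the upstream one). Place the file in /var/lib/rancher/rke2/server/manifests/ on a server node and RKE2's helm controller will redeploy the controller with the webhook disabled:

# /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      admissionWebhooks:
        enabled: false

Once you have upgraded to an RKE2 release that bundles the patched controller, remove the file (or set the value back to true) to restore the webhook.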
For Custom Helm-Deployed ingress-nginx
# Update with admission webhooks disabled
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.admissionWebhooks.enabled=false \
--reuse-values
Advanced Mitigation: Network-Level Protection
If you need a network-level mitigation for RKE2, you can use the same DaemonSet approach detailed in my previous post, but with RKE2-specific adjustments:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: webhook-protector-rke2
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: webhook-protector-rke2
  template:
    metadata:
      labels:
        app: webhook-protector-rke2
    spec:
      hostNetwork: true
      containers:
      - name: iptables-manager
        image: rancher/mirrored-alpine:3.19
        command:
        - /bin/sh
        - -c
        - |
          apk add --no-cache iptables curl
          # Allow API server access (RKE2-specific approach)
          # RKE2 typically uses the API server on localhost:6443
          iptables -N WEBHOOK-PROTECTION || true
          iptables -F WEBHOOK-PROTECTION
          # Allow localhost access - needed for RKE2's local kube-apiserver
          iptables -A WEBHOOK-PROTECTION -s 127.0.0.1 -p tcp --dport 8443 -j ACCEPT
          # Allow pod CIDR ranges (specific to your RKE2 setup; RKE2's default cluster CIDR is 10.42.0.0/16)
          POD_CIDR=$(ip route | grep -E "pod|cluster" | awk '{print $1}')
          if [ -n "$POD_CIDR" ]; then
            echo "Detected Pod CIDR: $POD_CIDR"
            # Allow only kube-apiserver from the pod network
            iptables -A WEBHOOK-PROTECTION -s $POD_CIDR -p tcp --dport 8443 -j ACCEPT
          fi
          # Block all other access to the admission webhook port
          iptables -A WEBHOOK-PROTECTION -p tcp --dport 8443 -j DROP
          # Insert the chain into the INPUT chain
          iptables -D INPUT -j WEBHOOK-PROTECTION 2>/dev/null || true
          iptables -I INPUT -j WEBHOOK-PROTECTION
          echo "Protection applied, sleeping..."
          sleep infinity
        securityContext:
          privileged: true
      tolerations:
      - operator: "Exists"
Apply this with:
kubectl apply -f webhook-protector-rke2.yaml
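To confirm the rules actually landed on the nodes, you can exec into one of the DaemonSet's pods and list the chain (this assumes the app=webhook-protector-rke2 label from the manifest above):

# Pick one of the protector pods and inspect the custom chain
POD=$(kubectl get pods -n kube-system -l app=webhook-protector-rke2 -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n kube-system "$POD" -- iptables -L WEBHOOK-PROTECTION -n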
Special Considerations for Air-Gapped RKE2 Environments
Air-gapped RKE2 installations require additional steps:
Pre-Download Updated Container Images
# On a connected system, pull and save the updated images
# (check the chart's values.yaml for the kube-webhook-certgen tag that matches your chart/controller version)
docker pull registry.k8s.io/ingress-nginx/controller:v1.12.1
docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
docker save registry.k8s.io/ingress-nginx/controller:v1.12.1 > ingress-controller.tar
docker save registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0 > webhook-certgen.tar
Transfer Images to Air-Gapped Environment
Transfer the image tarballs to your air-gapped environment.
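For example, via a transfer host reachable from both sides (the hostname here is a placeholder):

# Copy the tarballs to the air-gapped side
scp ingress-controller.tar webhook-certgen.tar user@airgap-host:/tmp/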
Import Images to Private Registry
# Load images
docker load < ingress-controller.tar
docker load < webhook-certgen.tar

# Tag for your private registry
docker tag registry.k8s.io/ingress-nginx/controller:v1.12.1 your.private.registry/ingress-nginx/controller:v1.12.1
docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0 your.private.registry/ingress-nginx/kube-webhook-certgen:v1.4.0

# Push to private registry
docker push your.private.registry/ingress-nginx/controller:v1.12.1
docker push your.private.registry/ingress-nginx/kube-webhook-certgen:v1.4.0
Deploy Using Private Registry
# Assumes the ingress-nginx chart itself is also available offline
# (for example fetched with "helm pull" and transferred alongside the images)
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.image.registry=your.private.registry \
  --set controller.image.image=ingress-nginx/controller \
  --set controller.image.tag=v1.12.1 \
  --set controller.admissionWebhooks.patch.image.registry=your.private.registry \
  --set controller.admissionWebhooks.patch.image.image=ingress-nginx/kube-webhook-certgen \
  --set controller.admissionWebhooks.patch.image.tag=v1.4.0
Monitoring and Verification
After applying mitigations, verify that your RKE2 cluster is protected:
Check Running Version
For default RKE2 controller:
kubectl get deployment -n kube-system rke2-ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
For custom deployments:
kubectl get deployment -n ingress-nginx ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
Verify Webhook Status
If you’ve disabled the webhook:
# Check if the webhook configuration still exists
kubectl get validatingwebhookconfiguration | grep nginx

# Check whether any validating-webhook flags remain in the controller args (no output means the webhook is disabled)
kubectl get deployment -n kube-system rke2-ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].args}' | grep validating-webhook
Test Ingress Functionality
Deploy a test application to verify your ingress continues to work:
# Create a test deployment
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80

# Create a test ingress
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
spec:
  ingressClassName: nginx # Or "rke2-nginx" for default controller
  rules:
  - host: "test.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF

# Test access
curl -H "Host: test.example.com" http://<INGRESS_IP>/
Understanding the Risk in RKE2 Environments
RKE2 clusters face the same risks as any Kubernetes cluster with this vulnerability, but there are additional considerations:
- Default controller is system-level - The default ingress-nginx runs in the kube-system namespace with elevated privileges
- Fleet/Rancher integration - If you're using Rancher Fleet for GitOps, the vulnerability could potentially be exploited to gain access across multiple clusters
- CNI considerations - RKE2's default CNI (Canal/Calico) may influence how you can isolate the webhook endpoint
Long-Term Recommendations for RKE2 Environments
Beyond immediate mitigation, consider these RKE2-specific recommendations:
Implement Systematic Updates
Create a regular process to update RKE2, particularly when security patches are released:
#!/bin/bash
# Script to systematically update RKE2 clusters

# Define update order - control plane first, then workers
CONTROL_PLANE_NODES="server1 server2 server3"
WORKER_NODES="worker1 worker2 worker3"

# Target version - set this to a release that bundles the patched ingress-nginx
TARGET_VERSION="v1.28.6+rke2r1"

# Update control plane nodes one by one
for node in $CONTROL_PLANE_NODES; do
  echo "Updating control plane node $node..."
  ssh $node "curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=$TARGET_VERSION sh -"
  ssh $node "systemctl restart rke2-server.service"

  # Wait for node to become ready
  # (in production, poll "kubectl get nodes" rather than relying on a fixed sleep)
  echo "Waiting for node $node to become ready..."
  sleep 60
done

# Update worker nodes (can be done in parallel if desired)
for node in $WORKER_NODES; do
  echo "Updating worker node $node..."
  ssh $node "curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=$TARGET_VERSION sh -"
  ssh $node "systemctl restart rke2-agent.service"
done
Consider Custom Ingress Controller Deployment
Instead of relying on the built-in controller, consider deploying ingress-nginx separately via Helm. This gives you more control over updates:
# Disable the default controller in RKE2 config
# (merge into the existing /etc/rancher/rke2/config.yaml if you already have one)
cat << EOF > /etc/rancher/rke2/config.yaml
disable: rke2-ingress-nginx
EOF

# Deploy your own via Helm or operator
Implement Network Policies
Use NetworkPolicies to isolate the ingress-nginx controller:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: protect-ingress-nginx
  namespace: kube-system # or ingress-nginx for custom deployments
spec:
  podSelector:
    matchLabels:
      app: rke2-ingress-nginx # adjust selector as needed
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
  - ports:
    - port: 80
      protocol: TCP
    - port: 443
      protocol: TCP
Conclusion
Mitigating CVE-2025-1974 in RKE2 environments requires understanding the unique aspects of how RKE2 deploys and manages the ingress-nginx controller. Whether you’re running the default controller or a custom deployment, the strategies outlined in this guide provide multiple paths to securing your clusters.
For RKE2 users, the most straightforward approach is typically to update to a newer RKE2 version that includes the patched ingress-nginx. If that’s not immediately feasible, disabling the admission webhook component provides effective interim protection without significant operational impact.
Remember that the security of your Kubernetes infrastructure is an ongoing responsibility. Use this vulnerability as an opportunity to implement more robust update procedures and security measures for your RKE2 clusters.
Have you encountered any RKE2-specific challenges when mitigating this vulnerability? Share your experiences in the comments to help the community navigate these security waters.