A critical vulnerability in the widely used ingress-nginx controller was recently disclosed that could allow attackers to compromise Kubernetes clusters with minimal effort. With a CVSS score of 9.8, CVE-2025-1974 represents a serious threat to Kubernetes environments running unpatched versions. Having spent the last 48 hours helping clients remediate this issue, I’ve compiled this guide to explain the vulnerability and provide concrete mitigation steps for a range of operational constraints.

Understanding the Vulnerability: What’s at Stake

Let’s start with what makes this vulnerability particularly dangerous. While ingress controllers typically require privileged access to function properly, this issue flips the script by allowing low-privileged workloads to manipulate the controller through a flaw in its admission webhook.

Technical Details of CVE-2025-1974

The vulnerability affects the Validating Admission Controller component of ingress-nginx. Here’s what’s happening under the hood:

  1. The ingress-nginx controller deploys a validating webhook that intercepts and validates Ingress resources before they’re admitted to the cluster
  2. This webhook contains an input validation flaw that allows injection of malicious configuration
  3. When exploited, an attacker can manipulate the NGINX configuration to:
    • Execute arbitrary commands within the controller pod
    • Access Secrets available to the controller’s service account, which by default can read Secrets cluster-wide
    • Potentially pivot to compromise the entire cluster

The most concerning aspect is the attack vector: the webhook requires no authentication, so any pod in the cluster that can reach the webhook endpoint (typically port 8443) over HTTPS can potentially exploit this vulnerability. This means a compromised application running with minimal privileges could leverage this flaw to gain cluster-wide access.
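
A quick way to check whether your cluster is exposed is to look for the webhook registration itself (in a default installation it is named ingress-nginx-admission):

# If this returns a ValidatingWebhookConfiguration, the webhook is active
kubectl get validatingwebhookconfigurations ingress-nginx-admission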

This vulnerability affects all ingress-nginx releases older than the patched versions:

  • v1.12.1 (current release branch)
  • v1.11.5 (v1.11 maintenance branch)
Given that ingress-nginx is deployed in roughly 40-50% of Kubernetes environments (based on CNCF survey data), the impact radius is substantial.

Comprehensive Mitigation Options

I’ll outline three mitigation approaches below, from most recommended to least, along with the technical implementation details for each.

Option 1: Upgrade to a Patched Version (Recommended)

The most straightforward and complete remediation is upgrading to a patched version. This addresses not only CVE-2025-1974 but also the four companion issues fixed in the same releases (CVE-2025-1097, CVE-2025-1098, CVE-2025-24513, and CVE-2025-24514).

Implementation Steps for Helm Users

If you installed ingress-nginx using Helm (the most common approach), here’s how to upgrade:

# Update Helm repository
helm repo update

# Upgrade to patched version
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --version 4.12.1 \
  --set controller.image.tag=v1.12.1 \
  --reuse-values

If you need to stay on the v1.11 line, use --version 4.11.5 together with --set controller.image.tag=v1.11.5 instead.
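
After the upgrade, confirm the chart and controller versions actually deployed:

# The APP VERSION column should show the patched controller release
helm list -n ingress-nginx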

Implementation Steps for Manifest-Based Installations

If you deployed using YAML manifests, follow these steps:

# Download the latest patched manifest
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.1/deploy/static/provider/cloud/deploy.yaml

# Apply the updated manifest
kubectl apply -f deploy.yaml
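
Watch the rollout complete:

kubectl rollout status deployment/ingress-nginx-controller -n ingress-nginx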

Verify the upgrade was successful:

kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].spec.containers[0].image}'

This should return the patched version.

Option 2: Disable the Validating Admission Controller (Interim Workaround)

If you can’t immediately upgrade due to a change freeze, testing requirements, or other operational constraints, disabling the admission webhook provides effective mitigation against this specific vulnerability.

For Helm-Based Installations

helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.admissionWebhooks.enabled=false

For Manifest-Based Installations

Remove the webhook components:

# Delete the ValidatingWebhookConfiguration
kubectl delete validatingwebhookconfiguration ingress-nginx-admission

# Remove the webhook flags from the controller arguments.
# Note: JSON Patch's "remove" op takes no value, and removing the whole args
# array (as a naive patch would) strips flags the controller needs to run.
# Instead, edit the Deployment and delete only these three arguments from
# the controller container:
#   --validating-webhook=:8443
#   --validating-webhook-certificate=/usr/local/certificates/cert
#   --validating-webhook-key=/usr/local/certificates/key
kubectl edit deployment ingress-nginx-controller -n ingress-nginx
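
Whichever install method you used, confirm the webhook registration is gone:

# Should return a NotFound error once the webhook is removed
kubectl get validatingwebhookconfiguration ingress-nginx-admission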

Important Operational Considerations:

Disabling the admission webhook removes an important guard rail that prevents invalid Ingress resources from being created. This could lead to:

  1. Deployments with potentially broken ingress configurations
  2. Unexpected routing behavior
  3. NGINX reload failures due to invalid configurations

Be sure to implement additional validation in your CI/CD pipeline if you choose this option, and plan to re-enable the webhook after upgrading to a patched version.
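
If you want a stopgap for the lost validation, a schema check in CI catches the most common mistakes. Here’s a minimal sketch using kubeconform (any manifest validator works; the path is a placeholder for your own repo layout):

# Validate manifests against Kubernetes API schemas before they reach the cluster
kubeconform -strict -summary ./kubernetes/manifests/*.yaml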

Option 3: Network-Level Protection (Last Resort)

If neither upgrading nor disabling the webhook is immediately possible, this third option implements network-level protection by restricting access to the webhook port.

This approach uses a privileged DaemonSet that:

  1. Identifies the Kubernetes API server IP addresses
  2. Configures iptables to only allow the API server to access port 8443
  3. Blocks all other pods from accessing the vulnerable endpoint

Here’s a complete YAML manifest to implement this protection:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-k8s-endpoints
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["kubernetes"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webhook-protector
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: webhook-protector
  namespace: default
subjects:
- kind: ServiceAccount
  name: webhook-protector
  namespace: kube-system
roleRef:
  kind: Role
  name: list-k8s-endpoints
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-webhook-protector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: nginx-webhook-protector
  template:
    metadata:
      labels:
        app: nginx-webhook-protector
    spec:
      serviceAccountName: webhook-protector
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: iptables-manager
        image: alpine:3.19
        command:
        - /bin/sh
        - -c
        - |
          apk add --no-cache curl iptables
          
          # Get API server IPs (the "kubernetes" Endpoints object lives in the default namespace)
          APISERVER_IPS=$(curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
            -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
            https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/kubernetes | \
            grep -o '"ip": *"[^"]*' | cut -d'"' -f4 | tr '\n' ' ')
          
          echo "API Server IPs: $APISERVER_IPS"
          
          # Create iptables rule to block access to 8443 except from API server
          iptables -N WEBHOOK-PROTECTION || true
          iptables -F WEBHOOK-PROTECTION
          
          # Allow API server access
          for ip in $APISERVER_IPS; do
            iptables -A WEBHOOK-PROTECTION -s $ip -p tcp --dport 8443 -j ACCEPT
          done
          
          # Allow localhost access
          iptables -A WEBHOOK-PROTECTION -s 127.0.0.1 -p tcp --dport 8443 -j ACCEPT
          
          # Block all other access
          iptables -A WEBHOOK-PROTECTION -p tcp --dport 8443 -j DROP
          
          # Hook the chain into INPUT (host-terminated traffic) and FORWARD,
          # since traffic to the webhook pod's IP is routed rather than
          # host-terminated on most CNIs and therefore traverses FORWARD
          iptables -D INPUT -j WEBHOOK-PROTECTION 2>/dev/null || true
          iptables -I INPUT -j WEBHOOK-PROTECTION
          iptables -D FORWARD -j WEBHOOK-PROTECTION 2>/dev/null || true
          iptables -I FORWARD -j WEBHOOK-PROTECTION
          
          # Keep container running
          echo "Protection applied, sleeping..."
          while true; do
            sleep 3600
            # Refresh rules hourly in case API server IPs change
            # Get updated API server IPs
            NEW_APISERVER_IPS=$(curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
              -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
              https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/kubernetes | \
              grep -o '"ip": *"[^"]*' | cut -d'"' -f4 | tr '\n' ' ')
            
            if [ "$NEW_APISERVER_IPS" != "$APISERVER_IPS" ]; then
              echo "API Server IPs changed, updating rules..."
              APISERVER_IPS="$NEW_APISERVER_IPS"
              
              # Refresh rules
              iptables -F WEBHOOK-PROTECTION
              for ip in $APISERVER_IPS; do
                iptables -A WEBHOOK-PROTECTION -s $ip -p tcp --dport 8443 -j ACCEPT
              done
              iptables -A WEBHOOK-PROTECTION -s 127.0.0.1 -p tcp --dport 8443 -j ACCEPT
              iptables -A WEBHOOK-PROTECTION -p tcp --dport 8443 -j DROP
            fi
          done
        securityContext:
          privileged: true
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      tolerations:
      - operator: "Exists"

To apply this protection:

kubectl apply -f webhook-protector.yaml

Verify it’s working:

# Check that the DaemonSet is running on all nodes
kubectl get ds nginx-webhook-protector -n kube-system

# Inspect the chain on a node (kubectl debug avoids needing SSH)
kubectl debug node/your-node-name -it --image=alpine -- chroot /host iptables -L WEBHOOK-PROTECTION

This solution has limitations:

  • It requires privileged container access
  • It blocks port 8443 node-wide, which can interfere with any other workload listening on that port
  • Whether pod-bound traffic traverses the host’s INPUT/FORWARD chains depends on your CNI, so verify the rules actually take effect
  • It relies on the hourly refresh (or manual maintenance) to track API server IP changes
  • It should be considered a temporary solution until upgrading is possible

Validating Your Mitigation

Regardless of which mitigation you implement, validation is crucial. Here’s how to check if your cluster is protected:

Testing Webhook Reachability from Inside the Cluster

Run a throwaway pod to verify the webhook is no longer reachable from ordinary workloads:

# Get the admission webhook pod IP
WEBHOOK_IP=$(kubectl get pod -n ingress-nginx -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].status.podIP}')

# Try to reach the webhook port directly (curl -m 5 bounds the wait)
kubectl run --restart=Never --rm -it test-webhook --image=curlimages/curl -- \
  curl -k -m 5 https://$WEBHOOK_IP:8443/

If the mitigation is working, this request should fail or time out.

Verifying Patched Version

Check the running controller version:

kubectl get deployment -n ingress-nginx ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}'

This should return an image with one of the patched tags (v1.12.1 or v1.11.5).

Vulnerability in Context: Why This Matters

This vulnerability is particularly concerning because:

  1. Widespread Usage: ingress-nginx is one of the most popular ingress controllers
  2. Low Barrier to Exploit: Any pod in the cluster with network access can potentially exploit it
  3. Elevated Impact: The controller typically runs with high privileges, meaning a compromise gives significant cluster access
  4. Multi-tenant Risk: In shared clusters, a compromised tenant workload could affect others

For organizations running multi-tenant Kubernetes clusters or hosting customer workloads, this vulnerability should be treated as a high-priority issue requiring immediate attention.

Long-Term Security Considerations

Beyond the immediate mitigation, this vulnerability highlights several Kubernetes security practices worth implementing:

1. Network Policy Enforcement

Deploy strict NetworkPolicies that limit pod-to-pod communication:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: protect-admission-webhooks
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: controller
  policyTypes:
  - Ingress
  ingress:
  # Webhook port: only the API server may connect
  - from:
    - ipBlock:
        cidr: <api-server-cidr>
    ports:
    - protocol: TCP
      port: 8443
  # Keep normal proxy traffic flowing; without this rule the policy
  # would also deny HTTP/HTTPS traffic to the controller
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
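
To find the address(es) to use for <api-server-cidr>, inspect the kubernetes Endpoints object. Be aware that some CNIs masquerade API server traffic, so test that Ingress admission still works after applying the policy:

kubectl get endpoints kubernetes -n default -o jsonpath='{.subsets[*].addresses[*].ip}'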

2. Implement Pod Security Standards

Enforce the Restricted Pod Security Standard to limit the potential impact of compromised pods:

apiVersion: v1
kind: Namespace
metadata:
  name: my-application
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
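
The same labels can be applied to an existing namespace in place:

kubectl label namespace my-application \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted \
  --overwrite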

3. Regular Vulnerability Scanning

Implement automated scanning for your Kubernetes manifests and container images:

# Example using Trivy for manifest scanning
trivy config --severity HIGH,CRITICAL ./kubernetes/manifests/

# Example for scanning the official controller image
trivy image --severity HIGH,CRITICAL registry.k8s.io/ingress-nginx/controller:v1.12.1

4. Automate Security Updates

Create an automated pipeline for security updates to critical components like ingress controllers:

#!/bin/bash
# Example script for a GitOps workflow that updates ingress-nginx
set -euo pipefail

CURRENT_VERSION=$(grep "tag:" kubernetes/ingress-nginx/values.yaml | awk '{print $2}')
# The repo also publishes helm-chart releases, so filter for controller tags
LATEST_VERSION=$(curl -s https://api.github.com/repos/kubernetes/ingress-nginx/releases \
  | jq -r '[.[].tag_name | select(startswith("controller-"))][0]' | sed 's/controller-//')

if [ "$CURRENT_VERSION" != "$LATEST_VERSION" ]; then
  echo "Updating ingress-nginx from $CURRENT_VERSION to $LATEST_VERSION"
  sed -i "s/tag: $CURRENT_VERSION/tag: $LATEST_VERSION/" kubernetes/ingress-nginx/values.yaml
  git add kubernetes/ingress-nginx/values.yaml
  git commit -m "Update ingress-nginx to $LATEST_VERSION"
  git push
fi

Conclusion

CVE-2025-1974 represents a significant security risk for Kubernetes environments running ingress-nginx. The most complete remediation is upgrading to a patched version (v1.12.1 or v1.11.5), but for organizations that cannot immediately upgrade, disabling the admission webhook or implementing network-level protection provides effective interim mitigation.

Whatever approach you choose, addressing this vulnerability should be a priority. The ease of exploitation combined with the potential impact makes this a particularly dangerous issue, especially in multi-tenant environments.

Have you already mitigated this vulnerability in your environment? What approach did you take? Share your experience in the comments below to help others in the community navigate this security challenge.