Resolving SchedulingDisabled Nodes in k3s on Raspberry Pi
Learn how to resolve SchedulingDisabled nodes in a k3s cluster on Raspberry Pi, ensuring smooth operation of your applications.
This morning, I took my Raspberry Pi cluster out of the box and fired it up again. However, my Docker registry wasn’t starting: its container was tied to a specific node, and that node was showing as SchedulingDisabled.
Identifying the Issue
The default k3s local-path storage class provisions volumes on a single node’s local disk, so any pod using such a volume is pinned to that node. Let’s check the node statuses:
kubectl get nodes --sort-by '.metadata.name'
Example output:
NAME     STATUS                     ROLES                  AGE    VERSION
rpi401   Ready                      control-plane,master   154d   v1.21.7+k3s1
rpi402   Ready                      <none>                 154d   v1.21.7+k3s1
rpi403   Ready                      <none>                 152d   v1.21.7+k3s1
rpi404   Ready                      <none>                 152d   v1.21.7+k3s1
rpi405   Ready,SchedulingDisabled   <none>                 152d   v1.21.7+k3s1
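On a larger cluster, scanning the full node list gets tedious. A field selector can list only the cordoned nodes directly; this is a sketch assuming your Kubernetes version supports the `spec.unschedulable` node field selector:

```shell
# List only nodes that are cordoned (spec.unschedulable=true)
kubectl get nodes --field-selector spec.unschedulable=true
```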
Examining the Node Details
To get more details about the SchedulingDisabled node, use:
kubectl get node rpi405 -o yaml
Excerpt of the output:
spec:
  taints:
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    timeAdded: "2021-07-10T09:41:33Z"
  unschedulable: true
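Rather than reading through the whole YAML, a JSONPath query can print just the taints. A minimal sketch, using the node name from this cluster:

```shell
# Print each taint on rpi405 as key=effect, one per line
kubectl get node rpi405 -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'
```

On a cordoned node, this should show the `node.kubernetes.io/unschedulable=NoSchedule` entry from the YAML above.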
Fixing the SchedulingDisabled Node
Uncordon the node to allow scheduling:
kubectl uncordon rpi405
Output:
node/rpi405 uncordoned
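For completeness, the opposite operation is kubectl cordon, which marks a node unschedulable (typically before maintenance); kubectl drain additionally evicts the pods already running on it. A sketch of the full maintenance cycle:

```shell
# Mark the node unschedulable (presumably what happened to rpi405)
kubectl cordon rpi405

# Cordon AND evict the pods currently running on the node
kubectl drain rpi405 --ignore-daemonsets

# Allow scheduling again once maintenance is done
kubectl uncordon rpi405
```

Running kubectl get nodes afterwards should show the node as plain Ready again.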
Conclusion
The puzzling part is that I don’t remember running the kubectl cordon command. I do vaguely remember experimenting with taints, which might have caused the issue. After uncordoning the node, the cluster can schedule pods on it again, and the applications are back to running smoothly.
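If taint experiments were indeed the culprit, it helps to know how taints are added and removed. A sketch with a hypothetical key (example-key is not from this cluster); note that the node.kubernetes.io/unschedulable taint itself is managed by the control plane to mirror spec.unschedulable, which is why uncordon is the right fix rather than deleting that taint by hand:

```shell
# Add a custom taint (the kind of experiment that can block scheduling)
kubectl taint nodes rpi405 example-key=value:NoSchedule

# Remove it again -- the trailing dash deletes the taint
kubectl taint nodes rpi405 example-key=value:NoSchedule-
```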
With these steps, you can diagnose and resolve SchedulingDisabled nodes in your own k3s cluster on Raspberry Pi.