Originally published at https://medium.com/@betz.mark/herding-pods-taints-tolerations-and-affinity-in-kubernetes-2279cef1f982
The general theory of pod scheduling in Kubernetes is to let the scheduler handle it. You tell the cluster to start a pod, and the cluster looks at all the available nodes and decides where to put it, comparing each node's available resources with what the pod declares it needs. That's scheduling in a nutshell. Sometimes, however, you need a little more input into the process. For example, you may have been asked to run a thing that requires more resources than any single node in your cluster offers. You can add a new node with enough juice, maybe using a node pool if you're running on GKE, but how do you make sure the right pods run on it? How do you make sure the wrong pods don't run on it?
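As an aside, on GKE that bigger node would typically come from a dedicated node pool. A minimal sketch of creating one follows; the pool name, cluster name, and machine type here are all hypothetical:

```
# Create a dedicated high-memory node pool (names and machine type are hypothetical)
gcloud container node-pools create high-mem-pool \
  --cluster=my-cluster \
  --machine-type=n1-highmem-8 \
  --num-nodes=1
```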
You can often nudge the scheduler in the right direction simply by setting resource requests appropriately, as sketched below. If your new pod needs 5 GB of RAM and the only node big enough is the one you added for it to run on, then setting the memory request for that pod to 5 GB will force the scheduler to put it there. This is a fairly fragile approach, however, and while it will get your pod onto a node with sufficient resources, it won't keep the scheduler from putting other things there as well, as long as they will fit. Maybe that's not important, but if it is, or if for some other reason you need positive control over which nodes your pod schedules to, then you need the finer level of scheduling control that Kubernetes offers through the use of taints, tolerations, and affinity.
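To make that concrete, here is a minimal sketch of a pod spec with a 5 GB memory request; the pod name and image are hypothetical:

```
apiVersion: v1
kind: Pod
metadata:
  name: big-memory-app                             # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/big-memory-app:1.0 # hypothetical image
    resources:
      requests:
        memory: "5Gi"   # scheduler will only place this pod on a node with 5Gi allocatable
```

With that request in place the scheduler will only consider nodes with at least 5Gi of free allocatable memory, but nothing stops it from packing smaller pods onto the same node afterward, which is exactly the gap taints, tolerations, and affinity close.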