Hi, thanks for that suggestion. I took a look, but it seems it
isn't quite what's needed.
It looks like pod (anti)affinity is a binary thing. It works for
the first pod on the node with/without the specified label, but it
doesn't ensure an even spread when you schedule multiple pods.
In my case I scheduled pods with an anti-affinity
preferredDuringSchedulingIgnoredDuringExecution rule applying
across 3 nodes. That made sure the first 3 pods went to separate
nodes as expected, but after that the rule seemed not to be
applied: there were no nodes left that satisfied the rule, and
since the rule was 'preferred' rather than 'required' the pods
were scheduled without any further preference. So by the time I
had 6 pods running, 3 of them were on one node, 2 on another, and
only 1 on the third.
So I suppose the anti-affinity rule is working as designed, but
it's not designed to ensure an even spread when you have
multiple pods on the nodes.
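For reference, the rule I'm using looks roughly like this (the
pod name, label, and image are placeholders, not the real ones):

```yaml
# Sketch of the preferred pod anti-affinity rule described above.
# "my-job" and "my-job-image" are placeholder names.
apiVersion: v1
kind: Pod
metadata:
  name: job-pod-1
  labels:
    app: my-job
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-job
          # spread relative to other pods with the same label,
          # per node
          topologyKey: kubernetes.io/hostname
  containers:
  - name: worker
    image: my-job-image
```

As described above, this works until every node already has one
matching pod; after that the preference scores no longer
distinguish the nodes.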
On 04/07/18 12:16, Joel Pearson wrote:
You're probably after pod anti-affinity? https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
That lets you tell the scheduler that the pods aren't allowed
to be on the same node, for example. There's also an OpenShift
reference for the same thing.
I have a process that fires up a number of pods (bare pods, not
managed by a replication controller) to execute a computationally
demanding job in parallel.
What I find is that the pods do not spread effectively across the
available nodes. In my case I have a node selector that restricts
execution to 3 nodes, and the pods run mostly on the first node,
a few run on the second node, and none run on the third node.
I know that I could specify cpu resource requests and limits to
help with this, but for other reasons I'm currently unable to do
so.
It looks like this is controllable through the scheduler, but the
options for controlling this look pretty complex.
Could someone advise on how best to allow pods to spread across
nodes rather than execute preferentially on one node?
users mailing list
users@lists.openshift.redhat.com