A late reply, but I'm working on this now.
- So the API server contains a desired state and an actual state:
Kubelet will read this desired state, and if it matches reality, all is fine.
When there is a pod in the desired state but not in the actual state, then the controller manager will see this (it's watching the API server).
Then it's not clear to me what the controller manager will do. Will it describe this 'needed' pod in the actual state?
The scheduler will also check the API server and will see there is a pod without a place, and will change the actual state in the API server by adding a place for the pod.
Kubelet reads the desired state. But what happens when the kubelet sees the desired state isn't reality? How will it create the pods? Will it create them by reading the 'new' actual state?
Date: Thu, 5 Nov 2015 13:36:02 -0500
Subject: Re: Which Kubernetes component creates a new pod
From: decarr redhat com
To: lorenz vanthillo outlook com
CC: users lists openshift redhat com
The master has the following basic components:
1. etcd - persistent storage
2. api-server - RESTful interface, protected via auth/policy, to CRUD resources, plus watch for changes; each resource has a spec (desired state) and a status (actual state)
3. controller-manager - this is a client that watches api-server for changes, attempts to bring actual state in line with desired state
4. scheduler - this is a client that watches api-server for pods whose spec.nodeName is not populated and populates it
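The spec/status split above is the key idea. A minimal sketch (these are illustrative Python types, not the real Kubernetes API objects) of how every resource pairs a desired state with an observed state:

```python
# Hypothetical sketch: every resource carries a spec (desired state,
# written by clients) and a status (actual state, written by the system).
from dataclasses import dataclass, field

@dataclass
class PodSpec:          # desired state: what the user asked for
    nodeName: str = ""  # empty until the scheduler binds the pod to a node

@dataclass
class PodStatus:        # actual state: what the cluster has observed
    phase: str = "Pending"

@dataclass
class Pod:
    name: str
    spec: PodSpec = field(default_factory=PodSpec)
    status: PodStatus = field(default_factory=PodStatus)

pod = Pod(name="web-1")
print(pod.spec.nodeName == "")  # True: not yet scheduled
print(pod.status.phase)         # Pending
```

Every component below is just a client that reads one side of this pair and writes the other.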
The node has the following basic components:
1. kubelet - this is a client that watches api-server to find desired state for its host, reconciles current state of node with desired state
2. proxy - this routes service traffic arriving on the node to the backing pods, wherever in the cluster they run
So to address your question:
client X creates a ReplicationController Y whose spec.replicas=Z
on the master:
api-server receives client request, validates the user is authorized, persists the change in etcd
controller-manager sees the state change in api-server and checks whether there are in fact Z pods to meet Y's desire; if not, it creates them; if there are more, it deletes them. It's just another client of api-server, and this loop runs all the time (this is what makes the system self-healing)
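That reconcile step can be sketched like this (a toy loop, not the real controller-manager code; the function name and pod naming are illustrative):

```python
# Hypothetical sketch of one reconcile pass for a replication controller:
# compare the desired replica count against the actual pods and
# create or delete pods until they match.
def reconcile(desired_replicas, pods):
    """Return the pod list after one reconcile pass."""
    pods = list(pods)
    while len(pods) < desired_replicas:   # too few: create missing pods
        pods.append(f"pod-{len(pods)}")
    while len(pods) > desired_replicas:   # too many: delete surplus pods
        pods.pop()
    return pods

print(reconcile(3, ["pod-0"]))           # ['pod-0', 'pod-1', 'pod-2']
print(reconcile(1, ["pod-0", "pod-1"]))  # ['pod-0']
```

Note the controller only decides *that* pods must exist; it does not pick a node for them. That's the scheduler's job.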
scheduler sees the state change in api-server, notices new pod(s) that have no node specified, sets pod.spec.nodeName, and updates the state in api-server
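A toy version of that scheduling pass (the real scheduler weighs resources, affinity, and so on; here we just round-robin over nodes, and the names are made up):

```python
# Hypothetical sketch of the scheduler: find pods with no spec.nodeName
# and bind each one to a node by writing the field back.
from itertools import cycle

def schedule(pods, nodes):
    """pods: list of dicts with a 'nodeName' key; binds in place."""
    pick = cycle(nodes)
    for pod in pods:
        if not pod["nodeName"]:           # unscheduled pod
            pod["nodeName"] = next(pick)  # bind: update state in api-server
    return pods

pods = [{"name": "a", "nodeName": ""}, {"name": "b", "nodeName": "node-2"}]
schedule(pods, ["node-1", "node-2"])
print(pods[0]["nodeName"])  # node-1: newly bound
print(pods[1]["nodeName"])  # node-2: already bound, left untouched
```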
on the node, the kubelet is forever asking API server what its desired state is for the host it manages in a sync-loop, so...
the kubelet sees a new pod whose spec.nodeName matches its host
the kubelet checks if the pod is running by asking the docker daemon; if it's running, great! If it's not, it makes sure it is!
the kubelet updates the api-server with the latest status of the pod on the node
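So to answer the original question directly: the kubelet creates the pods, but only the ones whose spec.nodeName matches its own host. One iteration of that sync loop, sketched with illustrative names (not the real kubelet internals):

```python
# Hypothetical sketch of one kubelet sync iteration: take the desired
# pods from the api-server, start any of "my" pods that aren't running,
# and report their actual status back.
def sync_once(my_node, desired_pods, running):
    """desired_pods: {pod_name: nodeName}; running: set of pod names."""
    statuses = {}
    for name, node in desired_pods.items():
        if node != my_node:
            continue             # bound to another host: not my job
        if name not in running:
            running.add(name)    # not running: start it (e.g. via docker)
        statuses[name] = "Running"
    return statuses              # written back to the pod status in api-server

running = set()
out = sync_once("node-1", {"web-1": "node-1", "web-2": "node-2"}, running)
print(out)      # {'web-1': 'Running'}
print(running)  # {'web-1'}
```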