
Re: which kubernetes component creates a new pod?



The Kubelet watches the API server for the things it cares about, and
when it sees changes (via the watch or list APIs) it makes updates.  A
few operations like logs, attach, exec, and proxy are initiated by the
master - the client connects to the master, the master connects to the
node (using its credentials), and then the node does the rest.
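
To make the watch part concrete, here is a minimal sketch of that pattern
with the Go client (client-go). The node name and the kubeconfig path are
placeholders, and the real kubelet obviously does far more than print events:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client the ordinary way; the real kubelet authenticates
	// with its own node credentials.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch only the pods assigned to this node -- "the things it cares about".
	w, err := client.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node-1",
	})
	if err != nil {
		panic(err)
	}
	for event := range w.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// On ADDED the kubelet would pull images and start containers;
		// on DELETED it would tear them down.
		fmt.Printf("%s %s/%s\n", event.Type, pod.Namespace, pod.Name)
	}
}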

On Tue, Nov 3, 2015 at 8:43 AM, Fernando Lozano <flozano redhat com> wrote:
> Hi Lorenz,
>
> I've not gone into the level of detail you want but here's what I know:
>
> All OSE state is kept in etcd. Other components read it from there and react
> to changes there. Most components (all?) won't call each other directly,
> they'll either poll or listen to events from etcd. That's why sometimes
> things take some time to happen inside an OSE instance, like a build
> taking a few moments to start after oc new-app: oc new-app creates resources
> in etcd, then the controllers have to "notice" the change and start the
> dance with schedulers and nodes.
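>
> In spirit the controller side is a reconcile loop: compare what the
> replication controller asks for with the pods that actually exist, and make
> up the difference. A toy sketch of the idea with the Go client (client-go);
> the "frontend" rc, the "myproject" namespace and the kubeconfig path are all
> made up, and a real controller creates pods from the template instead of
> just printing:
>
> package main
>
> import (
> 	"context"
> 	"fmt"
>
> 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
> 	"k8s.io/apimachinery/pkg/labels"
> 	"k8s.io/client-go/kubernetes"
> 	"k8s.io/client-go/tools/clientcmd"
> )
>
> func main() {
> 	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
> 	if err != nil {
> 		panic(err)
> 	}
> 	client, err := kubernetes.NewForConfig(config)
> 	if err != nil {
> 		panic(err)
> 	}
> 	ctx := context.TODO()
>
> 	// Desired state comes from the replication controller...
> 	rc, err := client.CoreV1().ReplicationControllers("myproject").Get(ctx, "frontend", metav1.GetOptions{})
> 	if err != nil {
> 		panic(err)
> 	}
> 	// ...actual state from the pods matching its selector.
> 	pods, err := client.CoreV1().Pods("myproject").List(ctx, metav1.ListOptions{
> 		LabelSelector: labels.Set(rc.Spec.Selector).String(),
> 	})
> 	if err != nil {
> 		panic(err)
> 	}
> 	desired := 1
> 	if rc.Spec.Replicas != nil {
> 		desired = int(*rc.Spec.Replicas)
> 	}
> 	for i := len(pods.Items); i < desired; i++ {
> 		// A real controller stamps out a pod from rc.Spec.Template here; the
> 		// new pod has no node assigned yet, so the scheduler picks it up next.
> 		fmt.Println("would create one replacement pod")
> 	}
> }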
>
> Of course, someone has to check the "real" status of pods and nodes; the
> master polls the nodes regularly to get this info and update etcd. And the
> API server allows creating / changing resource states in etcd.
>
> The kubelet is there so the master can tell it things like "create a new
> pod" or "update your proxy rules", and also reply to the poll requests for
> current node and pod status.
>
> I don't know if the API server actually calls the kubelet or if it just lets
> some controller do this.
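>
> My guess (take it with a grain of salt) is that nothing calls the kubelet
> directly for this: the scheduler just records its decision through the API
> server, and the kubelet notices. A minimal sketch with the Go client
> (client-go); the pod name "frontend-1", the node "node-1", the namespace
> and the kubeconfig path are all made up:
>
> package main
>
> import (
> 	"context"
>
> 	corev1 "k8s.io/api/core/v1"
> 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
> 	"k8s.io/client-go/kubernetes"
> 	"k8s.io/client-go/tools/clientcmd"
> )
>
> func main() {
> 	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
> 	if err != nil {
> 		panic(err)
> 	}
> 	client, err := kubernetes.NewForConfig(config)
> 	if err != nil {
> 		panic(err)
> 	}
>
> 	// The scheduler's "decision" is just a Binding object posted to the API server.
> 	binding := &corev1.Binding{
> 		ObjectMeta: metav1.ObjectMeta{Namespace: "myproject", Name: "frontend-1"},
> 		Target:     corev1.ObjectReference{Kind: "Node", Name: "node-1"},
> 	}
> 	if err := client.CoreV1().Pods("myproject").Bind(context.TODO(), binding, metav1.CreateOptions{}); err != nil {
> 		panic(err)
> 	}
> 	// From here on, the kubelet on node-1 sees the pod appear in its watch
> 	// (spec.nodeName is now set) and starts the containers itself.
> }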
>
>
> []s, Fernando Lozano
>
>
> Hi,
> I have a problem understanding the Kubernetes workflow:
> So as I understand the flow:
>
> You have a master which contains etcd, api-server, controller manager and
> scheduler.
> You have nodes which contain pods (which contain containers), kubelet and a
> proxy.
>
> The proxy works as a basic proxy to make it possible for a service to
> communicate with other nodes.
> When a pod dies, the controller manager will see this (it 'reads' the
> replication controller which describes how many pods there normally are).
>
> The controller manager will inform the API-server (I'm not sure about
> this).
> The API-server will tell the scheduler to find a new place for the pod.
> After the scheduler has found a good place, the API-server will inform the
> kubelet to create a new pod.
>
> I'm not sure about the last scenario. Can you tell me the right process in a
> clear way?
>
>
>
> _______________________________________________
> users mailing list
> users lists openshift redhat com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

