
RE: What if a container dies in OpenShift?

Hmm, okay, so restartPolicy will be set to 'Always'. And when it's 'Always', it will be handled by the replication controller? But the RC only works at the pod level. So new pods will be recreated thanks to the RC. But the RC describes nothing about containers? So when a pod is still running but the container in it is dead, nothing will happen?

I'm sorry, I'm probably wrong but I do not understand it at the moment.

From: rcarvalh redhat com
Date: Wed, 25 Nov 2015 13:29:21 +0100
Subject: Re: What if a container dies in OpenShift?
To: dencowboy hotmail com
CC: users lists openshift redhat com

On Wed, Nov 25, 2015 at 10:14 AM, Den Cowboy <dencowboy hotmail com> wrote:
I read this in an E-book:
“Containers are really lightweight. That makes them super flexible and fast. However, they are designed to be short-lived and fragile.”

Excerpt From: David K. Rensin. “Kubernetes.” iBooks.

“Another key concept in any clustering system—including Kubernetes—is lack of durability. Pods are not durable things, and you shouldn’t count on them to be.”

Excerpt From: David K. Rensin. “Kubernetes.” iBooks.

This is what I know about it:
A pod contains a container. The replication controller is a sort of file (a better name would probably be something like ReplicaSet) which describes how many identical pods should be running (replicas). So you can scale one pod to many pods which contain the same sort of container (instances of the same image) and the same configuration. So your service can determine which pod will be used for which user.
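For reference, a minimal ReplicationController manifest might look roughly like this (the name and image are made up for illustration):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 3                 # desired number of identical pods
  selector:
    app: my-app
  template:                   # pod template used to create each replica
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app:latest   # hypothetical image
```

The `replicas` field is the "how many identical pods" part; the `template` is the shared container/configuration each replica gets.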

When the number of pods belonging to the replication controller changes, this is reported to the API server on the master.
The controller manager sees the state change in the API server and checks whether the desired number of pods (what the replication controller describes) is actually running. When it isn't, the controller manager reports this to the API server. The scheduler also watches the state of the API server (just like the controller manager) and will see new pod(s) that have no node assigned yet. The scheduler determines the best place for each pod at that moment and updates the state in the API server. On the node, the kubelet is continuously asking the API server for its state and will create the new components when that state changes.
So the kubelet will create a new pod if it's described by the API server. After that, the kubelet updates the state in the API server.
The kubelet will trigger the Docker daemon to create the new container inside the pod.
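The steps above amount to a reconciliation loop: compare desired state with actual state and fix the difference. A very rough sketch in Python (all names are made up; the real components watch the API server rather than pass a dict around like this):

```python
# Toy sketch of the desired-state reconciliation described above.
# Illustrative only: real Kubernetes components coordinate through
# watches on the API server, not a shared dict.

def reconcile(api_state):
    """Bring the number of running pods in line with the desired count."""
    desired = api_state["replicas"]     # what the replication controller describes
    pods = api_state["pods"]            # pods currently known to the API server

    # "Controller manager": create pod records until the desired count is met.
    while len(pods) < desired:
        pods.append({"name": f"pod-{len(pods)}", "node": None, "running": False})

    # "Scheduler": assign a node to every pod that has none yet.
    for pod in pods:
        if pod["node"] is None:
            pod["node"] = "node-1"      # pretend this is the best node right now

    # "Kubelet" (per node): start containers for pods bound to its node.
    for pod in pods:
        pod["running"] = True

    return api_state

state = reconcile({
    "replicas": 3,
    "pods": [{"name": "pod-0", "node": "node-1", "running": True}],
})
```

After one pass, two new pods exist, every pod has a node, and every container is running, which matches the desired state.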

I hope I'm right here. You can improve my description.

So now my real question:
It's clear that pods and containers are both fragile and will die at certain moments. The solution for pods is clear: thanks to the replication controller etc., a dead pod will be recreated very fast.

But now what's unclear to me: what if a container dies? A pod will probably not notice this. And a replication controller works at the Kubernetes level. So what will happen if a container dies, but not the pod?

For that there's a restart policy, documented here [1], that applies to both pods and containers.
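For context, the restart policy is set in the pod spec, and the kubelet on the node restarts a dead container according to it; the replication controller only steps in when the whole pod goes away. A minimal sketch (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod                # hypothetical name
spec:
  restartPolicy: Always       # kubelet restarts the container if it dies
  containers:
  - name: my-app
    image: example/my-app:latest   # hypothetical image
```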

Rodolfo Carvalho

OpenShift Developer Experience
