
Re: Quota Policies



Perfect, got it.

 

Thank you very much for helping me understand this simple yet complicated topic. I will reach out if I need further info.

 

I really appreciate the cooperation and the great level of detail.

 

-- 

Srinivas Kotaru

 

From: Derek Carr <decarr redhat com>
Date: Thursday, October 27, 2016 at 2:45 PM
To: Srinivas Naga Kotaru <skotaru cisco com>
Cc: "ccoleman redhat com" <ccoleman redhat com>, dev <dev lists openshift redhat com>
Subject: Re: Quota Policies

 

Your understanding is correct, but with one caveat.

This config doesn’t alter or increase the limit numbers set by the developers

 

This is true UNLESS you set limitCPUToMemoryPercent.  In that case, the only value a user sets is the memory limit.

In a nutshell, the idea behind the cluster resource override is that users should only think about the limits for cpu/memory and not about the requests at all (since the operator takes that responsibility).
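
For reference, a minimal sketch of how this might be wired up in master-config.yaml, based on the 3.3 overcommit doc linked earlier in the thread (the exact nesting can vary by version, so treat this as illustrative):

admissionConfig:
  pluginConfig:
    ClusterResourceOverride:
      configuration:
        apiVersion: v1
        kind: ClusterResourceOverrideConfig
        memoryRequestToLimitPercent: 25
        cpuRequestToLimitPercent: 25
        limitCPUToMemoryPercent: 200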

 

Thanks,

Derek

 

 

On Thu, Oct 27, 2016 at 5:13 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

Derek

 

We have separate projects for non-prod & prod.

 

I fully understood the example you quoted; it is very clear. It would be nice if someone pasted this explanation, with the example, into the overcommit documentation.

 

In summary:

 

This config is only applicable to pods which have an explicit request or limit, or both (e.g. via LimitRange defaults)

This overcommit ratio applies to the entire cluster / to any projects that satisfy the above requirement

This is the cluster administrator explicitly controlling the overcommit and overriding the request numbers that the development teams set

This config doesn’t alter or increase the limit numbers set by the developers

 

Is my understanding above correct?

 

-- 

Srinivas Kotaru

 

From: Derek Carr <decarr redhat com>
Date: Thursday, October 27, 2016 at 1:07 PM


To: Srinivas Naga Kotaru <skotaru cisco com>
Cc: "ccoleman redhat com" <ccoleman redhat com>, dev <dev lists openshift redhat com>
Subject: Re: Quota Policies

 

Do you plan to manage non-prod apps in the same project(s) as prod-apps?

I will describe the ClusterResourceOverride behavior via an example, but it is basically a giant hammer you can enable on the cluster that lets an administrator set a cluster-wide over-commit target, which projects may opt in to or out of via an annotation.

If a project opts into the behavior, all incoming pods will be modified based on the configuration.
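
(For reference, the per-project opt-out is an annotation on the project's namespace; if I remember the docs correctly it looks like the line below. Please verify the exact key against the docs for your version.)

quota.openshift.io/cluster-resource-override-enabled: "false"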

Sample Scenario:  A project opts into the ClusterResourceOverride and it has no LimitRange defined


$ kubectl run best-effort-pods --image=nginx

The resulting pod will still have no resource requirements made (the plug-in has no impact).

$ kubectl run pods-with-resources --image=nginx --limits=cpu=1,memory=1Gi

Traditionally, this pod would have Guaranteed quality of service and both the request and limit value would be cpu=1 and memory=1Gi.

But let's see what happens if you enable the overriding behavior on this project using the following config:

memoryRequestToLimitPercent: 25 

cpuRequestToLimitPercent: 25    

limitCPUToMemoryPercent: 200   

The pod ends up with the following:

 

requests.cpu=500m

limits.cpu=2

requests.memory=256Mi

limits.memory=1Gi

As you can see, the only value that had meaning from the end-user was the memory limit; all other values were tuned relative to it.  The memory request was tuned down to 25% of the limit.  The cpu limit was tuned up to 2 cores because it was set to 200% of the memory limit, where 1Gi = 1 core in that conversion.  Finally, the cpu request was tuned down to 25% of that limit, to 500m.
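
Spelling the arithmetic out (my own breakdown of the numbers above, using the 1Gi = 1 core conversion):

limits.memory   = 1Gi (set by the user)
limits.cpu      = 200% of 1Gi -> 2 cores (limitCPUToMemoryPercent)
requests.memory = 25% of 1Gi  -> 256Mi   (memoryRequestToLimitPercent)
requests.cpu    = 25% of 2    -> 500m    (cpuRequestToLimitPercent)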

If we remove the limitCPUToMemoryPercent setting, and use the following configuration:

 

memoryRequestToLimitPercent: 25 

cpuRequestToLimitPercent: 25    

 

The pod ends up with the following:

 

requests.cpu=250m

limits.cpu=1

requests.memory=256Mi

limits.memory=1Gi

In this case, you can see the user’s limits were respected, but the requests were tuned down to meet the desired overcommit.  In effect, with this configuration enabled in a project, it is only possible to run BestEffort/Burstable pods, not Guaranteed pods.

 

Thanks,

Derek

On Thu, Oct 27, 2016 at 2:32 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

Derek

 

Thanks for helping so far. It is not clear how quota & QoS work together. We are planning to use BestEffort for non-prod apps and non-BestEffort for prod applications. This has a side effect: app teams might complain that their application experience is not the same, since non-prod behaves differently than prod when they are testing releases and monitoring performance. We need to think about how to mitigate these challenges.

 

I was reading the link below and it is pretty good.

 

https://docs.openshift.com/container-platform/3.3/admin_guide/overcommit.html

 

I didn’t understand the Configuring Masters for Overcommitment section and its example. Can you briefly explain how this overcommitment works in the scenarios we talked about (BestEffort, Burstable, and Guaranteed)?

 

memoryRequestToLimitPercent: 25 

cpuRequestToLimitPercent: 25    

limitCPUToMemoryPercent: 200   

 

I would be glad if you could explain with simple examples… I’m trying to understand how this overcommit helps platform admins tune better.

 

-- 

Srinivas Kotaru

 

From: Derek Carr <decarr redhat com>
Date: Wednesday, October 26, 2016 at 1:23 PM


To: Srinivas Naga Kotaru <skotaru cisco com>
Cc: "ccoleman redhat com" <ccoleman redhat com>, dev <dev lists openshift redhat com>
Subject: Re: Quota Policies

 

A BestEffort pod is a pod whose pod.spec.containers[x].resources.requests and pod.spec.containers[x].resources.limits are empty so your understanding is correct.
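
For illustration, a minimal BestEffort pod spec would look something like this (name and image are placeholders; the point is only that resources is left empty):

apiVersion: v1
kind: Pod
metadata:
  name: best-effort-example
spec:
  containers:
  - name: app
    image: nginx
    # no resources.requests / resources.limits => BestEffort QoS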

If you want to have a project that supports both BestEffort and NotBestEffort pods together, you can do that and control usage via ResourceQuota using the examples I provided.

If you want to have a project that supports both BestEffort and NotBestEffort pods together, and use LimitRange to enforce min/max constraints and default resource requirements, you will encounter problems.

  1. The LimitRange will assign default resources to each BestEffort pod you submit (making them no longer BestEffort), as in the sketch after this list, or
  2. It will require that each pod have a cpu or memory value specified as part of its validation (if you configured it as such).
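
As a rough illustration of case 1, a LimitRange with container defaults like the following would silently turn every BestEffort pod into a Burstable one (the values here are placeholders):

apiVersion: v1
kind: LimitRange
metadata:
  name: example-defaults
spec:
  limits:
  - type: Container
    defaultRequest:      # applied when a container omits requests
      cpu: 100m
      memory: 256Mi
    default:             # applied when a container omits limits
      cpu: 500m
      memory: 512Mi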

Thanks,

Derek

 

 

 

On Wed, Oct 26, 2016 at 2:54 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

Can you answer this question? I am trying to understand what makes a pod BestEffort from a quota/LimitRange/pod definition perspective.

 

My understanding is that a pod is called a BestEffort pod if there is no quota/LimitRange definition applying compute resources (limit or request) to it, and it doesn’t have any explicit request or limit in its pod definition. Is my understanding correct?

 

-- 

Srinivas Kotaru

 

From: Srinivas Naga Kotaru <skotaru cisco com>
Date: Tuesday, October 25, 2016 at 3:42 PM
To: Derek Carr <decarr redhat com>


Cc: "ccoleman redhat com" <ccoleman redhat com>, dev <dev lists openshift redhat com>
Subject: Re: Quota Policies

 

This is good. I’m getting enough details to craft my policies.

 

In the case of the 1st example (BestEffort), we don’t have to create any LimitRange with default requests and limits? Or a quota definition without any requests.cpu, requests.memory, limits.cpu, and limits.memory?

 

I am trying to understand what exactly BestEffort means from a quota, LimitRange, and pod-definition perspective. Is it just an arbitrary word, or is a pod called BestEffort if it doesn’t have requests or limits in its definition?

 

-- 

Srinivas Kotaru

 

From: Derek Carr <decarr redhat com>
Date: Tuesday, October 25, 2016 at 2:26 PM
To: Srinivas Naga Kotaru <skotaru cisco com>
Cc: "ccoleman redhat com" <ccoleman redhat com>, dev <dev lists openshift redhat com>
Subject: Re: Quota Policies

 

Sorry, the command is the following (missed scopes on second):

$ kubectl create quota best-effort-not-terminating --hard=pods=5 --scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating --hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi --scopes=NotTerminating,NotBestEffort

 

On Tue, Oct 25, 2016 at 5:25 PM, Derek Carr <decarr redhat com> wrote:

If you only want to quota pods that have a more permanent footprint on the node, then create a quota that only matches on the NotTerminating scope.

If you want to allow usage of slack resources (i.e. run BestEffort pods) while still defining a quota that controls everything else, create 2 quotas.

$ kubectl create quota best-effort-not-terminating --hard=pods=5 --scopes=NotTerminating,BestEffort

$ kubectl create quota not-best-effort-not-terminating --hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi

So in this example:

 

1. the user is able to create 5 long-running pods that make no resource request (i.e. no cpu or memory specified)

2. the user is able to request up to 5 cpu cores and 10Gi of memory for scheduling purposes, and the node will work to ensure that is available

3. the user is able to burst up to 10 cpu cores and 20Gi of memory based on node-local conditions (a hedged example follows below)
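
For example (hypothetical numbers), a pod created like this would be Burstable, so it would be counted against the requests/limits of the second quota rather than the pods=5 count of the first:

$ kubectl run burstable-app --image=nginx --requests=cpu=500m,memory=1Gi --limits=cpu=1,memory=2Gi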

 

Thanks,

Derek

 

On Tue, Oct 25, 2016 at 5:14 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

Derek/Clayton

 

I saw this link yesterday. It was really good and helpful, but I didn’t understand the last advanced section. Let me spend some time on it again.

 

@Clayton: Do we need to create separate quota policies for both terminating and non-terminating pods, or would creating a single policy for non-terminating pods be enough? We want to keep it simple, but at the same time we don’t want short-lived pods to create any issues for the regular working pods.

 

-- 

Srinivas Kotaru

 

From: Derek Carr <decarr redhat com>
Date: Tuesday, October 25, 2016 at 1:09 PM
To: "
ccoleman redhat com" <ccoleman redhat com>
Cc: Srinivas Naga Kotaru <
skotaru cisco com>, dev <dev lists openshift redhat com>
Subject: Re: Quota Policies

 


>Are BestEffort and NotBestEffort just used to explain the concept, or can a Pod definition have these words?

This refers to the quality of service for a pod.  If a container in a pod makes no request/limit for compute resources, it is BestEffort.  If it makes a request for any resource, it is NotBestEffort.

You can apply a quota to control the number of BestEffort pods you can create separate from the number of NotBestEffort pods.
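
For example, something like this (hypothetical name and number) caps the count of BestEffort pods separately:

$ kubectl create quota besteffort-count --hard=pods=10 --scopes=BestEffort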

See step 5 in the above linked example for a walkthrough.

Thanks,

Derek

 

 

 

 

On Tue, Oct 25, 2016 at 4:02 PM, Clayton Coleman <ccoleman redhat com> wrote:

 

 

On Tue, Oct 25, 2016 at 3:55 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

Hi

 

I’m trying to frame a policy for the best usage of compute resources in our environment. I started reading the documentation on this topic. Although the documentation is pretty limited on working examples, I now have a better understanding of the quota and LimitRange objects.

 

We are planning to enforce quota and LimitRange on every project as part of project provisioning. Clients can increase these limits by going to the modify screen in our system and paying the cost accordingly. The goal is highly efficient cluster resource usage with minimal client disturbance.

 

I have a few questions around the implementation:

 

Can we exclude short-lived pods, like build and deploy pods, from quota restrictions?

 

There are two quotas - one for terminating pods (pods that are guaranteed to finish in a certain time period) and one for non-terminating pods.

 

Are quotas enforced only on running pods, or also on dead pods, pods in pending status, and succeeded pods?

 

Once a pod terminates (failed, succeeded) it is not counted for quota.  Pods that are pending deletion are still counted for quota.

 

What is the meaning of scopes: Terminating or scopes: NotTerminating in a quota definition? It is a bit confusing to understand.

 

Terminating means "will finish in bounded time", i.e. the pod does not have restartPolicy: Always and does have activeDeadlineSeconds set.  NotTerminating is everything else.
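
A rough sketch of a pod the Terminating scope would match (illustrative only):

apiVersion: v1
kind: Pod
metadata:
  name: short-lived-task
spec:
  restartPolicy: Never          # not restartPolicy: Always
  activeDeadlineSeconds: 600    # bounded run time
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "echo done"]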

 

Are BestEffort and NotBestEffort just used to explain the concept, or can a Pod definition have these words?

 

We don't have quota per QoS class yet today, but it would be useful.

 

 

Any good documentation with examples would help.

 

I thought Derek had some good write ups of this.

 

 

Srinivas Kotaru

 

 

-- 

Srinivas Kotaru


_______________________________________________
dev mailing list
dev lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

 

