[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: Usage is more than Limits



OK, got it. But I have occasions where the sum of requests <= node allocatable and pod scheduling still fails. If the scheduler only considers requests, then the scenario below is valid, but I don't understand why scheduling fails in the other scenario, where requests <= node capacity. Any comment?

 

 

-- 

Srinivas Kotaru

 

From: Derek Carr <decarr redhat com>
Date: Wednesday, May 24, 2017 at 12:04 PM
To: Srinivas Naga Kotaru <skotaru cisco com>
Cc: "ccoleman redhat com" <ccoleman redhat com>, dev <dev lists openshift redhat com>
Subject: Re: Usage is more than Limits

 

The scheduler does not look at actual utilization of the node.  It ensures that the sum of pod requests <= node allocatable.
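That fit check can be sketched in a few lines. This is an illustrative model, not the real scheduler code; the function and parameter names are made up, and CPU is expressed in millicores as in the node output below:

```python
# Minimal sketch of the scheduler's resource fit predicate: a pod fits a
# node only if the sum of the existing pods' requests plus the new pod's
# request stays within node allocatable. Actual utilization on the node
# is never consulted.

def fits(node_allocatable_mcpu, existing_requests_mcpu, pod_request_mcpu):
    """Return True if the pod's CPU request fits on the node."""
    return existing_requests_mcpu + pod_request_mcpu <= node_allocatable_mcpu

# Node with 8 CPUs (8000 millicores) that already has 7975m requested:
print(fits(8000, 7975, 25))   # True  - fits exactly
print(fits(8000, 7975, 100))  # False - rejected even if the node is idle
```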

 

On Wed, May 24, 2017 at 2:44 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

Is it true that if the sum of all pod limits (CPU or memory) on a node >= the physically available capacity, then pod scheduling fails?

 

Today I got a complaint that pods were unable to schedule on a node. While investigating, I captured the oc describe node/<node> output below:

 

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests   CPU Limits    Memory Requests   Memory Limits
  ------------   ----------    ---------------   -------------
  7975m (99%)    7875m (98%)   14400Mi (22%)     14144Mi (22%)

 

 

It clearly shows that CPU requests and CPU limits are touching the physically available capacity (8 CPUs), but the actual usage is pretty low.
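The arithmetic behind that node output can be checked directly; a small sketch using the values from the "Allocated resources" block above:

```python
# With 7975m of 8000m CPU already requested, only 25m of schedulable CPU
# remains, so any new pod whose CPU request exceeds 25m is rejected -
# regardless of how idle the node actually is.

node_allocatable_mcpu = 8000   # 8 CPUs
requested_mcpu = 7975          # from "CPU Requests" above

free_mcpu = node_allocatable_mcpu - requested_mcpu
print(free_mcpu)                               # 25
print(requested_mcpu / node_allocatable_mcpu)  # 0.996875 -> shown as 99%
```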

 

 

 

Won't it be a problem from a capacity planning perspective that, although usage is pretty low, we are unable to place pods and are forced to add more nodes?

 

 

-- 

Srinivas Kotaru

 

From: Srinivas Naga Kotaru <skotaru cisco com>
Date: Tuesday, May 23, 2017 at 8:01 PM
To: Derek Carr <decarr redhat com>, "ccoleman redhat com" <ccoleman redhat com>


Cc: dev <dev lists openshift redhat com>
Subject: Re: Usage is more than Limits

 

I am using the Heapster pod to collect data from the Kubernetes API and processing it for the graph below. This graph shows usage is more than limits. I see a similar trend for CPU as well.

 

 

 

 

I have seen, multiple times in our clusters, pod scheduling fail although the nodes have enough memory and CPU, and actual usage of both is a much smaller percentage than what is actually available. On what occasions does pod scheduling fail although the node has enough capacity? As per my understanding, as long as a node has enough resources to satisfy the initial request size, it should be able to place the pod.

 

 

-- 

Srinivas Kotaru

 

From: Derek Carr <decarr redhat com>
Date: Tuesday, May 23, 2017 at 7:06 PM
To: "ccoleman redhat com" <ccoleman redhat com>, Srinivas Naga Kotaru <skotaru cisco com>
Cc: dev <dev lists openshift redhat com>
Subject: Re: Usage is more than Limits

 

Agree with Clayton.

 

To debug further, would need to see:

 

- The pod spec as persisted
- docker inspect output for the suspect container
- CPU and memory cgroup values in sysfs for the suspect container
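Those cgroup values can be read straight off the node; a sketch assuming the cgroup v1 layout, with a purely illustrative container path:

```python
# Read the effective CPU quota/period and memory limit from a container's
# cgroup files (cgroup v1 layout). The directory paths passed in are
# illustrative - the real path depends on the node's cgroup driver.

from pathlib import Path

def read_cgroup_limits(cpu_dir, mem_dir):
    cpu, mem = Path(cpu_dir), Path(mem_dir)
    return {
        "cpu_quota_us": int((cpu / "cpu.cfs_quota_us").read_text()),
        "cpu_period_us": int((cpu / "cpu.cfs_period_us").read_text()),
        "memory_limit_bytes": int((mem / "memory.limit_in_bytes").read_text()),
    }

# Example call (hypothetical paths, left commented out):
# read_cgroup_limits(
#     "/sys/fs/cgroup/cpu/kubepods/pod<uid>/<container-id>",
#     "/sys/fs/cgroup/memory/kubepods/pod<uid>/<container-id>")
```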

 

Feel free to reply directly if you need more assistance.

 

Thanks,

 

On Tue, May 23, 2017 at 2:01 PM Clayton Coleman <ccoleman redhat com> wrote:

CPU request corresponds to fair share CPU - if you cross over your limit you use the slack capacity.  Memory request determines most scheduling (we place you on a node if it has at least request memory available).  Memory limit is a hard limit, and CPU limit is a hard limit (if you have 0.1 core limit, you can never run more than 10% of CPU time).
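A worked example of that hard CPU cap: the limit is enforced through a CFS quota in the container's cpu cgroup. The 100000us period below is the kernel's default CFS period, and the function name is illustrative:

```python
# How a CPU limit becomes a hard cap: the container's cgroup gets a CFS
# quota of limit_cores * period. With a 0.1-core limit and the default
# 100ms period, the container may run at most 10ms of CPU time per 100ms
# window - i.e. never more than 10% of one CPU, as described above.

CFS_PERIOD_US = 100_000  # default kernel CFS period (100ms)

def cfs_quota_us(limit_cores):
    return int(limit_cores * CFS_PERIOD_US)

print(cfs_quota_us(0.1))  # 10000 -> 10ms per 100ms window, i.e. 10% CPU
print(cfs_quota_us(2.0))  # 200000 -> up to 2 full CPUs
```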

 

When you use overcommit, we rewrite either limit or requests.  If you see CPU or memory above limit, that's definitely a bug.

 

On Tue, May 23, 2017 at 1:15 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

Can someone comment on this?

 

 

-- 

Srinivas Kotaru

 

From: Srinivas Naga Kotaru <skotaru cisco com>
Date: Wednesday, May 10, 2017 at 12:25 PM
To: dev <dev lists openshift redhat com>
Subject: Usage is more than Limits

 

Hi

 

Is it possible for usage to be more than limits? I observed that some nodes have more usage than their allowed limits in our cluster. We have quotas implemented, LimitRange enabled per project (default limits and requests), and a cluster overcommit percentage specified (10% of CPU limits and 25% of memory limits become the requests used for scheduling).

 

My understanding, based on the above, is that requests are always 1/10 of the CPU limit and 1/4 of the memory limit; limits can go as high as clients specify, but usage should be less than limits since clients can't go beyond their limits.
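That overcommit rewrite can be illustrated with a small calculation. The 10%/25% ratios come from the overcommit configuration described above; the constants and function name are illustrative:

```python
# Sketch of the overcommit rewrite: requests are derived from limits
# using the cluster's configured ratios (10% for CPU, 25% for memory),
# and it is these derived requests that the scheduler sums per node.

CPU_REQUEST_RATIO = 0.10   # request = 10% of the CPU limit
MEM_REQUEST_RATIO = 0.25   # request = 25% of the memory limit

def derived_requests(cpu_limit_mcpu, mem_limit_mi):
    return (int(cpu_limit_mcpu * CPU_REQUEST_RATIO),
            int(mem_limit_mi * MEM_REQUEST_RATIO))

# A container with a 1000m CPU limit and 1024Mi memory limit is scheduled
# as if it requested 100m CPU and 256Mi memory:
print(derived_requests(1000, 1024))  # (100, 256)
```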

 

 

-- 

Srinivas Kotaru


_______________________________________________
dev mailing list
dev lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

 


 

