
Re: NetworkCIDR for big cluster



Yes, you’re right. We can control most of this using quotas. That makes sense.

 

-- 

Srinivas Kotaru

 

From: "ccoleman redhat com" <ccoleman redhat com>
Date: Friday, October 7, 2016 at 5:39 PM
To: Srinivas Naga Kotaru <skotaru cisco com>
Cc: Rajat Chopra <rchopra redhat com>, dev <dev lists openshift redhat com>
Subject: Re: NetworkCIDR for big cluster

 

 


On Oct 7, 2016, at 6:19 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

 

Rajat & Clayton

 

That is really useful information. Thanks for the data.

 

Do we need to keep both clusterNetwork & serviceNetwork the same size? Based on your data, we would like to go with a 1000-node cluster with 256 pods per node.

 

clusterNetworkCIDR: 10.128.0.0/13

hostSubnetLength: 9

serviceNetworkCIDR: 172.30.0.0/13

networkPluginName: "redhat/openshift-ovs-multitenant"
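
(Sanity check on my math: a /13 cluster CIDR with hostSubnetLength 9 should give 2^(32-13-9) = 1024 node subnets with 2^9 = 512 pod IPs each. Also, 172.30.0.0 is not on a /13 boundary - the enclosing /13 would be 172.24.0.0/13 - so should the service network stay at /16 or move to an aligned block?)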

 

Is the above configuration right? We want to keep a few things in mind:

·         We don’t have control over how many projects clients will create unless there is a hard limit that a platform admin can enforce

 

There is a limit that can be enforced - it is documented here: https://docs.openshift.org/latest/admin_guide/managing_projects.html#limit-projects-per-user
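
For reference, the knob is the ProjectRequestLimit admission plugin in master-config.yaml. A rough sketch (values illustrative - check the linked docs for the exact schema):

admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        - selector:
            level: admin      # applies to users labeled level=admin
          maxProjects: 10
        - maxProjects: 2      # default for all other users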

 

 

·         We can’t predict how many services will be created in a project.

 

In practice most apps are 1 or 2 pods, and most users tend to create 1 or fewer apps, so estimating one service per pod is reasonable.

 

You can limit the number of services a user can create in a project with resource quotas (see the docs)
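
For example, a quota along these lines (count illustrative) caps services per project:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    services: "10"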

 

·         Since OpenShift now supports enforcing the number of pods per node, we can monitor and tune this number for our environment

 

Do we have an up-to-date planning guide?

 

 

The planning guide should be up to date, although I have not checked in the last few weeks whether it covers 3.3.z.

 

 

 

-- 

Srinivas Kotaru

 

From: Rajat Chopra <rchopra redhat com>
Date: Friday, October 7, 2016 at 4:56 PM
To: "ccoleman redhat com" <ccoleman redhat com>
Cc: Srinivas Naga Kotaru <skotaru cisco com>, dev <dev lists openshift redhat com>
Subject: Re: NetworkCIDR for big cluster

 

Check this for the new defaults:
https://docs.openshift.com/container-platform/3.3/install_config/configuring_sdn.html#configuring-the-pod-network-on-masters

The default cluster subnet mask is now 14, but the important number to look at is hostSubnetLength. The default is now 9, which means one can have 512 pods on each of 512 nodes.
To get 1000+ nodes, either set hostSubnetLength to 8 (256 pods per node) or widen the CIDR mask to 13.

For a cidr mask of 12, and a hostSubnetLength of 9, you will get 512 pods on each of 2048 nodes.
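
In formula form: nodes = 2^(32 - cidrMask - hostSubnetLength), and pods per node = 2^hostSubnetLength. A master-config.yaml networkConfig stanza for the 1024-node case would look roughly like this (addresses illustrative):

networkConfig:
  clusterNetworkCIDR: 10.128.0.0/13    # 2^(32-13-9) = 1024 node subnets
  hostSubnetLength: 9                  # 2^9 = 512 pod IPs per node
  serviceNetworkCIDR: 172.30.0.0/16
  networkPluginName: "redhat/openshift-ovs-multitenant"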

/Rajat

On Oct 7, 2016 4:08 PM, "Clayton Coleman" <ccoleman redhat com> wrote:

> Pod limits are now directly settable, so no longer hard coded.
>
> It sounds like you would plan for 100k pods, which a /15 or /14 CIDR for pods would cover.  I would recommend leaving some headroom there.  /14 might be safer if you want to continue to grow the cluster in the future - our target max cluster size is ~300k pods, so if you expect a significant bump, plan for /14 ahead of time.
>
> In practice, most large clusters will be dense, and dense due to services.  I would plan for 1 pod per service just to be safe, so match the service CIDR to the pod CIDR.
>
> There are multi-tenant network settings that might impact how many pods per node are available; I believe the defaults allow for well over 100 pods per node, but please double-check.
>
> On Oct 6, 2016, at 5:24 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:
>

>> Yes, we are using the multi-tenant SDN plugin and our nodes are 8 CPU x 64 GB. We are planning to host more than 100 pods per node. With the latest OpenShift release, I hope we no longer have the old hard limits on cluster size and pods per node.

>>
>>  
>>
>> I don’t have an accurate answer for # of services per project, which we can’t control on the platform side.
>>
>>  
>>
>> Based on the above facts, can you recommend a decent CIDR for pods & services?
>>
>>  
>>
>> -- 
>>
>> Srinivas Kotaru
>>
>>  
>>
>> From: "ccoleman redhat com" <ccoleman redhat com>
>> Date: Thursday, October 6, 2016 at 1:50 PM
>> To: Srinivas Naga Kotaru <skotaru cisco com>
>> Cc: dev <dev lists openshift redhat com>
>> Subject: Re: NetworkCIDR for big cluster
>>
>>  
>>
>>  
>>
>>
>> On Oct 6, 2016, at 1:23 PM, Srinivas Naga Kotaru (skotaru) <skotaru cisco com> wrote:

 

>>>
>>> Hi
>>>
>>>  
>>>
>>> We’re building 3 big clusters, one per data center. Growth is expected to reach 1000 nodes per cluster over time.
>>>
>>>  
>>>
>>> Questions:
>>>
>>>  
>>>
>>> 1.       
>>>
>>> # egrep 'clusterNetworkCIDR|serviceNetworkCIDR' /etc/origin/master/master-config.yaml
>>>
>>>  
>>>
>>>   clusterNetworkCIDR: 10.1.0.0/16
>>>
>>>   serviceNetworkCIDR: 172.30.0.0/16
>>>
>>>  
>>>
>>> Above are the default subnet values. Will these defaults be sufficient for a 1k-node cluster?
>>
>>  
>>
>> That would be 64k pods and services.  If you expect densities in excess of 100 pods per node, you should probably plan for a larger CIDR for the cluster network, and if you are using the multi-tenant plugin you'll also want to plan around how many unique projects you will have.
>>
>>  
>>
>> So inputs are:
>>
>> # of projects
>>
>> Expected number of services per project (simple apps or complex apps per project)
>>
>> Expected pod densities per node
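>>
>> As a worked example with round numbers (illustrative only): 1000 nodes x 100 pods per node = 100k pods. A /16 holds 2^16 = 65,536 addresses (too small), a /15 holds 131,072, and a /14 holds 262,144. Planning one service per pod implies the same sizing for the service network.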
>>
>>  
>>
>>  

 

>>>
>>>  
>>>
>>> 2.      If the answer is ‘no’, can we change to new CIDR values after the cluster is built, depending on growth? (I heard it is not possible once the cluster is built.)
>>
>>  
>>
>>  
>>
>> It is possible to grow it if the expanded CIDR includes the original CIDR.  However, it can be disruptive, so we recommend planning up front.  I would recommend leaving the possibility of a larger block open by expanding the mask if you are unsure about exact limits.
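>>
>> For example (hypothetical addresses): a cluster that started with clusterNetworkCIDR 10.128.0.0/14 could later be widened to 10.128.0.0/13, since the /13 supernet contains the original /14; it could not be moved to a disjoint block such as 10.64.0.0/14.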
>>
>>  
>>
>>  

 

>>>
>>> 3.      If the answer is ‘no’, what are the right CIDRs for a 1k-node cluster?
>>>
>>>  
>>>
>>>  
>>>
>>> -- 
>>>
>>> Srinivas Kotaru

 

>>>

