
Re: Introduction to opencontrail



Hi Pedro,

I have been following your work and it's taking good shape. The points you have raised here and in the PRs are quite valid. Some comments below.

On Oct 16, 2015 4:21 AM, "Pedro Marques" <pedro r marques gmail com> wrote:
>
> Hi,
> I would like to provide a little bit of context for the work that I’ve been doing in attempting to integrate opencontrail with openshift… I’ve proposed a couple of PRs and will be submitting a few more; I would appreciate help from the community in terms of understanding the DOs and DON’Ts, but it may be useful to give you a bit of context so that the PRs don’t appear to be just a random exercise.
>
> OpenContrail implements a network overlay; it is similar in overall concept to other offerings out there. The biggest difference is that it bases itself on standard network protocols for interoperability, both across different clusters and with other technologies. I.e. since it can interoperate with network appliances that support a given set of standards (https://tools.ietf.org/html/rfc4364, https://tools.ietf.org/html/rfc4023, https://tools.ietf.org/html/rfc7510), one can connect multiple overlays (pod networks) to multiple external networks. E.g. it is possible to bring in an external VLAN and present it as a kubernetes service. One can also connect multiple data-replication networks between different clusters.
>
> For enterprise environments, it is useful to have both TLS certificate validation between applications and network-based access control. Network-based access control integrates well with legacy technologies and provides an extra level of protection: debug ports are often incorrectly exposed by applications, and it prevents applications from DDoS'ing each other. It is especially useful when coupled with the ability to audit network flows and prove that observed flows match the expected policy. However, it must not get in the way of developers.
>
> In the current prototype that we’ve been working on with k8s/openshift, a pod:
>   - gets a unique IP (from the range 10.0/16)
>   - gets associated with the virtual-network <namespace>:ObjectMeta.Label[‘name’]
>   - gets access to services according to Label[‘uses’].
>   - all pods also get access to a list of predefined service names (e.g. default/default).
>
> A service:
>   - Gets the ClusterIP address assigned by k8s (172.30/16)
>   - Creates a service network named after label[‘name’] (defaults to “default” if not present).
>
> The choice of labels used here is probably not the best (this is configurable); and currently labels do not allow one to specify a list of services… the k8s team suggests annotations for that. IMHO, it would be great if labels were user input and annotations were used by programs to record state.

This is definitely an area where we need to solve the usability problem. Yes, labels for user control and annotations for program control has been the philosophy. I think we may need to capture the basic intent in the pod structure itself.
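
To make sure I read the conventions right, with what you describe a pod and a service would carry metadata roughly like the following (the namespace and names here are made up, purely for illustration):

    # pod metadata (sketch)
    metadata:
      namespace: dev
      labels:
        name: frontend   # pod joins virtual-network dev:frontend
        uses: backend    # pod may reach the "backend" service network
                         # (a list of services would need an annotation instead)

    # service metadata (sketch)
    metadata:
      namespace: dev
      labels:
        name: backend    # creates the "backend" service network; ClusterIP assigned from 172.30/16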

Either way, the issue of admission control and auth remains. We should look at how storage has handled this, with claims, as an example.

Currently in openshift we allow only the cluster admin to control the network parameters. E.g. the admin decides that two projects should have complete access to each other's network. The network plugin then implements that access policy.

I think the right place to discuss which labels, fields or annotations need to be standardized is upstream kubernetes itself.

>
> I’d really like to get feedback / suggestions from the community on how to best achieve this fine-grained control of network access, without unduly burdening the developer.
>
> My assumption is that nodes do not get direct access to the internet, and that pods don’t get direct access to nodes.
> For instance, the openshift-master is accessible at the master address from outside the cluster, but from inside the cluster it should, IMHO, be accessible via the service address (kubernetes.default.svc.<DNS domain>).

Your assumption looks neat; currently, though, the default openshift-sdn plugin does not follow that idea. If contrail wants to encourage access through services only, I see no issues.
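
In that model, in-cluster clients would always go through the service address, e.g. something like this from inside a pod (the endpoint is picked purely as an example):

    # hypothetical check: reach the API server via its service DNS name instead of the node/master address
    curl -k https://kubernetes.default.svc.<DNS domain>/healthz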

>
> There are small tweaks here and there that need to be made to the builder and deployer for this to be possible; I have a couple of PRs outstanding at the moment that try to address some of these issues.

Yes, the builder pod spawns docker containers directly, and that becomes a pain for the SDN since it stands to clash with docker's networking.
I checked your PR regarding this, and it is going to be immensely useful for all of us. Thanks for tackling that bull :).

>
> Lastly, it seems to me that the conversation around network segmentation and the proposed changes will be beneficial to openshift when other network implementations (e.g. redhat sdn) are used; at least, that is my intent.

Yes, that discussion needs to happen. The basic APIs need to be hammered out too. See this example - https://github.com/openshift/origin/blob/master/docs/generated/oadm_by_example_content.adoc#oadm-pod-network-join-projects
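
For instance, joining the networks of two projects with the multitenant openshift-sdn plugin looks roughly like this (project names made up):

    oadm pod-network join-projects --to=projectA projectB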

Your help will be really useful in figuring out the APIs as well as how the intent is finally captured by a plugin, openshift-sdn or contrail or any other.

Thanks for raising these important issues.

/Rajat


>
> thanks,
>   Pedro.
>
>

