
Introduction to OpenContrail

I would like to provide a little context for the work I've been doing to integrate OpenContrail with OpenShift. I've proposed a couple of PRs and will be submitting a few more; I would appreciate help from the community in understanding the dos and don'ts, and it may be useful to give you some context so that the PRs don't appear to be just a random exercise.

OpenContrail implements a network overlay; it is similar in overall concept to other offerings out there. The biggest difference is that it is based on standard network protocols, for interoperability both across different clusters and with other technologies. Since it can interoperate with network appliances that support a given set of standards (https://tools.ietf.org/html/rfc4364, https://tools.ietf.org/html/rfc4023, https://tools.ietf.org/html/rfc7510), one can connect multiple overlays (pod networks) to multiple external networks. E.g. it is possible to bring in an external VLAN and present it as a kubernetes service. One can also connect multiple data-replication networks between different clusters.

For enterprise environments, it is useful to have both TLS certificate validation between applications and network-based access control. Network-based access control integrates well with legacy technologies and provides an extra level of protection: debug ports are often incorrectly exposed by applications, and it prevents applications from DDoS'ing each other. It is especially useful when coupled with the ability to audit network flows and prove that observed flows match the expected policy. However, it must not get in the way of developers.

In the current prototype that we've been working on with k8s/openshift, a pod:
  - gets a unique IP (from the range 10.0/16)
  - gets associated with the virtual-network <namespace>:ObjectMeta.Label['name']
  - gets access to services according to Label['uses']
  - additionally, all pods get access to a list of predefined service names (e.g. default/default).
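The mapping above can be sketched in a few lines of Python. This is purely illustrative; the function name, the default-services list, and the returned keys are assumptions of mine, not OpenContrail's actual API.

```python
# Hypothetical sketch of the pod conventions described above:
# the virtual network is derived from the namespace and the 'name'
# label, and the 'uses' label names a service the pod may reach.

DEFAULT_SERVICES = ["default/default"]  # predefined services all pods can reach

def pod_network_config(namespace, labels):
    """Map a pod's namespace and labels to its virtual network
    and the list of service networks it is allowed to access."""
    network = "%s:%s" % (namespace, labels.get("name", "default"))
    uses = [labels["uses"]] if "uses" in labels else []
    return {
        "virtual-network": network,
        "allowed-services": uses + DEFAULT_SERVICES,
    }
```

For example, a pod in namespace "demo" with labels name=frontend, uses=backend would land in the virtual network "demo:frontend" and be allowed to reach "backend" plus the predefined services.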

A service:
  - gets the ClusterIP address assigned by k8s (172.30/16)
  - creates a service network named after label['name'] (defaulting to "default" if not present).
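The service-side naming rule is simple enough to state as code; again, the function name is mine and only sketches the convention described above.

```python
def service_network_name(labels):
    """Name of the service network created for a k8s service:
    taken from label['name'], falling back to 'default' when the
    label is not present (per the convention above)."""
    return labels.get("name", "default")
```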

The choice of labels used here is probably not the best (it is configurable); and currently labels do not allow one to specify a list of services... the k8s team suggests annotations for that. IMHO, it would be great if labels were user input and annotations were used by programs to record state.
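Since a single label value cannot hold a list, one way to follow the k8s team's suggestion is a JSON-encoded annotation. The annotation key and the fallback behavior here are my assumptions, shown only to make the idea concrete:

```python
import json

def services_used(metadata):
    """Read a pod's service dependencies from a hypothetical 'uses'
    annotation containing a JSON list, falling back to the
    single-valued 'uses' label described above."""
    ann = metadata.get("annotations", {}).get("uses")
    if ann:
        return json.loads(ann)
    label = metadata.get("labels", {}).get("uses")
    return [label] if label else []
```

With this, a pod could declare annotations: {"uses": '["backend", "cache"]'} to depend on two services, while pods using only the single 'uses' label keep working.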

I'd really like to get feedback and suggestions from the community on how best to achieve this fine-grained control of network access without unduly burdening the developer.

My assumption is that nodes do not get direct access to the internet, and that pods don't get direct access to nodes.
For instance, the openshift-master is accessible at the master address from outside the cluster, but from inside the cluster it should, IMHO, be accessible via the service address (kubernetes.default.svc.<DNS domain>).

There are small tweaks here and there that need to be made to the builder and deployer for this to be possible; I have a couple of PRs outstanding at the moment that try to address some of these issues.

Lastly, it seems to me that the conversation around network segmentation and the proposed changes will benefit OpenShift even when other network implementations (e.g. the Red Hat SDN) are used; at least, that is my intent.

