
Re: Segmentation of internal/external applications



No, routers can be instructed to ignore the hostname provided in the route and substitute an alternate one.
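For example (a sketch only - this assumes the 3.3 router still supports the --force-subdomain option, and the router name/domain below are placeholders):

    # Deploy a router that ignores the host specified in each route and
    # generates one from this template instead
    $ oc adm router router-internal --replicas=2 \
        --force-subdomain='${name}-${namespace}.internal.ose.domain.com'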

On Mon, Oct 31, 2016 at 1:15 PM, Josh Baird <joshbaird@gmail.com> wrote:
Hi, thanks for the reply.  Would I need to manually configure routes for each application to select the appropriate hostname(s) since they will default to ${openshift_master_default_subdomain}?

On Mon, Oct 31, 2016 at 1:04 PM, Clayton Coleman <ccoleman@redhat.com> wrote:


On Oct 31, 2016, at 12:06 PM, Josh Baird <joshbaird@gmail.com> wrote:

Hi,

I'm in the initial planning phase for an OSE 3.3 HA deployment and had a few questions/concerns about how to properly segment routing for applications that should only be available within our firewalls, and other applications that should be accessible directly via the Internet.

As of now, the plan is to deploy a group of dedicated router nodes behind external load balancers and point our wildcard DNS (e.g., *.ose.domain.com) to the VIP that fronts the router nodes.  My first thought was to handle these types of restrictions at our external load balancers with rules that restrict source networks for certain URIs.

Perhaps another solution would be to deploy multiple groups of dedicated router nodes (one for "internal" apps and one for "external" apps) and create a unique VIP on our load balancers for each group.  Our internet firewalls would allow access to the VIP for "external" applications (the one with the 'external' router nodes as pool members) and restrict/prevent access to the VIP fronting the "internal" applications.  For this to work, I think we would need to use labels/selectors to "tag" each application/project/pod as 'internal' or 'external' so the appropriate group of routers gets configured properly.  In this scenario, I believe I would also need multiple wildcard DNS entries (e.g., *.internal.ose.domain.com and *.external.ose.domain.com) that point to the respective router node VIPs.

We've recommended this topology to a number of users with this need.  If you assume most app authors are internal, then I would recommend treating the default router as "internal", setting your firewalls appropriately, and then having a special label / project label / process for getting access to external routes.  One way is to deploy the external routers looking for a specific route label (external=true) - this gives control to end users to decide to go external.  The more controlled way is to require both a route label (external=true) and a namespace label (externalcapable=true), where admins set the project label (because end users can't).
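Roughly, for the more controlled variant (a sketch only - the router name, node label, and project/route names are placeholders, and this assumes the 3.3 ROUTE_LABELS / NAMESPACE_LABELS sharding env vars):

    # Create a second router deployment for external traffic, placed on
    # the "external" router nodes via a node selector
    $ oc adm router router-external --replicas=2 \
        --selector='region=router-external' \
        --service-account=router

    # Only serve routes explicitly labeled external=true ...
    $ oc set env dc/router-external ROUTE_LABELS='external=true'

    # ... and only in projects an admin has labeled externalcapable=true
    $ oc set env dc/router-external NAMESPACE_LABELS='externalcapable=true'

    # End users then opt a route in themselves:
    $ oc label route myapp external=true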

One note when using project labels - because it's expensive to calculate that list, we only refresh it at the router sync interval, so there's more lag in making a project available externally (there's no such lag for route changes, though).
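That is, after an admin opts a project in with something like the following (label name per the sketch above), the external router shard only picks it up at its next sync:

    $ oc label namespace myproject externalcapable=true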


Does this solution sound feasible?  Is there a better way to handle this or am I on the right track?

Thanks,

Josh
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


