
Re: Direction on authentication, groups, and authorization in OpenShift

----- Original Message -----
> The Keystone guys are committed to making it scale to the same magnitude
> OpenShift needs. Rackspace is running it today on a huge cloud. So getting
> it to scale shouldn't be an issue. :)
> You are right about delegation and limiting scope. They are working on it now
> though:
> http://adam.younglogic.com/2013/07/a-vision-for-keystone/
> Getting those remaining bits into keystone will be easier than writing the
> whole thing from scratch though.

The only bits left for us to implement at this point are the final details of permissions and the user experience around membership management.  Everything else is either already implemented or in final-rev code.  That said, two things we *don't* want to implement are complex LDAP synchronization (unless it proves necessary for token propagation to managed resources) and a full OAuth stack.  What's the timeframe for the RBAC and scopes work?

> I see what you are saying about getting a list of all apps a user has access
> to. But that is going to be a very hard-to-scale query one way or another.
> If you get a DB or something to help you do the query, then you just run
> into a hard-to-scale DB.

We've tested the DB queries for membership at millions of records and tens of millions of memberships, and we were still in the sub-3ms query range.  It's not a terribly difficult problem once the membership is materialized, because we go directly to a very simple Mongoid in-memory index - the bulk of the work is keeping membership in sync between two systems.  For OpenShift, we expect lists of memberships to be in the hundreds, so that's a tractable problem.
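To illustrate why the lookup stays cheap once membership is materialized (a hypothetical sketch, not OpenShift's actual schema or code - the `MembershipIndex` name and its methods are invented for this example): reading a user's resources becomes a single hash lookup, with all the cost paid up front on the write path that keeps the index in sync.

```python
from collections import defaultdict

class MembershipIndex:
    """Hypothetical materialized index: user id -> set of resource ids.

    Writes (grant/revoke) keep the index in sync; reads are O(1)
    hash lookups with no join against a memberships table.
    """
    def __init__(self):
        self._by_user = defaultdict(set)

    def grant(self, user_id, resource_id):
        self._by_user[user_id].add(resource_id)

    def revoke(self, user_id, resource_id):
        self._by_user[user_id].discard(resource_id)

    def resources_for(self, user_id):
        # The "list all apps a user can access" query: a direct lookup.
        return sorted(self._by_user[user_id])

index = MembershipIndex()
index.grant("alice", "app-1")
index.grant("alice", "app-2")
index.revoke("alice", "app-1")
```

The hard part, as noted above, is not this read path but keeping the materialized index consistent with the authoritative membership store when two systems both hold a copy.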

> I think the OpenStack guys just side step the issue
> by having you select a tenant first before listing everything related to the
> tenant. This allows you to scale out servers by sharding across tenants. As
> a user, I don't really mind having to select a tenant first.

Maybe I'm missing how tenants are subdivided - given the concepts in the Keystone documentation, I was mapping a single tenant to a single domain or application, with users being members of each tenant they have access to.  Is that not the resource mapping that Keystone expects?  If not, what's the intermediate concept for a resource (or is it part of the role definition)?
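For concreteness, here is the mapping I had assumed, sketched as data (this is my assumption about Keystone's model, not a confirmed description of it; the tenant names and `tenants_for` helper are illustrative only): one tenant per domain/application, with users holding roles inside each tenant they belong to.

```python
# Assumed mapping (illustrative): tenant -> {user -> roles}.
# One Keystone tenant per OpenShift domain or application.
tenants = {
    "domain-acme": {"alice": ["admin"], "bob": ["member"]},
    "app-blog":    {"alice": ["member"]},
}

def tenants_for(user):
    """The cross-tenant 'list everything I can access' query -- the
    one that is hard to scale once tenants are sharded, and that
    selecting a tenant first sidesteps."""
    return sorted(t for t, members in tenants.items() if user in members)
```

Under this assumed mapping, the query the previous paragraph worries about is exactly `tenants_for`: it has to touch every shard, whereas any per-tenant operation stays local to one.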

> It really sounds to me like the two projects have very similar and
> complementary requirements/end goals here.

I think the discussion has certainly shown that the mapping is pretty close.  We'd like to boot this up within our own model in the next sprint or two, but given the parallels, it makes sense to keep the checks and mappings easily abstractable onto an underlying Keystone domain model - that's something we can look at over the course of the fall.

One more thought - OpenShift needs to know when membership in a resource changes in order to propagate security tokens to the various containers under its control.  Does Keystone offer an event stream for tenant changes that we can subscribe to?  If not, then all interactions have to go through the OpenShift API, and the underlying tenant has to be off limits to admin manipulation (although we could expose key propagation as a user/admin action to recover in those cases).  That's a fairly annoying mapping problem as well.
