
Re: Direction on authentication, groups, and authorization in OpenShift

----- Original Message -----
> Keystone was designed to handle having new services added to it over
> time. For example, Cinder, Neutron, Heat, Trove, Ironic, etc. were not there
> when Keystone first came along. Adding OpenShift as another service of
> Keystone shouldn't need any code changes on Keystone's side, I think.
> Regarding making expensive remote calls, as I understand it, Keystone added
> its PKI token mode to avoid needing to do lots of round trips. I
> think that would solve that issue?

Ultimately a round trip is required for every resolution of permissions, since local caches introduce a number of other problems.

Just to map the Keystone concepts:

  User: has account credentials, is associated with one or more tenants
  Tenant: unit of ownership in OpenStack, contains one or more users
  Role: a first-class piece of metadata associated with many user-tenant pairs.
  Token: identifying credential associated with a user or user and tenant
  Extras: bucket of key-value metadata associated with a user-tenant pair.
  Rule: describes a set of requirements for performing an action.
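A minimal sketch of how those concepts might be modelled, purely for illustration - the names and shapes here are my own, not Keystone's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class User:
    # has account credentials and is associated with one or more tenants
    name: str
    password_hash: str
    tenant_ids: set = field(default_factory=set)

@dataclass
class Tenant:
    # unit of ownership; contains one or more users
    id: str
    kind: str  # e.g. "domain" or "application" in OpenShift terms

@dataclass
class Token:
    # identifying credential for a user, optionally bound to a tenant
    value: str
    user: str
    tenant_id: Optional[str] = None

# roles and extras are both keyed by the (user, tenant) pair
roles: dict = {}   # (user_name, tenant_id) -> set of role names
extras: dict = {}  # (user_name, tenant_id) -> dict of key-value metadata
```

The point of keeping roles and extras keyed on the user-tenant pair is that it matches the Keystone model directly.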

The set of users for an OpenShift deployment is expected to scale to at least a million users (although 3-5 million would be a safer bound), and the set of tenants (domains and applications) we would expose in the backend should be considered to be at least a million.  I'm not counting the other, simpler objects needing access control that could be modelled, because those can be implemented locally.

In terms of the underlying mapping, the model described below is intentionally compatible with that mapping - a set of permissions (called capabilities in Keystone) is assigned to roles, which users are given on a particular tenant.  In order to do the "hard" queries (domains and applications I have access to), you need to compute the intersection of the tenants of a particular type the user has access to with all the applications in the database.  Making the call to Keystone to fetch the list of tenants of a given type (application) and then performing the query against the DB is roughly the same scale, plus the constant latency of the call to Keystone.
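A rough sketch of that intersection query, where fetch_tenant_ids is a hypothetical stand-in for the single remote call to Keystone:

```python
def applications_visible_to(user_id, fetch_tenant_ids, db_applications):
    # fetch_tenant_ids(user_id, kind) stands in for the remote Keystone call
    # returning the tenant ids of type "application" the user can access
    accessible = set(fetch_tenant_ids(user_id, "application"))
    # intersect with the applications actually stored in the broker's database
    return sorted(app_id for app_id in db_applications if app_id in accessible)
```

The cost is one remote round trip plus a local set intersection, which is the "roughly the same scale, plus constant latency" point above.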

The other type of query - checking the permissions for a user operation - is again very similar to what we are describing: a policy check for a user, on a tenant, for a permission.  So no real mismatch there.
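That policy check reduces to something like the following sketch (the table shapes are assumptions, not either project's actual API):

```python
def check_permission(assignments, role_permissions, user_id, tenant_id, permission):
    # assignments: (user_id, tenant_id) -> set of role names
    # role_permissions: role name -> set of permissions (Keystone "capabilities")
    roles = assignments.get((user_id, tenant_id), set())
    return any(permission in role_permissions.get(role, set()) for role in roles)
```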

> As for saying you can't deploy a Keystone-based OpenShift without
> OpenStack, that is not entirely true. You should be able to install Keystone
> without all of the rest of OpenStack. It is modular. You could choose to
> share an instance of Keystone with an OpenStack instance or just use the
> Keystone daemon for the OpenShift instance only.  Or you should be able to
> run Keystone with the Broker on EC2, for example.  Having a private instance
> of Keystone for OpenShift but sharing all the code/features is still
> preferable to implementing something all on your own that does the same
> thing, I think. If it really is not the right fit for the project, that's
> just fine. But I think it's worth trying, because OpenStack development on
> Keystone would come to OpenShift for free (Active Directory integration,
> scalability enhancements, etc.). An auth system is really a lot of work to
> get right.

I think that's the important thing to note - OpenShift isn't really implementing an auth system, but is pluggable into them.  For instance, today we delegate to mod_auth in very much the same way that a plugin could be written to delegate to Keystone.
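To illustrate the pluggability point, a hypothetical sketch of what that boundary could look like - the class and method names are mine, not OpenShift's actual plugin API:

```python
class AuthPlugin:
    # the broker only sees this interface; where identity comes from is hidden
    def authenticate(self, username, credential):
        raise NotImplementedError

class ModAuthPlugin(AuthPlugin):
    # trusts the identity the front-end httpd already established via mod_auth
    def authenticate(self, username, credential):
        return {"user": username, "source": "mod_auth"}

class KeystonePlugin(AuthPlugin):
    # would instead validate the credential with a remote call to Keystone
    def __init__(self, validate_remotely):
        self.validate_remotely = validate_remotely

    def authenticate(self, username, credential):
        return self.validate_remotely(username, credential)
```

Either plugin satisfies the same interface, so the broker code above it does not change when the delegation target does.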

I was not aware Keystone was consumable without the other foundational components; that's good to know.  That simplifies the problem of using it, although it does add an additional component that must be scalable and properly abstracted in an OpenShift deployment at small scales.  I would much prefer to use it in deployments that already have OpenStack than to introduce it elsewhere.

The next step - making the membership and authorization checks themselves capable of delegating to Keystone - is not unreasonable; it's just the model described below plus the necessary glue code to Keystone.  Certainly we should design the API for authorization in such a way that we can replace the implementation for membership with remote calls to Keystone.  However, the expensive work that has to be done regardless is to design the model for interaction from a user perspective, and then build in the necessary checks and tests.  Delegating those operations to Keystone at a later point is a much smaller bit of work.
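A sketch of the kind of API boundary that would make that swap cheap - again, the names here are illustrative assumptions:

```python
class MembershipBackend:
    # the authorization code calls this; callers never know whether the
    # answer came from a local table or a remote service
    def tenants_for(self, user_id):
        raise NotImplementedError

class LocalMembership(MembershipBackend):
    # backed by the broker's own database
    def __init__(self, table):
        self.table = table  # user_id -> set of tenant ids

    def tenants_for(self, user_id):
        return self.table.get(user_id, set())

# a KeystoneMembership would implement tenants_for() with a remote call,
# leaving every caller of the authorization API unchanged
```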

One gap there today is the concept of scopes for limiting the actions a token can execute (perhaps that exists, and I just missed it in the Keystone docs), which is important for us for user automation of application management.  Some of it can be done in our model, but I'm not sure that scales properly in all cases.
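What I mean by scopes, as a hypothetical sketch - a token carrying a whitelist of actions, say one handed to an automation script that should only be able to restart a single application:

```python
def token_allows(token, action):
    # "scopes" is a hypothetical whitelist of actions the token may perform;
    # an unscoped token is treated as unrestricted
    scopes = token.get("scopes")
    return scopes is None or action in scopes
```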
