
Re: How to do a flexvolume with ocp 4?

On Fri, Jun 14, 2019 at 10:34 AM Marc Boorshtein <mboorshtein gmail com> wrote:

I'll leave the discussion to guys with more knowledge than me, but using a sidecar container to provide a network storage client seems like overkill, or at least more complicated than required, to me. Network storage should be managed by the node, and containers should get their mount points without caring about the filesystem type. Only the nodes (or the privileged container that manages cifs on the node, for all containers/pods on that node) should need the keytab. I'd try providing that file to the privileged pod as a configmap.

Today we distribute the keytabs to the nodes via Ansible.  My concern with v4 (including Hemant's comments) is that this method will break with RHCOS, since everything is supposed to run in a container.  I'm also not a huge fan of distributing keytabs to nodes that might run a pod, instead of making them available only to the pod while it runs.  Can you run a flexvolume as its own container?  Instead of telling OCP 4 to run a script, can I tell it to run a container that does the mount?

You can't run a flexvolume plugin in its own container: since the kubelet directly `exec's` the flexvolume plugin, the plugin must be accessible to the kubelet on the host. Coming up with an RPC for running flexvolume in a container would amount to reinventing CSI. The good news, at least, is that in v4 the kubelet is *not* containerized and isn't running in a pod. I guess you will have to find a way of placing the Kerberos secret on the host and installing the cifs package.
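For reference, a flexvolume driver is just an executable that the kubelet exec's with an operation name and JSON options, and that replies with JSON on stdout. A minimal sketch of such a driver (the vendor name, paths, and options here are illustrative, not anything shipped with OCP):

```shell
#!/bin/sh
# Sketch of a flexvolume driver. The kubelet exec's this file directly from
# /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~cifs/cifs,
# so it has to live on the host filesystem, not inside a pod.

cifs_flex() {
    case "$1" in
        init)
            # No attach/detach support; this driver only mounts/unmounts.
            echo '{"status": "Success", "capabilities": {"attach": false}}'
            ;;
        mount)
            # $2 = target directory on the host, $3 = JSON options from the
            # pod spec. A real driver would parse "$3" and run something like:
            #   mount -t cifs "//server/share" "$2" -o sec=krb5
            echo '{"status": "Success"}'
            ;;
        unmount)
            # umount "$2"
            echo '{"status": "Success"}'
            ;;
        *)
            echo '{"status": "Not supported"}'
            return 1
            ;;
    esac
}

# A real driver script would end with: cifs_flex "$@"
```

The mount/unmount bodies are stubs here; the point is only the exec-with-JSON contract, which is why the binary has to be reachable from the kubelet's host filesystem.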

Can you write an operator that installs a daemonset which runs on "selected" nodes, copies the flexvolume plugin into the aforementioned location, and then mounts a hostPath that allows it to copy both the Kerberos secret and the mount.cifs binary onto the host, so the flexvolume plugin can access them? I haven't tried it, but https://gist.github.com/pantelis/540a19262cacc841fb0a
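The daemonset's job could be sketched as a shell function its privileged container runs at startup. Every path and environment variable below is an assumption about how the hostPath volumes and the secret are mounted into that container; nothing here is a documented OCP layout:

```shell
#!/bin/sh
# Sketch of the install step a privileged daemonset pod could run.
# Assumed hostPath mounts (all hypothetical):
#   PLUGIN_DIR -> .../kubelet-plugins/volume/exec/example.com~cifs on the host
#   HOST_BIN   -> a directory on the host PATH (for mount.cifs)
#   HOST_ETC   -> /etc on the host (for the keytab)
#   KEYTAB_SRC -> the keytab file mounted into this pod from a secret

install_cifs_plugin() {
    src="${SRC_DIR:-/opt/cifs-driver}"
    plugin_dir="${PLUGIN_DIR:-/flexmnt/example.com~cifs}"
    host_bin="${HOST_BIN:-/hostbin}"
    host_etc="${HOST_ETC:-/hostetc}"
    keytab="${KEYTAB_SRC:-/etc/krb5-secret/krb5.keytab}"

    mkdir -p "$plugin_dir"

    # Copy the driver via a temp name and rename, since the kubelet may
    # exec the plugin at any moment.
    cp "$src/cifs" "$plugin_dir/.cifs.tmp"
    chmod 755 "$plugin_dir/.cifs.tmp"
    mv "$plugin_dir/.cifs.tmp" "$plugin_dir/cifs"

    # Drop the mount helper and the keytab onto the host.
    cp "$src/mount.cifs" "$host_bin/mount.cifs"
    chmod 755 "$host_bin/mount.cifs"
    cp "$keytab" "$host_etc/krb5.keytab"
    chmod 600 "$host_etc/krb5.keytab"
}

# The container would then call install_cifs_plugin and sleep forever so
# the daemonset pod stays Running.
```

Since copying the keytab with mode 600 onto the host is the sensitive step, this also shows why the pod needs to be privileged (or at least allowed hostPath writes).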

I just verified that the kernel shipped with RHCOS does include the cifs kernel module, although we don't ship the cifs-utils package, so there is no mount.cifs (but you can still do `mount -t cifs`). The daemonset pod would need to be privileged, though.
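So the flexvolume driver could invoke the kernel mount directly rather than the mount.cifs helper. A tiny helper that only assembles such a command line (server, share, and uid are placeholders; `sec=krb5` and `cruid` are real cifs mount options, but note a Kerberos-secured mount additionally relies on the kernel keyring upcall, cifs.upcall from cifs-utils, being configured on the host):

```shell
#!/bin/sh
# Assemble (but do not execute) a kernel cifs mount command.
# This is only a sketch of the invocation shape, not a tested setup.

build_cifs_mount() {
    # $1 = server, $2 = share, $3 = mountpoint,
    # $4 = uid owning the Kerberos credentials.
    # sec=krb5 asks the kernel for a Kerberos-authenticated session;
    # cruid selects whose credential cache is used for the upcall.
    echo "mount -t cifs //$1/$2 $3 -o sec=krb5,cruid=$4,uid=$4"
}
```

A driver on a node would run the resulting command as root, e.g. `build_cifs_mount fileserver share1 /var/lib/kubelet/... 1000`, which is exactly the plain `mount -t cifs` path that works without cifs-utils.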

