I'm looking at two nodes where one has the problem and the other doesn't, and I have confirmed that their node-config.yaml is the same for both (modulo IP addresses). The generated kubeconfigs for these nodes on the master are also the same (modulo IP addresses and keys/certs).

On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck <alexwauck exosite com> wrote:

Oh, wait. I was looking at the wrong section. The non-root pod has a runAsUser attribute, but the root pod doesn't!

On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck <alexwauck exosite com> wrote:

A pod that IS running as root has this:
    level: s0:c19,c14

Another pod in the same project that is NOT running as root has the exact same securityContext section.

On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman <ccoleman redhat com> wrote:

Do the pods themselves have a user UID set on them? Each pod should have the container "securityContext" field set and have an explicit user ID value set.

On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck <alexwauck exosite com> wrote:

These are completely normal app containers. They are managed by deploy configs. Whether they run as root or not seems to depend on which node they run on: the older nodes seem to run pods as random UIDs, while the newer ones run them as root. Our older nodes have docker-selinux-1.10.3 installed, while the newer ones do not; they only have docker-selinux-1.9.1 available, since the 1.10.3 package seems to have been removed from the CentOS extras repo.

We are running OpenShift 1.2.1, since I haven't had time to upgrade it.

On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman <ccoleman redhat com> wrote:

Are you running them directly (launching a pod)? Or running them under another controller resource?
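For reference, the explicit user ID Clayton is describing would appear in the generated pod spec roughly as below. This is a hypothetical fragment: the pod name, image, and UID value are illustrative, but the field names follow the Kubernetes v1 pod spec, and the SELinux level matches the one quoted earlier in the thread.

```yaml
# Hypothetical pod fragment; name, image, and UID are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    securityContext:
      runAsUser: 1000190000     # UID assigned from the project's allocated range
      seLinuxOptions:
        level: s0:c19,c14       # SELinux level, as quoted above
```

Under the restricted SCC, OpenShift normally injects runAsUser from the project's allocated UID range (the openshift.io/sa.scc.uid-range namespace annotation); if that field is missing from a pod, the container falls back to the image's default user, which is often root.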
On Feb 6, 2017, at 2:00 AM, Alex Wauck <alexwauck exosite com> wrote:

Recently, I began to notice that some of my pods on OpenShift run as root instead of a random UID. There does not seem to be any obvious cause (e.g. SCC). Any idea how this could happen or where to look for clues?