I'm not really sure if that was the case.
I did manage to fix this by manually re-creating the service and
endpoints using yaml like this (the name matches the missing
glusterfs-dynamic endpoints, the IPs are the gluster nodes):

    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-dynamic-xxxxxxx
    spec:
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-dynamic-xxxxxxx
    subsets:
    - addresses:
      - ip: 10.0.0.18
      - ip: 10.0.0.24
      - ip: 10.0.0.8
      ports:
      - port: 1
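In case it helps anyone else hit by this: a quick way to spot which volumes have lost their endpoints is to compare the volume names against the endpoints list. A minimal sketch — the two sample variables are made-up illustrations standing in for real `oc get pv`/`oc get endpoints -o name` output, and it assumes the endpoints are named `glusterfs-dynamic-<volume>`:

```shell
# Sketch: flag volumes whose glusterfs-dynamic-* endpoints object is gone.
# In a real cluster the variables would be populated from, e.g.:
#   pvcs=$(oc get pv -o jsonpath='{.items[*].metadata.name}')
#   endpoints=$(oc get endpoints -o name)
# The sample values below are invented for illustration only.
pvcs="pvc-aaaa pvc-bbbb"
endpoints="endpoints/glusterfs-dynamic-pvc-aaaa"

for p in $pvcs; do
  case "$endpoints" in
    *"glusterfs-dynamic-$p"*) ;;               # endpoints object still exists
    *) echo "missing endpoints for $p" ;;      # candidate to re-create by hand
  esac
done
```

Any volume this prints is one you'd re-create a service/endpoints pair for, as above.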
I have no idea whether this is the "correct" solution, but it
seems to work.
Nor do I have any real idea what caused this. Googling
suggests that this can happen if you delete and re-create the PVC
in quick succession, but I'm pretty sure that was not the case here.
On 07/04/2019 23:51, Nikolas Philips wrote:
Is it possible that these failing PVC/PVs are mounted on
the same compute node(s)? It might be that a node somehow
has issues with the gluster mount points / docker
I have a series of GlusterFS PVCs that were working fine.
For some of these the corresponding service and endpoints have
disappeared: e.g. when I do an `oc get svc` or `oc get endpoints` some of
the PVCs are not listed, and those PVCs cannot then be
mounted to a pod, failing with
> MountVolume.NewMounter initialization failed for volume
> "pvc-3aafc4fa-3e5e-11e9-8522-fa163eca01d7" : endpoints
> "glusterfs-dynamic-xxxxxxx" not found
Any thoughts on:
1. What might have caused this?
2. How to re-create the service and endpoints?
users mailing list
users@lists.openshift.redhat.com