Unable to manually reclaim an existing pv

Hi folks,

I'm not able to manually reclaim a pv and would like to know what I'm doing wrong.
My setup is OpenShift 3.9 with GlusterFS installed as part of the OpenShift installation.

The inventory setup creates a storage class for GlusterFS and also makes it the default.

Since that setup's default reclaim policy is Delete, and I want to keep the PV when I delete the PVC, I created a new storage class as follows:

# storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-retain
provisioner: kubernetes.io/glusterfs
parameters:
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: glusterfs
reclaimPolicy: Retain

If I make a deployment requesting a volume via a PVC, it works well and the PV gets bound as expected:

# deployment
- kind: DeploymentConfig
  apiVersion: v1
  # (snipped)
          volumeMounts:
            - name: "jenkins-data"
              mountPath: "/var/lib/jenkins"
        volumes:
          - name: "jenkins-data"
            persistentVolumeClaim:
              claimName: "jenkins-data"

- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: "jenkins-data"
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 30Gi
    storageClassName: glusterfs-retain

Now, if I delete the PVC and try to reclaim that PV by creating a new deployment that refers to it, I get unexpected behaviour: a new PVC is created, but it provisions a new PV with the same name, while the original PV stays Released and never becomes Available.

How do I manually make it Available again? According to this I need to manually clean up the data on the associated storage asset? How am I supposed to do that when the volume was dynamically provisioned by GlusterFS? I'm pretty sure it must be much simpler than that.
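One workaround I've seen suggested (I haven't confirmed it's safe for a Heketi-provisioned volume, so please correct me if this is wrong) is to clear the claimRef on the released PV so it goes back to Available and can be bound by a new PVC:

```
# remove the stale claimRef from the released PV (pv-name is a placeholder)
oc patch pv <pv-name> --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```

Is that the recommended way, or is there something cleaner?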

Any advice?

Kind regards,
Carlos M.