
Re: rbd volume locked by other nodes



Hrm - so it sounds like the pod's volume didn't get torn down
correctly, which left the volume lock held.  Copying some folks who
might know which part of the logs to look at.
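
For reference, the manual cleanup Diego describes below boils down to
roughly this - just a sketch, assuming the image lives in the default
'rbd' pool and that the node named in the stale lock no longer actually
has the image mapped (if it still does, force-removing the lock risks
two nodes writing to the same image):

    # from a machine with Ceph admin credentials
    $ rbd status cassandra        # watchers = clients that still have the image mapped
    $ rbd lock list cassandra     # shows the locker (client.<id>) and the lock ID
    $ rbd lock remove cassandra kubelet_lock_magic_<node-hostname> client.<id>

    # on the node that held the stale lock, confirm nothing is still mapped
    $ rbd showmapped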

On Tue, Jan 19, 2016 at 7:15 PM, Diego Spinola Castro
<spinolacastro@gmail.com> wrote:
> Hi, this is origin 1.1.0.1-0.git.7334.2c6ff4b and a ceph cluster for block
> storage.
>
> It happens a lot with my cassandra pod, but I'd like to check whether it is a
> bug or something I'm doing wrong.
>
> When I delete the pod that uses a Ceph PV, it sometimes fails to start again.
> Looking at the pod events I found:
>
> FailedSync Error syncing pod, skipping: rbd: image cassandra is locked by
> other nodes
>
>
> I looked it up and found the lock on the RBD side; it was indeed owned by a
> different node, so as soon as I removed the lock the pod was able to start.
>
> $ rbd lock list cassandra
>
> There is 1 exclusive lock on this image.
> Locker       ID                         Address
> client.10197 kubelet_lock_magic_nodebr0 10.0.2.5:0/1005447
>
> $ rbd lock remove cassandra kubelet_lock_magic_nodebr0 client.10197
>
>
> Does anybody else have this issue?
>

