
Re: [EXTERNAL] Re: garbage collection docker metadata



Thanks all.

docker-gc looks like a viable solution for the interim. Cheers

On Sat, 10 Jun 2017 at 00:57 Alex Creek <therealcreek99 gmail com> wrote:

I used to use spotify's docker-gc too on build nodes running docker, but there's a storage leak somewhere between docker (1.12.6) and the thin LV. After a while the thin LV wouldn't shrink anymore after purging images/containers. I looked into thin_trim from the device-mapper-persistent-data pkg, but it wasn't trivial to set up the nodes to support it. I settled on cleaning up by destroying the docker dir and the LVM volumes and recreating them with docker-storage-setup.
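For anyone wanting to script that reset, a rough sketch of what it might look like (the device and volume group names below are assumptions, adjust for your nodes):

    systemctl stop docker
    # throw away docker's local state
    rm -rf /var/lib/docker
    # remove the thin pool and its volume group (assumes VG "docker-vg" on /dev/vdb)
    lvremove -y docker-vg/docker-pool
    vgremove -y docker-vg
    pvremove -y /dev/vdb
    # recreate the thin pool from /etc/sysconfig/docker-storage-setup, then restart
    docker-storage-setup
    systemctl start docker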

 

 

Alex

 

 

From: <users-bounces lists openshift redhat com> on behalf of Aleksandar Lazic <aleks me2digital eu>
Organization: ME2Digital e. U.
Date: Friday, June 9, 2017 at 10:20 AM
To: Mateus Caruccio <mateus caruccio getupcloud com>, Gary Franczyk <Gary Franczyk availity com>
Cc: "users lists openshift redhat com" <users lists openshift redhat com>
Subject: Re: [EXTERNAL] Re: garbage collection docker metadata

 

Hi Mateus Caruccio.

on Friday, 09 June 2017 at 14:50 was written:

I do basically the same in a node cronjob: docker rm $(docker images -q)
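Note that docker rm expects container IDs, so feeding it image IDs from docker images -q will not actually remove images; docker rmi is the image-removal command. A slightly safer sketch of such a cronjob, only touching exited containers and dangling images (illustrative, not necessarily what is run above):

    # remove exited containers, then untagged (dangling) images; errors are ignored
    docker rm $(docker ps -aq -f status=exited) 2>/dev/null
    docker rmi $(docker images -q -f dangling=true) 2>/dev/null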


We do the same.

I think, like Andrew, that the Kubernetes GC does not take the metadata part of the thinpool into account.

Maybe there is already an issue open in Kubernetes for this.

Regards
Aleks

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-06-09 9:30 GMT-03:00 Gary Franczyk <Gary Franczyk availity com>:

I regularly run an app named “docker-gc” to clean up unused images and containers.

https://github.com/spotify/docker-gc
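The project README shows running it as a throwaway container; something along these lines (mounts per the docker-gc docs, exact flags may vary by version):

    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /etc:/etc:ro \
      spotify/docker-gc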


Gary Franczyk
Senior Unix Administrator, Infrastructure

Availity | 10752 Deerwood Park Blvd S. Ste 110, Jacksonville FL 32256
W 904.470.4953 | M 561.313.2866
Gary Franczyk availity com

From: <users-bounces lists openshift redhat com> on behalf of Andrew Lau <andrew andrewklau com>
Date: Friday, June 9, 2017 at 8:27 AM
To: Fernando Lozano <flozano redhat com>
Cc: "users lists openshift redhat com" <users lists openshift redhat com>
Subject: [EXTERNAL] Re: garbage collection docker metadata



The error was from a different node.

`docker info` reports plenty of data storage free. Manually removing images from the node has always fixed the metadata storage issue, which is why I was asking whether garbage collection takes metadata into account or only data storage.
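For anyone checking this on their own nodes, the devicemapper data and metadata usage can be compared with something like this (assuming a devicemapper thin pool; data_percent and metadata_percent are the standard lvs columns):

    docker info 2>/dev/null | grep -E 'Data Space|Metadata Space'
    lvs -o lv_name,data_percent,metadata_percent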

On Fri, 9 Jun 2017 at 22:11 Fernando Lozano <flozano redhat com> wrote:

If the Docker GC complains images are in use and you get out of disk space errors, I'd assume you need more space for docker storage.

On Fri, Jun 9, 2017 at 8:37 AM, Andrew Lau <andrew andrewklau com> wrote:


On Fri, 9 Jun 2017 at 21:10 Aleksandar Lazic <aleks me2digital eu> wrote:

Hi Andrew Lau.

on Friday, 09 June 2017 at 12:35 was written:

Does garbage collection get triggered when the docker metadata storage is full? Every few days I see some nodes fail to create new containers due to the docker metadata storage being full. Docker data storage has plenty of capacity.

I've been cleaning out the images manually as the garbage collection doesn't seem to trigger.


Have you tried changing the default settings?

https://docs.openshift.org/latest/admin_guide/garbage_collection.html#image-garbage-collection

How was the lvm thinpool created?
https://docs.openshift.org/latest/install_config/install/host_preparation.html#configuring-docker-storage

docker-storage-setup normally allocates 0.1% for metadata, as described in this line:
https://github.com/projectatomic/container-storage-setup/blob/master/container-storage-setup.sh#L380


Garbage collection set to 80 high and 70 low.
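On an OpenShift 3.x node those thresholds would normally live under kubeletArguments in node-config.yaml; a sketch, with the file path and exact layout as assumptions:

    # /etc/origin/node/node-config.yaml (excerpt)
    kubeletArguments:
      image-gc-high-threshold:
        - "80"
      image-gc-low-threshold:
        - "70"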

Garbage collection is running; I see it complain about images in use on other nodes:

ImageGCFailed: wanted to free 3289487769, but freed 3466304680 space with errors in image deletion: [Error response from daemon: {"message":"conflict: unable to delete 96f1d6e26029 (cannot be forced) - image is being used by running container 3ceb5410db59"}, Error response from daemon: {"message":"conflict: unable to delete 4e390ce4fc8b (cannot be forced) - image is being used by running container 0040546d8f73"}, Error response from daemon: {"message":"conflict: unable to delete 60b78ced07a8 (cannot be forced) - image has dependent child images"}, Error response from daemon: {"message":"conflict: unable to delete 2aebdcf9297e (cannot be forced) - image has dependent child images"}]

docker-storage-setup was run with a 99% data volume. I'm wondering if maybe only the data volume is watched.
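For reference, in docker-storage-setup terms that would roughly correspond to a config like the following (illustrative values only; DEVS, VG and DATA_SIZE are the container-storage-setup option names):

    # /etc/sysconfig/docker-storage-setup
    DEVS=/dev/vdb
    VG=docker-vg
    DATA_SIZE=99%FREE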




--
Best Regards
Aleks

 








--
Best Regards
Aleks

_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
