[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: Low Disk Watermark

Hi Luke, 

I am using an additional block device (Option A on the link you provided) for docker storage, but a local disk volume for ES storage. 

I suppose I underestimated how aggressively my devs print debug logs. 

I've just configured Curator to clear some old logs, and will probably create dedicated volumes for ES storage. 
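For reference, the Curator action file I'm using is along these lines (the index prefix and the 14-day retention here are just what I picked; adjust to taste):

```yaml
actions:
  1:
    action: delete_indices
    description: "Delete logging indices older than 14 days"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logging-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 14
```

Run with `curator --config config.yml action.yml` on a schedule (cron or similar).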


Systems Engineer

VSee: frank vsee com | Cell: +65 9338 0035

On Thu, Sep 1, 2016 at 5:26 AM, Luke Meyer <lmeyer redhat com> wrote:
Looks like you're using your root partition for docker volume storage (and thus Elasticsearch storage). That is the default configuration, but not a recommended one; we recommend dedicating storage specifically to docker: https://docs.openshift.org/latest/install_config/install/prerequisites.html#configuring-docker-storage
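For a dedicated block device, the setup from that page boils down to a couple of lines in /etc/sysconfig/docker-storage-setup before running docker-storage-setup (the device name here is just an example; use whatever extra disk your nodes have):

```
# /etc/sysconfig/docker-storage-setup (sketch; /dev/vdb is an assumed device name)
DEVS=/dev/vdb
VG=docker-vg
```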

Also, ES data will keep getting blown away if you don't give it a persistent volume, but hopefully that was already evident to you.

On Mon, Aug 29, 2016 at 9:55 PM, Frank Liauw <frank vsee com> wrote:
Hi All, 

My Origin cluster is pretty new, and I happened to spot the following log entry from Elasticsearch in Kibana (I'm using OpenShift's logging stack): 

[2016-08-30 01:44:25,997][INFO ][cluster.routing.allocation.decider] [Quicksilver] low disk watermark [15%] exceeded on [t2l6Oz8uT-WS8Fa7S7jzfQ][Quicksilver] free: 1.5gb[11.4%], replicas will not be assigned to this node

df on the node shows the following: 

/dev/mapper/centos_node3-root   14G   13G  1.6G  89% /
tmpfs                          7.8G  4.0K  7.8G   1% /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/builder-dockercfg-3z4qk-push
tmpfs                          7.8G  4.0K  7.8G   1% /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/sshsecret-source
tmpfs                          7.8G   12K  7.8G   1% /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/builder-token-znk7k
tmpfs                          7.8G  4.0K  7.8G   1% 
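As a sanity check, the watermark math works out from the df numbers above (the [15%] in the log message is the free-space form of Elasticsearch's default low watermark, 85% used):

```python
# Sketch: reproduce Elasticsearch's low disk watermark check from the
# df output above. Sizes are taken from /dev/mapper/centos_node3-root.
total_gb = 14.0  # partition size
free_gb = 1.6    # free space reported by df

free_pct = free_gb / total_gb * 100
low_watermark_free_pct = 15.0  # default low watermark: 85% used

print(f"free: {free_pct:.1f}%")  # ~11.4%, matching the log entry
print("replicas blocked:", free_pct < low_watermark_free_pct)
```

With 11.4% free against a 15% threshold, ES stops assigning replica shards to this node, which is exactly what the INFO line reports.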

This appears to be the case on one of my other nodes as well (with a slightly different tmpfs size of 5.8G). 

Is this normal? 

Systems Engineer

users mailing list
users lists openshift redhat com
