
Re: HPA not getting metrics



Hello!

I ran into a similar issue/confusion. Apparently the openshift-metrics Ansible install, which creates the openshift-infra project, isn't enough for the HPA to work: https://docs.okd.io/3.11/dev_guide/pod_autoscaling.html#req-for-using-hpas
You actually need to install the Kubernetes metrics-server as well, which creates a new project called openshift-metrics-server in your cluster.
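
To verify the install afterwards, you can check that the metrics-server pod is running and that pod metrics come back (a sketch; the project name and `oc adm top` availability depend on your cluster version):

```shell
# Check that the metrics-server deployed by the playbook is up
oc get pods -n openshift-metrics-server

# If the resource metrics API is being served, this should return per-pod usage
oc adm top pods -n <your-project>
```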

After that you can create and use HPA objects, but only with API versions autoscaling/v1 and autoscaling/v2beta1 (v2beta2 is not available). The API reference is on the Kubernetes documentation site: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#horizontalpodautoscaler-v2beta1-autoscaling
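For reference, a minimal HPA manifest in the v2beta1 schema might look like this (a sketch; the name "myapp" is a placeholder):

```yaml
# Hypothetical example: an HPA targeting a DeploymentConfig via the
# autoscaling/v2beta1 schema, which v2beta1 expresses with
# targetAverageUtilization rather than v2beta2's target object.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 75
```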

On Mon, 1 Jul 2019 at 17:56, Robert Dahlem <robert dahlem gmx net> wrote:
Hi,

I'm on CentOS 7.6, OpenShift 3.11 and Ansible 2.6.14.

I am trying to get HPAs to work. What I have done so far:

# oc login -u system:admin ...
# oc project openshift-infra
# PLAYBOOKS=/usr/share/ansible/openshift-ansible/playbooks
# ansible-playbook $PLAYBOOKS/openshift-metrics/config.yml \
  -e openshift_metrics_install_metrics=True \
  -e openshift_metrics_hawkular_hostname=hawkular-metrics.$MYDOMAIN

# oc login -u dev ...
# oc project $MYSPACE
# oc autoscale dc/$MYAPP --min 1 --max 5 --cpu-percent=75
# oc set resources dc $MYAPP --requests=cpu=400m
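
(The `oc autoscale` invocation above should generate an object equivalent to this sketch, using the autoscaling/v1 schema, which only exposes CPU utilization:)

```yaml
# Sketch of the HPA created by:
#   oc autoscale dc/$MYAPP --min 1 --max 5 --cpu-percent=75
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myapp
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 75
```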

I can browse to https://hawkular-metrics.$MYDOMAIN/hawkular/metrics.
Also, in the Web Console I can see the metrics (87 MiB memory,
0.002 cores CPU and 0.5 KiB/s network).

Unfortunately the HPA seems not to be able to fetch the metrics:

================================================================================
# oc describe hpa
Name:                                                  $MYAPP
Namespace:                                             $MYSPACE
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 01 Jul 2019 14:57:49 +0200
Reference:                                             DeploymentConfig/$MYAPP
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 75%
Min replicas:                                          1
Max replicas:                                          5
DeploymentConfig pods:                                 1 current / 0 desired
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
  Type     Reason                        Age                 From                       Message
  ----     ------                        ----                ----                       -------
  Warning  FailedComputeMetricsReplicas  40m (x13 over 46m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       1m (x91 over 46m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
================================================================================
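
The `get pods.metrics.k8s.io` error suggests the resource metrics API group is not registered with the API aggregator at all (Heapster/Hawkular do not serve metrics.k8s.io; the metrics-server does). Two checks that may help narrow this down (a sketch; run as a cluster admin):

```shell
# Is there an APIService for metrics.k8s.io, and is it Available?
oc get apiservice v1beta1.metrics.k8s.io

# Can the aggregated resource metrics API actually be reached?
oc get --raw /apis/metrics.k8s.io/v1beta1/pods
```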

I can ping hawkular-metrics.$MYDOMAIN and I can
        curl --insecure https://hawkular-metrics.$MYDOMAIN/hawkular/metrics
from the Master and the Node.

Additionally I found this (timestamps deleted for brevity):

# oc logs heapster-g48vb
================================================================================
The endpoint check has successfully completed.

Starting Heapster with the following arguments:
--source=kubernetes.summary_api:${MASTER_URL}?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250
--tls_cert=/heapster-certs/tls.crt --tls_key=/heapster-certs/tls.key
--tls_client_ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
--allowed_users=system:master-proxy
--metric_resolution=30s
--sink=hawkular:https://hawkular-metrics:443?tenant=_system&labelToTenant=pod_namespace&labelNodeId=nodename&caCert=/hawkular-metrics-certs/tls.crt&user=hawkular&pass=$HEAPSTER_PASSWORD&filter=label(container_name:^system.slice.*|^user.slice)&concurrencyLimit=5

heapster.go:72] heapster
--source=kubernetes.summary_api:${MASTER_URL}?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250
--tls_cert=/heapster-certs/tls.crt --tls_key=/heapster-certs/tls.key
--tls_client_ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
--allowed_users=system:master-proxy
--metric_resolution=30s
--sink=hawkular:https://hawkular-metrics:443?tenant=_system&labelToTenant=pod_namespace&labelNodeId=nodename&caCert=/hawkular-metrics-certs/tls.crt&user=hawkular&pass=$HEAPSTER_PASSWORD&filter=label(container_name:^system.slice.*|^user.slice)&concurrencyLimit=5

heapster.go:73] Heapster version v1.3.0

configs.go:61] Using Kubernetes client with master
"https://172.30.0.1:443" and version v1

configs.go:62] Using kubelet port 10250

driver.go:341] Initialised Hawkular Sink with parameters {_system
https://hawkular-metrics:443?tenant=_system&labelToTenant=pod_namespace&labelNodeId=nodename&caCert=/hawkular-metrics-certs/tls.crt&user=hawkular&pass=XC3Ky4WL9mXWcdQ&filter=label(container_name:^system.slice.*|^user.slice)&concurrencyLimit=5
0xc42006de40    5 }

heapster.go:196] Starting with Hawkular-Metrics Sink

heapster.go:196] Starting with Metric Sink

heapster.go:106] Starting heapster on port 8082

reflector.go:323] k8s.io/heapster/metrics/processors/namespace_based_enricher.go:84: watch of *v1.Namespace ended with: The resourceVersion for the provided watch is too old.
[repeats ...]
================================================================================

Any idea how to debug this?

Kind regards,
Robert


_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
