Scaling Kubernetes workloads using custom Prometheus metrics

Chaithanya Kopparthi
Jul 28, 2022

In the world of microservices, the ability to scale workloads is one of the most prominent advantages. Autoscaling ensures that an application meets its SLA at any given time without trading off cost or performance.

To scale an application efficiently, the CPU and memory metrics that metrics-server provides out of the box to the Kube-api server might not be sufficient.

Many monitoring tools constantly collect metrics from Kubernetes workloads and monitor their performance. These metrics can be utilized to make better scaling decisions.

In this blog, we use the Prometheus adapter to read metrics from the Prometheus endpoint and feed them to the Kube-api server. The HPA will then use this data to make scaling decisions.

Installing the Prometheus adapter:

We use Helm 3 to install the Prometheus adapter. First, add the required Helm repo:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update

After adding the repo, dump the chart's default values into a file so they can be customized:

helm show values prometheus-community/prometheus-adapter > values.yaml
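
The adapter also needs to know where Prometheus is reachable. In the same values.yaml, the prometheus section points the adapter at your Prometheus service; the URL and port below are only examples for a typical in-cluster install, so adjust them to your setup:

prometheus:
  # Example values; replace with the service URL and port of your Prometheus
  url: http://prometheus-server.monitoring.svc
  port: 80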

The Prometheus adapter ships with default rules that read the existing metrics and feed them to the Kube-api server. We can also add custom metrics with the below changes to the values.yaml file of the Helm chart.

rules:
  default: false
  custom:
    - metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[1m])) by (<<.GroupBy>>)
      name:
        as: ""
        matches: requests_seconds_count
      resources:
        overrides:
          namespace:
            resource: namespace
          pod:
            resource: pod
      seriesFilters: []
      seriesQuery: '{__name__=~"requests_seconds_count",container!="POD",namespace!="",pod!=""}'
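
To make the templating concrete: when the HPA asks for this metric for pods in the dev namespace, the adapter substitutes <<.Series>>, <<.LabelMatchers>>, and <<.GroupBy>> and sends Prometheus a query along these lines (the pod name is just an illustration):

sum(rate(requests_seconds_count{namespace="dev",pod="nginx-7994b8ffd6-x9gcb",container!="POD"}[1m])) by (pod)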

metricsQuery: the Prometheus query that the adapter runs to gather the metric data.

matches: a regex that is matched against the metric names in Prometheus; only the metrics that match are selected. This allows one to filter out unwanted metrics.

In the above example, we collect data only for the metric requests_seconds_count, compute its per-second rate over a 1m window, and feed the result to the Kube-api server.

Once the values.yaml is ready, install the Prometheus adapter using the below command:

helm install prometheus-adapter prometheus-community/prometheus-adapter -f values.yaml 
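
Before moving on, it is worth checking that the adapter registered itself with the Kube-api server. A quick sanity check (the pod label below assumes the chart's default labels):

kubectl get pods -l app.kubernetes.io/name=prometheus-adapter
kubectl get apiservice v1beta1.custom.metrics.k8s.io

The APIService should report Available once the adapter pod is up.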

Once the workloads are deployed, we can query the metrics using the following command:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/dev/pods/*/requests_seconds_count" | jq '.items[1]'
{
  "describedObject": {
    "kind": "Pod",
    "namespace": "default",
    "name": "nginx-7994b8ffd6-x9gcb",
    "apiVersion": "/v1"
  },
  "metricName": "requests_seconds_count",
  "timestamp": "2022-05-30T14:53:45Z",
  "value": "0",
  "selector": null
}
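
If the query returns nothing, listing every metric the adapter currently exposes helps narrow down whether the custom rule matched at all:

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq '.resources[].name'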

Now that the required metrics are ready, we need to create the HPA for the workload. Below is the YAML definition for the HPA.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: Pods
      pods:
        metric:
          name: requests_seconds_count
        target:
          type: AverageValue
          averageValue: "100"
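
Assuming the manifest above is saved as nginx-hpa.yaml (the filename is arbitrary), apply it and watch the autoscaler pick up the metric:

kubectl apply -f nginx-hpa.yaml
kubectl get hpa nginx-hpa --watch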

In the above sample HPA, we tell the autoscaler to scale the nginx deployment based on requests_seconds_count, targeting an average of 100 per pod. The HPA averages the metric across all the pods in the deployment; when that average rises above 100, it adds replicas (up to maxReplicas) in proportion to how far the metric overshoots the target.
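
Under the hood, the HPA uses the standard scaling formula (this is core Kubernetes behavior, not specific to custom metrics):

desiredReplicas = ceil(currentReplicas * currentAverageValue / targetAverageValue)

For example, if 3 replicas are averaging 250 requests each, the HPA computes ceil(3 * 250 / 100) = 8, which is then capped at maxReplicas, so the deployment scales to 6 replicas.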
