Introduction
Recently I had an opportunity to look into deploying PostgreSQL and pgpool on Kubernetes. The deployment itself is straightforward, but I also needed to obtain metrics such as the CPU and memory usage of each deployed pod. There are several ways to do this, but today I am sharing mine, which uses k8s's native metrics server plus a cronjob.
This blog assumes you have already installed Kubernetes and deployed some services (in my case, both pgpool and PostgreSQL are running). You may need to refer to other posts for how to deploy them on Kubernetes.
initial setup
We have a pgpool pod running inside a deployment, a standalone pod running PostgreSQL as the primary node, and another deployment set up to scale replicas up or down as needed (initially scaled to 2 replica pods). All of these are deployed in a simple k8s cluster, minikube, on my machine. We will be adding a cronjob that periodically checks the CPU usage of the primary and replica nodes and performs custom tasks should a threshold be exceeded. They can be visualized as:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-primary 1/1 Running 0 1m
postgres-replica-deployment-69f995884b-72v6g 1/1 Running 0 9s
postgres-replica-deployment-69f995884b-kkkn5 1/1 Running 0 9s
pgpool-59b77dbc76-6nr45 1/1 Running 0 1m
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
postgres-replica-deployment 2/2 2 2 16m
pgpool 1/1 1 1 1m
set up the k8s metrics server
To get metric data from k8s, we need to set up the metrics server first:
$ wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
For demonstration purposes, I add one line to components.yaml to disable strict TLS checking. Simply add --kubelet-insecure-tls under the container args in the pod template for metrics-server:
...
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls   # <====== line added here to disable TLS checks
...
Apply the modified manifest with kubectl apply -f components.yaml, then check that the metrics server is installed:
$ kubectl get deployment metrics-server -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 15d
Then we should be able to get metrics from each pod:
$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
postgres-primary 9m 63Mi
postgres-replica-deployment-69f995884b-72v6g 7m 48Mi
postgres-replica-deployment-69f995884b-kkkn5 7m 46Mi
pgpool-59b77dbc76-6nr45 2m 143Mi
Note that CPU usage is measured in millicores; each physical CPU core equals 1000 millicores. For example, my PC has an 8-core CPU, so I have 8000 millicores available in total.
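The core-to-millicore conversion can be sanity-checked in shell (the 8-core count below is just my machine's example):

```shell
# 1 physical CPU core = 1000 millicores
cores=8
total_millicores=$((cores * 1000))
echo "$total_millicores"   # prints 8000
```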
allocate max CPU usage per pod
To get a meaningful reading of CPU usage, we need to specify the maximum CPU a pod can use. I find a value of 400m suitable for my needs. Follow this example manifest to set the CPU limit to 400m:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: myimage:latest
    resources:
      limits:
        cpu: 400m
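Optionally, a requests value can be set alongside the limit so the scheduler reserves CPU for the pod; the 200m figure below is illustrative and not from the original manifest:

```yaml
    resources:
      requests:
        cpu: 200m   # CPU reserved at scheduling time (illustrative value)
      limits:
        cpu: 400m   # hard cap; this is the 100% baseline used by the monitoring script
```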
create a cpumon cronjob for CPU monitoring
Now we have everything needed to define a CPU monitoring cronjob on k8s, called cpumon in this example. We will deploy it next:
1. Prepare a config file for the running k8s cluster (minikube), plus the TLS certificate, CA certificate and private key files. All of these are needed for cpumon to communicate with the k8s controller to get metrics information. You can find the config file for your cluster with this command:
$ kubectl config view
2. Create two new folders called scripts and specs.
3. Save the output of the command above into a file called my-cluster.yml and put it in the specs folder.
4. Make a copy of the CA certificate, client certificate and key as specified in the output and put them in the specs folder as well. In most cases they are located at /home/$USER/.minikube/ca.crt, /home/$USER/.minikube/profiles/minikube/client.crt, and /home/$USER/.minikube/profiles/minikube/client.key.
5. Modify the my-cluster.yml file and change all path prefixes to /k8sconfig. For example:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:35545
  name: kind-kind
- cluster:
    certificate-authority: /k8sconfig/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Apr 2023 10:37:33 PDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.49.2:8443
  name: minikube
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Thu, 20 Apr 2023 10:37:33 PDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: kind-kind
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: minikube
  user:
    client-certificate: /k8sconfig/client.crt
    client-key: /k8sconfig/client.key
6. Prepare a simple script that will run when cron triggers and place it in the scripts folder. Remember to run chmod 755 on the script. For example:
#!/bin/bash
# this is the max cpu assigned to each pod
total_millicore=400
echo "cpumon: total cpu millicore = $total_millicore"
for pod in $(kubectl get pods | grep "primary\|replica" | awk '{print $1}'); do
pod_millicore=$(kubectl top pods $pod | awk 'NR!=1{print $2+0}')
pod_cpu_percent=$(awk "BEGIN {printf \"%.2f\", ${pod_millicore}/${total_millicore} * 100}")
echo "cpumon: $pod uses $pod_cpu_percent% CPU (${pod_millicore}/${total_millicore} millicores used)"
done
exit 0
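The percentage arithmetic in the script can be checked in isolation. With a hypothetical reading of 8 millicores against the 400m cap:

```shell
total_millicore=400
pod_millicore=8   # hypothetical value, normally parsed from `kubectl top pods`
pod_cpu_percent=$(awk "BEGIN {printf \"%.2f\", ${pod_millicore}/${total_millicore} * 100}")
echo "$pod_cpu_percent"   # prints 2.00
```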
7. Create configMap objects for scripts and specs:
$ kubectl create configmap cpumon-scripts --from-file=./scripts
$ kubectl create configmap k8s-config --from-file=./specs
8. Create the cpumon cronjob with the example manifest below. cpumon runs inside the bitnami/kubectl image, which contains the kubectl utility to access pod information from the k8s controller. The files in specs and scripts are passed into the pod as volumes via configMap. Note that the environment variable KUBECONFIG has to point to the my-cluster.yml that we just created.
Save this manifest as cpumon.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cpumon
spec:
  schedule: "* * * * *"   # run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cpumon
            image: bitnami/kubectl
            command: ["/cpumon/cpumon.sh"]
            env:
            - name: KUBECONFIG
              value: "/k8sconfig/my-cluster.yml"
            volumeMounts:
            - name: cpumon-scripts
              mountPath: "/cpumon"
            - name: k8s-config
              mountPath: "/k8sconfig"
          volumes:
          - name: cpumon-scripts
            configMap:
              name: cpumon-scripts
              defaultMode: 0755
          - name: k8s-config
            configMap:
              name: k8s-config
          restartPolicy: Never
      ttlSecondsAfterFinished: 1800
9. Deploy cpumon with:
$ kubectl apply -f cpumon.yml
examine cronjob runs
Every minute, a pod is started to run the cpumon.sh script and then exits. We can see the execution history in the pod status below:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-primary 1/1 Running 0 142m
cpumon-28075474-hjx4l 0/1 Completed 0 2m42s
cpumon-28075475-qw9b7 0/1 Completed 0 102s
cpumon-28075476-8nlgv 0/1 Completed 0 42s
postgres-replica-deployment-69f995884b-72v6g 1/1 Running 0 99m
postgres-replica-deployment-69f995884b-kkkn5 1/1 Running 0 99m
pgpool-59b77dbc76-6nr45 1/1 Running 0 127m
We can pick any of these cpumon instances and check its log output.
$ kubectl logs cpumon-28075476-8nlgv
cpumon: total cpu millicore = 400
cpumon: postgres-primary uses 2.00% CPU (8/400 millicores used)
cpumon: postgres-replica-deployment-69f995884b-72v6g uses 2.00% CPU (8/400 millicores used)
cpumon: postgres-replica-deployment-69f995884b-kkkn5 uses 2.00% CPU (8/400 millicores used)
With this basic infrastructure set up, we can further customize cpumon.sh to perform certain actions when a threshold is exceeded, for example scaling the replica deployment up or down. That will be the topic for next time.
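As a rough sketch of such a customization (the 80% threshold and the scale target below are illustrative, not from this post):

```shell
threshold=80
pod_cpu_percent=85.00   # hypothetical reading produced by the loop in cpumon.sh
# bash cannot compare floating-point values, so delegate the comparison to awk
if awk "BEGIN {exit !($pod_cpu_percent > $threshold)}"; then
  echo "cpumon: CPU threshold exceeded, scaling up replicas"
  # e.g.: kubectl scale deployment postgres-replica-deployment --replicas=3
fi
```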
Cary is a Senior Software Developer in HighGo Software Canada with 8 years of industrial experience developing innovative software solutions in C/C++ in the field of smart grid & metering prior to joining HighGo. He holds a bachelor's degree in Electrical Engineering from the University of British Columbia (UBC) in Vancouver in 2012 and has extensive hands-on experience in technologies such as: Advanced Networking, Network & Data security, Smart Metering Innovations, deployment management with Docker, Software Engineering Lifecycle, scalability, authentication, cryptography, PostgreSQL & non-relational database, web services, firewalls, embedded systems, RTOS, ARM, PKI, Cisco equipment, functional and Architecture Design.