What is a workload?
A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods.
Why workload resources?
Pods are not meant to be managed one by one; instead, you use workload resources that manage a set of Pods on your behalf.
These resources configure controllers that make sure the right number of the right kind of Pod is running, to match the state you specified.
Kubernetes workload resources:
Deployments
A Deployment provides declarative updates for Pods and ReplicaSets.
Use cases
Create a Deployment to roll out a ReplicaSet
Declare the new state of the Pods
Roll back to an earlier Deployment revision
Scale up the Deployment to handle more load
Pause the rollout of the Deployment
Use the status of the Deployment
Create a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
The above YAML file creates a Deployment with 3 replicas.
Start minikube
minikube start
Create the deployment
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
Check if the Deployment was created.
kubectl get deployments
Note: The number of desired replicas is 3 in nginx-deployment, as specified in the YAML file. Here I have the YAML file saved locally.
- NAME lists the names of the Deployments in the namespace.
- READY displays how many replicas of the application are available to your users. It follows the pattern ready/desired.
- UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
- AVAILABLE displays how many replicas of the application are available to your users.
- AGE displays the amount of time that the application has been running.
To check the Deployment rollout status
kubectl rollout status deployment/nginx-deployment
To see the ReplicaSet (rs) created by the Deployment, run
kubectl get rs
To see the Pods created by the Deployment
kubectl get pods
To see the Pod labels
kubectl get pods --show-labels
Note: The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.
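To see this label in practice, you can list the ReplicaSets with their labels (a quick check, assuming the nginx-deployment example above):
kubectl get rs --show-labels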
Update Deployment
- To update the nginx version in the Deployment
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
or
kubectl edit deployment/nginx-deployment
Run the command below to check whether the changes have been applied
kubectl describe deployments
or
kubectl describe deployment <name>
Rollback Deployment
To see the rollout status
kubectl rollout status deployment/nginx-deployment
Output
deployment "nginx-deployment" successfully rolled out
Scaling a Deployment
Scale deployment using the below command
kubectl scale deployment/nginx-deployment --replicas=10
or use the autoscale option as below
kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80
Output:
deployment.apps/nginx-deployment scaled
Check the scaling with
kubectl get deployment
or
kubectl get deploy
Pause a Deployment
Pause by running the following command:
kubectl rollout pause deployment/nginx-deployment
The output is similar to this:
deployment.apps/nginx-deployment paused
Then edit the Deployment, for example by updating its image; while the rollout is paused, the change does not trigger a new rollout.
Check the status of the Deployment, as sketched below.
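A minimal sketch of this check, assuming the nginx-deployment example from above:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
kubectl rollout history deployment/nginx-deployment
kubectl get rs
No new revision or ReplicaSet should appear until the Deployment is resumed.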
Resume the Deployment
Resume the Deployment rollout
kubectl rollout resume deployment/nginx-deployment
Deployment status
A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.
ReplicaSet
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
It thereby ensures the availability of a specified number of identical Pods.
Use-Case
A ReplicaSet can also be a target for Horizontal Pod Autoscalers (HPA).
Scaling ReplicaSets
When to use a ReplicaSet
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features.
YAML
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
- Create ReplicaSet
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
- View ReplicaSet
kubectl get rs
- Check state of ReplicaSet
kubectl describe rs/frontend
- Check for Pods
kubectl get pods
- To verify which ReplicaSet owns a Pod, check the Pod's ownerReferences (the Pod name below will differ in your cluster)
kubectl get pods frontend-b2zdv -o yaml
Deleting ReplicaSets
Deleting a ReplicaSet and its Pods: kubectl delete (or the API) removes the ReplicaSet and, by default, the garbage collector deletes its dependent Pods.
Deleting just the ReplicaSet: pass --cascade=orphan so the Pods keep running without an owner.
Isolating Pods from a ReplicaSet: change a Pod's labels so they no longer match the selector; the ReplicaSet then replaces it with a new Pod (see the sketch below for the first two options).
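A minimal sketch of the first two options with kubectl (assuming the frontend ReplicaSet above and a recent kubectl version):
# delete the ReplicaSet and its Pods
kubectl delete rs frontend
# delete only the ReplicaSet, leaving its Pods running without an owner
kubectl delete rs frontend --cascade=orphan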
StatefulSet
StatefulSet is the workload API object used to manage stateful applications.
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
A StatefulSet manages Pods that are based on an identical container spec and also maintains a sticky identity for each of its Pods.
These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Why use StatefulSets
StatefulSets are valuable for applications that require one or more of the following.
Stable, unique network identifiers.
Stable, persistent storage.
Ordered, graceful deployment and scaling.
Ordered, automated rolling updates.
YAML
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
In the above example:
A Headless Service, named nginx, is used to control the network domain.
The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner.
Create StatefulSets
Open two terminal windows.
Terminal 1: watch the creation of the StatefulSet's Pods.
kubectl get pods -w -l app=nginx
Terminal 2: create the Service and StatefulSet defined in the YAML above (saved locally as web.yaml).
kubectl apply -f web.yaml
Output:
service/nginx created
statefulset.apps/web created
Verify the Service and StatefulSet using the commands below
kubectl get service nginx
kubectl get statefulset web
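To see the ordered, one-at-a-time scaling behaviour described above, you can (for example) scale the StatefulSet and watch its Pods being created:
kubectl scale statefulset web --replicas=5
kubectl get pods -w -l app=nginx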
DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Use-case
Some typical uses of a DaemonSet are:
running a cluster storage daemon on every node
running a logs collection daemon on every node
running a node monitoring daemon on every node
Create a DaemonSet
The file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
The name of a DaemonSet object must be a valid DNS subdomain name.
YAML
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
Create a DaemonSet based on the YAML file:
kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
Check the status
kubectl describe daemonset fluentd-elasticsearch -n kube-system
Confirm this by listing all running pods with the following command:
kubectl get pod -o wide -n kube-system
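When you are done, deleting the DaemonSet also cleans up the Pods it created, as mentioned above:
kubectl delete daemonset fluentd-elasticsearch -n kube-system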
Job
A Kubernetes job is a workload controller object that performs one or more finite tasks in a cluster. The finite nature of jobs differentiates them from most controller objects, such as deployments, replica sets, stateful sets, and daemon sets.
While those objects continuously maintain the desired state and number of pods in the cluster, jobs run until they complete the task and then terminate the associated pods.
As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete.
Use-Cases:
Kubernetes jobs can perform many important tasks in a cluster, including:
Maintenance tasks (such as performing backups).
Large calculations.
Batch tasks (such as sending emails).
Monitoring node behaviors.
Managing work queues.
YAML
An example Job config. It computes π to 2000 places and prints it out. It takes around 10s to complete.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
Create a Job
Run this command:
kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml
Check the job status
kubectl describe job pi
or
kubectl get job pi -o yaml
To view the logs of a Job:
kubectl logs jobs/pi
Output:
3.14159265358979323846264338327950288419716939937510582097494459230781640
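The pi Job above finishes after a single successful Pod. For the "specified number of successful completions" mentioned earlier, the Job spec can also declare completions and parallelism; a minimal sketch of those fields (values chosen purely for illustration):
spec:
  completions: 5    # the Job is complete after 5 Pods finish successfully
  parallelism: 2    # run at most 2 Pods at the same time
  backoffLimit: 4   # retry failed Pods up to 4 times before marking the Job failed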
CronJobs
A CronJob creates Jobs on a repeating schedule. CronJob is meant for performing regularly scheduled actions such as backups, report generation, and so on.
One CronJob object is like one line of a crontab (cron table) file on a Unix system. It runs a job periodically on a given schedule, written in Cron format.
Writing a CronJob
Schedule syntax
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat
# │ │ │ │ │
# * * * * *
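For example (illustrative schedules, not part of the original example):
# "30 2 * * 1"  runs at 02:30 every Monday
# "0 */6 * * *" runs at the start of every 6th hour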
Macros:
Other than the standard syntax, some macros like @monthly can also be used:
Entry | Description | Equivalent to
@yearly (or @annually) | Run once a year at midnight of 1 January | 0 0 1 1 *
@monthly | Run once a month at midnight of the first day of the month | 0 0 1 * *
@weekly | Run once a week at midnight on Sunday morning | 0 0 * * 0
@daily (or @midnight) | Run once a day at midnight | 0 0 * * *
@hourly | Run once an hour at the beginning of the hour | 0 * * * *
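For example, a CronJob that should run once a day at midnight can use the macro directly in its spec instead of the numeric form:
schedule: "@daily"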
Running a task with a CronJob
YAML:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Create a CronJob
kubectl create -f https://k8s.io/examples/application/job/cronjob.yaml
Output:
cronjob.batch/hello created
Get its status using this command:
kubectl get cronjob hello
Watch for the job to be created in around one minute:
kubectl get jobs --watch
Output:
NAME               COMPLETIONS   DURATION   AGE
hello-4111706356   0/1                      0s
hello-4111706356   0/1           0s         0s
hello-4111706356   1/1           5s         5s
Now check the status of the CronJob again
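The status can be checked with the same command as before, and once you are done experimenting, deleting the CronJob also removes the Jobs and Pods it created:
kubectl get cronjob hello
kubectl delete cronjob hello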
ReplicationController
A ReplicationController ensures that a specified number of pod replicas are running at any one time.
Note: A Deployment that configures a ReplicaSet is now the recommended way to set up replication.
It is very similar to a ReplicaSet; the main difference is that a ReplicaSet supports the newer set-based label selectors, while a ReplicationController only supports equality-based selectors. A minimal example is sketched below.
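A minimal ReplicationController manifest looks very similar to the ReplicaSet above; this sketch reuses the nginx image from the Deployment example with an equality-based selector (names and image chosen purely for illustration):
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80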
Summary:
From what I have learned, workloads are the building blocks of Kubernetes, and they are much easier to understand once you know the architecture.
Thanks for reading my blog. Hope it helps in gaining some insights on K8s Workloads.
Suggestions are always welcomed.
Will see you in the next blog ........... :)