K8s Storage and Security

In this blog, we will look at various storage and security concepts in K8s.

With this knowledge, you can configure K8s in any cloud environment.

K8s Storage

To understand storage in K8s, it is important to understand how storage works with containers. So we will first learn how Docker handles storage.

Docker storage

There are two concepts in Docker for storage:

  1. Storage Drivers

  2. Volume Drivers

How Docker stores data on the file system

When Docker is installed, it creates its folder structure under /var/lib/docker, where all Docker data (images, containers, volumes) is stored.

To understand how Docker stores these containers and images, we need to understand the concept of layered architecture that Docker uses.

When a Docker image is built, Docker builds it in layers. Each instruction in the Dockerfile creates a new layer in the image, containing only the changes from the previous layer. Because each layer stores only those changes, the overall image size stays small.

Now suppose there is another Dockerfile with similar code, but with small changes in lines 4 and 5.

When building the second Dockerfile, Docker reuses layers 1, 2, and 3 from the cache, since they were already created earlier. No new layers are built for them, so they add 0 MB, and new layers are created only for lines 4 and 5.
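
As a sketch, such a layered Dockerfile might look like this (the base image, packages, and file names are illustrative):

```dockerfile
# Each instruction below becomes one layer in the image
FROM ubuntu                                                    # layer 1: base OS
RUN apt-get update && apt-get -y install python3 python3-pip   # layer 2: OS packages
RUN pip3 install flask                                         # layer 3: Python dependencies
COPY app.py /opt/app.py                                        # layer 4: application source code
ENTRYPOINT ["python3", "/opt/app.py"]                          # layer 5: start command
```

If only app.py or the entrypoint changes, a rebuild reuses layers 1-3 from the cache and rebuilds only layers 4 and 5.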

The layers created when docker build runs are called image layers. These are read-only layers.

When Docker runs a container from this image, it creates a new writable layer, called the container layer, on top of the image layers.

The container layer holds any files created or modified by the user inside the container. The layer lives only as long as the container does: when the container is deleted, the container layer and all files in it are deleted too.

But the image layers remain, as they may be used by other containers.

Please read more on Docker volumes and volume driver plugins.

Container Storage Interface

Before CSI, Kubernetes shipped storage plugins as part of its own codebase. Vendors who wanted to add support for their storage system to Kubernetes were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in core Kubernetes binaries, and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain. So CSI came into the picture.

CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes.

So now the vendors can create plugins with CSI standards.
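
Once a vendor's CSI driver is deployed to a cluster, it registers itself as a CSIDriver object, which you can inspect with kubectl:

```shell
# List the CSI drivers registered in the cluster
kubectl get csidrivers
```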

Volumes:

Docker Volumes

Docker containers are transient: they are created and used only when needed and live for a short time, and so does the data in them.

To persist data, we attach a volume to the container, so the data placed in the volume remains even when the container is deleted.

K8s Volumes

As with Docker, pods are created to process data, and when a pod is deleted its data is deleted too. To avoid this we attach a volume to the pod, so even when the pod is deleted the data remains.

For example, the /opt folder of a pod can be mounted to the /data folder of the host. When the pod is deleted, the volume and its data remain.
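
A minimal sketch of such a pod (the names and the test command are illustrative): the /opt folder in the container is backed by the /data folder on the host through a hostPath volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-processor          # illustrative name
spec:
  containers:
  - name: processor
    image: alpine
    command: ["sh", "-c", "echo hello > /opt/out.txt && sleep 3600"]
    volumeMounts:
    - name: data-volume
      mountPath: /opt           # path inside the container
  volumes:
  - name: data-volume
    hostPath:
      path: /data               # path on the host node
      type: DirectoryOrCreate
```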

But this does not work on a multi-node cluster, because a pod rescheduled to a different node would read that node's /data folder and not find the same data.

K8s supports multiple storage solutions such as NFS, Ceph, ScaleIO, AWS EBS, and Azure Disk.

When one of these solutions is used, the data volumes live in the storage provider (for example, public cloud storage), and a volume ID is used to reference them in K8s.

Persistent Volume

In the previous section, users had to define volumes in every pod. In a large environment this becomes hard to configure.

Storage can instead be managed centrally: the admin creates a large pool of storage, and users consume pieces of it. That is where persistent volumes help us.

A persistent volume (PV) is a cluster-wide pool of storage volumes, configured by the admin, to be used by users deploying applications on the cluster.
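
A minimal PV sketch (capacity and path are illustrative; in a cloud environment you would use the provider's volume type instead of hostPath):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
spec:
  capacity:
    storage: 1Gi                          # size of this pool entry
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the volume after the claim is released
  hostPath:
    path: /data                           # demo only; use EBS, GCE PD, NFS, etc. in real clusters
```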

Persistent Volume Claims

This object makes the storage available to a pod. The admin creates PVs, and the user creates a PVC to use the storage. Once a claim is created, K8s finds a PV for the claim based on the request and the properties set on the volume.

Every PV is bound to a single PVC.

K8s binds a PV to a PVC based on properties such as sufficient capacity, access modes, volume modes, storage class, and selectors.

However, if multiple PVs match a PVC, we can use labels and selectors to point the claim at a specific volume.

Once a PVC is bound to a PV, that PV cannot be used by any other PVC.

Creating PVC

Once the PVC is created, K8s checks the existing PVs for one that matches the claim's requirements and binds the PVC to that PV.
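
A matching PVC sketch (the requested size is illustrative); K8s binds it to any PV whose capacity and access modes satisfy the request:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce               # must be supported by the PV
  resources:
    requests:
      storage: 500Mi            # any PV with at least 500Mi can satisfy this
```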

Deleting a PVC

We can delete a PVC using kubectl delete pvc <claim-name>. By default, the PV that was bound to it remains, but it cannot be used by any other PVC.

But we can configure the PV to be deleted automatically, or to be recycled: the data in the volume is scrubbed, making it available to other claims.
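
This behaviour is controlled by the PV's persistentVolumeReclaimPolicy field, which can be Retain, Delete, or Recycle (Recycle is deprecated in recent K8s versions). As a sketch, an existing PV can be switched to automatic deletion like this (the PV name is illustrative):

```shell
# Change the reclaim policy of an existing PV
kubectl patch pv pv-vol1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```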

Storage Classes

In the previous section we saw the following workflow.

But here, before the PV is created, we need to create a disk in Google Cloud, and this has to be done manually. In the same way, the PV definition has to be created manually. This is called static provisioning of volumes.

It would be nice if the volume gets provisioned automatically when an application requires it and that is where storage classes come in.

With storage classes we can define a provisioner, such as the Google Cloud PD provisioner, that automatically provisions storage in GCP and attaches it to pods when a claim is made. That is called dynamic provisioning of volumes.

This is done by creating a StorageClass object. Now that the storage class is defined, the PV is created automatically, so we don't need to define it ourselves. In the PVC we need to mention the storageClassName, and in the pod we use the PVC name.

So when the PVC is created, the storage class automatically provisions the storage in GCP and creates a PV for it.

There are different volume provisioner plugins. For the GCE PD provisioner we can also set the disk type and replication type, so different storage classes can be created and referenced by name when a PVC is created.
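
As a sketch, a GCE PD storage class and a PVC that uses it might look like this (names and parameter values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-storage
provisioner: kubernetes.io/gce-pd   # provisions GCE persistent disks on demand
parameters:
  type: pd-standard                 # disk type
  replication-type: none            # or regional-pd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: google-storage  # triggers dynamic provisioning
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```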

StateFulSets

StatefulSet is the workload API object used to manage stateful applications.

It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

If you use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution.

Use cases

StatefulSets are valuable for applications that require one or more of the following:

  • Stable, unique network identifiers.

  • Stable, persistent storage.

  • Ordered, graceful deployment and scaling.

  • Ordered, automated rolling updates.

Components

The example below creates a headless Service and a StatefulSet of three nginx replicas; each replica gets its own PVC from the volumeClaimTemplates.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi

Refer to Example: Deploying Cassandra with a StatefulSet | Kubernetes for deploying a Cassandra application as a StatefulSet.

K8s Security

Kubernetes Security is defined as the actions, processes and principles that should be followed to ensure security in your Kubernetes deployments. This includes – but is not limited to – securing containers, configuring workloads correctly, Kubernetes network security, and securing your infrastructure.

RBAC

Kubernetes RBAC is a key security control to ensure that cluster users and workloads have only access to resources required to execute their roles. It is important to ensure that, when designing permissions for cluster users, the cluster administrator understands the areas where privilege escalation could occur, to reduce the risk of excessive access leading to security incidents.

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.

RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.

Enable RBAC

To enable RBAC, start the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC; for example:

kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options

The RBAC API declares four kinds of Kubernetes objects:

  1. Role

  2. ClusterRole

  3. RoleBinding

  4. ClusterRoleBinding

Role & Cluster Role

An RBAC Role or ClusterRole contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules).

A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.

ClusterRole, by contrast, is a non-namespaced resource.

Role Example

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Cluster Role

A ClusterRole can be used to grant the same permissions as a Role. Because ClusterRoles are cluster-scoped, you can also use them to grant access to:

  • cluster-scoped resources (like nodes)

  • non-resource endpoints (like /healthz)

  • namespaced resources (like Pods), across all namespaces

Here is an example of a ClusterRole that can be used to grant read access to Secrets in any particular namespace, or across all namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  #
  # at the HTTP level, the name of the resource for accessing Secret
  # objects is "secrets"
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

RoleBinding and ClusterRoleBinding

A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted.

A RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding grants that access cluster-wide.

Here is an example of a RoleBinding that grants the "pod-reader" Role to the user "jane" within the "default" namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role 
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

ClusterRoleBinding example

To grant permissions across a whole cluster, you can use a ClusterRoleBinding. The following ClusterRoleBinding allows any user in the group "manager" to read secrets in any namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
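
Once roles and bindings are in place, you can check effective permissions with kubectl auth can-i, using the subjects from the examples above:

```shell
# Can user jane list pods in the default namespace?
kubectl auth can-i list pods --namespace default --as jane

# Can a member of the "manager" group read secrets in any namespace?
kubectl auth can-i get secrets --all-namespaces --as jane --as-group manager
```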

Pods Security

PodSecurityPolicy was deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25.

Instead of using PodSecurityPolicy, you can enforce similar restrictions on Pods using either or both:

  • Pod Security Admission

  • a 3rd party admission plugin, that you deploy and configure yourself

Pod Security Admission

The Kubernetes Pod Security Standards define different isolation levels for Pods.

Kubernetes offers a built-in Pod Security admission controller to enforce the Pod Security Standards. Pod security restrictions are applied at the namespace level when pods are created.

Pod Security levels

Pod Security Admission applies the three levels defined by the Pod Security Standards: privileged, baseline, and restricted.

Pod Security Admission labels for namespaces

  • enforce: Policy violations will cause the pod to be rejected.

  • audit: Policy violations will trigger the addition of an audit annotation to the event recorded in the audit log, but are otherwise allowed.

  • warn: Policy violations will trigger a user-facing warning but are otherwise allowed.

For each mode, two labels determine the policy used:

pod-security.kubernetes.io/<MODE>: <LEVEL>
pod-security.kubernetes.io/<MODE>-version: <VERSION>

Enforcing standards with namespace labels

Pod Security Admission has been enabled by default since Kubernetes v1.23.

To check the version, enter kubectl version.

This manifest defines a Namespace my-baseline-namespace that enforces the baseline level, and audits and warns at the restricted level:

apiVersion: v1
kind: Namespace
metadata:
  name: my-baseline-namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.27

    # We are setting these to our _desired_ `enforce` level.
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.27
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.27

To add labels to existing namespaces, first evaluate the change with a server-side dry run:

kubectl label --dry-run=server --overwrite ns --all \
  pod-security.kubernetes.io/enforce=baseline

It is helpful to apply the --dry-run flag when initially evaluating security profile changes for namespaces. The Pod Security Standard checks will still be run in dry run mode, giving you information about how the new policy would treat existing pods, without actually updating a policy.

Applying to all namespaces

kubectl label --overwrite ns --all \
  pod-security.kubernetes.io/audit=baseline \
  pod-security.kubernetes.io/warn=baseline

To list namespaces without an explicitly set enforce level, use this command:

kubectl get namespaces --selector='!pod-security.kubernetes.io/enforce'

Applying to a single namespace

kubectl label --overwrite ns my-existing-namespace \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v1.27

Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or a container image.

Using a Secret means that you don't need to include confidential data in your application code

Secrets are similar to ConfigMaps but are specifically intended to hold confidential data.

Note: Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd).

To safely use Secrets, take at least the following steps:

  1. Enable Encryption at Rest for Secrets.

  2. Enable or configure RBAC rules with least-privilege access to Secrets.

  3. Restrict Secret access to specific containers.

  4. Consider using external Secret store providers.

Use cases

There are three main ways for a Pod to use a Secret:

  • As files in a volume mounted on one or more of its containers.

  • As container environment variables.

  • By the kubelet when pulling images for the Pod.

Working with Secrets

Creating Secret

There are several options to create a Secret:

  • Use kubectl

  • Use a configuration file

  • Use the Kustomize tool
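
For example, with kubectl (the Secret name and credentials are illustrative):

```shell
# Create a Secret with two keys from literal values
kubectl create secret generic db-creds \
  --from-literal=username=admin \
  --from-literal=password='S3cret!'

# Print the equivalent manifest (values base64-encoded) without creating anything
kubectl create secret generic db-creds \
  --from-literal=username=admin --dry-run=client -o yaml
```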

Editing Secret

You can edit an existing Secret unless it is immutable. To edit a Secret, use one of the following methods:

  • Use kubectl

  • Use a configuration file

  • Use the Kustomize tool

Using a Secret

Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a Pod.

Optional Secrets

By default, Secrets are required: if a referenced Secret does not exist, the pod cannot start. A Secret can be marked optional, in which case K8s skips it when it doesn't exist.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      optional: true

Define a container environment variable with data from a single Secret

  • Define the environment variable as a key-value pair:

    kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'

  • Assign the backend-username value defined in the Secret to the SECRET_USERNAME environment variable in the Pod specification:

    apiVersion: v1
    kind: Pod
    metadata:
      name: env-single-secret
    spec:
      containers:
      - name: envars-test-container
        image: nginx
        env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: backend-user
              key: backend-username

  • Create the pod:

    kubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml

  • Display the content of the SECRET_USERNAME container environment variable:

    kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'

  • The output is backend-admin.

Network policies

NetworkPolicies are an application-centric construct that lets you specify how a pod is allowed to communicate with various network "entities" over the network.

The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:

  • Other pods that are allowed

  • Namespaces that are allowed

  • IP blocks (CIDR ranges) that are allowed

When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.

Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
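
A sketch combining all three identifier kinds (labels and CIDR ranges are illustrative): it allows ingress to pods labeled app=db from pods labeled app=api, from any pod in namespaces labeled team=backend, and from one IP block:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-clients
spec:
  podSelector:
    matchLabels:
      app: db                    # policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:               # other pods that are allowed
        matchLabels:
          app: api
    - namespaceSelector:         # namespaces that are allowed
        matchLabels:
          team: backend
    - ipBlock:                   # IP blocks (CIDR ranges) that are allowed
        cidr: 10.0.0.0/16
        except:
        - 10.0.5.0/24
```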

Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution that supports NetworkPolicy.

There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress.

Here is an example that denies all ingress traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

TLS

You can secure an application by using a Secret that contains a TLS (Transport Layer Security) private key and certificate.

YAML

apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKakNDQWc2Z0F3SUJBZ0lKQUw2Y3R2bk9zMzlUTUEwR0NTcUdTSWIzRFFFQkJRVUFNQll4RkRBU0JnTlYKQkFNVEMyWnZieTVpWVhJdVkyOXRNQjRYRFRFNE1USXhOREUxTWpJeU1Gb1hEVEU1TVRJeE5ERTFNakl5TUZvdwpGakVVTUJJR0ExVUVBeE1MWm05dkxtSmhjaTVqYjIwd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFDbWVsQTNqVy9NZ2REejJNazMwbXZ4K2VOSHJkQlIwMEJ4ZUR1VjBjYWVFUGNFa2RmSnk5V28KaTFpSXV1V04vZGV6UEhyTWMxenBPNGtzbWU5NThRZVFCWjNmVThWeGpRYktmb1JzNnhQUlNKZVVSckVCcWE4SQpUSXpEVVdaUTAwQ2xsa1dOejE4dDYvVjJycWxJd1VvaTVZWHloOVJsaWR4MjZRaXJBcFFFaXZDY2QzdUExc3AwCkUxRXdIVGxVdzFqSE9Eb3BLZGxaRndmcWhFSHNmYjZvLzJFb1A1MXMwY2JuTld6MHNsUjhhejdzOExVYnhBWnkKQkNQdDY1Z2VhT3hYWWUxaWhLYzN4SE4wYSsxMXpBYUdDMnpTemdOcEVWeFFJQ3lZdVZld3dNb0FrcHNkdGEybwpnMnFTaDZQZzRHeFFabzRwejIwN0c2SkFUaFIyNENiTEFnTUJBQUdqZHpCMU1CMEdBMVVkRGdRV0JCU3NBcUZoCkpPS0xZaXNHTkNVRGU4N1VWRkp0UERCR0JnTlZIU01FUHpBOWdCU3NBcUZoSk9LTFlpc0dOQ1VEZTg3VVZGSnQKUEtFYXBCZ3dGakVVTUJJR0ExVUVBeE1MWm05dkxtSmhjaTVqYjIyQ0NRQytuTGI1enJOL1V6QU1CZ05WSFJNRQpCVEFEQVFIL01BMEdDU3FHU0liM0RRRUJCUVVBQTRJQkFRQU1wcDRLSEtPM2k1NzR3dzZ3eU1pTExHanpKYXI4Cm8xbHBBa3BJR3FMOHVnQWg5d2ZNQWhsYnhJcWZJRHlqNWQ3QlZIQlc1UHZweHpKV3pWbmhPOXMrdzdWRTlNVHUKWlJHSXVRMjdEeExueS9DVjVQdmJUSTBrcjcwYU9FcGlvTWYyUVUvaTBiN1B2ajJoeEJEMVZTVkd0bHFTSVpqUAo0VXZQYk1yTWZUWmJka1pIbG1SUjJmbW4zK3NTVndrZTRhWXlENVVHNnpBVitjd3BBbkZWS25VR0d3TkpVMjA4CmQrd3J2UUZ5bi9kcVBKTEdlNTkvODY4WjFCcFIxRmJYMitUVW4yWTExZ0dkL0J4VmlzeGJ0b29GQkhlVDFLbnIKTTZCVUhEeFNvWVF0VnJWSDRJMWh5UGRkdmhPczgwQkQ2K01Dd203OXE2UExaclVKOURGbFl2VTAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcG5wUU40MXZ6SUhRODlqSk45SnI4Zm5qUjYzUVVkTkFjWGc3bGRIR25oRDNCSkhYCnljdlZxSXRZaUxybGpmM1hzeng2ekhOYzZUdUpMSm52ZWZFSGtBV2QzMVBGY1kwR3luNkViT3NUMFVpWGxFYXgKQWFtdkNFeU13MUZtVU5OQXBaWkZqYzlmTGV2MWRxNnBTTUZLSXVXRjhvZlVaWW5jZHVrSXF3S1VCSXJ3bkhkNwpnTmJLZEJOUk1CMDVWTU5ZeHpnNktTblpXUmNINm9SQjdIMitxUDloS0QrZGJOSEc1elZzOUxKVWZHcys3UEMxCkc4UUdjZ1FqN2V1WUhtanNWMkh0WW9Tbk44UnpkR3Z0ZGN3R2hndHMwczREYVJGY1VDQXNtTGxYc01ES0FKS2IKSGJXdHFJTnFrb2VqNE9Cc1VHYU9LYzl0T3h1aVFFNFVkdUFteXdJREFRQUJBb0lCQUMvSitzOEhwZWxCOXJhWgpLNkgvb0ljVTRiNkkwYjA3ZEV0ZVpWUnJwS1ZwWDArTGdqTm1kUTN0K2xzOXMzbmdQWlF4TDFzVFhyK0JISzZWCi9kMjJhQ0pheW1mNmh6cENib21nYWVsT1RpRU13cDZJOEhUMnZjMFhGRzFaSjVMYUlidW0rSTV0MGZlL3ZYWDEKUzVrY0Mya2JGQ2w3L21lcmZJTVNBQy8vREhpRTUyV1QydEIrQk01U2FMV3p4cDhFa3NwNkxWN3ZwYmR4dGtrTwpkZ1A4QjkwWlByck5SdUN5ekRwRUkvMnhBY24yVzNidlBqRGpoTjBXdlhTbTErVk9DcXNqOEkrRkxoUzZJemVuCm1MUkFZNnpWVGpZV05TU2J3dTRkbnNmNElIOEdiQkZJajcrdlN5YVNVTEZiVGJzY3ZzQ3I1MUszbWt2bEVMVjgKaWsvMlJoa0NnWUVBMFpmV2xUTjR2alh2T0FjU1RUU3MwMFhIRWh6QXFjOFpUTEw2S1d4YkxQVFJNaXBEYklEbQp6b3BiMGNTemxlTCtNMVJCY3dqMk5HcUNodXcyczBaNTQyQVhSZXdteG1EcWJaWkFQY0UzbERQNW5wNGRpTFRCClZaMFY4UExSYjMrd2tUdE83VThJZlY1alNNdmRDTWtnekI4dU1yQ1VMYnhxMXlVUGtLdGpJdThDZ1lFQXkxYWMKWjEyZC9HWWFpQjJDcWpuN0NXZE5YdGhFS2dOYUFob21nNlFMZmlKakVLajk3SExKalFabFZ0b3kra1RrdTJjZAp0Wm1zUi9IU042YmZLbEpxckpUWWkzY2E1TGY4a3NxR0Z5Y0x1MXo3cmN6K1lUaEVWSFIyOVkrVHVWYXRDTnkzCklCOGNUQW1ORWlVMlVHR2VKeUllME44Z1VZRXRCYzFaMEg2QWllVUNnWUFETDIrUGJPelUxelQvZ1B3Q09GNjQKQjBOelB3U2VrQXN1WXpueUR6ZURnMlQ2Z2pIc0lEbGh3akNMQzVZL0hPZ0lGNnUyOTlmbURBaFh6SmM0T2tYMwo4cW5uNGlMa3VPeFhKZ1ZyNnRmUlpNalNaRXpHbXhpbEdISVE2MS9MZGdGVTg3WExYWHdmaTZPdW80cUVhNm9YCjhCRmZxOWRVcXB4bEVLY2Y1N3JsK1FLQmdGbjVSaFc2NS9oU0diVlhFWVZQU0pSOW9FK3lkRjcrd3FvaGRoOVQKekQ0UTZ6THBCQXJITkFYeDZZK0gxM3pFVlUzVEwrTTJUM1E2UGFHZ2Rpa2M5TlRPdkE3aU1nVTRvRXMzMENPWQpoR2x3bUhEc1B6YzNsWXlsU0NvYVVPeDJ2UFFwN2VJSndoU25PVVBwTVdKWi80Z2pZZTFjZmNseTFrQTJBR0x3ClJ1STlBb0dCQU14aGFJSUdwTGdmcHk0K24rai9BS1doJUUhLZFRCNVBqaGx0WWhqZittK011UURwK21OeTVMbzEKT0FRc0Q0enZ1b3VxeHlmQlFQZlllYThvcm4vTDE3WlJyc3lSNHlhS1M3cDVQYmJKQlNlcTc5Z0g5ZUNIQkxMbQo0aThCUFh0K0NmWktMQzg3NTNHSHVpOG91V25scUZ0NGxMQUlWaGJZQmtUbURZSWo4Q0NaCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
kind: Secret
metadata:
  name: test-tls
  namespace: default
type: kubernetes.io/tls

Create the secret: kubectl create -f tls.yaml

Verify the secret using: kubectl get secrets
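
Alternatively, you can create the same kind of Secret directly from the certificate and key files and let kubectl handle the base64 encoding (the file paths are illustrative):

```shell
kubectl create secret tls test-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```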

Summary:

Security and storage play a major role in any platform. In a large architecture, it is hard to maintain both storage and security well. The topics explained here are what I have learned about K8s storage and security.

Thanks for reading my blog. I hope it helps you understand a few topics in K8s storage and security.

Suggestions are always welcome.

Will see you in the next blog ........... :)

~~Saraa