Kubernetes Cluster Component Security

KubeAPI Server

  • The KubeAPI Server is at the center of all operations within Kubernetes.

  • We interact with it through the kubectl utility or by accessing the API directly, and through it we can perform almost any operation on the cluster.

  • This is the first line of defense, controlling access to the API server itself.

  • We need to make two types of decisions:

    • Who can access the cluster? - Authentication

      • Files - Username and Passwords

      • Files - Username and Tokens

      • Certificates

      • External Authentication providers - LDAP

      • Service Accounts

    • What can they do? - Authorization

      • RBAC Authorization

      • ABAC Authorization

      • Node Authorization

      • Webhook Mode

  • All communication between the various cluster components, such as etcd, the Kube Scheduler, the Kube Controller Manager, etc., is secured using TLS encryption.
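As a minimal sketch of the RBAC authorization mode listed above, a Role granting read-only access to pods and a RoleBinding attaching it to a user might look like the following (the role, user, and namespace names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # illustrative name
  namespace: default
rules:
- apiGroups: [""]             # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: dev-user              # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```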

Securing Controller Manager and Scheduler

  • The Kubernetes Controller Manager and Scheduler are crucial for keeping your cluster in the desired state.

  • The Controller Manager handles tasks like ensuring nodes are healthy, maintaining the right number of pod replicas, and managing service accounts, with controllers like the Replication Controller, Endpoints Controller, Namespace Controller, and Service Accounts Controller.

  • The Scheduler manages where and when pods run within your cluster, looking at available resources and deciding the best node for each pod.

Isolation

  • Now, how to protect them? You protect them by isolating them.

  • You run them on dedicated nodes separate from where the applications are running. This way, if a security breach occurs in one of the application nodes, it doesn't directly impact these critical components.

RBAC

  • In another scenario, you set up Role-Based Access Control (RBAC) to limit what the Controller Manager and Scheduler can do.

  • Suppose your Controller Manager only needs to manage pod replicas and service accounts, and the Scheduler only needs to manage scheduling tasks. You configure RBAC rules to give them permissions only for those tasks.

  • This way, even if someone tries to exploit these components, they won't be able to perform unauthorized actions, like accessing secrets or altering network policies.

TLS

  • Communication security is also crucial. Let's say your Controller Manager and Scheduler need to communicate with the API server and etcd.

  • By using TLS (Transport Layer Security), you ensure all data transferred between these components is encrypted.

  • This prevents attackers from intercepting sensitive information.

Audit Logging

  • To keep track of what happens in your cluster, you enable audit logging for the Controller Manager and Scheduler.

  • This records all actions taken, providing a detailed log that you can review.

  • Tools like Prometheus and Grafana help monitor these activities, and setting up alerts ensures you are notified of any suspicious behavior, allowing you to respond quickly.

Summary

  1. Isolate Controller Manager and Scheduler on separate dedicated nodes.

  2. Use RBAC to limit permission of Controller Manager and Scheduler.

  3. Encrypt communication between components using TLS for security.

  4. Enable audit logging to track and review all actions taken.

  5. Secure default settings and protect the configuration files.

  6. Run the latest version of Kubernetes.

  7. Scan for vulnerabilities regularly.


Securing the Kubelet

What is Kubelet?

  • The Kubelet in the Kubernetes worker node registers the node with the cluster.

  • When it receives instructions to load a container or a pod on the node, it requests the container runtime engine, which may be Docker, containerd, or any other CRE, to pull the required image and run an instance.

  • The kubelet then continuously monitors the state of the pod and its containers and reports their status to the KubeAPI server at regular intervals.

Installing kubelet

  • Simply download the kubelet binary and configure it to run as a service.

    wget https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/amd64/kubelet

  • Note: If you use the kubeadm tool to deploy your cluster, it automatically downloads the required binaries and bootstraps the cluster. However, it does not automatically deploy the kubelet; you must install the kubelet on your worker nodes manually.

  • As can be seen in kubelet.service, various options must be passed to the kubelet service. With the release of v1.10, most of these options/parameters were moved to a file named kubelet-config.yaml for ease of deployment and configuration management, and this file is then passed to the kubelet service with the option --config=/var/lib/kubelet/kubelet-config.yaml.

  • If you specify a flag both on the command line (in kubelet.service) and in the kubelet configuration file, the flag specified on the command line overrides whatever is in the kubelet configuration file.
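A sketch of such a configuration file might look like this (the values shown are illustrative):

```yaml
# /var/lib/kubelet/kubelet-config.yaml (sketch; values are illustrative)
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
staticPodPath: /etc/kubernetes/manifests
```

The kubelet.service unit then passes this file via --config=/var/lib/kubelet/kubelet-config.yaml.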

View kubelet options

  • Now once the kubelet is configured, inspecting the kubelet process on a node shows the different options configured, including the path to the kubeconfig file.

  • Inspecting the kubelet config file (/var/lib/kubelet/config.yaml) gives us a list of parameters configured for the kubelet.

Ports

  • 10250 → serves an API that allows full access

  • 10255 → serves an API that allows unauthenticated read-only access

  • By default, the kubelet allows anonymous access to its API.

  • If you do a curl -sk https://localhost:10250/pods/ you’ll be able to see the list of pods running on the node.

  • Similarly, you can access other API endpoints as well, such as /logs, /metrics, /exec, and /run.

  • This is a big security risk. Anyone that knows the IP address of these hosts can access these APIs to perform anything that the API server can do, such as viewing existing pods, creating new pods, viewing logs and metrics, etc.

Kubelet security

  • Any request that comes to the kubelet is first authenticated and then authorized.

  • The authentication process decides whether the user or requesting entity has access to the API, and the authorization process decides which areas of the API the user can access and what operations they can perform.

Authentication

  • As we know, curl -sk https://localhost:10250/pods/ lists all the pods running on the node, because by default the kubelet permits all requests to go through without any kind of authentication.

  • Now these requests are marked as coming from an anonymous user that is part of an unauthenticated group.

  • This behavior can be changed by setting the flag --anonymous-auth=false in the kubelet service configuration file.

  • The best practice is to disable anonymous authentication and enable one of the supported authentication mechanisms.

  • Supported Authentication mechanism:

    • Certificates (x509)

      • Set the clientCAFile path to /path/to/ca.crt

    • API Bearer Tokens

Authorization

  • The default authorization mode is AlwaysAllow, and this allows all access to the APIs.

  • To prevent this, we set the authorization mode to webhook.

  • When set to webhook, the kubelet makes a call to the KubeAPI server to determine whether the request can be authorized or not. And depending on the result received from the API server, it either rejects or approves that particular request.

Read-only access

  • We can use port 10255 to view the metrics service with read-only access, without the need for any kind of authentication or authorization.

  • This service is enabled when the --read-only-port flag on the kubelet service is set to 10255.

  • If this value is set to 0, then this service is disabled.

  • Note: If the kubelet config file is in use, then this flag has a default value of 0. So it is disabled by default.
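Putting the kubelet hardening options above together, a sketch of the relevant part of the kubelet configuration file might look like this (the CA file path is illustrative):

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false                  # disable anonymous requests
  x509:
    clientCAFile: /path/to/ca.crt   # certificate-based authentication
authorization:
  mode: Webhook                     # delegate authorization to the KubeAPI server
readOnlyPort: 0                     # disable the unauthenticated read-only port
```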


Securing Container Runtime

  • Each compute/worker node in a Kubernetes cluster has a container runtime engine (CRE).

  • Most Kubernetes applications historically used Docker as the container runtime engine. However, Kubernetes now supports other runtime engines such as containerd and CRI-O, which are more suited to Kubernetes' architecture and security needs.

Container Vulnerabilities

Updating and Patching regularly

  • Keeping your container runtime updated is one of the simplest yet most effective security measures. Regular updates include patches for known vulnerabilities.

  • This is crucial because many attacks exploit known vulnerabilities that have been fixed in newer versions.

Running Containers with Least Privileges

  • Running containers with the least privileges helps to minimize the risk if a container is compromised.

  • Avoid running containers as the root user. Instead, run them with a non-root user whenever possible.

  • In Kubernetes, you can set the runAsUser and runAsGroup fields in your pod specification to specify that containers should run as a non-root user and group.
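For example, a pod specification that runs its containers as a non-root user and group might look like this sketch (the pod name, UID/GID values, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-demo       # illustrative name
spec:
  securityContext:
    runAsUser: 1000         # run containers as UID 1000, not root
    runAsGroup: 3000        # run containers as GID 3000
  containers:
  - name: app
    image: nginx            # illustrative image
```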

Using Read-Only Filesystems

  • Using read-only filesystems can prevent unauthorized modifications to the container's filesystem.

  • In Kubernetes, you can set the readOnlyRootFilesystem field in your pod specification to make the container's filesystem read-only.

  • This is particularly important for containers that do not need to write to disk during normal operation.
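A fragment of a pod spec enabling this at the container level might look like (image name illustrative):

```yaml
  containers:
  - name: app
    image: nginx                    # illustrative image
    securityContext:
      readOnlyRootFilesystem: true  # mount the root filesystem read-only
```

Containers that must write temporary data can still be given a writable emptyDir volume mounted at a specific path.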

Limiting Resource Usage

  • Setting resource limits on CPU and memory usage prevents any single container from consuming all the host’s resources.

  • This helps maintain the performance and availability of your system, even if one container behaves unexpectedly.
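A sketch of a container with resource requests and limits (the values are illustrative):

```yaml
  containers:
  - name: app
    image: nginx          # illustrative image
    resources:
      requests:
        cpu: "250m"       # guaranteed minimum
        memory: "128Mi"
      limits:
        cpu: "500m"       # hard ceiling
        memory: "256Mi"
```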

Using Security Profiles

  • Implementing security profiles like SELinux and AppArmor adds an additional layer of security by enforcing mandatory access controls on containers.

  • These profiles restrict what containers can do, reducing the risk of exploitation.

  • SELinux (Security-Enhanced Linux) is a security module in the Linux kernel that provides a mechanism for supporting access control security policies.

  • SELinux policies define how processes and users can access resources on a system.

  • AppArmor (Application Armor) is another security module for the Linux kernel, providing a different approach to mandatory access control.

  • AppArmor uses profiles to restrict the capabilities of individual programs, preventing them from performing unauthorized actions.

  • For SELinux, ensure it is enabled and configured correctly on your nodes. For AppArmor, you can apply profiles using Kubernetes annotations.

  • Security profiles help enforce good security practices by limiting the actions containers can perform. This can help prevent containers from performing actions that could compromise the host.
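As a sketch, applying an AppArmor profile through a pod annotation (the older annotation-based mechanism; newer Kubernetes releases also offer a securityContext field for this) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo     # illustrative name
  annotations:
    # apply the container runtime's default AppArmor profile to container "app"
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  containers:
  - name: app
    image: nginx          # illustrative image
```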

Transitioning to Supported Runtimes

  • Ensure your Kubernetes nodes are configured to use the supported runtimes by updating your cluster configuration and node settings.

  • Transitioning to these runtimes ensures compatibility with future Kubernetes releases and leverages their security features.

Audit Logs

  • Finally, implementing monitoring and logging with tools like Fluentd, Logstash, or Elasticsearch for centralized logging, and Prometheus or Grafana for runtime behavior monitoring helps detect and respond to security incidents promptly.

  • Enabling audit logs provides a detailed record of actions, aiding in forensic analysis and understanding the impact of security incidents.

Summary

  1. Regularly update and patch container runtimes to fix vulnerabilities.

  2. Run containers with least privileges to minimize security risks.

  3. Use read-only filesystems to prevent unauthorized filesystem modifications.

  4. Limit resource usage to prevent denial-of-service (DoS) attacks.

  5. Apply security profiles like SELinux and AppArmor for protection.

  6. Transition to supported runtimes like containerd or CRI-O.

  7. Implement monitoring and logging for runtime behavior detection.


Securing Kube-Proxy

  • Kube-Proxy is a network proxy that runs on each and every node in the Kubernetes cluster, maintaining network rules.

  • It ensures that workloads on your nodes can communicate with internal as well as external resources as required and allowed.

Kubeconfig file

  • The first and most important step in securing kube-proxy is maintaining the security and integrity of its kubeconfig file.

  • This file contains the configuration needed for kube-proxy to communicate with the Kubernetes API server.

  • To start, you need to locate the kubeconfig file that kube-proxy uses. You can do this by running the command ps -ef | grep kube-proxy.

  • This command lists all the running processes and then filters the results to show only those related to kube-proxy.

  • In the output, look for the --config flag. This flag specifies the path to the kube-proxy config file.

  • Next, look into the kube-proxy config file and identify the kubeconfig file.

  • Once you have the location of the kube-proxy config file, the next step is to check its permissions.

  • You can do this by running stat -c %a /var/lib/kube-proxy/kubeconfig.conf.

  • The stat command displays detailed information about the file, and the -c %a option specifically shows the file's permissions in numeric format.

  • Ensuring that the permissions are set to 644 or stricter is crucial because this setting allows only the file owner to write to the file while others can only read it.

  • To further secure the kubeconfig file, you should also check its ownership. Run the command stat -c %U:%G /var/lib/kube-proxy/kubeconfig.conf, where the -c %U:%G option shows the file's owner and group.

  • The output should be root:root, meaning the file is owned by the root user and the root group.

  • This setting prevents unauthorized users from modifying the kubeconfig file.

  • Unauthorized users attempting to access or alter the file would be restricted by both ownership and permissions, which is ideal.
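The checks above can be sketched as shell commands. The demo below runs against a temporary file so it is self-contained; on a real node you would point the same commands at /var/lib/kube-proxy/kubeconfig.conf:

```shell
# Create a stand-in for the kubeconfig file
# (real path: /var/lib/kube-proxy/kubeconfig.conf)
f=$(mktemp)

# Set the recommended permissions: owner read/write, everyone else read-only
chmod 644 "$f"

# Verify: numeric permissions and owner:group
stat -c %a "$f"       # expect 644
stat -c %U:%G "$f"    # on a node this should be root:root
```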

Securing Communication

  • Securing communication between kube-proxy and the Kubernetes API server is crucial.

  • Ensure that this communication is encrypted using TLS to prevent eavesdropping and tampering.

  • In this case, if you open the kubeconfig file in use, you'll see the CA certificate configured to validate the API server's TLS certificate; kube-proxy itself uses a service account token to authenticate to the kube-apiserver.

Audit Logs

  • Audit logs play a significant role in kube-proxy security.

  • Enabling auditing in Kubernetes to log all actions performed by kube-proxy allows you to track changes and identify unauthorized access.

  • Regularly reviewing these logs helps in detecting and addressing suspicious activities promptly.

  • You can create an audit policy that logs everything kube-proxy does.
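A sketch of such an audit policy, assuming kube-proxy authenticates as the system:kube-proxy user, could be:

```yaml
# audit-policy.yaml (sketch): record full request/response for kube-proxy
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  users: ["system:kube-proxy"]
- level: None      # ignore everything else in this example policy
```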

Regular Updates and Patches

  • Keeping kube-proxy updated with the latest patches is fundamental.

  • Regular updates often include important security fixes for known vulnerabilities.

  • Using automated systems for updates can help ensure that kube-proxy and other Kubernetes components are always running the most secure versions.

Summary

  1. Secure kube-proxy config file with strict permissions.

  2. Encrypt API server communication using TLS and Service Accounts.

  3. Run kube-proxy with least privileges necessary.

  4. Implement network policies for traffic control (if necessary).

  5. Use logging and monitoring for detecting anomalies.

  6. Regularly update and patch kube-proxy for security.

  7. Enable audit logs to track kube-proxy actions.


Pod Security

  • Pod Security helps us implement policies to restrict pods from being created with specific capabilities or privileges.

  • The original implementation of pod security was through what was known as Pod Security Policies (PSP) (removed in v1.25).

  • PSP is replaced by Pod Security Admission (PSA) and Pod Security Standards (PSS).

  • We need a basic idea of PSP and how it works, as it was the original implementation for pod security and you may see it mentioned in many places on the web, and even in older clusters.

Pod Security Policies (PSP)

  • At a very high level, Pod Security Policies work like this: when enabled, the Pod Security Policy Admission Controller observes all pod creation requests and validates their configuration against a set of pre-configured rules. If it detects a violation of a rule, the request to create the pod is rejected and the user receives an error message.

  • The PSP is deployed as an admission controller. So to enable the PSP Admission Controller, we add the --enable-admission-plugins=PodSecurityPolicy flag to the KubeAPI server.

  • Next, we create a PSP object defining our pod security requirements.

  • For example, to disallow creating containers with the privileged flag set to true in a pod definition (pod.yaml), set the privileged option to false in the policy (psp.yaml).

  • To disallow running as the root user, set the runAsUser rule to ‘MustRunAsNonRoot‘.

  • To disallow certain capabilities, we list them under ‘requiredDropCapabilities‘.

  • You may use ‘defaultAddCapabilities‘ to provide a list of capabilities that must be added by default; those listed will be added to the pod definition automatically.

  • This indicates that PSP are not only used to verify and approve or reject pod creation requests, but they can also change or mutate the pod definition by adding certain default values. This feature is not available in PSA and PSS.

  • To ensure only specific types of volumes are allowed, define a set of allowed volume types under the ‘volumes‘ section.

  • Now that we have the PSP Admission Controller enabled and the PSP object created with our rules in it, the next time a request comes in, the PSP Admission Controller will query the PSP object for rules to approve or deny the request.

  • However, if you have only enabled the PSP Admission Controller and not authorized access to the PSP API, then the PSP Admission Controller will not be able to communicate with the PSP API and will reject all requests, even valid ones.

  • As soon as the PSP Admission Controller is enabled, it starts denying all requests to create pods. This is one of the major drawbacks of PSP, which made it hard to roll out to existing clusters.

  • So the entity that is trying to create a pod like user or the pod itself must have access to the PSP API.

  • We can do that by using RBAC (role and role-binding) in Kubernetes.

  • Working:

    • So, every pod, when created, has a service account associated with it, named default.

    • To authorize the pod with the PSP, we create a role and bind it to the default service account in that namespace.
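Putting the pieces above together, a restrictive PSP and the Role/RoleBinding that authorizes the default service account to use it might be sketched as follows (object names are illustrative; remember PSP itself was removed in v1.25):

```yaml
# psp.yaml: disallow privileged containers, force non-root, drop capabilities
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp        # illustrative name
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  requiredDropCapabilities: ["ALL"]
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir", "persistentVolumeClaim"]
---
# Role granting "use" on the PSP, bound to the default service account
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: psp-user
  namespace: default
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted-psp"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-user-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: psp-user
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```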

Challenges

  1. Not enabled by default - users had to spend extra efforts in enabling the plugin and configuring it to work.

  2. Rollout - If you were to enable PSP in a cluster, you have to create all the appropriate policies in advance for all the requirements before it’s enabled. If you don’t do that, all the requests to create pods would be denied.

  3. Authorization - As soon as PSP is enabled, it’s going to deny all requests to create pods because the required rules aren’t created. So, we must create a role and role-binding for the user or service accounts to get it to work.

  4. Complexity - Users created different policies for different requirements, which resulted in a big mess and ultimately a lot of complexity.


Securing ETCD

  • Etcd is a critical component in Kubernetes, acting as the key-value store that holds the configuration and state data of the cluster.

Enabling Data Encryption at Rest

  • First, you need to make sure that encryption is enabled for the data stored in etcd (data at rest). Here’s how you can enable encryption:

    1. Create an Encryption Configuration File:

      • You need to create an encryption configuration file that Kubernetes will use to encrypt data.

      • Here is an example configuration. Let's break down the important parts:

        • kind: Specifies the type of Kubernetes object. Here, it's an EncryptionConfiguration.

        • apiVersion: Specifies the version of the Kubernetes API to use. Here, it's apiserver.config.k8s.io/v1.

        • resources: Lists the types of resources to be encrypted. In this example, it’s set to secrets, meaning Kubernetes Secrets will be encrypted.

        • providers: Defines the encryption providers to use. In this case, aescbc is used, which is a type of encryption algorithm. The keys section contains the encryption keys. The name is a label for the key, and secret is the actual encryption key, base64-encoded.

        • identity: Acts as a fallback in case the aescbc provider cannot be used.

  • You should replace the secret value with your actual base64-encoded encryption key. You can generate a base64-encoded key using OpenSSL or another encryption tool.
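The parts broken down above fit together in a configuration like the following sketch (the secret value is a placeholder; one way to generate a key is head -c 32 /dev/urandom | base64):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets                # encrypt Kubernetes Secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>   # placeholder: supply your own key
  - identity: {}           # fallback provider (reads unencrypted data)
```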

    2. Update the Kube-API Server Pod Specification:

  • Next, you need to update the kube-apiserver pod specification to use the encryption configuration file, since the API server is the component that encrypts and decrypts data before it is stored in etcd.

  • Open the /etc/kubernetes/manifests/kube-apiserver.yaml file and add the --encryption-provider-config flag to the kube-apiserver command.
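Note that the --encryption-provider-config flag is consumed by the kube-apiserver, which encrypts data before writing it to etcd. An excerpt of the static pod manifest might look like this sketch (the config file path is illustrative):

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (sketch)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml  # illustrative path
```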

Using TLS for Secure Communication

  • This is to secure the data in transit, which is usually done using TLS certificates. You can use tools like openssl or cfssl to create the necessary certificates.

  • So, the etcd configuration file has a set of paths to different certificates:

    • --cert-file: the path to the server’s certificate file, ensuring the server can present its identity securely.

    • --key-file: The path to the server’s key file, used alongside the certificate.

    • --client-cert-auth: Enables client certificate authentication, ensuring only trusted clients can communicate with etcd.

    • --trusted-ca-file: The path to the CA certificate used to verify client certificates.

    • --peer-cert-file: The path to the peer’s certificate file for secure peer-to-peer communication.

    • --peer-key-file: The path to the peer’s key file.

    • --peer-client-cert-auth: Enables peer client certificate authentication.

    • --peer-trusted-ca-file: The path to the CA certificate used to verify peer certificates.

Regular Backups

  • Regularly backing up the etcd data is crucial to ensure you can recover your cluster state in case of data loss or corruption.

  • Use etcd’s built-in snapshot functionality to create backups:

    ETCDCTL_API=3 etcdctl snapshot save /path/to/backup.db --endpoints= --cacert=/path/to/ca.crt --cert=/path/to/etcd-client.crt --key=/path/to/etcd-client.key

  • Let’s break down the command:

    • ETCDCTL_API=3: Specifies the etcdctl API version to use. Version 3 is the latest and most commonly used version.

    • etcdctl snapshot save: The command to take a snapshot of the etcd data.

    • /path/to/backup.db: The path where the backup file will be saved.

    • --endpoints=: Specifies the etcd endpoints to connect to. Replace with the actual addresses of your etcd instances.

    • --cacert=/path/to/ca.crt: The path to the CA certificate used to verify the etcd server certificate.

    • --cert=/path/to/etcd-client.crt: The path to the client certificate used to authenticate the client to the etcd server.

    • --key=/path/to/etcd-client.key: The path to the client key used to authenticate the client to the etcd server.

Summary

  1. Enable data encryption at rest for etcd security.

  2. Use TLS to secure etcd communication.

  3. Regularly back up etcd data for recovery.


Secure Container Networking

  • By default, Kubernetes networking is designed to be simple and flat, allowing each pod in the cluster to get its own IP address.

  • Containers within a pod share the same network namespace, including the IP address and network ports, enabling seamless communication.

  • This design allows all pods to communicate with each other without needing Network Address Translation (NAT) and lets pods reach services using standard DNS resolution.

  • Additionally, external users can access services through ingress controllers or external load balancers.

  • There are 3 areas where we need to secure our container networking:

    1. At the network layer

    2. Pod-to-pod communication

    3. Point-to-point communication

Implementing Network Policies

  • As mentioned, by default, Kubernetes allows all traffic between pods, but you can create network policies to restrict this communication based on specific rules.

  • As an example, consider a network policy that restricts traffic to a specific set of pods.

  • A deny-all policy applied to the default namespace, for instance, blocks all incoming and outgoing traffic to its pods, providing a baseline level of security.

  • By defining what traffic is allowed and denied, you can better control and protect the communication between pods.
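A deny-all baseline policy for the default namespace might be sketched as:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all      # illustrative name
  namespace: default
spec:
  podSelector: {}             # selects every pod in the namespace
  policyTypes:
  - Ingress                   # no ingress rules listed → all incoming traffic denied
  - Egress                    # no egress rules listed → all outgoing traffic denied
```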

Using Service Meshes

  • Service meshes like Istio or Linkerd provide advanced networking features, including mutual TLS for encrypted communication, traffic management, and observability.

  • They help secure, control, and monitor the traffic between microservices.

  • For example, Istio can be used to enforce mutual TLS, ensuring that all communication between services is encrypted and authenticated. This significantly reduces the risk of man-in-the-middle attacks.
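With Istio, for example, mesh-wide mutual TLS can be enforced with a PeerAuthentication resource; a sketch (assuming Istio is installed in the istio-system root namespace) would be:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying in the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT            # only accept mutual-TLS traffic
```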

Encrypting Network Traffic

  • Encrypting network traffic between containers is crucial for protecting data in transit.

  • Kubernetes supports network encryption through various mechanisms, including using IPsec or WireGuard to encrypt traffic at the network layer.

  • For instance, you can use a CNI plugin like Calico to enable IPsec encryption for network traffic.

  • This ensures that all data transferred between nodes is encrypted, preventing unauthorized access and ensuring the integrity and confidentiality of your data.

Isolating Sensitive Workloads

  • Isolating sensitive workloads by using namespaces and network policies can help contain potential security breaches.

  • By segregating workloads into different namespaces and applying strict network policies, you can limit the blast radius of any security incident.

  • This isolation helps ensure that even if one part of your system is compromised, the impact is minimized and does not affect other critical parts of your application.

Summary

  1. Implement network policies to control pod traffic flow.

  2. Use service meshes for encrypted, secure service communication.

  3. Encrypt network traffic between containers using IPsec or WireGuard.

  4. Isolate sensitive workloads with namespaces and network policies.


Client Security - kubectl proxy & port forward

kubeconfig and curl

  • As we know, we can use kubectl utility to interact with the Kubernetes API server and when doing so, we don’t need to provide any authentication mechanism in the command line because we configured it in the kubeconfig file on our system.

  • Kubeconfig has stores the necessary details and credentials in it to access and interact with the Kubernetes cluster through the API server.

  • We can use kubectl utility from anywhere to dig into the cluster as long as we have the kubeconfig file with the necessary security credentials.

  • Another way to interact with the KubeAPI Server is through port 6443, using the curl command and passing the certificate files within the command.

kubectl proxy

  • Alternate option (apart from kubeconfig and curl) to dig into the cluster is to start a kubectl proxy client.

  • The kubectl proxy client launches a proxy service locally on port 8001 by default, and uses the credentials and certificates from your kubeconfig file to access the cluster.

  • That way, you don’t have to specify the certificate files in the curl command when you try to access the API server.

  • Now when you access the kubectl proxy service on localhost at port 8001, the proxy will use the credentials from the kubeconfig file stored on your local computer and forward your request to the KubeAPI server. And this will list all available APIs at root.

  • You can make any kind of API request to the API server with this.

  • You can proxy requests to any service that’s running within the cluster.

Example:

  • Say we have an NGINX pod exposed with an NGINX service.

  • Because of some security reasons, we have not exposed it to outside of the cluster through a NodePort or LoadBalancer service. It is a ClusterIP type service that is accessible within the cluster.

  • Now what if you want to reach that NGINX service from your laptop as an administrator?

  • One way is to expose it as a NodePort or a LoadBalancer service but it is not allowed.

  • Another option is to use this kubectl proxy.

  • You can then form a URL for the service through the kubectl proxy at port 8001.

    curl http://localhost:8001/api/v1/namespaces/default/services/nginx/proxy

Kubectl Port Forward

  • Another option to access the NGINX service is to configure a kubectl port forward.

  • With kubectl, you can forward a port from your laptop to a port on a service within the cluster.

  • The port-forward command takes a pod, deployment, replica set, or service as an argument. We then specify a port on our host which will be forwarded to a port on the service.

  • And to access the service running on the remote cluster, we can then just curl localhost on the forwarded port.
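A sketch of the port-forward workflow for the NGINX example (the service name and ports are illustrative):

```shell
# Forward local port 8080 to port 80 of the nginx service in the cluster
kubectl port-forward service/nginx 8080:80

# In another terminal, reach the ClusterIP service through the forwarded port
curl http://localhost:8080
```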


Client Security - kubeconfig

  • So far, we have seen how to generate a certificate for a user.

  • You've seen how a client uses the certificate file and key to query the Kubernetes REST API for a list of pods using cURL.

    curl https://my-kube-playground:6443/api/v1/pods --key admin.key --cert admin.crt --cacert ca.crt

  • In this case, the cluster is called my-kube-playground, so we send a cURL request to the address of the kube-apiserver, passing the client key and certificate files along with the CA certificate as options.

  • This is then validated by the API server to authenticate the user.

  • Now, how do you do that while using the kubectl command? You can specify the same information using the --server, --client-key, --client-certificate, and --certificate-authority options with the kubectl utility.

    kubectl get pods --server my-kube-playground:6443 --client-key admin.key --client-certificate admin.crt --certificate-authority ca.crt

  • Obviously, typing those in every time is a tedious task, so you move this information to a configuration file called kubeconfig, and then specify this file with the --kubeconfig option in your command.

  • By default, the kubectl tool looks for a file named config under a directory called .kube in the user's home directory. So if you put the kubeconfig file there, you don't have to specify the path to the file explicitly in the kubectl command.

Kubeconfig file format

  • The config file has three sections:

    • clusters

      • Clusters are the various Kubernetes clusters that you need access to.

      • Say you have multiple clusters for development environment or testing environment or prod or for different organizations or on different cloud providers, etc. All those go there.

    • users

      • Users are the user accounts with which you have access to these clusters.

      • For example, the admin user, a dev user, a prod user, et cetera.

      • These users may have different privileges on different clusters.

    • contexts

      • Finally, contexts tie clusters and users together.

      • Contexts define which user account will be used to access which cluster.

      • For example, you could create a context named admin@production that will use the admin account to access a production cluster. Or you may want to access the cluster you have set up on Google with the dev user's credentials to test deploying the application you built.

  • Remember, you're not creating any new users or configuring any kind of user access or authorization in the cluster with this process.

  • You're using existing users with their existing privileges and defining what user you're going to use to access what cluster.

  • That way you don't have to specify the user certificates and server address in each and every kubectl command you run.
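  A minimal kubeconfig illustrating the three sections might look like this (the cluster name, user name, and file paths are illustrative):

    ```yaml
    apiVersion: v1
    kind: Config
    current-context: my-kube-admin@my-kube-playground
    clusters:
      - name: my-kube-playground
        cluster:
          server: https://my-kube-playground:6443
          certificate-authority: /etc/kubernetes/pki/ca.crt
    users:
      - name: my-kube-admin
        user:
          client-certificate: /etc/kubernetes/pki/admin.crt
          client-key: /etc/kubernetes/pki/admin.key
    contexts:
      # Ties the my-kube-admin user to the my-kube-playground cluster
      - name: my-kube-admin@my-kube-playground
        context:
          cluster: my-kube-playground
          user: my-kube-admin
    ```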

KubeConfig file

  • Once the file is ready, you don’t have to create any object, like you usually do for other Kubernetes objects. The file is left as is and is read by the kubectl command, and the required values are used.

  • The current-context field in the kubeconfig file specifies the context to be used by the kubectl command.

  • kubectl config view: To view the kubeconfig file

  • If you do not specify which kubeconfig file to use, it ends up using the default file located in the folder .kube in the user’s home directory.

  • Alternatively, you can specify a kubeconfig file by passing the kubeconfig option in the command line

    kubectl config view --kubeconfig=my-custom-config

  • How to update your current context?

    kubectl config use-context prod-user@production

  • Namespaces: each cluster may be configured with multiple namespaces within it. The context section in the kubeconfig file can take an additional field called namespace where you can specify a particular namespace.
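  For example (the names here are illustrative), a context that drops you into a particular namespace by default:

    ```yaml
    contexts:
      - name: prod-user@production
        context:
          cluster: production
          user: prod-user
          namespace: finance   # kubectl commands run in this context default to 'finance'
    ```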

  • Certificates: It is always recommended to use the full path when specifying certificate files in the kubeconfig.

    Another way is to provide the certificate contents directly, base64-encoded, using the certificate-authority-data field instead of certificate-authority (and similarly client-certificate-data and client-key-data for user credentials).
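  As a sketch (the hostname and paths are illustrative, and the base64 data is truncated), the two styles look like this in a cluster entry:

    ```yaml
    clusters:
      - name: my-kube-playground
        cluster:
          server: https://my-kube-playground:6443
          # Option 1: reference the CA certificate file by its full path
          certificate-authority: /etc/kubernetes/pki/ca.crt
      - name: my-kube-playground-inline
        cluster:
          server: https://my-kube-playground:6443
          # Option 2: embed the base64-encoded certificate contents directly
          # (e.g., generated with: base64 ca.crt)
          certificate-authority-data: LS0tLS1CRUdJTi...
    ```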
Securing Storage

  • In Kubernetes, pods rely on PVs and PVCs – that's persistent volumes and persistent volume claims to access storage.

  • Storage security issues can have significant impacts on Kubernetes environments.

Using Encryption

  • Encrypting data both at rest and in transit is crucial for protecting sensitive information.

  • Kubernetes supports encryption for data stored in etcd and for Persistent Volumes. (We have already discussed etcd data encryption above).

  • Many storage providers support encryption at the disk level. For instance, AWS EBS, Azure Disk Storage, and Google Cloud Persistent Disks all offer encryption options.

  • You can enable encryption by setting the encrypted parameter to true in the StorageClass. This configuration ensures that all data stored on volumes provisioned by that class is encrypted, providing an additional layer of security.
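  A sketch of such a StorageClass, assuming the AWS EBS CSI driver (the class name is illustrative; the provisioner and parameters follow that driver's conventions):

    ```yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: encrypted-ebs
    provisioner: ebs.csi.aws.com
    parameters:
      type: gp3
      encrypted: "true"   # all volumes provisioned by this class are encrypted at rest
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    ```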

Implementing Access Controls (RBAC)

  • Access controls ensure that only authorized users and services can access storage resources.

  • We use Role-Based Access Control (RBAC) to manage permissions effectively.

  • For example, you can grant a user read-only access to PVCs in a namespace by allowing only the get and list verbs.
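  A sketch of such a Role and RoleBinding (the names, namespace, and user are illustrative):

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pvc-reader
      namespace: dev
    rules:
      - apiGroups: [""]                        # "" refers to the core API group
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list"]                 # read-only access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pvcs
      namespace: dev
    subjects:
      - kind: User
        name: dev-user
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pvc-reader
      apiGroup: rbac.authorization.k8s.io
    ```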

Storage in Kubernetes

  • Storage Classes in Kubernetes define the types of storage available in the cluster and their parameters.

  • By using Storage Classes, you can enforce policies such as encryption, IOPS limits, and backup policies.

Using Storage Classes and Policies

  • A Storage Class can specify parameters that enforce security measures, such as enabling encryption or setting performance limits.

  • For example this storage class has a set of parameters that help define certain standards.

  • Here, IOPS represents the number of read and write operations a storage device can perform in a second.

  • Higher IOPS means better performance for applications that require frequent data access.

  • Using these kinds of Storage Classes allows you to standardize storage configurations across your cluster, ensuring that security measures are consistently applied.
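  A sketch of a StorageClass that pins both encryption and performance parameters, again assuming the AWS EBS CSI driver (io2 volumes accept an explicit iops parameter; the values here are illustrative):

    ```yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: high-iops-encrypted
    provisioner: ebs.csi.aws.com
    parameters:
      type: io2
      iops: "4000"        # provisioned read/write operations per second
      encrypted: "true"   # enforce encryption at rest as a standard
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    ```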

Implementing Backup and Disaster Recovery

  • Next, let's look at taking regular backups and having a robust disaster recovery plan.

  • These are essential to protect data against loss or corruption.

  • You can use tools and solutions that integrate with Kubernetes to automate backups and ensure data can be restored quickly.

  • Velero, Portworx, OpenEBS, and Kasten are some of the backup solutions available.

Velero

  • Velero is an open-source tool that provides backups, restores, and disaster recovery for Kubernetes clusters.

  • It can back up Kubernetes resources and Persistent Volumes.
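  As a sketch, a Velero Schedule resource that takes a nightly backup of a namespace (the schedule, namespace, and retention period are illustrative):

    ```yaml
    apiVersion: velero.io/v1
    kind: Schedule
    metadata:
      name: nightly-backup
      namespace: velero
    spec:
      schedule: "0 2 * * *"   # cron expression: every day at 02:00
      template:
        includedNamespaces:
          - production
        ttl: 720h             # keep each backup for 30 days
    ```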

Monitoring and Auditing

  • Additionally, using tools like Prometheus and Grafana allows you to monitor storage metrics and visualize access patterns, helping identify unusual activities and potential security breaches.

Summary

  • To summarize, use Role-Based Access Control to secure who has what access to PVs and PVCs.

  • Enable backups using suitable backup tools.

  • And enable data encryption at rest using the encryption features of your storage provider.

  1. Encrypt data at rest and in transit for protection.

  2. Implement RBAC to control access to storage resources.

  3. Use Storage Classes to enforce security and performance policies.

  4. Regularly back up data and have a disaster recovery plan.

  5. Monitor and audit storage access for compliance and security.