If you or your team are responsible for optimising and governing Kubernetes architecture, this article will provide you with a good baseline understanding of how Kubernetes is targeted by threat actors and how to prevent these attacks. It focuses on some of the most common security weaknesses which may be present throughout your Kubernetes environment and explains how to remediate them with immediate effect.
Why does Kubernetes get attacked?
Kubernetes is now reportedly used by over a third of back-end developers, so it follows that Kubernetes-based attacks will persist as that market grows. Whether opportunistic or highly motivated, the threat actors that carry out these attacks share an appetite for efficiency and impact. By targeting the most established and/or routinely used technologies, the cost of an attack and the resources needed are lowered: targets are plentiful, security issues are well documented, and suitable tradecraft is readily available. Regardless of resources, skills, or motive, it’s simply good business sense for threat actors to use what their victims use.
The sophistication of the Kubernetes architecture is also appealing to these individuals and groups because it provides a large attack surface with an abundance of opportunities to access and exfiltrate sensitive data, destroy key files, pivot to connected systems, or perform cryptomining. Kubernetes’ innate complexity also means that mistakes happen easily; misconfiguration is the main cause of vulnerabilities in the cloud. It also drives the development of sophisticated, harder-to-detect attack techniques.
In the past 12 months, we’ve seen:
- A vulnerability in AWS IAM Authenticator allowing Privilege Escalation on Kubernetes clusters (remediated before any known exploitation in the wild took place).
- Kinsing malware taking advantage of vulnerable Kubernetes images and PostgreSQL containers.
- A large-scale Dero cryptojacking operation using DaemonSets to deploy malicious pods.
Fig. 1. Diagram showing various Kubernetes attack vectors.
None of Kubernetes' built-in security mechanisms are active by default, making your clusters, nodes, and pods vulnerable by design. This puts the onus very much on the user to understand and implement the correct controls effectively.
Common Kubernetes Misconfigurations
Insecure pod security controls
Admission controllers and policy management tools can be configured to restrict the container capabilities assigned to pods within a cluster. Without such restrictions, any subject with permission to create pods could create them with unwarranted privileges or excessive capabilities. Such a scenario would provide a threat actor with an arbitrary container break-out (“container escape”) vector, enabling them to:
- Access the underlying node (host) and, consequently, sensitive information such as secrets, keys, or sensitive configuration data.
- Escalate their privileges by accessing the service accounts of every pod hosted on the node.
- Leverage the Kubelet configuration to impersonate the node itself, which commonly leads to a total compromise of the cluster.
Even where Privilege Escalation cannot be achieved directly, threat actors may still attempt Lateral Movement to the host network, privileged ports, processes, or filesystems, or even to pods hosted in other namespaces that could lead to further attacks.
Remediations
Kubernetes supports multiple security admission controllers, such as the newly released Pod Security Admission controller, and third-party policy management tools, which can validate and control the security aspects of pod specifications. They work by intercepting resource creation requests sent to the API server and validating each request against pre-defined security specifications (e.g., restricting access to host processes and volumes). If a request doesn’t meet your specifications, it is denied, blocking the malicious attempt.
Either solution can be implemented and enforced within a cluster to provide the necessary protection. Once deployed, the controller will need to be configured to be effective. Most Kubernetes policy management tools have built-in policies that can provide a baseline security standard, but to be safe, make sure you proactively restrict the following (a minimal enforcement sketch follows the list):
- Access to privileged pods.
- Use of privileged capabilities.
- Access to host namespaces and processes.
- Access to sensitive host filesystem mounts.
- Root access.
- Use of host networking and ports.
- Access to writeable filesystem.
- Image provenance (to ensure pods only run with images that are using an approved base image and are subject to a recent vulnerability scan).
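As a minimal sketch of enforcing these standards with the built-in Pod Security Admission controller (available in recent Kubernetes releases; the namespace name is hypothetical), namespace labels can enforce the “restricted” Pod Security Standard:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                   # hypothetical application namespace
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard,
    # and audit/warn at the same level for visibility.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```

Third-party policy engines can enforce the same restrictions with finer-grained exceptions where individual workloads have a legitimate need for elevated capabilities.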
In the case of third-party supported admission controllers and pod security policy tools, there are different options available. Some of the most popular are:
- Azure Policy for Azure Kubernetes Service (AKS): Azure Policy can be enabled to enforce security policies for Kubernetes resources and can be configured to restrict pod specifications. Azure Policy uses the Gatekeeper admission webhook for Open Policy Agent (OPA) to support individual policy definitions and policy groups (initiatives).
- OPA/Gatekeeper: OPA is an open-source, general-purpose policy engine for cloud-native deployments, providing security controls for multiple cloud services including Kubernetes. Gatekeeper is the Kubernetes-specific implementation that provides a native way for Kubernetes to enforce your desired policies.
- Kyverno: Kyverno is an open-source, Kubernetes-native policy engine that provides simple, flexible policy management, with policies managed as Kubernetes resources. Kyverno can validate, mutate, and generate Kubernetes resources and supports CI/CD integration to automatically test policies and validate resources (a minimal policy sketch follows this list).
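As an illustration of the policy-as-resource approach, the sketch below is a minimal Kyverno ClusterPolicy (assuming a recent Kyverno version; the policy name and message are illustrative) that rejects privileged containers. In practice you would extend it, or use Kyverno’s published policy library, to cover the other restrictions listed above.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers   # illustrative policy name
spec:
  validationFailureAction: Enforce        # reject non-compliant pods rather than just auditing
  rules:
    - name: check-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              # If securityContext.privileged is set, it must be "false".
              - =(securityContext):
                  =(privileged): "false"
```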
Regularly monitoring and auditing these controls will help your pods maintain protection as your environment and technologies change. Policies should be regularly reviewed and updated to include the latest best practices and vulnerability information. And where pods require privileged capabilities for legitimate reasons, keep policy exceptions to a minimum.
More information:
- Kubernetes Pod Security migration
- Kubernetes Pod Security Admission
- Kubernetes Pod Security Standards
- Microsoft Azure Policy: Kubernetes Templates
- Kyverno Pod Security
- OPA/Gatekeeper Pod Security
(Note regarding Kubernetes Pod Security Policies (PSPs): PSPs were the original built-in Kubernetes admission controller for enforcing pod security standards. They were deprecated in Kubernetes v1.21 and removed entirely in v1.25, so any existing PSP configurations should be migrated to another supported controller, such as Pod Security Admission.)
Lack of Network Policies
By default, Kubernetes implements a flat network where all ingress and egress network connections are permitted. This enables pods to communicate with all other pods and entities within the cluster, as well as the public internet if an external route is available. In the absence of any network policies, there are no restrictions on network communications within the cluster, allowing all network traffic in and out of pods. A threat actor could leverage this configuration to communicate with applications and services and use them to launch attacks against the cluster and related services. Unrestricted access to the internet could enable them to establish Command and Control (C2) channels (for example using Web Service techniques) to exfiltrate data, maintain persistence, and install malicious software.
Within public cloud environments, unrestricted network communications can even provide pods with access to resources within the cloud virtual network. This may allow Lateral Movement opportunities and access to sensitive data including the instance metadata services.
Remediations
To restrict network communications, the Kubernetes NetworkPolicies resource can be enforced via a supported Container Network Interface (CNI). NetworkPolicies are an application-centric implementation which enable developers to control traffic at the IP/port level on a per-namespace basis. If no NetworkPolicies are enforced within a namespace, all ingress and egress traffic is permitted.
NetworkPolicies should be created and enforced for all namespaces within the cluster. Perhaps the safest measure is to create a default deny-all rule for ingress and egress network communications within each namespace, as sketched below. You may then apply additional rulesets to accommodate legitimate required traffic based on each pod and application’s use case.
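A minimal sketch of such a default-deny policy, assuming a hypothetical namespace called my-app, would look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app          # hypothetical namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress                 # with no ingress/egress rules defined, all traffic is denied
```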
NetworkPolicies can be configured to restrict network traffic based on the following designations:
- Pods that are allowed
- Namespaces that are allowed
- IP blocks (exception: traffic to and from the node where a pod is running is always allowed, regardless of IP address)
Once a NetworkPolicy with a matching selector is applied, the selected pods will reject all traffic that isn’t explicitly permitted. Policy order is not significant: all enforced policies are evaluated, and traffic is allowed only if at least one rule permits it. To describe this in more detail: for a connection to be established between a source and a destination pod, both the egress policy on the source pod and the ingress policy on the destination pod must permit the connection. If either policy disallows the connection, it will not be established.
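For example, the following sketch (with hypothetical app labels and port) allows only frontend pods in the same namespace to reach backend pods on TCP 8080; all other ingress to the backend remains blocked by the default-deny policy above.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-app            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend             # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080           # hypothetical application port
```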
Insecure secrets management
Kubernetes workloads require secure distribution of secrets so that applications can communicate with other services, for example, an application retrieving credentials to authenticate to a backend database and execute queries. In some cases, this security consideration gets missed, often because the wrong system is used or the data is handled incorrectly. Secrets can be exposed in many ways, including:
- As environment variables within a container: anyone with access to the container can list them using the ‘env’ command.
- In PodSpecs: any user with permission to get Pods/ReplicaSets/Deployments can view the variable key/value pairs within the PodSpec.
- On nodes: users with access to the hosting node can inspect containers to extract environment variables.
- In logging solutions: for example, within Azure Monitor, environment variables for containers can be accessed via the ‘Insights’ feature.
- Within application code, source code, and so on.
Remediations
As a first action, secrets must be removed from all plain-text data sources such as PodSpecs and ConfigMaps. A dedicated secrets management solution (see below) should be implemented, and secret values should be rotated to reduce the chance of a compromised key being exploited.
Kubernetes secrets
Kubernetes Secrets are a built-in resource type which provide a simple method for storing sensitive information independently from the pods and applications that depend on it. This can reduce the risk of sensitive information being exposed through the application workflow. You can protect these further by using Role-Based Access Control (RBAC) configurations to provide greater access control. Neither encryption at rest nor key rotation is provided by default with Kubernetes Secrets (values are only base64-encoded in etcd), so it is essential you configure both to get sufficient protection from this control.
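As a minimal sketch (the secret name, namespace, and image are hypothetical), a Secret can be defined separately from the workload and mounted as read-only files rather than injected as environment variables:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials         # hypothetical secret name
  namespace: my-app
type: Opaque
stringData:
  username: app_user           # placeholder values only; never commit real credentials
  password: example-only
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: my-app
spec:
  containers:
    - name: app
      image: registry.example.com/backend:1.0    # hypothetical image
      volumeMounts:
        - name: db-creds
          mountPath: /etc/db-creds               # secrets appear as files, not env vars
          readOnly: true
  volumes:
    - name: db-creds
      secret:
        secretName: db-credentials
```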
A downside of Kubernetes Secrets is that it is difficult to restrict access to individual secrets via granular RBAC assignments, so anyone able to retrieve secrets can typically do so across the entire namespace or cluster, depending on the assigned permissions. Access to secrets is a high-risk permission which often leads to Lateral Movement and Privilege Escalation, as well as the exposure of highly sensitive information. Where granular controls are required, a dedicated third-party secrets management tool should be considered.
Third-party secret management solutions
There are multiple third-party secrets management solutions available that can provide additional security features. Common options include:
- Cloud native: Each platform provides a dedicated secrets management solution such as Azure Key Vault and AWS Secrets Manager, which store and encrypt secrets within the native cloud environment. These services can also be used to automatically rotate secret values, manage access to secrets with IAM policies, and enable centralised auditing.
- Open-source tools: several dedicated secrets management tools can be implemented with Kubernetes to provide secrets management and data protection, with advanced features such as dynamic secrets, namespaces, leases, and revocation for secrets data. Common products include:
- Hashicorp Vault
- CyberArk Conjur
- Lyft Confidant
- AquaSec
Tools such as Hashicorp Vault have the additional benefit of being implemented as a sidecar – a separate container that is co-located and runs alongside a primary application container within the same Kubernetes pod. The sidecar container is responsible for managing secrets and securely passing them to the primary application container as needed. This enhances the security of secrets by separating secret management from the primary application, reducing the risk of secrets being exposed or compromised.
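As a sketch of that sidecar pattern, assuming the Vault Agent Injector is installed in the cluster and a Vault Kubernetes-auth role and secret path have already been configured (the role, path, service account, and image below are hypothetical), pod annotations request the injection:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: my-app
  annotations:
    vault.hashicorp.com/agent-inject: "true"                 # ask the injector to add the Vault agent sidecar
    vault.hashicorp.com/role: "backend-role"                 # hypothetical Vault Kubernetes-auth role
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/backend/db"   # hypothetical secret path
spec:
  serviceAccountName: backend-sa                             # the service account Vault authenticates
  containers:
    - name: app
      image: registry.example.com/backend:1.0                # hypothetical image
```

The injected agent authenticates to Vault using the pod’s service account and writes the requested secret to a shared in-memory volume, so the application container never handles Vault credentials directly.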
Regardless of which secrets management solution you choose, the following security controls are required to provide adequate and reliable protection:
- Implement a need-to-know policy by mapping out exactly which container applications need access to each individual secret to minimise the risk of exposure.
- Ensure role-based access controls (RBAC) are robust, so that only restricted, pre-approved entities have authorisation to access secrets.
- Where possible, implement multifactor authentication (MFA) and just-in-time credential management.
- Log and regularly audit access to secrets, removing access to entities if they’re no longer required.
- Rotate and update secrets regularly to reduce the window of opportunity for compromised credentials to be effectively used.
Use of default namespace
Within Kubernetes, namespaces are a mechanism for implementing security boundaries and isolating resources within a single cluster. Most Kubernetes clusters include several default namespaces for deploying resources, including kube-system (Kubernetes control-plane components), kube-public (public resources, which should be avoided) and default. If no namespace is specified, Kubernetes uses the default namespace to schedule resources, which can introduce risk due to conflicting security requirements and a lack of segregation. If an application or container within the default namespace is compromised, the threat actor may be able to access all other resources within that namespace.
Remediations
To reduce the blast radius of compromised applications whilst benefitting from granular security controls, you can create custom namespaces to segment services into isolated categories according to their use case and security requirements. Using Kubernetes namespaces alongside secure RBAC will enable you to create security boundaries between applications and teams to support a multi-tenant architecture. This allows each user, team, or application to exist within its own namespace, isolated from every other user of the cluster, and operating as if it were the sole entity of the cluster.
To support Kubernetes security best practice, the default namespace should not be used to host container services and applications.
Custom namespaces should be implemented for each application service and/or team. Administrators should then implement fine-grained RBAC policies using roles and rolebindings, following the principle of least privilege (PoLP), to ensure service accounts and users have need-only access to resources within their own namespace.
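A minimal sketch of this pattern, assuming a hypothetical team namespace and identity-provider group, might look like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                       # hypothetical team namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]          # core and apps API groups
    resources: ["pods", "deployments", "services"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers          # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role and RoleBinding are namespaced, members of the group have no access to resources in other namespaces unless explicitly granted it.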
Further security controls should also be implemented to isolate access to singular namespaces, including:
- Pod Security Controls: using the Pod Security Admission Controller (or similar third-party plugins) you can apply pod security standards to individual namespaces. This should be used to restrict sensitive aspects of pod specifications based on the security requirements of the deployments in each namespace.
- Network policies: network policies should restrict network level communications to resources within the corresponding namespace only. Where access to other namespaces is required, least privilege network assignments should be implemented to reduce the opportunity for lateral movement between namespaces.
- Node restrictions: taints (on nodes) and tolerations (on pods) can be used to restrict which nodes containers may be placed on. This is increasingly important in self-hosted Kubernetes clusters to prevent applications from being scheduled on control-plane (master) nodes, which host sensitive control-plane resources. Namespace-level node placement can then be enforced using admission controllers such as PodNodeSelector and PodTolerationRestriction, or custom policies provided by third-party policy plugins.
- Resource quotas: resource quotas provide constraints for limiting resource consumption per namespace and can prevent teams within a namespace from exceeding resource limits such as CPU and memory. The LimitRanger Admission Controller or custom policies can also be implemented to enforce defaults for pods which do not submit resource requirements.
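As a sketch of the quota controls above (all figures are illustrative and should be sized to the workloads in each namespace):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"          # total CPU/memory the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                 # cap on the number of pods
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
    - type: Container
      default:                 # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:          # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
```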
More information:
- OWASP Cheat Sheet: Use Kubernetes namespaces to properly isolate your Kubernetes resources.
- Kubernetes Security Best Practice.
Use of Default Service Accounts
Kubernetes service accounts provide identities for processes running in a pod. Attacks that use default service accounts take advantage of how these are assigned within a cluster. This process of assignment is explained below:
- When a namespace is created, a default service account is automatically generated and assigned to that namespace.
- This default service account will be assigned to all pods within the assigned namespace unless an alternative service account is specified.
- When a pod is assigned a service account, the service account token (an identifiable JSON Web Token (JWT)) is automatically mounted into the pod’s container to enable authentication to the Kubernetes API and other services.
Initially, the default service account has minimal permissions. However, if additional permissions are granted to the default service account (for example, to satisfy a single workload), they apply to every pod using that service account, which could leave multiple pods with excessively high privileges. For a threat actor, this would offer a chance to compromise high-privilege account tokens, escape the cluster, move laterally, and perform actions on the host itself.
Remediations
It’s possible to stop the default service account from being used by pods within a cluster using the following steps (a sketch follows the list):
- Assess the sensitivity of each workload: identify which workloads contain sensitive data or perform sensitive operations and classify them accordingly.
- Create custom service accounts: for each workload, create a unique service account and use RBAC to allocate suitable permissions so it can access its required resources.
- Assign service accounts to pods: assign the appropriate service account to each pod and enforce with Admission Controllers where appropriate.
- Implement access controls: use RBAC to implement PoLP settings designed to limit access to each pod and their tokens.
- Monitor and review: regularly monitor and review the service accounts and their associated permissions to ensure that they are still appropriate for the workloads they are associated with.
- Remove unnecessary service account tokens: if an application doesn’t require access to the Kubernetes API, you can prevent the service account token from being automounted into its pods (automountServiceAccountToken: false) as an extra safeguard.
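A minimal sketch of the custom service account, pod assignment, and token-removal steps, using hypothetical names, could look like this. If the workload does need the Kubernetes API, remove the automount settings and bind a least-privilege Role to the service account instead.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-sa                        # hypothetical workload-specific service account
  namespace: team-a
automountServiceAccountToken: false       # don't mount API tokens unless the app needs them
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: team-a
spec:
  serviceAccountName: backend-sa          # replaces the namespace's default service account
  automountServiceAccountToken: false     # can also be set per pod; the pod setting takes precedence
  containers:
    - name: app
      image: registry.example.com/backend:1.0    # hypothetical image
```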
Are you vulnerable to Kubernetes attacks?
The most effective way to lower the risk of Kubernetes-based cyber attacks is to configure the control plane correctly. Some researchers expect that by 2025, most cloud security failures will originate from preventable misconfigurations. On one hand, the reliance on admins and users to apply security best practice in Kubernetes means more room for human error. On the other, with suitable planning and process, misconfiguration incurs minimal cost to fix, and by doing so yourself, you retain autonomy over the environments you work with every day. This is good news for teams who want to be proactive. So, where to start?
Kubernetes Security Assessments are a form of Penetration Test that identifies vulnerabilities in a Kubernetes environment by simulating different attacks against it. The goal of the assessment is to evaluate the environment’s resilience by identifying the weaknesses that could be exploited by a threat actor. During the assessment, consultants simulate various attack scenarios that would allow a threat actor to target misconfigurations to gain unauthorised access to the Kubernetes cluster, containers, or applications and determine the overall impact of such risks. This can include testing for vulnerabilities in the Kubernetes infrastructure, misconfigurations in Kubernetes resources and deployments, and vulnerabilities in container images or application code.
The aim with any security engagement shouldn’t be to remove all possibility of attack or to become “totally secure”. With fast moving technologies like Kubernetes, this is simply impossible. Instead, your focus is best placed on the reduction of risk in the highest likelihood and highest impact scenarios. This is where a well scoped Kubernetes Security Assessment can help.
To find out more, get in touch below.