Kubernetes has become the de facto standard for container orchestration, but its flexibility comes with security complexity. With the increasing sophistication of cloud-native attacks, securing Kubernetes clusters requires a comprehensive, defense-in-depth approach. This guide covers the essential security practices for protecting your Kubernetes workloads in 2026, from cluster configuration to runtime protection.
## The Kubernetes Security Landscape
Kubernetes security encompasses multiple layers: the cluster infrastructure, the control plane, workload configurations, network traffic, and runtime behavior. Each layer requires specific security controls, and weaknesses at any layer can compromise the entire system.
- Infrastructure Security: Node hardening, etcd encryption, API server protection
- Authentication & Authorization: RBAC, service accounts, admission controllers
- Pod Security: Security contexts, Pod Security Standards, resource limits
- Network Security: Network policies, service mesh, ingress security
- Secrets Management: External secret stores, encryption at rest
- Runtime Security: Container scanning, runtime monitoring, anomaly detection
- Supply Chain Security: Image signing, SBOM, vulnerability scanning
## Pod Security Standards Implementation
Pod Security Standards (PSS), enforced by the built-in Pod Security Admission controller, replaced PodSecurityPolicy, which was removed in Kubernetes 1.25. PSS defines three security profiles: Privileged (unrestricted), Baseline (minimally restrictive), and Restricted (hardened). Enforcing the Restricted profile provides the strongest security posture.
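Before enforcing the Restricted profile on a namespace, a server-side dry run reports which existing workloads would violate it without changing anything; this is a standard technique from the Kubernetes documentation (the namespace name here is illustrative):

```shell
# Server-side dry run: lists pods that would violate the restricted profile
kubectl label --dry-run=server --overwrite ns production \
  pod-security.kubernetes.io/enforce=restricted
```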
```yaml
# Namespace with Pod Security Standards enforced
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforce restricted profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Warn on violations
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
    # Audit violations
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
---
# Secure Pod specification meeting the Restricted profile
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: production
spec:
  # Don't use the default service account
  serviceAccountName: app-service-account
  automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myregistry.io/app:v1.2.3@sha256:abc123...
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"
          ephemeral-storage: "100Mi"
        requests:
          cpu: "100m"
          memory: "128Mi"
      # Mount writable emptyDir volumes for paths the app must write to
      volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: cache
          mountPath: /app/cache
  volumes:
    - name: tmp
      emptyDir: {}
    - name: cache
      emptyDir:
        sizeLimit: 50Mi
```

## RBAC Best Practices
Role-Based Access Control (RBAC) is fundamental to Kubernetes security. Follow the principle of least privilege: grant only the permissions necessary for each role, and audit access regularly.
```yaml
# Minimal Role for application deployment
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer
  namespace: production
rules:
  # Deployments - full control
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
    # Note: No delete permission
  # Pods - read only for debugging
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  # ConfigMaps - limited access
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
  # No access to secrets directly - use external secret store
---
# RoleBinding for CI/CD service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-deployer-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: cicd-deployer
    namespace: cicd
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
```

```shell
# Audit RBAC permissions with kubectl
kubectl auth can-i --list --as=system:serviceaccount:cicd:cicd-deployer -n production
```

```yaml
# Cluster-wide security policies via ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: security-auditor
rules:
  # Read-only access for security auditing
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets", "serviceaccounts"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "daemonsets", "statefulsets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies", "ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
    verbs: ["get", "list", "watch"]
  # Access to audit logs
  - nonResourceURLs: ["/logs", "/logs/*"]
    verbs: ["get"]
```

## Network Policies for Zero Trust
By default, all pods in Kubernetes can communicate with each other. Network Policies implement zero-trust networking by explicitly defining allowed traffic flows. Start with a deny-all policy and then whitelist required connections.
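Note that NetworkPolicy enforcement depends on the CNI plugin: Calico and Cilium enforce policies, while classic flannel does not. After applying policies, you can smoke-test enforcement from a throwaway pod (the service name and port are illustrative and should match your workloads):

```shell
# Expect this request to time out once a default-deny policy is in place
kubectl run np-test --rm -it --restart=Never --image=busybox -n production -- \
  wget -qO- --timeout=2 http://web-frontend:8080
```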
```yaml
# Default deny all ingress and egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {} # Applies to all pods
  policyTypes:
    - Ingress
    - Egress
---
# Allow specific ingress for web application
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-frontend
  policyTypes:
    - Ingress
  ingress:
    # Allow from ingress controller only
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
          podSelector:
            matchLabels:
              app: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
---
# Allow backend to database communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # Allow database connection
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432
    # Allow external API calls (specific IPs)
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24 # External API CIDR
      ports:
        - protocol: TCP
          port: 443
```

## Secrets Management with External Stores
Kubernetes Secrets are base64-encoded, not encrypted, and are stored in etcd in plaintext unless encryption at rest is configured. For production workloads, use an external secret management solution such as HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault together with the External Secrets Operator.
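The External Secrets Operator itself is typically installed via Helm; a minimal sketch, with the chart repository taken from the project's documentation:

```shell
# Install the External Secrets Operator into its own namespace
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets --create-namespace
```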
```yaml
# External Secrets Operator - SecretStore configuration
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: production
spec:
  provider:
    vault:
      server: "https://vault.example.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "production-app"
          serviceAccountRef:
            name: "vault-auth"
---
# AWS Secrets Manager SecretStore
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-west-2
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
---
# ExternalSecret - syncs from the external store into a Kubernetes Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: db-credentials
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: production/database
        property: username
    - secretKey: password
      remoteRef:
        key: production/database
        property: password
---
# Using the synced secret in a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: production
spec:
  template:
    spec:
      containers:
        - name: api
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
```

## Admission Controllers for Policy Enforcement
Admission controllers intercept requests to the Kubernetes API before persistence, enabling policy enforcement. Tools like Kyverno and OPA Gatekeeper provide declarative policy management.
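Kyverno is usually installed via Helm before any policies are applied; a minimal sketch, with the chart repository taken from Kyverno's documentation:

```shell
# Install Kyverno into its own namespace
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
```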
```yaml
# Kyverno Policy - Require resource limits
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: require-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required for all containers"
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
---
# Kyverno Policy - Require image from trusted registry
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must be from trusted registries"
        pattern:
          spec:
            containers:
              - image: "gcr.io/myproject/* | myregistry.io/*"
            # =() is a conditional anchor: checked only when initContainers exist
            =(initContainers):
              - image: "gcr.io/myproject/* | myregistry.io/*"
---
# Kyverno Policy - Add default security context
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-securitycontext
spec:
  rules:
    - name: add-security-context
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            securityContext:
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
            containers:
              - (name): "*"
                securityContext:
                  allowPrivilegeEscalation: false
                  capabilities:
                    drop:
                      - ALL
```

## Runtime Security Monitoring
Static security configurations aren't enough. Runtime security monitors actual container behavior and detects anomalies like unexpected process execution, file access, or network connections. Falco is the leading open-source runtime security tool.
```yaml
# Falco custom rules for production security
# Save as /etc/falco/rules.d/custom-rules.yaml
- rule: Detect Crypto Mining
  desc: Detect crypto mining processes
  condition: >
    spawned_process and
    (proc.name in (xmrig, minergate, minerd, cpuminer) or
     proc.cmdline contains "stratum+tcp" or
     (proc.cmdline contains "pool." and proc.cmdline contains "mining"))
  output: >
    Crypto mining detected (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository)
  priority: CRITICAL
  tags: [cryptomining, mitre_execution]

- rule: Reverse Shell Detection
  desc: Detect reverse shell connections
  condition: >
    spawned_process and
    ((proc.name in (bash, sh, zsh) and proc.args contains "-i") or
     proc.cmdline contains "/dev/tcp/" or
     proc.cmdline contains "nc -e" or
     (proc.cmdline contains "python -c" and proc.cmdline contains "socket"))
  output: >
    Potential reverse shell detected (user=%user.name command=%proc.cmdline
    container=%container.name)
  priority: CRITICAL
  tags: [reverseshell, mitre_execution]

- rule: Sensitive File Access
  desc: Detect access to sensitive files
  condition: >
    open_read and
    container and
    (fd.name startswith /etc/shadow or
     fd.name startswith /etc/passwd or
     fd.name startswith /root/.ssh or
     fd.name contains "id_rsa")
  output: >
    Sensitive file access detected (user=%user.name file=%fd.name
    container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [filesystem, mitre_credential_access]

- rule: Container Drift Detected
  desc: New executable written and executed in container
  condition: >
    spawned_process and
    container and
    proc.is_exe_upper_layer=true
  output: >
    Container drift - new executable (proc=%proc.name
    container=%container.name image=%container.image.repository)
  priority: ERROR
  tags: [drift, mitre_persistence]
```

```shell
# Install Falco with Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=ebpf \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/..." \
  --set falcosidekick.config.customfields="cluster:production"

# Verify Falco is running
kubectl logs -l app.kubernetes.io/name=falco -n falco --tail=50
```

## Image Security and Supply Chain
Container image security is critical. Implement image scanning, signing, and SBOM generation as part of your CI/CD pipeline.
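Signatures are only useful if they are verified at admission time. A sketch of cluster-side verification using Kyverno's verifyImages rule, assuming keyless Sigstore signing as produced by a GitHub Actions pipeline; the registry pattern and subject are placeholders to adapt:

```yaml
# Kyverno Policy - block unsigned images (keyless Sigstore verification)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/myorg/*"       # placeholder: your registry/namespace
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/myorg/*"   # placeholder: signing identity
                    issuer: "https://token.actions.githubusercontent.com"
```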
```yaml
# GitHub Actions workflow for secure image builds
name: Secure Container Build
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write # For signing
    steps:
      - uses: actions/checkout@v4

      # Build image
      - name: Build Docker image
        run: |
          docker build -t myapp:${{ github.sha }} .

      # Scan with Trivy
      - name: Scan for vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1' # Fail on critical/high

      # Generate SBOM
      - name: Generate SBOM
        uses: anchore/sbom-action@v0
        with:
          image: myapp:${{ github.sha }}
          format: spdx-json
          output-file: sbom.spdx.json

      # Authenticate to the registry before pushing
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Sign image with Cosign
      - name: Install Cosign
        uses: sigstore/cosign-installer@main
      - name: Sign container image
        env:
          COSIGN_EXPERIMENTAL: 1
        run: |
          # Tag and push
          docker tag myapp:${{ github.sha }} ghcr.io/${{ github.repository }}:${{ github.sha }}
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
          # Sign with keyless signing (Sigstore)
          cosign sign ghcr.io/${{ github.repository }}:${{ github.sha }}
          # Attach SBOM
          cosign attach sbom --sbom sbom.spdx.json ghcr.io/${{ github.repository }}:${{ github.sha }}
```

## Security Checklist
- [ ] Enable Pod Security Standards (Restricted profile)
- [ ] Implement RBAC with least privilege
- [ ] Deploy network policies (default deny)
- [ ] Use external secrets management
- [ ] Configure admission controllers
- [ ] Enable audit logging
- [ ] Implement runtime security monitoring
- [ ] Scan and sign container images
- [ ] Encrypt etcd at rest
- [ ] Keep Kubernetes and components updated
- [ ] Use service mesh for mTLS
- [ ] Implement resource quotas and limits
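For the "Encrypt etcd at rest" item: the API server encrypts selected resources before writing them to etcd when started with `--encryption-provider-config` pointing at a file like the following (a minimal sketch per the Kubernetes documentation; the key material is a placeholder you generate yourself):

```yaml
# EncryptionConfiguration, referenced via the API server flag
# --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts all new writes; generate the key with:
      #   head -c 32 /dev/urandom | base64
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      # identity allows reading pre-existing unencrypted data during migration
      - identity: {}
```

After enabling it, rewrite existing Secrets so they are stored encrypted: `kubectl get secrets -A -o json | kubectl replace -f -`.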
## Conclusion
Kubernetes security requires continuous attention across multiple layers. By implementing Pod Security Standards, RBAC, Network Policies, external secrets management, admission controllers, and runtime monitoring, you create a defense-in-depth posture that significantly reduces risk. Regular audits and staying current with security updates are equally important.
Remember that security is a journey, not a destination. Start with the foundational controls outlined in this guide and continuously improve based on your threat model and compliance requirements.
Need help securing your Kubernetes infrastructure? Contact Jishu Labs for expert DevSecOps services. Our team has secured production Kubernetes environments across industries including finance, healthcare, and technology.
## About David Kumar
David Kumar is the DevOps Lead at Jishu Labs with expertise in cloud-native security and Kubernetes operations. He has secured production clusters for financial services and healthcare organizations.