
Secrets in Kubernetes

Let's review secrets using content from the CKA.

We use secrets to store:

  • Passwords
  • API keys
  • Credentials
  • Certificates
  • Database connections

One of the biggest security mistakes is hardcoding secrets inside a repository; never let this happen.

It's still possible to commit an encrypted secret to the repository and have the application decrypt it at runtime. This approach is still used today and, from a security perspective, it's acceptable, but every time a secret needs to change, the application must be redeployed.
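A minimal sketch of that older pattern (the filename and the literal passphrase `changeit` are hypothetical; a real setup would source the passphrase from the environment, not hardcode it):

```shell
# Encrypt the secret once, commit only db_password.enc to the repo.
echo -n "db-pass-123" | openssl enc -aes-256-cbc -pbkdf2 -base64 \
  -pass pass:changeit > db_password.enc

# At application start-up, decrypt it back to plaintext.
openssl enc -aes-256-cbc -pbkdf2 -base64 -d \
  -pass pass:changeit < db_password.enc
# prints: db-pass-123
```

Rotating the secret means re-encrypting the file and redeploying, which is exactly the drawback mentioned above.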

Kubernetes can decouple secrets from the application and inject them as environment variables or as files mounted inside the pod as soon as the pod starts.

Let's set up the following scenario:

  • secret1 with the key user and value admin mounted as a volume.
  • secret2 with the key password and value 123456abcdef available as an environment variable.
  • pod with nginx and with these secrets.
root@cks-master:~# kubectl create secret generic secret1 --from-literal user=admin
secret/secret1 created
root@cks-master:~# kubectl create secret generic secret2 --from-literal password=123456abcdef
secret/secret2 created
root@cks-master:~# kubectl run nginx --image=nginx -o yaml --dry-run=client > nginx.yaml
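For reference, the two secrets created imperatively above are equivalent to these declarative manifests (a sketch; `stringData` accepts plain values, which the API server base64-encodes into `data` on write):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret1
stringData:
  user: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: secret2
stringData:
  password: 123456abcdef
```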

Modify nginx.yaml to add the secrets.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
    ### Add this block
    env:
    - name: password
      valueFrom:
        secretKeyRef:
          name: secret2
          key: password
    volumeMounts:
    - name: secret1
      mountPath: "/etc/secret1"
      readOnly: true
  volumes:
  - name: secret1
    secret:
      secretName: secret1
  ###
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Applying...

root@cks-master:~# vim nginx.yaml
root@cks-master:~# k apply -f nginx.yaml
pod/nginx created
root@cks-master:~# k exec pods/nginx -- env | grep password
password=123456abcdef

root@cks-master:~# k exec pods/nginx -- cat /etc/secret1/user
admin

# The pod went to cks-worker
root@cks-master:~# k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 2m8s 192.168.1.8 cks-worker <none> <none>

Now let's try to access the secrets: we'll go to the node where the pod is running and do some analysis.

root@cks-worker:~# crictl ps | grep nginx
b9ca02ba11f99 5ef79149e0ec8 3 minutes ago Running nginx 0 7146adfb818d4 nginx
124380aa60e89 a80c8fd6e5229 11 minutes ago Running controller 1 4d9c694b089a7 ingress-nginx-controller-7d4db76476-xxqvt

# Let's inspect this container
# I'll remove some of the output and leave only some points to make it easier to read
root@cks-worker:~# crictl inspect b9ca02ba11f99
{
  ...
  "info": {
    "sandboxID": "7146adfb818d45c305d21f2743dfa380a05ed7cfff32dc19d3c5b44b00148fc3",
    "pid": 6488, # PID of the process on the host that we will also use
    "removing": false,
    "snapshotKey": "b9ca02ba11f99b42e0a66ea5cea07b1175c625d723e60742808f2676d066c1d6",
    "snapshotter": "overlayfs",
    "runtimeType": "io.containerd.runc.v2",
    "runtimeOptions": {
      "systemd_cgroup": true
    },
    ...
    "envs": [
      {
        "key": "password",
        "value": "123456abcdef" # The secret here
      },
      {
        "key": "APP1_PORT",
        "value": "tcp://10.105.1.235:80"
      },
      {
        "key": "APP1_PORT_80_TCP",
        "value": "tcp://10.105.1.235:80"
      },
      ...
    ],
    ...
    "mounts": [
      { # Here the secret mount on the host, managed by kubelet
        "container_path": "/etc/secret1",
        "host_path": "/var/lib/kubelet/pods/842f7ae7-ac2b-4654-88b4-b26ed6ac315c/volumes/kubernetes.io~secret/secret1",
        "readonly": true
      },
      ...
}

# Reading the mounted secret on the host via the mount point used by kubelet
root@cks-worker:~# cat /var/lib/kubelet/pods/842f7ae7-ac2b-4654-88b4-b26ed6ac315c/volumes/kubernetes.io~secret/secret1/user
admin
root@cks-worker:~#

# Or through the pid we can go directly to the process filesystem
root@cks-worker:~# cat /proc/6488/root/etc/secret1/user
admin

If someone has access to the container runtime, or root on a node, that's enough to reach the secrets. This isn't considered a security issue in itself; the problem is granting root access to nodes, or container runtime permissions, to those who don't need them.

Another way to get access to secrets is to get access to ETCD.

# Checking where the etcd certificates are
root@cks-master:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379

# Using the certificates to test the connection
# We don't need to specify the endpoint because we're on the same host
root@cks-master:~# ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt endpoint health
127.0.0.1:2379 is healthy: successfully committed proposal: took = 21.755638ms

# As the secrets are in the default namespace, we pass /registry/secrets/namespace_name/secret_name
root@cks-master:~# ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/default/secret1
/registry/secrets/default/secret1
k8s


v1Secret�

secret1�default"*$0857e5b5-0553-4977-b61b-de3016a62c242�����a
kubectl-createUpdate�v����FieldsV1:-
+{"f:data":{".":{},"f:user":{}},"f:type":{}}B
useradmin�Opaque�" #<<<<<

root@cks-master:~# ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/default/secret2
/registry/secrets/default/secret2
k8s


v1Secret�

secret2�default"*$ae064b8d-eac6-4c76-8f29-21db3c9d125b2�����e
kubectl-createUpdate�v����FieldsV1:1
/{"f:data":{".":{},"f:password":{}},"f:type":{}}B
password
123456abcdef�Opaque�" #<<<<<
root@cks-master:~#

And we see that the secrets are stored without encryption. What we can do is encrypt them, so that someone who gets access to ETCD won't obtain this plaintext information. Note that ETCD can also run completely separately, anywhere outside the cluster nodes.

In this case, the apiserver can be responsible for encrypting and decrypting the data stored in ETCD. To enable this, pass the --encryption-provider-config argument in the apiserver manifest or service, pointing at a configuration file.

The apiserver will read an EncryptionConfiguration object that defines what should be encrypted and how.

Let's do an analysis of the object.

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: # Which resources do we want to encrypt?
      - secrets
    providers: # An array of providers, tried in order.
      - identity: {} # The default provider: nothing is encrypted, data is stored in plaintext
      - aesgcm: # Encryption algorithm
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - aescbc: # Encryption algorithm
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==

One important thing is that providers are applied in order: the first one in the list is used to encrypt resources when they are created or updated.

Resources saved before the change remain stored as they were.

Looking at the YAML above, new secrets will not be stored encrypted, because - identity: {} is first in the list. For reading, however, the apiserver can handle both plaintext secrets and secrets encrypted with the aesgcm or aescbc algorithms, trying to decrypt them with the defined keys.

To make this clearer, the example below does the opposite.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: # Which resources do we want to encrypt?
      - secrets
    providers:
      - aesgcm: # Algorithm used to encrypt resources as they are written, in this case only secrets.
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - identity: {}
```

If we don't define - identity: {} as the last provider, it won't be possible to read secrets that are still stored in plaintext. This is the classic situation when we start encrypting resources while unencrypted secrets already exist in ETCD.

To encrypt all existing secrets once the object above is in place, we can recreate them; they will be rewritten encrypted with aesgcm:

kubectl get secrets --all-namespaces -o json | kubectl replace -f -

If we want to go back to storing them unencrypted, we just move identity to the first position and rerun the same command; all secrets will be rewritten without encryption.

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - identity: {}
      - aescbc: # just another example
          keys:
            - name: key1
              secret: <BASE 64 ENCODED SECRET> # remember: the key here must be base64-encoded

Let's test and encrypt all the cluster secrets with aescbc and a password of our choice.

Let's create the manifest with a password already in base64 and with aescbc as the first on the list.

# The key must be 32 bytes, i.e. 32 characters
root@cks-master:/etc/kubernetes/etcd# echo -n "1234567890abcdefghijklmnopqrstuv" | base64
MTIzNDU2Nzg5MGFiY2RlZmdoaWprbG1ub3BxcnN0dXY=

# Can also be generated with the command below as shown in the documentation.
root@cks-master:/etc/kubernetes/manifests# head -c 32 /dev/urandom | base64
sxpf6fmJM6KLYEtx5FbeypRInerEMcOarM+bPx8ep6I=

root@cks-master:/etc/kubernetes/manifests# echo "sxpf6fmJM6KLYEtx5FbeypRInerEMcOarM+bPx8ep6I=" | base64 --decode
��_���3��`Kq�V�ʔH���1Ú�ϛ?��root@cks-master:/etc/kubernetes/manifests#
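A quick sanity check on the key material (the encoded string varies per run, but the decoded length must always be 32 bytes for AES-256):

```shell
# Generate a random key and verify its length after stripping the base64 wrapper.
key=$(head -c 32 /dev/urandom | base64)
echo "$key" | base64 --decode | wc -c
# prints: 32
```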

root@cks-master:~# cd /etc/kubernetes/

root@cks-master:/etc/kubernetes# mkdir etcd

root@cks-master:/etc/kubernetes# cd etcd/

root@cks-master:/etc/kubernetes/etcd# vim ec.yaml

root@cks-master:/etc/kubernetes/etcd# cat ec.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: MTIzNDU2Nzg5MGFiY2RlZmdoaWprbG1ub3BxcnN0dXY=
      - identity: {}

# Now let's add the parameter in kube-apiserver.yaml and mount the directory where we have the file.
root@cks-master:/etc/kubernetes/etcd# cd ../manifests/
# Changing what's necessary
root@cks-master:/etc/kubernetes/manifests# vim kube-apiserver.yaml
root@cks-master:/etc/kubernetes/manifests# cat kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.128.0.5:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/etcd/ec.yaml # Pointing to the manifest
    - --advertise-address=10.128.0.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.k8s.io/kube-apiserver:v1.31.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.128.0.5
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 10.128.0.5
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 10.128.0.5
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    # Mounting the volume that contains the ec.yaml manifest
    - mountPath: /etc/kubernetes/etcd
      name: etcd # << this volume will be mounted at the path above
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  # Volume mapped to be mounted
  - hostPath:
      path: /etc/kubernetes/etcd
      type: DirectoryOrCreate
    name: etcd
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}

# If kube-apiserver doesn't come up, it's likely because the key was not exactly 32 bytes

Let's now check whether we can still read the secrets.

# We still can...
root@cks-master:~# ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/default/secret1
/registry/secrets/default/secret1
k8s


v1Secret�

secret1�default"*$0857e5b5-0553-4977-b61b-de3016a62c242�����a
kubectl-createUpdate�v����FieldsV1:-
+{"f:data":{".":{},"f:user":{}},"f:type":{}}B
useradmin�Opaque�"


# Let's apply only to one secret and check
root@cks-master:~# kubectl get secrets secret1 -o json | kubectl replace -f -
secret/secret1 replaced

# And now we have our secret encrypted in etcd
root@cks-master:~# ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/default/secret1
/registry/secrets/default/secret1
k8s:enc:aescbc:v1:key1:ɔ;c8���%��R|����x��{���5L�]��(�̓�SE�B�A�bϷ������^�>�_a��*; 36@���w..�j����jP��K�����d/j�����n�v��:|�7�I�V�b�죈��Q
�H��|��[|z�]�����9�Ԋ�.�L�?��'�[VѾz��<��{FN]ӏq�2��%��A�� zr���+}��ȫN�l��'�5�|/u�l�2��)d�t����VX�Il�sm

# Reading the secret through the API, kube-apiserver decrypts it and returns the data base64-encoded, as before.
root@cks-master:~# k get secrets secret1 -o yaml
apiVersion: v1
data:
user: YWRtaW4=
kind: Secret
metadata:
creationTimestamp: "2024-08-27T13:33:24Z"
name: secret1
namespace: default
resourceVersion: "1031144"
uid: 0857e5b5-0553-4977-b61b-de3016a62c24
type: Opaque
root@cks-master:~# echo "YWRtaW4=" | base64 --decode
adminroot@cks-master:~#

Of course, any secret created from now on will already be stored encrypted in ETCD.

root@cks-master:~# k get secrets --all-namespaces -o yaml | kubectl replace -f -
...

root@cks-master:~# k get secrets -n kube-system bootstrap-token-xny0k4
NAME TYPE DATA AGE
bootstrap-token-xny0k4 bootstrap.kubernetes.io/token 5 11d

# Note that the value already starts with a prefix indicating the encryption used: aescbc
root@cks-master:~# ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/kube-system/bootstrap-token-xny0k4
/registry/secrets/kube-system/bootstrap-token-xny0k4
k8s:enc:aescbc:v1:key1:l"E��>!А��/J�����넞:A
?�P4���.
y�����
`s ��?|%5�i���m'aU����}�i=b�H�jF�Ŧ��r����f'7�c�u�� ®�YҜ��=����CH�--,��4�M)
j
��ι�&DP��O>^��K�U�����'ء���8�4�
��z[3�˗x�=�?t���l��h�CZb�͠��nU�R�v�|"�*�� d`�a��9`ӽ�5��09��F� �Gϸ�3.�{��ZD�jAK�<�'����@���|�,�]�R��R��&���SxB�z�5�9���b����U��"��6�h�zbޥ1�Ž4����TޢD�Q��0i��>�ע�<6)���[`���Z-����g�Okؑ�!��E}��l%���(E�g�E@�)
������j+�
�$��M�v�F��Zʤ՛Ӷ�r��]�Ư�wG�:�'H���(�-)��;�K����_�o�ا�D�-$��>/�Dui��wbE���v��֯�'3�U7)y���k
A�C����=e�=�+"�8�q(
root@cks-master:~#

Now that we have everything encrypted and recreated, we can remove identity if we want.

Although possible, we don't need to encrypt all resources. Encrypting and decrypting adds CPU and latency overhead to every apiserver read and write. Do this only for resources that really contain credential information.

We used encryption with a static key mounted directly on the apiserver host, which is not the best way to manage the key: if an attacker gains access to the control plane and can read the filesystem, it will still be possible to read the secrets, as we saw above.
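A more robust setup delegates key management to an external KMS plugin, so no static key lives on the node's filesystem. A sketch (the plugin name and socket path are hypothetical):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin # hypothetical plugin name
          endpoint: unix:///var/run/kms-plugin.sock # hypothetical socket path
          timeout: 3s
      - identity: {}
```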

In production, secret management usually relies on some third-party tool or an external KMS. In the exam, this probably won't be asked because it's out of scope.


Just a detail to remember: base64 is encoding, not encryption.
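Anyone can reverse it without any key:

```shell
# base64 is a reversible encoding; no secret is needed to decode it.
encoded=$(echo -n "supersecret" | base64)
echo "$encoded"                    # prints: c3VwZXJzZWNyZXQ=
echo "$encoded" | base64 --decode  # prints: supersecret
```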