
Keeping secrets secure is paramount in any Kubernetes deployment. Traditionally, managing secrets in GKE involved injecting them into pods as environment variables or volumes. While functional, this approach can be cumbersome and raises security concerns. Thankfully, Google Cloud offers a powerful solution: the Secret Manager add-on for GKE.
In this post I will explain how you can use the Secret Manager add-on for GKE to access secrets stored in Secret Manager as volumes mounted in GKE pods.
Note: as of May 2024 this feature is still in preview, so use it with caution.
What is Secret Manager?
Secret Manager is a managed service within Google Cloud Platform (GCP) designed for storing and retrieving sensitive information like API keys, passwords, and certificates. It offers several advantages:
- Centralized Management: Secrets are stored in a single location, simplifying access control and auditing.
- Improved Security: Secrets are encrypted at rest and in transit, minimizing the risk of exposure.
- Simplified Workload Identity: Workloads can securely access secrets using workload identity without requiring additional credentials.
Introducing the Secret Manager add-on
The Secret Manager add-on for Google Kubernetes Engine (GKE) simplifies how you access and manage secrets in your pods. Here’s how it benefits you:
- Easy Access: No need to write custom code; you can access secrets stored in Secret Manager directly from your GKE pods.
- Centralized Management: Store and control all your secrets in one place (Secret Manager) and grant access to specific secrets for your GKE pods.
- Enhanced Security: Leverage Secret Manager’s features like encryption, access control, rotation, and audit logs, alongside Kubernetes features like volume mounts for secrets.
- Broad Compatibility: Works with both Standard and Autopilot GKE clusters, and supports deployments on both AMD and ARM architectures.
The Add-on is derived from the open source Kubernetes Secrets Store CSI Driver and the Google Secret Manager provider. If you’re using the open source Secrets Store CSI Driver to access secrets, you can migrate to the Secret Manager add-on. For information, see Migrate from the existing Secrets Store CSI Driver.
Note that while the add-on is still in preview, it does not yet provide every feature of the open source driver.
Shouldn’t I use External Secrets Operator instead?
In a nutshell: if you have compliance requirements that forbid keeping Kubernetes-native Secrets in your cluster, go with the Secrets Store CSI Driver and the GKE Secret Manager add-on, which let you mount secrets from Secret Manager directly as volumes in your pods. If you don't have that constraint, you can probably give ESO a go.
There is an interesting comparison between External Secrets Operator (ESO) and the Secrets Store CSI Driver (SSCSID) that a GitHub user called Lucas posted in this comment, which I will include in this article:
- ESO synchronizes secrets from a cloud provider to secrets in k8s, so you can keep using k8s secrets if you are used to that.
- SSCSID mounts the external secret as a volume in a pod directly, and having a k8s secret in the cluster is optional.
- ESO focuses on having configuration on the CRD; what you create on the secret store in the provider is only the secret value itself.
- SSCSID requires the entire config/secret to be stored in the provider directly, as the application will need to consume it. This may be too difficult to use with larger configurations that have some secrets embedded.
- ESO secrets can be used with any resource in k8s natively, that's obvious, but 👇
- SSCSID needs a pod webhook to really have it work well. You cannot easily use SSCSID secrets to reference them in an ingress, or in a dockerconfig for pulling images, since their goal is just to mount to a pod. Even if you want to enable k8s secret sync, you need to first mount the secret to a pod to sync it.
- In ESO, since secrets are synced with k8s native secrets, if you have connectivity problems you can still access the secret that is present in your cluster; when you re-connect, it will just continue to re-sync.
- With SSCSID, if you lose connectivity, your CSI driver mounts stop working if you get restarts and so on. They are thinking about that; not sure what the progress is there: doc. You would need to check with them.
- ESO will be just one operator deployment in your cluster.
- SSCSID will have a privileged provider daemonset that will be responsible for making the mounts in your pods.
Are there any other options?
If you are looking for an alternative, safe way to fetch secrets from secret management services directly into a volume without using the Kubernetes Secrets Store CSI Driver, you should definitely check this out.
Also, in 2021 Abdelfettah Sghiouar, a Developer Advocate at Google, wrote an interesting article that covers a few other options for storing secrets on GKE. You can check it out here.
Getting Started with the Secret Manager Add-on for GKE
To get started, we need to enable the Secret Manager add-on in your cluster (you can also create a new cluster with this flag). Make sure you replace the CLUSTER_NAME and ZONE/REGION placeholders.
gcloud beta container clusters update <CLUSTER_NAME> \
--enable-secret-manager \
--location=<ZONE/REGION>
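If you are creating a cluster from scratch instead, the same flag can be passed at creation time. A minimal sketch, with placeholder name and location:

```shell
# Sketch: create a new cluster with the Secret Manager add-on enabled
# from the start. Replace the placeholders with your own values.
gcloud beta container clusters create <CLUSTER_NAME> \
    --enable-secret-manager \
    --location=<ZONE/REGION>
```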
Make sure you have Workload Identity Federation enabled in your cluster as well.
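If Workload Identity Federation is not enabled yet, it can be turned on for an existing cluster with something along these lines (a sketch; the placeholders are yours to fill in, and the workload pool name is always the project ID followed by `.svc.id.goog`):

```shell
# Enable Workload Identity Federation on an existing cluster.
gcloud container clusters update <CLUSTER_NAME> \
    --location=<ZONE/REGION> \
    --workload-pool=<PROJECT_ID>.svc.id.goog
```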
After enabling the Secret Manager add-on, you should see two new API resources available with the API version secrets-store.csi.x-k8s.io/v1:
$ kubectl api-resources | grep secrets-store
secretproviderclasses secrets-store.csi.x-k8s.io/v1 true SecretProviderClass
secretproviderclasspodstatuses secrets-store.csi.x-k8s.io/v1 true SecretProviderClassPodStatus
In order to communicate with Secret Manager we need to grant the required permissions to the Kubernetes service account we will use.
Let’s first create the k8s namespace and service account to use in our cluster.
kubectl create ns secret-manager-access
kubectl create sa secret-manager-access-sa -n secret-manager-access
Next, we will use Workload Identity Federation to grant the right permissions to our Kubernetes service account. Make sure you replace the project ID and project number variables:
PROJECT_NUMBER=<ADD_YOUR_PROJECT_NUMBER>
PROJECT_ID=<ADD_YOUR_PROJECT_ID>
NAMESPACE=secret-manager-access
gcloud projects add-iam-policy-binding projects/${PROJECT_ID} \
--role=roles/secretmanager.secretAccessor \
--member=principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/secret-manager-access-sa \
--condition=None
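To double-check that the binding landed, you can filter the project's IAM policy for the Secret Accessor role; a sketch:

```shell
# List all members holding roles/secretmanager.secretAccessor on the project.
gcloud projects get-iam-policy ${PROJECT_ID} \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/secretmanager.secretAccessor" \
    --format="value(bindings.members)"
```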
PS: Workload Identity Federation has recently changed the way it is set up. If you want to know more, please check this blog post, where a fellow Do'er explains it in depth.
Below we create our super secret in Google Secret Manager, which also creates its first version:
echo "This is my awesome secret, please don't share!" > super-secret.txt
gcloud secrets create super-secret --replication-policy="automatic" --data-file="super-secret.txt"
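Since the SecretProviderClass below pins version 1 of the secret, rotating it means adding a new version and updating the pinned reference (or referencing `latest`). Adding and inspecting versions looks like this:

```shell
# Add a new version of the secret; earlier versions remain accessible
# until they are disabled or destroyed.
echo "My rotated secret value" > super-secret-v2.txt
gcloud secrets versions add super-secret --data-file="super-secret-v2.txt"

# List all versions and their states.
gcloud secrets versions list super-secret
```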
Now we need to create the SecretProviderClass custom resource, which will be bound to the GCP secret we created previously. It must live in the same namespace as the pod that will mount it. Don't forget to include your project ID.
kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: super-secret
  namespace: secret-manager-access
spec:
  provider: gke
  parameters:
    secrets: |
      - resourceName: "projects/<PROJECT_ID>/secrets/super-secret/versions/1"
        path: "super-secret.txt"
EOF
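You can confirm the resource was created (remember that a SecretProviderClass is namespaced and must sit in the same namespace as the pod that references it):

```shell
# List SecretProviderClass resources across all namespaces.
kubectl get secretproviderclass -A
```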
Configure Pod Volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: secret-manager-access
spec:
  serviceAccountName: secret-manager-access-sa
  containers:
  - name: test-pod
    image: google/cloud-sdk:slim
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: "/var/secrets"
      name: super-secret
  volumes:
  - name: super-secret
    csi:
      driver: secrets-store-gke.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: super-secret
EOF
Let’s check if the secret is there now:
kubectl exec -it test-pod -n secret-manager-access -- cat /var/secrets/super-secret.txt
This is my awesome secret, please don't share!
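If the mount fails, the pod will typically stay stuck in ContainerCreating, and the pod's events usually surface the underlying Secret Manager error (for example a missing IAM permission). A couple of commands worth knowing (the managed driver pods typically run in kube-system):

```shell
# Inspect pod events for CSI mount errors.
kubectl describe pod test-pod -n secret-manager-access

# Look for the managed Secrets Store CSI driver pods.
kubectl get pods -n kube-system | grep secrets-store
```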
Advantages of the Secret Manager CSI Driver:
- Enhanced Security: Secrets are never stored within pods, reducing the attack surface.
- Simplified Workload Management: Workload identity eliminates the need for managing credentials within pods.
- Improved Configurability: Granular control over secret access allows for fine-grained permissions.
- Centralized Audit Logging: All secret access attempts are logged in Secret Manager for easier auditing.
Conclusion
By adopting the Secret Manager add-on for GKE, you benefit from a fully managed Secret Manager CSI Driver, significantly enhancing your GKE security posture and simplifying secret management within your applications. With centralized control, automated mounting, and robust access control, the CSI Driver empowers you to focus on building and deploying your applications with confidence.
References:
https://github.com/kubernetes-sigs/secrets-store-csi-driver
https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp
https://cloud.google.com/secret-manager/docs/secret-manager-managed-csi-component#migrate
https://github.com/external-secrets/external-secrets/issues/478#issuecomment-964413129
https://medium.com/google-cloud/consuming-google-secret-manager-secrets-in-gke-911523207a79
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
https://github.com/doitintl/kube-secrets-init
I hope you found this post informative! If you have any questions, please feel free to leave a comment below.
If you don’t know DoiT International yet you should definitely check us out. Here, our team is ready to learn more about you and your cloud engineering needs. Staffed exclusively with senior engineering talent, we specialise in providing advanced cloud consulting architectural design and debugging advice. Get in touch, and let’s chat!