IDP Lab
After everything you've read about IDPs, it's time to set up a pre-production scenario to run our experiments.
This scenario will be used throughout the Backstage study. We can reuse it for the Port study as well; I'll add remarks during the installation wherever the two diverge, so pay attention.
- Install docker as it will be necessary to run Kind later.
- Install kubectl as it's necessary to run Kind and apply manifests to the cluster.
- Install helm as we'll use ready-made charts for tool installation.
- Install Kind as it will be used to create a local cluster.
- Install the argocd CLI. It will be used to generate tokens and passwords for ArgoCD.
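The prerequisites above can be sanity-checked with a quick loop before moving on; this is a minimal sketch that only verifies the binaries are on the PATH, not their versions:

```shell
# Report any missing prerequisite binary; an empty MISSING means everything is installed.
MISSING=""
for tool in docker kubectl helm kind argocd; do
  command -v "$tool" >/dev/null 2>&1 || MISSING="$MISSING $tool"
done
echo "missing:$MISSING"
```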
For Kind in particular, I like to have 3 worker nodes available so I can later test tools that use replicas spread across separate nodes. To isolate the ingress controller, we'll place it on a dedicated node. In total we'll have 5 nodes:
- 1 control-plane
- 1 worker for ingress
- 3 workers for applications
Creating the Cluster
############## KIND LOCAL CLUSTER ##################
### Creating the config for Kind
cat <<EOF > lab-kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: study
networking:
  ipFamily: ipv4
  # disableDefaultCNI: true
  kubeProxyMode: "ipvs"
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /dev
        containerPath: /dev
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
  - role: worker
    # JoinConfiguration (not InitConfiguration) is what kubeadm uses on worker
    # nodes, so this is the patch that actually applies the ingress-ready label.
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
    extraMounts:
      - hostPath: /dev
        containerPath: /dev
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
  - role: worker
    extraMounts:
      - hostPath: /dev
        containerPath: /dev
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
  - role: worker
    extraMounts:
      - hostPath: /dev
        containerPath: /dev
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
  - role: worker
    extraMounts:
      - hostPath: /dev
        containerPath: /dev
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
EOF
# Creating the cluster with the configuration above
kind create cluster --config lab-kind-config.yaml
To have local ingress on the machine, we'll install ingress-nginx; localhost will act as our domain.
############## NGINX INGRESS ##################
# Downloading the manifests and changing the nodeSelector so the controller lands on study-worker, the node we dedicated exclusively to ingress
curl -s https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml | sed 's/kubernetes.io\/os: linux/kubernetes.io\/hostname: study-worker/g' > deploy-ingress-nginx.yaml
# Installing the ingress with the yaml created above.
kubectl apply -f deploy-ingress-nginx.yaml
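To see exactly what that `sed` rewrite does: every `nodeSelector` entry of `kubernetes.io/os: linux` in the upstream manifest becomes a hostname selector, pinning the controller to the ingress node. The first worker is named `study-worker` because the cluster is named `study`:

```shell
# The upstream manifest selects any Linux node; after the rewrite it selects only study-worker.
echo 'kubernetes.io/os: linux' \
  | sed 's/kubernetes.io\/os: linux/kubernetes.io\/hostname: study-worker/g'
# → kubernetes.io/hostname: study-worker
```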
Deploying ArgoCD to the Cluster
Now we'll install ArgoCD in the cluster. To avoid generating a token for each repository, we'll create one token in each of our GitHub and GitLab accounts so ArgoCD can read every repository they contain. Keep the tokens exported as environment variables to run the commands below.
export GITHUB_ACCOUNT_TOKEN=ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
export GITLAB_ACCOUNT_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
Do the same with your account usernames.
export GITHUB_ACCOUNT=davidpuziol
export GITLAB_ACCOUNT=davidpuziol
############## ARGOCD ##################
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
# This will be the password used in ArgoCD for the admin user. It needs to be generated through bcrypt.
export ARGO_PWD=devsecops
ARGO_PWD_BCRIPT=$(argocd account bcrypt --password $ARGO_PWD | tr -d ':\n' | sed 's/$2y/$2a/')
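What that pipeline does: `tr` strips any trailing colon/newline from the CLI output, and `sed` rewrites the `$2y` bcrypt prefix to `$2a`, the variant ArgoCD accepts. A minimal illustration with a made-up string (not a real bcrypt digest):

```shell
# Hypothetical hash just to show the prefix rewrite; only the "$2y" -> "$2a" change matters.
SAMPLE='$2y$10$abcdefghijklmnopqrstuv'
printf '%s\n' "$SAMPLE" | tr -d ':\n' | sed 's/$2y/$2a/'
# → $2a$10$abcdefghijklmnopqrstuv
```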
## During this deployment we'll also create an account called backstage that only has API access, and assign it some roles.
## I also added a user for Port, just to get more out of the lab.
## We'll define the repositories ArgoCD will have access to. Instead of creating a token repo by repo, we can create one account-wide token and use a credential template.
# An important detail: we're giving this account a lot of permission, even though the default ArgoCD plugin only needs read access. However, I'd like to eventually build plugins that can do more than just show an ArgoCD card in Backstage.
cat <<EOF > argo-values.yaml
global:
  domain: argo.localhost
server:
  ingress:
    enabled: true
    ingressClassName: nginx
    hostname: argo.localhost
    path: /
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
configs:
  secret:
    argocdServerAdminPassword: "$ARGO_PWD_BCRIPT"
  params:
    server.insecure: true
  credentialTemplates:
    github-$GITHUB_ACCOUNT:
      url: https://github.com/$GITHUB_ACCOUNT
      password: "$GITHUB_ACCOUNT_TOKEN"
      username: "$GITHUB_ACCOUNT"
    gitlab-$GITLAB_ACCOUNT:
      url: https://gitlab.com/$GITLAB_ACCOUNT
      password: "$GITLAB_ACCOUNT_TOKEN"
      username: "$GITLAB_ACCOUNT"
  cm:
    accounts.backstage: "apiKey"
    accounts.crossplane: "apiKey"
    accounts.port: "apiKey"
  rbac:
    policy.csv: |
      p, backstage, applications, get, */*, allow
      p, backstage, applications, create, */*, allow
      p, backstage, applications, update, */*, allow
      p, backstage, applications, delete, */*, allow
      p, backstage, clusters, get, *, allow
      p, backstage, clusters, create, *, allow
      p, backstage, clusters, update, *, allow
      p, backstage, clusters, delete, *, allow
      p, port, applications, get, */*, allow
      p, port, applications, create, */*, allow
      p, port, applications, update, */*, allow
      p, port, applications, delete, */*, allow
      p, port, clusters, get, *, allow
      p, port, clusters, create, *, allow
      p, port, clusters, update, *, allow
      p, port, clusters, delete, *, allow
      p, crossplane, applications, get, */*, allow
      p, crossplane, applications, create, */*, allow
      p, crossplane, applications, update, */*, allow
      p, crossplane, applications, delete, */*, allow
      p, crossplane, clusters, get, *, allow
      p, crossplane, clusters, create, *, allow
      p, crossplane, clusters, update, *, allow
      p, crossplane, clusters, delete, *, allow
EOF
helm install argocd argo/argo-cd --namespace argocd --create-namespace -f argo-values.yaml
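Note that the heredoc delimiter above is unquoted (`EOF`, not `'EOF'`), which is why the exported variables are expanded as the file is written. A tiny illustration with a hypothetical variable:

```shell
# Unquoted delimiter: $DEMO_ACCOUNT is expanded; with <<'EOF' it would be written literally.
DEMO_ACCOUNT=davidpuziol
cat <<EOF
github-$DEMO_ACCOUNT
EOF
# → github-davidpuziol
```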
If you're doing the lab for Backstage, we need the token for the backstage account; if you're only going to use Port, you can skip this step.
argocd login argo.localhost --username admin --password devsecops --insecure --grpc-web
# Saving the token we'll use later in the integration with Backstage.
argocd account generate-token --account backstage > argocd-backstage-token
The ArgoCD plugin in Backstage expects the token to be prefixed with **argocd.token=**.
export ARGOCD_AUTH_TOKEN_BACKSTAGE="argocd.token=$(cat argocd-backstage-token)"
This is how it looks in the terminal. Keep it in your shell as an environment variable.
ARGOCD_AUTH_TOKEN_BACKSTAGE=argocd.token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhcmdvY2QiLCJzdWIiOiJiYWNrc3RhZ2U6YXBpS2V5IiwibmJmIjoxNzMzMTY5MjIyLCJpYXQiOjE3MzMxNjkyMjIsImp0aSI6IjA3YWEyZmVkLTI1NmQtNGIyNi04NTUyLWMxYzgyNjU3ZTI1OCJ9.yWQ6LKg3T_4WJSrW3CYuBKATpkB_qHADVUppW3gu3Cw
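If you want to confirm which ArgoCD account a token belongs to, you can decode its base64-encoded payload; the `sub` claim holds the account. This works on the sample token above, whose payload segment happens to need no `=` padding; for other tokens you may need to pad the segment to a multiple of 4 characters before decoding:

```shell
# Decode the middle (payload) segment of the JWT; the sub claim identifies the account.
TOKEN='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJhcmdvY2QiLCJzdWIiOiJiYWNrc3RhZ2U6YXBpS2V5IiwibmJmIjoxNzMzMTY5MjIyLCJpYXQiOjE3MzMxNjkyMjIsImp0aSI6IjA3YWEyZmVkLTI1NmQtNGIyNi04NTUyLWMxYzgyNjU3ZTI1OCJ9.yWQ6LKg3T_4WJSrW3CYuBKATpkB_qHADVUppW3gu3Cw'
printf '%s' "$TOKEN" | cut -d '.' -f2 | base64 -d; echo
# → {"iss":"argocd","sub":"backstage:apiKey",...}
```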
We'll also take the opportunity to create a token for Crossplane. The reason for keeping a Crossplane account in ArgoCD is that, once we've covered both Backstage and Port, we'll use Crossplane with both tools.
argocd login argo.localhost --username admin --password devsecops --insecure --grpc-web
# Saving the token we'll use later in the integration with Crossplane.
argocd account generate-token --account crossplane > argocd-crossplane-token
# Add it to your terminal to avoid forgetting, as we won't use it now.
export ARGOCD_AUTH_TOKEN_CROSSPLANE="$(cat argocd-crossplane-token)"
If you're going to use the lab for Port study, then let's take the opportunity to create a token for Port.
argocd login argo.localhost --username admin --password devsecops --insecure --grpc-web
# Saving the token we'll use later in the integration with Port.
argocd account generate-token --account port > argocd-port-token
# Add it to your terminal to avoid forgetting, as we won't use it now.
export ARGOCD_AUTH_TOKEN_PORT="$(cat argocd-port-token)"
Creating a Service Account in the Cluster for Backstage
If you're using the Lab for Port study, do not execute this step
We'll eventually deploy Backstage to Kubernetes, so let's create its namespace now, even though we could do it later. Having it ready lets us test the Kubernetes integration before Backstage itself lands in the cluster, which will be the last step of the process.
For Backstage to access a Kubernetes cluster, it needs credentials and permissions. Backstage can target a cluster anywhere, not necessarily the one it will be deployed to. To represent this, we'll create a service account in the cluster for Backstage, treating this cluster as if it could be any other. Even when Backstage is deployed here in the last step, it will still use this service account.
######################## BACKSTAGE ##############################
# Creating a namespace for Backstage and a service account with appropriate permission.
kubectl create namespace backstage
kubectl create sa backstage-sa -n backstage
cat <<EOF > backstage-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-role
rules:
  - apiGroups:
      - '*'
    resources:
      - configmaps
      - services
      - deployments
      - replicasets
      - horizontalpodautoscalers
      - ingresses
      - statefulsets
      - limitranges
      - resourcequotas
      - daemonsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - '*'
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - delete
  - apiGroups:
      - batch
    resources:
      - jobs
      - cronjobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backstage-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: backstage-role
subjects:
  - kind: ServiceAccount
    name: backstage-sa
    namespace: backstage
EOF
# Applying the ClusterRole and ClusterRoleBinding defined above.
kubectl apply -f backstage-role.yaml
# We'll create a token for this account to be used by Backstage
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: backstage-sa-token
  namespace: backstage
  annotations:
    kubernetes.io/service-account.name: "backstage-sa"
type: kubernetes.io/service-account-token
EOF
kubectl get secret backstage-sa-token -n backstage -o jsonpath='{.data.token}' | base64 --decode > backstage_token
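To preview where this token will eventually go: the Backstage Kubernetes plugin consumes it through the `kubernetes` section of `app-config.yaml`. This is a hedged sketch; the snippet filename, cluster URL, and `K8S_BACKSTAGE_TOKEN` variable name are assumptions for this lab, so adjust them to your setup:

```shell
# Sketch of the app-config.yaml fragment that would consume the backstage-sa token.
# The quoted 'EOF' keeps \${K8S_BACKSTAGE_TOKEN} literal so Backstage resolves it at runtime.
cat <<'EOF' > backstage-kubernetes-snippet.yaml
kubernetes:
  serviceLocatorMethod:
    type: multiTenant
  clusterLocatorMethods:
    - type: config
      clusters:
        - name: study
          url: https://127.0.0.1:6443
          authProvider: serviceAccount
          skipTLSVerify: true
          serviceAccountToken: ${K8S_BACKSTAGE_TOKEN}
EOF
```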
Database for Backstage
If you're using the Lab for Port study, do not execute this step
When Backstage runs in production, it needs a PostgreSQL database. On a cloud, the best option is usually a managed database service rather than running PostgreSQL inside Kubernetes. I'm not a big fan of running it in-cluster, but since we're going to do it for the lab, let's at least do it efficiently: I like deploying PostgreSQL with the Bitnami chart, which offers plenty of configuration for scalability.
############## POSTGRES ##################
# PostgreSQL will be used for Backstage. We'll install it inside the backstage namespace and only use it for this purpose.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
cat <<EOF > postgres-values.yaml
global:
  postgresql:
    auth:
      postgresPassword: "devsecops"
      username: admin
      password: devsecops
      database: backstage
primary:
  persistence:
    enabled: true
    size: 5Gi
  service:
    type: ClusterIP
EOF
helm install backstage-postgres bitnami/postgresql -f postgres-values.yaml --namespace backstage
kubectl port-forward --namespace backstage svc/backstage-postgres-postgresql 5432:5432 &
To use this database during development, you need the port-forward above. That said, the entire development flow will use an in-memory database, so we'll lose data on every application restart.
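For reference, when Backstage moves off the in-memory database, its `backend.database` section would point at this instance. A sketch assuming the chart values above and the port-forward; host, port, user, and password come from postgres-values.yaml, and the snippet filename is made up for illustration:

```shell
# Sketch of the app-config.yaml database section matching the chart values above.
cat <<'EOF' > backstage-database-snippet.yaml
backend:
  database:
    client: pg
    connection:
      host: 127.0.0.1   # reachable via the port-forward above
      port: 5432
      user: admin
      password: devsecops
EOF
```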
Kubernetes Dashboard (Optional)
If you're using the Lab for Port, do not execute this step
To manage the cluster visually, we can install the Kubernetes Dashboard. The idea is to be able to see this dashboard inside Backstage itself using an iframe.
cat <<EOF > dashboard-values.yaml
app:
  ingress:
    enabled: true
    hosts:
      - dash.localhost
    ingressClassName: nginx
EOF
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard -f dashboard-values.yaml
Use the same backstage-sa token to log in.