# How to Use the Developer Experience (DX) Setup
## Repositories

### dx-terraform

https://gitlab.com/datopian/experiments/dx-terraform

A Terraform project. Use it for provisioning Datopian’s Kubernetes cluster on Google Cloud.
### dx-argocd

https://gitlab.com/datopian/experiments/dx-argocd

Helm Chart containing Argo CD. It monitors Git repositories containing other Helm Charts, deploying them and keeping them up to date in the Kubernetes cluster.

The folder `argo-cd` is a copy of the open source Helm Chart maintained by the community.
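For illustration, this is roughly how an application is registered with Argo CD once it is running in the cluster: an `Application` resource points it at a Git repository containing a Helm Chart, which it then deploys and keeps in sync. This is only a sketch, not something shipped in dx-argocd; the application name, chart path, and namespaces are assumptions.

```bash
# Register an Argo CD Application that tracks a Helm Chart in a Git repository
# (assumes Argo CD is already installed; see “Install Argo CD in the Cluster” below).
kubectl apply -n default -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: national-grid                 # illustrative name
spec:
  project: default
  source:
    repoURL: https://gitlab.com/datopian/experiments/dx-helm-national-grid.git
    targetRevision: HEAD
    path: .                           # assumes the chart sits at the repository root
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}                     # deploy and keep up to date automatically
EOF
```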
### dx-helm-template

https://gitlab.com/datopian/experiments/dx-helm-template

Template for projects meant to be deployed in the Kubernetes cluster.
### dx-helm-national-grid

https://gitlab.com/datopian/experiments/dx-helm-national-grid

Helm Chart for National Grid. Use it to deploy National Grid to a Kubernetes cluster.
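For quick local testing, the chart can also be installed directly with Helm, bypassing Argo CD. A minimal sketch, assuming the chart sits at the repository root and using `national-grid` as the release name (both assumptions):

```bash
git clone https://gitlab.com/datopian/experiments/dx-helm-national-grid.git
cd dx-helm-national-grid
helm install national-grid .   # install the chart from the working directory into the current kubectl context
```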
## Create the Cluster
Before running applications in the cluster, you need something: a cluster.
- Create a new Google Cloud project.
- Create a Service Account inside this new project (see the sketch after this list). It should have the roles listed in the module’s documentation:
  - Cloud SQL Admin
  - Compute Admin
  - Compute Network Admin
  - Compute Storage Admin
- Download the credentials file (in JSON) for this service account.
- Enable the following Google Cloud APIs:
- Create Secondary IP ranges for the selected network/region. These values are based on the first cluster created with this Terraform setup:

  ```
  gke-ckan-cloud-cluster-pods      10.60.0.0/14
  gke-ckan-cloud-cluster-services  10.0.16.0/20
  ```
- Create a Terraform Cloud account.
- Create a workspace.
- Authorize GitLab. Terraform Cloud will monitor a repository containing the Terraform setup.
- Select datopian/experiments/dx-terraform as the workspace repository.
- Customize Terraform variables following the definitions in `envs/dev/variables.tf`. The following ones are especially relevant when using a new Google Cloud project:

  ```
  project_id                      datopian-dx
  compute_engine_service_account  [email protected]
  region                          europe-west1
  master_zone                     europe-west1-b
  ```

- In Terraform Cloud, create an environment variable called `GOOGLE_CREDENTIALS` with the content of the Service Account JSON. Since it does not accept newline characters, remove them first (see the sketch after this list). In Vim, do it with `:%s;\n; ;g`.
- Set “Terraform Working Directory” to `envs/dev` in the settings.
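For reference, the Google Cloud steps above can also be scripted with the `gcloud` CLI. This is a rough sketch under assumptions, not part of the DX repositories: the service account name (`dx-terraform`), the subnet name (`default`), and the list of APIs to enable are guesses, while the project and region values come from the Terraform variables above. Confirm the roles and APIs against the module’s documentation.

```bash
PROJECT_ID=datopian-dx
SA_NAME=dx-terraform          # assumed service account name
REGION=europe-west1

# Service Account with the roles required by the Terraform module
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"
for ROLE in roles/cloudsql.admin roles/compute.admin \
            roles/compute.networkAdmin roles/compute.storageAdmin; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
    --role "$ROLE"
done

# Credentials file (in JSON) for the service account
gcloud iam service-accounts keys create credentials.json \
  --iam-account "$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com"

# APIs typically needed for a GKE + Cloud SQL setup (an assumption; check the module's docs)
gcloud services enable container.googleapis.com compute.googleapis.com \
  sqladmin.googleapis.com --project "$PROJECT_ID"

# Secondary IP ranges on the subnet used by the cluster
gcloud compute networks subnets update default --region "$REGION" --project "$PROJECT_ID" \
  --add-secondary-ranges gke-ckan-cloud-cluster-pods=10.60.0.0/14,gke-ckan-cloud-cluster-services=10.0.16.0/20
```

As an alternative to the Vim substitution for the `GOOGLE_CREDENTIALS` variable, a single-line version of the credentials file can be produced with:

```bash
tr '\n' ' ' < credentials.json
```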
## Install Argo CD in the Cluster

We wrap projects meant to be deployed to the Kubernetes cluster using Helm Charts. Argo CD is a tool that monitors Git repositories for changes to these packages and, when needed, creates new deployments with the latest versions.
- From your local machine, install the Helm CLI.
- From your local machine, configure kubectl to talk to the cluster in the cloud:

  ```bash
  gcloud container clusters get-credentials ckan-cloud-cluster \
    --region=europe-west1 \
    --project=datopian-dx
  ```
- From your local machine, use the Helm CLI to install Argo in the cluster:

  ```bash
  helm repo add argo https://argoproj.github.io/argo-helm
  helm install my-release argo/argo-cd
  ```
- Temporarily, while we don’t expose Argo CD to the internet, forward a local port to the Argo CD Web UI:

  ```bash
  kubectl port-forward service/my-release-argocd-server -n default 8080:443
  ```
- The default username is `admin`, while the password is the name of the Pod running Argo CD. Get its value with the following command:

  ```bash
  kubectl get pods -n default -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
  ```

Now, the Argo CD Web UI should be accessible at https://localhost:8080/.
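The same credentials also work with the `argocd` command-line client, if installed, as an alternative to the Web UI. A minimal sketch, assuming the port-forward above is still running:

```bash
# Capture the initial admin password (the name of the Argo CD server Pod, as above)
ARGOCD_PASSWORD=$(kubectl get pods -n default -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2)

# Log in through the forwarded port; --insecure accepts the self-signed certificate
argocd login localhost:8080 --username admin --password "$ARGOCD_PASSWORD" --insecure

# List the applications Argo CD is currently tracking
argocd app list
```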