GCP (GKE)

These instructions walk you through spinning up a GKE cluster and a Cloud SQL database in GCP using Terraform. Afterwards, we will use Helm and kubectl to deploy a Graph Node (plus other helper services) on top of them.

Prerequisites

  • A GCP project: https://cloud.google.com/resource-manager/docs/creating-managing-projects

  • Terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/gcp-get-started

  • GCloud CLI: https://cloud.google.com/sdk/docs/install

  • Kubectl: https://kubernetes.io/docs/tasks/tools/

  • Helm: https://helm.sh/docs/intro/install/

  • A Klaytn API Endpoint

This GCP deployment guide, complete with examples, can be found in the Klaytn Indexing repo published and deployed by Bware Labs.

Setup

  • Configure your GCloud CLI (and implicitly Terraform): https://cloud.google.com/sdk/docs/initializing

    $ gcloud auth application-default login
    $ gcloud config set project <PROJECT_ID>
  • Confirm gcloud is configured correctly:

    $ gcloud config list

    Additionally, you should have credentials stored under $HOME/.config/gcloud/application_default_credentials.json
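
    As a quick sanity check, you can confirm that file exists from the shell:

    $ ls -l "$HOME/.config/gcloud/application_default_credentials.json"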

  • Create the resources in GCP using Terraform (run the commands from the infrastructure/gcp directory of the indexing repo):

    $ terraform init
    $ terraform apply --auto-approve -var="project=<YOUR_PROJECT_ID>"
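
    The apply prints a few outputs, including the database PRIVATE IP used later in this guide. Assuming the repo defines Terraform outputs, you can re-print them at any time:

    $ terraform output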
  • Verify that the new resources have been created:

    • From CLI:

    $ gcloud container clusters list
    $ gcloud sql instances list
    • From UI:

      • GKE Cluster: https://console.cloud.google.com/kubernetes/list/overview?referrer=search&project=<PROJECT_ID>

      • Cloud SQL Database: https://console.cloud.google.com/sql/instances?referrer=search&project=<PROJECT_ID>

  • Configure kubectl:

    $ gcloud components install gke-gcloud-auth-plugin
    $ gcloud container clusters get-credentials graph-indexer --region us-central1 --project <PROJECT_ID>
  • Confirm kubectl is configured:

    $ kubectl get pods --all-namespaces
  • In the helm directory of the indexing repo, open values.yaml and fill in the following missing values (search for the # UPDATE THE VALUE comments); a hedged sketch of these entries follows this list:

    • The database host should be the PRIVATE IP printed by the terraform apply command. Alternatively, you can find it by running gcloud sql instances describe graph-indexer.

    • The Klaytn network API endpoint is the one you prepared as a prerequisite.
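
    As a minimal sketch only (the exact keys must match the repo's values.yaml; the shape below is an assumption based on the chart template snippets later in this guide):

    CustomValues:
      postgress:                          # key spelled this way in the chart templates
        indexer:
          host: "<DATABASE_PRIVATE_IP>"   # from terraform apply or gcloud sql instances describe
          username: "<ADMIN_USER>"
          password: "<ADMIN_PASSWORD>"
      klaytn:
        endpoint: "<YOUR_KLAYTN_API_ENDPOINT>"   # hypothetical key; check values.yaml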

  • Deploy the services to Kubernetes:

    $ kubectl create -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml
    $ helm install graph-indexer . --create-namespace --namespace=graph-indexer
  • Confirm services were deployed:

    $ helm list --all-namespaces
    $ kubectl get pods --all-namespaces
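
    Optionally, wait for every pod in the namespace to become ready before moving on (standard kubectl; the 300-second timeout is an arbitrary choice):

    $ kubectl wait --for=condition=Ready pods --all --namespace=graph-indexer --timeout=300s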
  • Get the external IP of the Ingress controller:

    $ kubectl get all -n ingress-controller
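
    If you prefer a one-liner that prints just the IP (assuming the controller is exposed as a LoadBalancer service in that namespace):

    $ kubectl get svc -n ingress-controller -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].ip}'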
  • Navigate to http://<EXTERNAL_IP>/subgraphs/graphql in a browser to confirm the deployment is working correctly.
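
    The same check can be scripted; any HTTP response line indicates the ingress is routing (the status itself may not be 200 for a HEAD request):

    $ curl -sI "http://<EXTERNAL_IP>/subgraphs/graphql" | head -n 1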

NOTE: To destroy everything, simply run terraform destroy --auto-approve -var="project=<YOUR_PROJECT_ID>". If you get an error, run the same command again.

NOTE: You can now return to the root documentation and continue the guide.

(OPTIONAL) Making everything production-ready

  • Terraform uses a local state file. To make it persistent, you would have to create a GCS bucket manually, following the instructions on this page: https://www.terraform.io/language/settings/backends/gcs

NOTE: The additional configuration should go in provider.tf. After updating the Terraform configs, you would have to run terraform init to start storing the state remotely.
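
    As a minimal sketch, assuming a bucket you have already created, the backend block in provider.tf would look roughly like this:

    terraform {
      backend "gcs" {
        bucket = "<YOUR_STATE_BUCKET>"
        prefix = "graph-indexer/terraform/state"   # any prefix works; this one is illustrative
      }
    }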

  • Restrict network access (hedged sketches of both changes follow this list):

    • Modify the gke_management_ips variable in the infrastructure/gcp/variables.tf file to only allow access from your network.

    • Modify the nginx.ingress.kubernetes.io/whitelist-source-range variable in the helm/values.yaml file to only allow access from your network.
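
    Both files already contain these settings; the snippets below only illustrate the intended end state, with an example CIDR you should replace with your own:

    # infrastructure/gcp/variables.tf (match the existing variable definition)
    variable "gke_management_ips" {
      default = "203.0.113.0/24"   # your network's CIDR
    }

    # helm/values.yaml (annotation key as named above)
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"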

NOTE: After updating the configs, you would have to run both terraform apply and helm upgrade graph-indexer . --namespace=graph-indexer to apply them.

  • The database credentials are currently stored in plain text.

    • Remove the default values of postgresql_admin_user and postgresql_admin_password from infrastructure/gcp/variables.tf.

    • Define new values in a .tfvars file in the same directory like this:

    postgresql_admin_user = "<your-desired-username>"
    postgresql_admin_password = "<your-desired-password>"

    NOTE: Do NOT commit .tfvars to source control.

    • Apply the changes with Terraform: terraform apply --auto-approve
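
      Note that Terraform only auto-loads variable files named terraform.tfvars or *.auto.tfvars; any other filename has to be passed explicitly:

      $ terraform apply --auto-approve -var-file="<YOUR_FILE>.tfvars"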

    • Create a Kubernetes secret:

      kubectl create secret generic postgresql.credentials \
      --namespace=graph-indexer \
      --from-literal=username="<your-desired-username>" \
      --from-literal=password="<your-desired-password>"
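
      You can confirm the secret exists (the stored values are base64-encoded):

      $ kubectl get secret postgresql.credentials --namespace=graph-indexer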
    • Update the Helm charts to use the new secret:

      • Remove the username and password variables from values.yaml

      • Edit deployment-index-node and deployment-query-node and replace this part:

          - name: postgres_user
            value: {{ index  .Values.CustomValues "postgress" "indexer" "username" }}
          - name: postgres_pass
            value: {{ index  .Values.CustomValues "postgress" "indexer" "password" }}

      with:

          - name: postgres_user
            valueFrom:
              secretKeyRef:
                  name: postgresql.credentials
                  key: username
          - name: postgres_pass
            valueFrom:
              secretKeyRef:
                  name: postgresql.credentials
                  key: password
    • Apply the changes: helm upgrade graph-indexer . --namespace=graph-indexer

  • Configure a DNS entry and set up certificates for the kubernetes nginx-ingress: https://cert-manager.io/docs/tutorials/acme/nginx-ingress/

NOTE: Many of the components used in that tutorial, such as the Kubernetes cluster, the nginx-ingress controller, and the services, are already set up.
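
    For orientation, the ClusterIssuer from the linked tutorial looks roughly like this (the issuer name and email are placeholders; follow the tutorial for the complete flow):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: <YOUR_EMAIL>
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
          - http01:
              ingress:
                class: nginx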

  • For monitoring:

    • A Prometheus node, which scrapes metrics from the indexer nodes, is available at http://<EXTERNAL_IP>/prometheus/graph. You could configure it as a data source in Grafana Cloud. Follow the instructions in this guide.

    • An alertmanager node is available at http://<EXTERNAL_IP>/alertmanager. You could configure it to send Prometheus alerts to Pagerduty.

      • Create a Pagerduty API key and configure it in the alertmanager.yaml file (a hedged sketch follows below). More information at: https://www.pagerduty.com/docs/guides/prometheus-integration-guide/

      • For creating alerts, use the prometheusRules.yaml file.

    NOTE: Consider storing the Pagerduty API key in a Kubernetes secret.
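
    A minimal sketch of the PagerDuty receiver in alertmanager.yaml, assuming an Events API v2 integration key (see the PagerDuty guide linked above):

    route:
      receiver: pagerduty
    receivers:
      - name: pagerduty
        pagerduty_configs:
          - routing_key: "<YOUR_PAGERDUTY_INTEGRATION_KEY>"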

NOTE: You have to run helm upgrade graph-indexer . --namespace=graph-indexer to apply the changes.
