GCP (GKE)
These instructions will walk you through spinning up a GKE cluster and a CloudSQL database in GCP using Terraform. Afterwards, we will use Helm and kubectl to deploy a Graph Node (plus other helper services) on top of them.
Prerequisites
A GCP project: https://cloud.google.com/resource-manager/docs/creating-managing-projects
Terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/gcp-get-started
GCloud CLI: https://cloud.google.com/sdk/docs/install
Kubectl: https://kubernetes.io/docs/tasks/tools/
Helm: https://helm.sh/docs/intro/install/
A Klaytn API Endpoint
This GCP deployment guide, complete with examples, can be found in the Klaytn Indexing repo published and deployed by Bware Labs.
Setup
Configure your GCloud CLI (and implicitly Terraform): https://cloud.google.com/sdk/docs/initializing
Confirm gcloud is configured correctly:
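For example, you can list the authenticated account and the currently configured project with standard gcloud subcommands:

```shell
# List the authenticated accounts (the active one is marked with *)
gcloud auth list

# Show the currently configured project
gcloud config list project
```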
Additionally, you should have credentials stored under `$HOME/.config/gcloud/application_default_credentials.json`.
Create the resources in GCP using Terraform:
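Assuming the Terraform configs live in `infrastructure/gcp` (the directory referenced later in this guide) and take the same `project` variable as the destroy command below, this looks roughly like:

```shell
# Run from the infrastructure/gcp directory of the indexing repo
terraform init
terraform apply --auto-approve -var="project=<YOUR_PROJECT_ID>"
```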
Verify that the new resources have been created:
From CLI:
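For example, using the gcloud CLI:

```shell
# List the GKE clusters and CloudSQL instances in your project
gcloud container clusters list --project=<PROJECT_ID>
gcloud sql instances list --project=<PROJECT_ID>
```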
From UI:
GKE Cluster: https://console.cloud.google.com/kubernetes/list/overview?referrer=search&project=<PROJECT_ID>
CloudSQL Database: https://console.cloud.google.com/sql/instances?referrer=search&project=<PROJECT_ID>
Configure kubectl:
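A sketch of the usual command; the cluster name and location placeholders are assumptions, so check the Terraform outputs or the GCP console for the real values:

```shell
# Fetch cluster credentials and write a kubectl context for the new GKE cluster
gcloud container clusters get-credentials <CLUSTER_NAME> --region=<REGION> --project=<PROJECT_ID>
```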
Confirm kubectl is configured:
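For example:

```shell
# Should list the GKE worker nodes in Ready state
kubectl get nodes
```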
In the `helm` directory within the indexing repo, open `helm/values.yaml` and fill in the missing values (search for the `# UPDATE THE VALUE` comments):
The database host should be the PRIVATE IP printed by the `terraform apply` command. Alternatively, you can find it by running `gcloud sql instances describe graph-indexer`.
The Klaytn network API endpoint is the one you prepared as a prerequisite.
Deploy the services to Kubernetes:
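The release and namespace names below follow the `helm upgrade graph-indexer . --namespace=graph-indexer` commands used later in this guide; run this from the `helm` directory:

```shell
# Install the chart into its own namespace (created if missing)
helm install graph-indexer . --namespace=graph-indexer --create-namespace
```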
Confirm services were deployed:
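For example:

```shell
# All pods should eventually reach the Running state
kubectl get pods --namespace=graph-indexer
kubectl get services --namespace=graph-indexer
```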
Get the external IP of the Ingress controller:
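For example (the exact resource to inspect depends on how the chart exposes the ingress):

```shell
# The ADDRESS column shows the external IP once it has been provisioned
kubectl get ingress --namespace=graph-indexer
```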
Navigate to `http://<EXTERNAL_IP>/subgraphs/graphql` in a browser to confirm it is working correctly.
NOTE: To destroy everything, simply run `terraform destroy --auto-approve -var="project=<YOUR_PROJECT_ID>"`. If you get an error, run the same command again.
NOTE: You can now return to the root documentation and continue the guide.
(OPTIONAL) Making everything production-ready
Terraform uses a local state file by default. To make the state persistent, create a GCS bucket manually following the instructions on this page: https://www.terraform.io/language/settings/backends/gcs
NOTE: The additional configuration should go in `provider.tf`. After updating the Terraform configs, run `terraform init` to start storing the state remotely.
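Per the Terraform docs linked above, the backend block looks roughly like this (the bucket name and prefix are placeholders for values you choose):

```terraform
terraform {
  backend "gcs" {
    bucket = "<YOUR_BUCKET_NAME>"
    prefix = "terraform/state"
  }
}
```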
Restrict network access
Modify the `gke_management_ips` variable in the `infrastructure/gcp/variables.tf` file to only allow access from your network.
Modify the `nginx.ingress.kubernetes.io/whitelist-source-range` annotation in the `helm/values.yaml` file to only allow access from your network.
NOTE: After updating the configs, you have to run both `terraform apply` and `helm upgrade graph-indexer . --namespace=graph-indexer` to apply them.
The database credentials are currently stored in plain text.
Remove the default values of `postgresql_admin_user` and `postgresql_admin_password` from `infrastructure/gcp/variables.tf`. Define new values in a `.tfvars` file in the same directory like this:
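For example (variable names taken from `variables.tf`; the values are placeholders):

```hcl
postgresql_admin_user     = "<YOUR_DB_USER>"
postgresql_admin_password = "<YOUR_DB_PASSWORD>"
```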
NOTE: Do NOT commit the `.tfvars` file to source control.
Apply the changes with Terraform:
`terraform apply --auto-approve`
Create a Kubernetes secret:
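The secret and key names below are assumptions; whatever you choose, the deployment manifests must reference the same names:

```shell
# Store the DB credentials in a secret instead of plain-text Helm values
kubectl create secret generic postgres-credentials \
  --namespace=graph-indexer \
  --from-literal=username=<YOUR_DB_USER> \
  --from-literal=password=<YOUR_DB_PASSWORD>
```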
Update the Helm charts to use the new secret:
Remove the `username` and `password` variables from `values.yaml`.
Edit `deployment-index-node` and `deployment-query-node` and replace the inline credential values with references to the new secret.
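A sketch of what the secret-referencing side might look like, assuming the secret is named `postgres-credentials` and the containers read `postgres_user`/`postgres_pass` environment variables (check your deployment manifests for the actual names):

```yaml
env:
  - name: postgres_user
    valueFrom:
      secretKeyRef:
        name: postgres-credentials  # assumed secret name from the previous step
        key: username
  - name: postgres_pass
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: password
```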
Apply the changes: `helm upgrade graph-indexer . --namespace=graph-indexer`
Configure a DNS entry and set up certificates for the kubernetes nginx-ingress: https://cert-manager.io/docs/tutorials/acme/nginx-ingress/
NOTE: We already have many components set up, like a Kubernetes cluster, the nginx-ingress controller, services, etc.
For monitoring:
A Prometheus node that scrapes metrics from the indexer nodes is available at `http://<EXTERNAL_IP>/prometheus/graph`. You could configure it as a data source in Grafana Cloud; follow the instructions in this guide.
An Alertmanager node is available at `http://<EXTERNAL_IP>/alertmanager`. You could configure it to send Prometheus alerts to PagerDuty. Create a PagerDuty API key and configure it in the `alertmanager.yaml` file. More information at: https://www.pagerduty.com/docs/guides/prometheus-integration-guide/
For creating alerts, use the `prometheusRules.yaml` snippet file.
NOTE: Consider storing the PagerDuty API key in a Kubernetes secret.
NOTE: You have to run `helm upgrade graph-indexer . --namespace=graph-indexer` to apply the changes.