These instructions walk you through spinning up an EKS cluster and an RDS database in AWS using Terraform. Afterwards, we will use Helm and kubectl to deploy a Graph Node (plus other helper services) on top of them.

Prerequisites

  • An AWS account

  • Terraform

  • AWS CLI

  • kubectl

  • Helm

  • A Klaytn API endpoint

This AWS deployment guide, complete with examples, can be found in the Klaytn Indexing repo published and deployed by Bware Labs.

Deployment steps

  • Configure your AWS CLI (and implicitly Terraform):

    $ aws configure
    AWS Access Key ID [None]: <Your-Key-ID>
    AWS Secret Access Key [None]: <Your-Secret-Access-Key>
    Default region name [None]: us-west-2
    Default output format [None]: json

    NOTE: Instructions for getting the credentials are in the same user guide.

    NOTE: At the end of this step you should have credentials configured in your $HOME/.aws/credentials file.
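For reference, `aws configure` writes the credentials in an INI-style file; a sketch of what $HOME/.aws/credentials should contain afterwards (values are placeholders):

```ini
[default]
aws_access_key_id = <Your-Key-ID>
aws_secret_access_key = <Your-Secret-Access-Key>
```

The region and output format go into the separate $HOME/.aws/config file.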

  • Create the resources in AWS using Terraform:

    $ terraform init
    $ terraform apply --auto-approve
  • Verify that the new resources have been created:

    • From CLI:

    $ aws eks list-clusters --region=us-west-2
    $ aws rds describe-db-instances --region=us-west-2
    • From UI:

      • EKS Cluster:

      • RDS Database:
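The RDS hostname you will need later is the Address field in the describe-db-instances output. As a sketch, the AWS CLI's --query flag (JMESPath) can pull it out directly; the extraction below is demonstrated on a minimal stand-in payload, since the real value depends on your account:

```shell
# With the real CLI, --query extracts the field directly:
#   aws rds describe-db-instances --region=us-west-2 \
#     --query 'DBInstances[0].Endpoint.Address' --output text
# Stand-in for the JSON the command returns (hostname is made up):
SAMPLE='{"DBInstances":[{"Endpoint":{"Address":"graph-db.abc123.us-west-2.rds.amazonaws.com","Port":5432}}]}'
# Quick-and-dirty extraction of the Address field without jq:
ADDRESS=$(printf '%s' "$SAMPLE" | sed -n 's/.*"Address":"\([^"]*\)".*/\1/p')
echo "$ADDRESS"
```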

  • Configure kubectl:

    aws eks update-kubeconfig --name graph-indexer --region=us-west-2
  • Confirm kubectl is configured:

    kubectl get pods --all-namespaces
  • In the helm directory within the indexing repo, open helm/values.yaml and fill in the missing values (search for the # UPDATE THE VALUE comments):

    • The database hostname is printed by the terraform apply command and by the aws rds describe-db-instances --region=us-west-2 command (the Address field)

    • You should already have a Klaytn network API endpoint; otherwise, contact the Bware Labs team at [email protected] to request a custom Klaytn node deployment.
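As a sketch of the fill-in step (the key names below are made up; match them to the actual # UPDATE THE VALUE comments in your copy of helm/values.yaml), the two values can be patched in with sed, demonstrated here on a stand-in file:

```shell
# Stand-in for helm/values.yaml; the real key names may differ.
VALUES=$(mktemp)
cat > "$VALUES" <<'EOF'
postgres_host: "" # UPDATE THE VALUE
klaytn_endpoint: "" # UPDATE THE VALUE
EOF
DB_HOST="graph-db.abc123.us-west-2.rds.amazonaws.com"  # the RDS Address field
KLAYTN_URL="https://klaytn.example/api"                # your Klaytn API endpoint
sed -i "s|^postgres_host:.*|postgres_host: \"$DB_HOST\"|" "$VALUES"
sed -i "s|^klaytn_endpoint:.*|klaytn_endpoint: \"$KLAYTN_URL\"|" "$VALUES"
cat "$VALUES"
```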

  • Deploy the services to Kubernetes:

    kubectl create -f
    helm install graph-indexer . --create-namespace --namespace=graph-indexer
  • Confirm services were deployed:

    helm list --all-namespaces
    kubectl get pods --all-namespaces
  • Get the external IP of the Ingress controller:

    kubectl get all -n ingress-controller
  • Navigate to http://<EXTERNAL_IP>/subgraphs/graphql in a browser to confirm it is working correctly

NOTE: To destroy everything, simply run terraform destroy --auto-approve

NOTE: You can now return to the root documentation and continue the guide.

(OPTIONAL) Making everything production-ready

  • Terraform uses a local state file. To make it persistent, you have to create an S3 bucket and a DynamoDB table manually, following the instructions on this page:

NOTE: The additional configuration should go in the Terraform configs. After updating them, you have to run terraform init again to start storing the state remotely.
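A sketch of what the remote-state configuration could look like (bucket and table names are placeholders for the resources you create manually):

```hcl
terraform {
  backend "s3" {
    bucket         = "<your-state-bucket>"
    key            = "graph-indexer/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "<your-lock-table>" # for state locking
    encrypt        = true
  }
}
```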

  • Restrict network access

    • Modify the eks_management_ips variable in the infrastructure/aws/ file to only allow access from your network.

    • Modify the variable in the helm/values.yaml file to only allow access from your network.

NOTE: After updating the configs you have to run both terraform apply and helm upgrade graph-indexer . --namespace=graph-indexer to apply them.

  • The database credentials are currently stored in plain text.

    • Remove the default values of postgresql_admin_user and postgresql_admin_password from infrastructure/aws/

    • Define new values in a .tfvars file in the same directory like this:

    postgresql_admin_user = "<your-desired-username>"
    postgresql_admin_password = "<your-desired-password>"

    NOTE: Do NOT commit .tfvars to source control.
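One way to set this up (the file name is an assumption; Terraform automatically loads any file matching *.auto.tfvars, so no extra -var-file flag is needed) while making sure git never picks the file up:

```shell
# Demo in a scratch directory; run the same commands in infrastructure/aws/ for real.
cd "$(mktemp -d)"
cat > secrets.auto.tfvars <<'EOF'
postgresql_admin_user     = "<your-desired-username>"
postgresql_admin_password = "<your-desired-password>"
EOF
# Ignore the file in git unless it is already ignored:
grep -qxF '*.auto.tfvars' .gitignore 2>/dev/null || echo '*.auto.tfvars' >> .gitignore
cat .gitignore
```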

    • Apply the changes with Terraform: terraform apply --auto-approve

    • Create a Kubernetes secret:

      kubectl create secret generic postgresql.credentials \
        --namespace=graph-indexer \
        --from-literal=username="<your-desired-username>" \
        --from-literal=password="<your-desired-password>"
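To double-check the secret afterwards, something like kubectl get secret postgresql.credentials -n graph-indexer -o jsonpath='{.data.username}' | base64 -d works, since Kubernetes returns secret values base64-encoded. The decoding step on its own looks like this:

```shell
# Secret values come back base64-encoded from the API; decode to verify them.
encoded=$(printf '%s' '<your-desired-username>' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```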
    • Update the Helm charts to use the new secret:

      • Remove the username and password variables from values.yaml

      • Edit deployment-index-node and deployment-query-node and replace this part:

          - name: postgres_user
            value: {{ index .Values.CustomValues "postgress" "indexer" "username" }}
          - name: postgres_pass
            value: {{ index .Values.CustomValues "postgress" "indexer" "password" }}

        with this:

          - name: postgres_user
            valueFrom:
              secretKeyRef:
                name: postgresql.credentials
                key: username
          - name: postgres_pass
            valueFrom:
              secretKeyRef:
                name: postgresql.credentials
                key: password
    • Apply the changes: helm upgrade graph-indexer . --namespace=graph-indexer

  • Configure a DNS entry and set up certificates for the kubernetes nginx-ingress:

NOTE: We already have an nginx-ingress deployed; just configure DOMAIN_NAME when you get to that step.

NOTE: Consider creating Terraform resources for the new IAM resources and Helm configurations for the external-dns pod.

  • For monitoring:

    • A Prometheus node that scrapes metrics from the indexer nodes is available at http://<EXTERNAL_IP>/prometheus/graph. You can configure it as a data source in Grafana Cloud by following the instructions in this guide.

    • An Alertmanager node is available at http://<EXTERNAL_IP>/alertmanager. You can configure it to send Prometheus alerts to PagerDuty.

      • Create a PagerDuty API key and configure it in the alertmanager.yaml file. More information at:

      • For creating alerts, use the prometheusRules.yaml snippet file.

    NOTE: Consider storing the PagerDuty API key in a Kubernetes secret.
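As a sketch of what the PagerDuty receiver in alertmanager.yaml could look like (the receiver and route names are placeholders; service_key is the Events API v1 integration key from your PagerDuty service):

```yaml
route:
  receiver: pagerduty
receivers:
  - name: pagerduty
    pagerduty_configs:
      - service_key: "<your-pagerduty-integration-key>"
```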

NOTE: You have to run helm upgrade graph-indexer . --namespace=graph-indexer to apply the changes.
