
Starknet Mainnet

AWS:

  • c5.xlarge or any equivalent instance type

Bare Metal:

  • 8GB RAM
  • 4 vCPUs
  • At least 500 GB of SSD storage - make sure it is extendable

Assumptions

We're going to assume you are already logged into your Virtual Machine as a privileged user or as the root user.

Setup

Installing a pathfinder node by building from source is the easiest way to spin up your Starknet Mainnet or Testnet node. It takes care of installing all the required system dependencies and puts all the configuration files in place.

Building pathfinder requires Python 3.8 and Rust 1.64 or higher. You can install them using your favourite package manager. For details, refer to the pathfinder GitHub repository's install-from-source guide.

info

From pathfinder version v0.5.0 onwards, the Python version requirement changes: you must have at least Python 3.9 installed on your machine, otherwise the build will not work.
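If you need a starting point for installing these prerequisites, the sketch below targets a Debian/Ubuntu machine; the package list, the rustup installer and the clone step are assumptions and may differ on your system.

# assumes a Debian/Ubuntu system with sudo available; package names may differ on other distributions
sudo apt-get update
sudo apt-get install -y git curl build-essential pkg-config libssl-dev python3 python3-dev python3-venv

# install Rust via rustup and load it into the current shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source "$HOME/.cargo/env"

# verify the versions against the requirements above
python3 --version   # >= 3.8 (>= 3.9 from pathfinder v0.5.0 onwards)
rustc --version     # >= 1.64

# clone the pathfinder repository before building
git clone https://github.com/eqlabs/pathfinder.git
cd pathfinder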

Once the dependencies are installed, run:

cargo build --release --bin pathfinder

Below you will find the pathfinder flags that you should use to make sure your node will pass onboarding and integrity checks:

  • --ethereum.url - This should point to the HTTP RPC endpoint of your Ethereum entry point
  • --http-rpc - HTTP-RPC listening address (default: 127.0.0.1:9545)
  • --data-directory - Directory where the node should store its data
  • --monitor-address - The address at which pathfinder will serve monitoring related information
  • --poll-pending - Enable polling of the pending block
cargo run --release --bin pathfinder -- --ethereum.url <ETH_RPC_URL> --http-rpc <IP>:<PORT> --data-directory /path/to/datastore --monitor-address <IP>:<PORT> --poll-pending true
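If you prefer the node to start on boot and restart on failure, one option is to wrap the command above in a systemd service. The sketch below is only an illustration: the binary location, the dedicated pathfinder user, the data directory and the placeholder endpoints are assumptions that you should adapt to your own setup.

# assumptions: the repository was built in ~/pathfinder and the node runs as a dedicated "pathfinder" user
sudo useradd --no-create-home --shell /usr/sbin/nologin pathfinder
sudo mkdir -p /var/lib/pathfinder
sudo chown pathfinder:pathfinder /var/lib/pathfinder
sudo cp ~/pathfinder/target/release/pathfinder /usr/local/bin/pathfinder

sudo tee /etc/systemd/system/pathfinder.service > /dev/null <<'EOF'
[Unit]
Description=Pathfinder Starknet node
Wants=network-online.target
After=network-online.target

[Service]
User=pathfinder
Group=pathfinder
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/pathfinder \
  --ethereum.url <ETH_RPC_URL> \
  --http-rpc <IP>:<PORT> \
  --data-directory /var/lib/pathfinder \
  --monitor-address <IP>:<PORT> \
  --poll-pending true

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now pathfinder.service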

You now have a running Starknet Mainnet RPC node. All you need to do now is wait for it to sync. You can check whether the node is synced by running the API call below from inside your environment. You will need the curl and jq packages installed for this, so make sure to install them beforehand.

curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "starknet_syncing","params": []}' localhost:9545

If the result shows current_block_num equal to highest_block_num, your node is fully synced.
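Since jq is mentioned above, here is one way to pull out just those two fields. This is a sketch only: depending on the pathfinder version, starknet_syncing may return false instead of an object once the node is no longer syncing, which the filter below accounts for.

# compact view of the sync status; assumes curl and jq are installed
curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "starknet_syncing", "params": []}' \
  localhost:9545 \
  | jq 'if .result == false then "not syncing" else {current: .result.current_block_num, highest: .result.highest_block_num} end'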

Another way to check which block the node is at would be running:

curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "starknet_blockNumber","params": []}' localhost:9545

The result should be a decimal number (e.g. 22148). If you want, you can compare it to the latest block listed on the Starknet Mainnet explorer: https://voyager.online/

The Starknet node exports only RPC on port 9545.

info

Please also check the Official Documentation and the GitHub repository linked above to make sure you keep your node up to date.

Monitoring Guidelines

In order to maintain a healthy node that passes the Integrity Protocol's checks, you should have a monitoring system in place. Blockchain nodes usually offer metrics regarding the node's behaviour and health - a popular way of exposing them is Prometheus-style metrics. The most popular monitoring stack, which is also open source, consists of:

  • Prometheus - scrapes and stores metrics as time series data (blockchain nodes can send their metrics to it);
  • Grafana - allows querying, visualization and alerting based on metrics (can use Prometheus as a data source);
  • Alertmanager - handles alerting (can use Prometheus metrics as data for creating alerts);
  • Node Exporter - exposes hardware and kernel-related metrics (can send the metrics to Prometheus).

We will assume that Prometheus/Grafana/Alertmanager are already installed (we will provide a detailed guide of how to set up monitoring and alerting with the Prometheus + Grafana stack at a later time; for now, if you do not have the stack already installed, please follow this official basic guide here).

We recommend installing the Node Exporter utility since it offers valuable information regarding CPU, RAM & storage. This way, you will be able to monitor possible hardware bottlenecks, or to check whether your node is underutilized - you can use these insights to make decisions about scaling the allocated hardware resources up or down.

Below, you can find a script that installs Node Exporter as a system service.

#!/bin/bash

# set the Node Exporter version to install (check the releases page for the latest)
VERSION=1.3.1

# download and untar the binary
wget https://github.com/prometheus/node_exporter/releases/download/v${VERSION}/node_exporter-${VERSION}.linux-amd64.tar.gz
tar xvf node_exporter-*.tar.gz
sudo cp ./node_exporter-${VERSION}.linux-amd64/node_exporter /usr/local/bin/

# create system user
sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter

# change ownership of node exporter binary
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter

# remove temporary files
rm -rf ./node_exporter*

# create systemd service file
sudo tee /etc/systemd/system/node_exporter.service > /dev/null <<EOF
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
EOF

# enable the node exporter service and start it
sudo systemctl daemon-reload
sudo systemctl enable node_exporter.service
sudo systemctl start node_exporter.service
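After running the script, you can quickly verify that Node Exporter is active and serving metrics on its default port (a sketch, assuming the defaults were not changed):

# check the service and fetch the first few exposed metrics
sudo systemctl status node_exporter.service --no-pager
curl -s localhost:9100/metrics | head -n 5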

As a reminder, Node Exporter uses port 9100 by default, so be sure to expose this port to the machine that hosts the Prometheus server. The same should be done for the metrics port(s) of the blockchain node (in this case, port 6060 - or whatever address you passed to --monitor-address - for monitoring the Starknet node).
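How you expose these ports depends on your environment (cloud security groups, firewall rules, etc.). As an illustration only, with ufw you could allow them exclusively from the Prometheus server; ufw being installed and <PROMETHEUS_IP> are assumptions here.

# allow the Node Exporter and pathfinder monitoring ports only from the Prometheus server
sudo ufw allow from <PROMETHEUS_IP> to any port 9100 proto tcp
sudo ufw allow from <PROMETHEUS_IP> to any port 6060 proto tcp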

Having installed Node Exporter and exposed the node's metrics, add these endpoints as targets under the scrape_configs section of your Prometheus configuration file (i.e. /etc/prometheus/prometheus.yml), then apply the new configuration (either by restarting Prometheus or by reloading its config - please check the official documentation; a short sketch follows the target list below). The scrape config should look similar to this:

  - job_name: 'starknet-node-exporter'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
          - '<NODE0_IP>:9100'
          - '<NODE1_IP>:9100' # you can add any number of nodes as targets

For the moment, Starknet does not expose peers or chain_head_block related metrics. In the configuration file above, please replace:

  • <NODE0_IP> - node 0's IP
  • <NODE1_IP> - node 1's IP (you can add any number of nodes as targets)
  • ...
  • <NODEN_IP> - node N's IP (you can add any number of nodes as targets)
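Once the targets are in place, the sketch below validates the configuration and applies it; promtool ships with Prometheus, but the file path, service name and lifecycle endpoint are assumptions about your installation.

# validate the configuration before applying it
promtool check config /etc/prometheus/prometheus.yml

# then either restart the service...
sudo systemctl restart prometheus
# ...or, if Prometheus was started with --web.enable-lifecycle, reload it in place
curl -X POST http://localhost:9090/-/reload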

That being said, the most important metrics that should be checked are:

  • node_cpu_seconds_total - CPU metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
    • 100 - (avg by (instance) (rate(node_cpu_seconds_total{job="starknet-node-exporter",mode="idle"}[5m])) * 100), which means the average percentage of CPU usage over the last 5 minutes;
  • node_memory_MemTotal_bytes/node_memory_MemAvailable_bytes - RAM metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
    • (node_memory_MemTotal_bytes{job="starknet-node-exporter"} - node_memory_MemAvailable_bytes{job="starknet-node-exporter"}) / 1073741824, which means the amount of RAM (in GB) used, excluding cache/buffers;
  • node_network_receive_bytes_total - network traffic metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
    • rate(node_network_receive_bytes_total{job="starknet-node-exporter"}[1m]), which means the average network traffic received, per second, over the last minute (in bytes);
  • node_filesystem_avail_bytes - FS metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
    • node_filesystem_avail_bytes{job="starknet-node-exporter",device="<DEVICE>"} / 1073741824, which means the filesystem space available to non-root users (in GB) for a certain device <DEVICE> (i.e. /dev/sda or wherever the blockchain data is stored) - this can be used to get an alert whenever the available space left drops below a certain threshold (please be careful how you choose this threshold: if you have storage that can easily be increased - for example, EBS storage from AWS - you can set a lower threshold, but if you run your node on a bare metal machine that is not easily upgradable, you should set a higher threshold just to be sure you are able to find a solution before the disk fills up);

You can use the above metrics to create both Grafana dashboards and Alertmanager alerts.
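As an example, a minimal disk-space alert based on node_filesystem_avail_bytes could be added as a Prometheus rule file. This is only a sketch: the file path, device, 100 GB threshold and labels are assumptions, and the rule file still needs to be referenced under rule_files in prometheus.yml.

# write a minimal alerting rule; replace <DEVICE> and adjust the threshold to your setup
sudo mkdir -p /etc/prometheus/rules
sudo tee /etc/prometheus/rules/starknet-node.yml > /dev/null <<'EOF'
groups:
  - name: starknet-node
    rules:
      - alert: NodeLowDiskSpace
        expr: node_filesystem_avail_bytes{job="starknet-node-exporter",device="<DEVICE>"} / 1073741824 < 100
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Less than 100 GB of disk space left on {{ $labels.instance }}"
EOF

# validate the rule file and reload Prometheus so it picks up the new rule
promtool check rules /etc/prometheus/rules/starknet-node.yml
sudo systemctl restart prometheus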

info

Please also check the Official Documentation and the GitHub repository linked above to make sure you keep your node up to date.