Moonbeam

This guide describes a recommended way of running your node, specifically as a systemd service on a Linux-based machine (be it virtual or bare metal), and is complementary to the official documentation available here. You could also run your node as a Docker container, but we recommend that option only if you are familiar with container technologies, and we strongly encourage pairing containers with an orchestration system such as Kubernetes.

AWS:

  • c6i.2xlarge or any equivalent instance type

Bare Metal:

  • 16GB RAM

  • 8 (v)CPUs

  • At least 500 GB of storage - make sure it's extendable
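
Before going further, you can quickly verify that the machine actually matches these recommendations using standard Linux tools. The commands below are only a minimal sketch; adjust the path to wherever you plan to store the chain data:

nproc                               # number of (v)CPUs - should be 8 or more
free -h | awk '/^Mem:/ {print $2}'  # total RAM - should be around 16GB or more
df -h /var/lib                      # available storage on the target volume - should be at least 500 GB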

Assumptions

We're going to assume you are already logged into your Virtual Machine as a privileged user or as the root user.

Setup

After making sure your operating system is up to date, we need to install a couple of packages before actually starting the node.

sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install git curl wget cmake build-essential clang ufw jq net-tools lz4 -y

Once the installation is done, as a best practice, we will create a service account that will be responsible for running the actual node binary. We will also create a separate directory to store the binary and the data (you might need to be a privileged user to run the following commands).

adduser moonbeam_service --system --no-create-home
mkdir /var/lib/moonbeam-data

After doing this, we should now get the binary. There are 2 ways of achieving this:

  • compiling the binary ourselves (as shown here), which usually takes some time, but can be useful if you pass the proper Rust compilation flags to optimize the binary for your processor;

  • downloading the binary from the official releases, which is much easier if you are not experienced with compiling Rust code. The Moonbeam team offers 3 binaries per release: an unoptimized general-purpose version (moonbeam), one optimized for the Intel Skylake processor family (moonbeam-skylake) and one optimized for the AMD Zen 3 processor family (moonbeam-znver3) - picking an optimized build helps if you know the underlying processor of your machine (see the quick check below).
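
If you are unsure which processor family your machine uses (and therefore which optimized binary to pick), a quick check on most Linux systems is:

lscpu | grep -i 'model name'   # e.g. an Intel Xeon Scalable (Skylake) or AMD EPYC 7003 (Zen 3) model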

For simplicity, we will use the pre-compiled binary made available by the Moonbeam team (please make sure it ends up with the proper permissions and owner, as shown below):

RELEASE_VERSION=v0.30.0 # at the time of writing, this is the latest release

cd /var/lib/moonbeam-data
wget https://github.com/PureStake/moonbeam/releases/download/${RELEASE_VERSION}/moonbeam # download the binary 
sudo chown -R moonbeam_service /var/lib/moonbeam-data # ensure proper ownership on the data directory
sudo chmod ugo+x /var/lib/moonbeam-data/moonbeam # ensure proper execution permissions on the binary

# for trace/debug/txpool api support
git clone https://github.com/PureStake/moonbeam-runtime-overrides.git
mv moonbeam-runtime-overrides/wasm /var/lib/moonbeam-data
rm /var/lib/moonbeam-data/wasm/moonbase-runtime-* &&  rm /var/lib/moonbeam-data/wasm/moonriver-runtime-*

sudo chmod ugo+x /var/lib/moonbeam-data/wasm/*
sudo chown -R moonbeam_service /var/lib/moonbeam-data/wasm # ensure proper ownership on the directory

After performing the previous steps, you can check that everything is in place by running /var/lib/moonbeam-data/moonbeam --version. If the output lists the Moonbeam binary version and the latest commit hash, we're clear to move forward.
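
For reference, the check and the kind of output to expect look like this (the version string below is only an illustration and will match whatever release you downloaded):

/var/lib/moonbeam-data/moonbeam --version
# moonbeam 0.30.0-<commit-hash>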

IMPORTANT NOTE: Make sure you update your tracing runtime overrides each time there is a new runtime upgrade, otherwise you won't be able to support the trace/debug/txpool API properly.

# for trace/debug/txpool API support
cd <DOWNLOAD_DIR>/moonbeam-runtime-overrides
git fetch && git pull
cd ..

sudo systemctl stop moonbeam.service
rm -rf /var/lib/moonbeam-data/wasm
mv moonbeam-runtime-overrides/wasm /var/lib/moonbeam-data
rm /var/lib/moonbeam-data/wasm/moonbase-runtime-* && rm /var/lib/moonbeam-data/wasm/moonriver-runtime-*

sudo chmod ugo+x /var/lib/moonbeam-data/wasm/*
sudo chown -R moonbeam_service /var/lib/moonbeam-data/wasm # ensure proper ownership on the directory
sudo systemctl restart moonbeam.service

The next step is to create the systemd configuration file. The flags below are the ones we run on our production services (for more information about all the available flags, please refer to the official documentation here).

First, create the systemd configuration file:

sudo touch /etc/systemd/system/moonbeam.service # create the file
sudo vi /etc/systemd/system/moonbeam.service # open the file for writing - we prefer vi as our text editor, but feel free to use what suits you best

The contents of the systemd configuration file should be:

[Unit]
Description="Moonbeam systemd service"
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=on-failure
RestartSec=10
User=moonbeam_service
SyslogIdentifier=moonbeam
SyslogFacility=local7
KillSignal=SIGHUP
ExecStart=/var/lib/moonbeam-data/moonbeam \
     --port 30333 \
     --rpc-port 9933 \
     --ws-port 9944 \
     --execution wasm \
     --wasm-execution compiled \
     --state-pruning=archive \
     --trie-cache-size 4 \
     --runtime-cache-size 64 \
     --max-past-logs 100000 \
     --rpc-max-response-size 128 \
     --ethapi debug,trace,txpool \
     --wasm-runtime-overrides /var/lib/moonbeam-data/wasm \
     --ws-max-connections 10000 \
     --unsafe-rpc-external \
     --unsafe-ws-external \
     --rpc-cors all \
     --prometheus-external \
     --prometheus-port 9615 \
     --db-cache <DB_CACHE_SIZE> \
     --base-path /var/lib/moonbeam-data \
     --chain moonbeam \
     --name "<NODE_NAME>" \
     -- \
     --port 30334 \
     --execution wasm \
     --pruning=1000 \
     --prometheus-external \
     --prometheus-port 9616 \
     --name="<NODE_NAME>-embedded-relay"

[Install]
WantedBy=multi-user.target

In the configuration file above, please replace:

  • <NODE_NAME> - your preferred node name

  • <DB_CACHE_SIZE> - 50% of the actual RAM your server has, expressed in MB. For example, for 32 GB of RAM, the value should be set to 16000. The minimum value is 2000, but that is below the recommended specs (a quick way to compute this value is shown after this list).
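
As a convenience, a one-liner for computing roughly 50% of the machine's RAM in MB (assuming a standard Linux /proc/meminfo layout) could be:

awk '/MemTotal/ {printf "%d\n", $2/1024/2}' /proc/meminfo   # half of the total RAM, in MB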

A few notes on the flags above: RPC support has moved to the WS port, so 9944 serves both the parachain RPC and WS traffic; the --unsafe-rpc-external, --unsafe-ws-external and --prometheus-external flags are only necessary if you don't run the node behind a proxy (they make the RPC, WS and metrics endpoints reachable on 0.0.0.0).

Please make sure that the P2P ports are accessible (30333 for the parachain, 30334 for the relaychain), as well as the RPC and WS port (9944 for both parachain RPC and parachain WS). An example of opening these ports with ufw follows.
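
Since ufw was installed during the setup step, one possible way to open these ports is sketched below. This assumes ufw is the firewall you actually use; skip it if your ports are managed elsewhere (e.g. by cloud security groups):

sudo ufw allow 22/tcp      # make sure SSH stays reachable before enabling the firewall
sudo ufw allow 30333/tcp   # parachain P2P
sudo ufw allow 30334/tcp   # relaychain P2P
sudo ufw allow 9944/tcp    # parachain RPC/WS
sudo ufw enable
sudo ufw status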

That's pretty much it. We can now start the service and, with it, the node.

sudo systemctl daemon-reload
sudo systemctl enable moonbeam.service
sudo systemctl start moonbeam.service

You can check if the service is running properly as follows:

sudo systemctl status moonbeam.service # check if the service is active and running
sudo journalctl -f -u moonbeam.service # check the logs of the node

The node should now be syncing with the network. If you do not wish to sync from scratch (which can take a few days), you can use the Moonbeam Snapshots provided by CertHum, which we have successfully used in the past for various use cases. Keep in mind that your node should be stopped while the snapshot is being downloaded, and that the existing database files should be cleaned up beforehand to ensure data integrity; a sketch of this procedure follows.
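
The outline below is only a sketch: the snapshot URL, archive format and database directory are placeholders (an lz4-compressed tar archive is assumed, since lz4 was installed during setup) - always follow the snapshot provider's own instructions for the exact steps:

sudo systemctl stop moonbeam.service                                        # the node must not run while the database is replaced
sudo rm -rf /var/lib/moonbeam-data/chains/<CHAIN_DB_DIR>                    # clean up the existing database files (placeholder path)
wget -O - <SNAPSHOT_URL> | lz4 -d | tar -xvf - -C /var/lib/moonbeam-data    # placeholder URL; assumes an lz4-compressed tar archive
sudo chown -R moonbeam_service /var/lib/moonbeam-data                       # restore proper ownership
sudo systemctl start moonbeam.service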

You can check whether the node is synced by running the API calls listed below from inside your environment. They rely on the curl and jq packages, which were already installed during the setup step.

# for the parachain
curl -s -H "Content-Type: application/json" --data '{"jsonrpc":"2.0", "method":"system_health", "params":[], "id":1}' localhost:9944 | jq .result.isSyncing

# for the relaychain
curl -s -H "Content-Type: application/json" --data '{"jsonrpc":"2.0", "method":"system_health", "params":[], "id":1}' localhost:9934 | jq .result.isSyncing

If the result is false, it means that your node is fully synced.

Another way to check which block the node is at would be running:

# for the parachain
curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "eth_blockNumber","params": []}' localhost:9944

The result should be a hex number (e.g. 0x10c5815). If you convert it to a decimal number, you can compare it to the latest block listed on the explorer: https://moonbeam.subscan.io
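
To convert the hex result to a decimal number without leaving the shell, one option is:

printf '%d\n' 0x10c5815   # prints 17586197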

In order to test the WS endpoint, we will need to install a package called node-ws (which provides the wscat command used below):

sudo apt-get install node-ws

An example WS call would look like this:

wscat --connect ws://localhost:9944
> {"id":1, "jsonrpc":"2.0", "method": "eth_blockNumber","params": []}

Monitoring Guidelines

In order to maintain a healthy node that passes the Integrity Protocol's checks, you should have a monitoring system in place. Blockchain nodes usually expose metrics regarding their behaviour and health, most commonly in a Prometheus-compatible format. The most popular monitoring stack, which is also open source, consists of:

  • Prometheus - scrapes and stores metrics as time series data (blockchain nodes can send the metrics to it);

  • Grafana - allows querying, visualization and alerting based on metrics (can use Prometheus as a data source);

  • Alertmanager - handles alerting (can use Prometheus metrics as data for creating alerts);

  • Node Exporter - exposes hardware and kernel-related metrics (can send the metrics to Prometheus).

We will assume that Prometheus/Grafana/Alertmanager are already installed (we will provide a detailed guide of how to set up monitoring and alerting with the Prometheus + Grafana stack at a later time; for now, if you do not have the stack already installed, please follow this official basic guide here).

We recommend installing the Node Exporter utility since it offers valuable information regarding CPU, RAM & storage. This way, you will be able to monitor possible hardware bottlenecks, or to check whether your node is underutilized - you can use these insights to make decisions about scaling the allocated hardware resources up or down.

Below, you can find a script that installs Node Exporter as a systemd service.

#!/bin/bash

# set the desired version (1.5.0 was the latest at the time of writing)
VERSION=1.5.0

# download and untar the binary
wget https://github.com/prometheus/node_exporter/releases/download/v${VERSION}/node_exporter-${VERSION}.linux-amd64.tar.gz
tar xvf node_exporter-*.tar.gz
sudo cp ./node_exporter-${VERSION}.linux-amd64/node_exporter /usr/local/bin/

# create system user
sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter

# change ownership of node exporter binary
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter

# remove temporary files
rm -rf ./node_exporter*

# create systemd service file
sudo tee /etc/systemd/system/node_exporter.service > /dev/null <<EOF
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
EOF

# enable the node exporter service and start it
sudo systemctl daemon-reload
sudo systemctl enable node_exporter.service
sudo systemctl start node_exporter.service

As a reminder, Node Exporter listens on port 9100 by default, so be sure to expose this port to the machine that hosts the Prometheus server. The same should be done for the metrics port(s) of the blockchain node (in this case, ports 9615 - for monitoring the parachain - and 9616 - for monitoring the relaychain).
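
If you use ufw for this as well, a possible way of exposing the metrics ports only to the Prometheus machine is sketched below (the IP is a placeholder; adapt it to your setup):

sudo ufw allow from <PROMETHEUS_IP> to any port 9100 proto tcp   # Node Exporter
sudo ufw allow from <PROMETHEUS_IP> to any port 9615 proto tcp   # parachain metrics
sudo ufw allow from <PROMETHEUS_IP> to any port 9616 proto tcp   # relaychain metrics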

With Node Exporter installed and the node's metrics exposed, these endpoints should be added as targets under the scrape_configs section in your Prometheus configuration file (e.g. /etc/prometheus/prometheus.yml), after which the new configuration has to be loaded (either by restarting Prometheus or by reloading its configuration - please check the official documentation; an example is shown after the target list below). The scrape configuration should look similar to this:

scrape_configs:
  - job_name: 'moonbeam-node-parachain'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
        - '<NODE0_IP>:9615'
        - '<NODE1_IP>:9615' # you can add any number of nodes as targets
  - job_name: 'moonbeam-node-relaychain'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
        - '<NODE0_IP>:9616'
        - '<NODE1_IP>:9616' # you can add any number of nodes as targets
  - job_name: 'moonbeam-node-exporter'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
        - '<NODE0_IP>:9100'
        - '<NODE1_IP>:9100' # you can add any number of nodes as targets

In the configuration file above, please replace:

  • <NODE0_IP> - node 0's IP

  • <NODE1_IP> - node 1's IP (you can add any number of nodes as targets)

  • ...

  • <NODEN_IP> - node N's IP (you can add any number of nodes as targets)
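
After saving the configuration, Prometheus needs to pick up the new targets. A common way to do this is shown below; it assumes Prometheus runs as a systemd service named prometheus, and the reload endpoint only works if Prometheus was started with --web.enable-lifecycle:

sudo systemctl restart prometheus.service
# or, without a restart:
curl -X POST http://localhost:9090/-/reload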

That being said, the most important metrics that should be checked are:

  • node_cpu_seconds_total - CPU metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:

    • 100 - (avg by (instance) (rate(node_cpu_seconds_total{job="moonbeam-node-exporter",mode="idle"}[5m])) * 100), which means the average percentage of CPU usage over the last 5 minutes;

  • node_memory_MemTotal_bytes/node_memory_MemAvailable_bytes - RAM metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:

    • (node_memory_MemTotal_bytes{job="moonbeam-node-exporter"} - node_memory_MemAvailable_bytes{job="moonbeam-node-exporter"}) / 1073741824, which means the amount of RAM (in GB) used, excluding cache/buffers;

  • node_network_receive_bytes_total - network traffic metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:

    • rate(node_network_receive_bytes_total{job="moonbeam-node-exporter"}[1m]), which means the average network traffic received, per second, over the last minute (in bytes);

  • node_filesystem_avail_bytes - FS metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:

    • node_filesystem_avail_bytes{job="moonbeam-node-exporter",device="<DEVICE>"} / 1073741824, which means the filesystem space available to non-root users (in GB) for a certain device <DEVICE> (i.e. /dev/sda or wherever the blockchain data is stored) - this can be used to get an alert whenever the available space left is below a certain threshold (please be careful how you choose this threshold: if you have storage that can easily be increased - for example, EBS storage from AWS - you can set a lower threshold, but if you run your node on a bare metal machine which is not easily upgradable, you should set a higher threshold just to be sure you are able to find a solution before it fills up);

  • up - a metric automatically generated by Prometheus for every configured target - for monitoring purposes, you could use the following expressions:

    • up{job="moonbeam-node-relaychain"}, which has 2 possible values: 1, if the node is up, or 0, if the node is down - this can be used to get an alert whenever the node goes down (i.e. it can be triggered at each restart of the node);

    • up{job="moonbeam-node-parachain"}, which has 2 possible values: 1, if the node is up, or 0, if the node is down - this can be used to get an alert whenever the node goes down (i.e. it can be triggered at each restart of the node);

  • substrate_block_height & moonbeam_substrate_block_height - metrics exposed by the relaychain and the parachain - for monitoring purposes, you could use the following expressions:

    • substrate_block_height{job="moonbeam-node-relaychain",status="sync_target"} - on(instance) substrate_block_height{job="moonbeam-node-relaychain",status="finalized"}, which means the difference between the latest proposed block and the latest finalized block on the relaychain - this can be used to get an alert whenever there is a finalization problem on the blockchain, or whenever the node has fallen behind, by comparing against a certain threshold (you should start worrying if the difference stays greater than 5-10 for a long period of time);

    • moonbeam_substrate_block_height{job="moonbeam-node-parachain",status="sync_target"} - on(instance) moonbeam_substrate_block_height{job="moonbeam-node-parachain",status="finalized"}, which means the difference between the latest proposed block and the latest finalized block on the parachain - this can be used to get an alert whenever there is a finalization problem on the blockchain, or whenever the node has fallen behind, by comparing against a certain threshold (you should start worrying if the difference stays greater than 5-10 for a long period of time);

    • increase(substrate_block_height{job="moonbeam-node-relaychain",status="finalized"}[1m]), which means the number of new relaychain blocks seen by the node in the last minute - this can be used to get an alert whenever there is a finalization problem on the blockchain, or whenever the node is stuck at a certain height (for Polkadot, you should expect around 10 blocks per minute, so this could be the threshold for the alert to be triggered);

    • increase(moonbeam_substrate_block_height{job="moonbeam-node-parachain",status="finalized"}[1m]), which means the number of new parachain blocks seen by the node in the last minute - this can be used to get an alert whenever there is a finalization problem on the blockchain, or whenever the node is stuck at a certain height (for Moonbeam, you should expect around 5-6 blocks per minute, so this could be the threshold for the alert to be triggered);

  • substrate_sync_peers & moonbeam_substrate_sync_peers - metrics exposed by the relaychain and the parachain - for monitoring purposes, you could use the following expressions:

    • substrate_sync_peers{job="moonbeam-node-relaychain"}, which means the number of peers connected to the node on the relaychain side - this can be used to get an alert whenever there are fewer peers than a certain threshold for a certain period of time (i.e. fewer than 3 peers for 5 minutes);

    • moonbeam_substrate_sync_peers{job="moonbeam-node-parachain"}, which means the number of peers connected to the node on the parachain side - this can be used to get an alert whenever there are fewer peers than a certain threshold for a certain period of time (i.e. fewer than 3 peers for 5 minutes).

You can use the above metrics to create both Grafana dashboards and Alertmanager alerts.
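
As an illustration of how these expressions translate into alerts, here is a small, hypothetical Prometheus rule file - the group name, alert names and thresholds are our own examples, not official ones, so adjust them to your needs:

groups:
  - name: moonbeam-node
    rules:
      - alert: MoonbeamNodeDown
        expr: up{job="moonbeam-node-parachain"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Moonbeam parachain endpoint on {{ $labels.instance }} is down"
      - alert: MoonbeamNoNewFinalizedBlocks
        expr: increase(moonbeam_substrate_block_height{job="moonbeam-node-parachain",status="finalized"}[5m]) < 1
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "No finalized parachain blocks on {{ $labels.instance }} in the last 5 minutes"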

Please also check the Official Documentation and the GitHub Repository posted above to make sure you keep your node up to date.
