Testnet
This guide describes our recommended way of running your node, as a systemd service on a Linux machine (virtual or bare metal), and is complementary to the official documentation available here. You could also run your node as a Docker container, but we recommend that option only if you are familiar with container technologies, and in that case we strongly encourage pairing containers with orchestration software such as Kubernetes.
Recommended Hardware Specifications
AWS:
c5.4xlarge
or any equivalent instance type
Bare Metal:
- 32GB RAM
- 16 vCPUs
- At least 250 GB of storage - make sure it's extendable
Assumptions
We're going to assume you are already logged into your Virtual Machine as a privileged user or as the root user.
Setup
After making sure your operating system is up to date, we need to install a few packages before getting the node started:
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install git curl wget cmake build-essential clang ufw jq net-tools lz4 -y
After doing this, we should now build the binary:
LATEST_RELEASE="aptos-node-v1.7.0" # at the moment of writing, this was the latest release available
git clone https://github.com/aptos-labs/aptos-core.git # clone the repo
cd aptos-core
./scripts/dev_setup.sh # set up build dependencies
source ~/.cargo/env # update shell environment
git checkout $LATEST_RELEASE # checkout latest release
cargo build -p aptos-node --release # build binary
After performing the previous steps, you can check that everything is in place by running: ./target/release/aptos-node --version
If the output lists the Aptos node binary version, you're clear to move forward. You should also copy the aptos-node binary to a directory on your executable path, for example:
sudo cp target/release/aptos-node /usr/local/bin
The next step is to create the directory structure for the data, configure the node via a configuration file and download the genesis and waypoint files:
mkdir -p ~/.aptos/data # create root Aptos directory & data directory
cd ~/.aptos
curl -O https://raw.githubusercontent.com/aptos-labs/aptos-networks/main/testnet/genesis.blob # download genesis
curl -O https://raw.githubusercontent.com/aptos-labs/aptos-networks/main/testnet/waypoint.txt # download waypoint
touch ~/.aptos/fullnode.yaml
vi ~/.aptos/fullnode.yaml # open the file for writing - we prefer vi as our text editor, but feel free to use what suits you best
The contents of the configuration file should be:
base:
  data_dir: "<ROOT_DIR>/.aptos/data" # replace <ROOT_DIR> with your root full path
  role: "full_node"
  waypoint:
    from_file: "<ROOT_DIR>/.aptos/waypoint.txt" # replace <ROOT_DIR> with your root full path

execution:
  genesis_file_location: "<ROOT_DIR>/.aptos/genesis.blob" # replace <ROOT_DIR> with your root full path

full_node_networks:
  - network_id: "public"
    discovery_method: "onchain"
    listen_address: "/ip4/127.0.0.1/tcp/6182"

state_sync:
  state_sync_driver:
    bootstrapping_mode: ApplyTransactionOutputsFromGenesis
    continuous_syncing_mode: ApplyTransactionOutputs

api:
  enabled: true
  address: "0.0.0.0:8080"
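Rather than editing each placeholder by hand, you can substitute them with sed. This is a small convenience sketch, assuming your root path is $HOME (matching the directory layout created above):

```shell
# Convenience sketch: fill in the <ROOT_DIR> placeholders automatically.
# Assumes your root path is $HOME, matching the layout created above.
CONFIG="$HOME/.aptos/fullnode.yaml"
if [ -f "$CONFIG" ]; then
    # replace every <ROOT_DIR> occurrence with the home directory path
    sed -i "s|<ROOT_DIR>|$HOME|g" "$CONFIG"
fi
```

Double-check the resulting paths with cat before moving on.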
The next step is to create the systemd configuration file:
sudo touch /etc/systemd/system/aptos.service # create the file
sudo vi /etc/systemd/system/aptos.service # open the file for writing - we prefer vi as our text editor, but feel free to use what suits you best
The contents of the systemd configuration file should be:
# replace <USER> with your user and <ROOT_DIR> with your root full path below
# (systemd does not support inline comments, so keep comments on their own lines)
[Unit]
Description=Aptos Node Service

[Service]
Type=simple
User=<USER>
Environment=RUST_LOG=info
WorkingDirectory=<ROOT_DIR>/aptos-core
ExecStart=/usr/local/bin/aptos-node --config <ROOT_DIR>/.aptos/fullnode.yaml
LimitNOFILE=65536
Restart=on-failure
RestartSec=30
TimeoutStopSec=45

[Install]
WantedBy=multi-user.target
Please make sure that the API port (8080 by default) is accessible.
That's pretty much it. We can now start the service, and implicitly the node.
sudo systemctl daemon-reload
sudo systemctl enable aptos.service
sudo systemctl start aptos.service
You can check if the service is running properly as follows:
sudo systemctl status aptos.service # check if the service is active and running
sudo journalctl -f -u aptos.service # check the logs of the node
The node should now be syncing with the network. If you do not wish to sync from scratch (which can take a few days), you can use the Aptos snapshots provided by Bware Labs, which we have successfully used in the past for various use cases. Note that your node must be stopped while you download a snapshot, and the old database files must be cleaned up beforehand to ensure data integrity. There are also other options to help you sync faster, so please check the official docs here and here.
You can check if the node is synced by running the API call listed below from inside your environment. You will need the curl and jq packages for this; both were installed during the Setup step above.
curl -s localhost:8080/v1 | jq -r .ledger_version
The result should be a decimal number, which you can compare with the latest ledger version listed on the explorer.
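To automate this comparison, you can diff the local ledger version against a reference endpoint. A small sketch, assuming the public Aptos Labs testnet fullnode URL (substitute any endpoint you trust):

```shell
# Hedged sketch: compare the local ledger version against a reference
# fullnode. The reference URL is an assumption - use any endpoint you trust.
get_version() { curl -s "$1/v1" | jq -r .ledger_version; }

LOCAL=$(get_version "http://localhost:8080")
REMOTE=$(get_version "https://fullnode.testnet.aptoslabs.com")
# a small, shrinking lag means the node is catching up with the network
echo "local=${LOCAL:-?} remote=${REMOTE:-?} lag=$(( ${REMOTE:-0} - ${LOCAL:-0} ))"
```

A lag that keeps growing usually means the node is stuck or under-resourced.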
Monitoring Guidelines
In order to maintain a healthy node that passes the Integrity Protocol's checks, you should have a monitoring system in place. Blockchain nodes usually expose metrics about their behaviour and health, most commonly in the Prometheus format. The most popular monitoring stack, which is also open source, consists of:
- Prometheus - scrapes and stores metrics as time series data (blockchain nodes can send their metrics to it);
- Grafana - allows querying, visualization and alerting based on metrics (can use Prometheus as a data source);
- Alertmanager - handles alerting (can use Prometheus metrics as data for creating alerts);
- Node Exporter - exposes hardware and kernel-related metrics (can send the metrics to Prometheus).
We will assume that Prometheus/Grafana/Alertmanager are already installed (we will provide a detailed guide on how to set up monitoring and alerting with the Prometheus + Grafana stack at a later time; for now, if you do not have the stack installed already, please follow the official basic guide here).
We recommend installing the Node Exporter utility since it offers valuable information regarding CPU, RAM & storage. This way, you will be able to spot possible hardware bottlenecks, or to check if your node is underutilized - you can use these insights to make decisions about scaling the allocated hardware resources up or down.
Below, you can find a script that installs Node Exporter as a systemd service.
#!/bin/bash
# set the latest version
VERSION=1.6.1
# download and untar the binary
wget https://github.com/prometheus/node_exporter/releases/download/v${VERSION}/node_exporter-${VERSION}.linux-amd64.tar.gz
tar xvf node_exporter-*.tar.gz
sudo cp ./node_exporter-${VERSION}.linux-amd64/node_exporter /usr/local/bin/
# create system user
sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter
# change ownership of node exporter binary
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
# remove temporary files
rm -rf ./node_exporter*
# create systemd service file
sudo tee /etc/systemd/system/node_exporter.service > /dev/null <<EOF
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
EOF
# enable the node exporter service and start it
sudo systemctl daemon-reload
sudo systemctl enable node_exporter.service
sudo systemctl start node_exporter.service
As a reminder, Node Exporter uses port 9100 by default, so be sure to expose this port to the machine that hosts the Prometheus server. The same should be done for the metrics port of the blockchain node (in this case, port 9101, which the Aptos node uses for metrics).
Having installed Node Exporter and exposed the node's metrics, add these endpoints as targets under the scrape_configs section of your Prometheus configuration file (i.e. /etc/prometheus/prometheus.yml), then apply the new config (either by restarting Prometheus or by reloading the config - please check the official documentation). The result should look similar to this:
scrape_configs:
  - job_name: 'aptos-testnet-node'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
          - '<NODE0_IP>:9101'
          - '<NODE1_IP>:9101' # you can add any number of nodes as targets
  - job_name: 'aptos-testnet-node-exporter'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
          - '<NODE0_IP>:9100'
          - '<NODE1_IP>:9100' # you can add any number of nodes as targets
In the configuration file above, please replace:
- <NODE0_IP> - node 0's IP
- <NODE1_IP> - node 1's IP (you can add any number of nodes as targets)
- ...
- <NODEN_IP> - node N's IP (you can add any number of nodes as targets)
That being said, the most important metrics that should be checked are:
- node_cpu_seconds_total - CPU metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
100 - (avg by (instance) (rate(node_cpu_seconds_total{job="aptos-testnet-node-exporter",mode="idle"}[5m])) * 100)
, which gives the average CPU usage percentage over the last 5 minutes;
- node_memory_MemTotal_bytes/node_memory_MemAvailable_bytes - RAM metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
(node_memory_MemTotal_bytes{job="aptos-testnet-node-exporter"} - node_memory_MemAvailable_bytes{job="aptos-testnet-node-exporter"}) / 1073741824
, which gives the amount of RAM used (in GiB), excluding cache/buffers;
- node_network_receive_bytes_total - network traffic metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
rate(node_network_receive_bytes_total{job="aptos-testnet-node-exporter"}[1m])
, which gives the average network traffic received per second over the last minute (in bytes);
- node_filesystem_avail_bytes - filesystem metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
node_filesystem_avail_bytes{job="aptos-testnet-node-exporter",device="<DEVICE>"} / 1073741824
, which gives the filesystem space available to non-root users (in GiB) for a certain device <DEVICE> (e.g. /dev/sda, or wherever the blockchain data is stored) - this can be used to trigger an alert whenever the available space drops below a certain threshold (please be careful how you choose this threshold: if your storage can easily be increased - for example, EBS volumes on AWS - you can set a lower threshold, but if you run your node on a bare metal machine that is not easily upgradable, you should set a higher threshold so you have time to find a solution before the disk fills up);
- up - a metric automatically generated by Prometheus - for monitoring purposes, you could use the following expression:
up{job="aptos-testnet-node"}
, which has 2 possible values: 1 if the node is up, or 0 if the node is down - this can be used to trigger an alert whenever the node goes down (note that it will also fire at each restart of the node);
- aptos_state_sync_version & aptos_data_client_highest_advertised_data - metrics exposed by the node - for monitoring purposes, you could use the following expressions:
aptos_data_client_highest_advertised_data{job="aptos-testnet-node",data_type="states"} - on(instance) aptos_state_sync_version{job="aptos-testnet-node",type="synced"}
, which gives the difference between the highest ledger version advertised on the blockchain and the locally synced version - this can be used to trigger an alert whenever the node falls behind a certain threshold (you should start worrying if the difference stays above roughly 100-200 versions for a long period of time); and
increase(aptos_state_sync_version{job="aptos-testnet-node",type="synced"}[1m])
, which gives the number of ledger versions processed by the node's storage synchronizer in the last minute - this can be used to trigger an alert whenever the node is stuck (no longer syncing) or the blockchain itself has issues (e.g. it is halted) (you should expect hundreds of new versions per minute, so that could be the threshold for the alert);
- aptos_connections - metrics exposed by the node - for monitoring purposes, you could use the following expression:
sum by (instance) (aptos_connections{job="aptos-testnet-node"})
, which gives the number of peers connected to the node - this can be used to trigger an alert whenever there are fewer peers than a certain threshold for a certain period of time (e.g. fewer than 3 peers for 5 minutes - you should expect 3-4 peers at all times).
You can use the above metrics to create both Grafana dashboards and Alertmanager alerts.
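For example, a minimal Prometheus alerting rules file built from the expressions above could look like this (the file path, alert names and thresholds are only suggestions - tune them to your own setup):

```yaml
# /etc/prometheus/rules/aptos-testnet.yml - example rules; names and
# thresholds are suggestions, not official values
groups:
  - name: aptos-testnet-node
    rules:
      - alert: AptosNodeDown
        expr: up{job="aptos-testnet-node"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Aptos node {{ $labels.instance }} is down"
      - alert: AptosNodeNotSyncing
        expr: increase(aptos_state_sync_version{job="aptos-testnet-node",type="synced"}[1m]) < 100
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Aptos node {{ $labels.instance }} is syncing slowly or stuck"
      - alert: LowDiskSpace
        # replace <DEVICE> with the device holding the blockchain data
        expr: node_filesystem_avail_bytes{job="aptos-testnet-node-exporter",device="<DEVICE>"} / 1073741824 < 50
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Less than 50 GiB free on {{ $labels.instance }}"
```

Remember to reference this file under the rule_files section of /etc/prometheus/prometheus.yml and to reload Prometheus afterwards.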
Please make sure to also check the Official Documentation and the GitHub repository posted above in order to keep your node up to date.