Testnet
Recommended Hardware Specifications
AWS:
- m5a.large or any equivalent instance type
Bare Metal:
- 8GB RAM
- 2 vCPUs
- At least 100 GB of storage - make sure it's extendable
Prerequisites
- Hyperledger Besu installed
- curl (or a similar web service client)
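Before proceeding, you can quickly confirm that both prerequisites are available on the machine (output will vary with your installed versions):
# verify that Besu is installed and on the PATH
besu --version
# verify that curl is available
curl --version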
Running a Palm Testnet node is almost the same as running one on Mainnet. There are still a couple of differences, though:
Genesis file
The following curl command downloads the genesis file for the testnet environment.
curl -O https://genesis-files.palm.io/uat/genesis.json
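Optionally, you can sanity-check the download before using it. The example below assumes jq is installed and that the file follows the standard Besu genesis layout with a config.chainId field:
# confirm the file parses as JSON and print the chain ID
jq '.config.chainId' genesis.json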
Besu configuration file
The following configuration file examples include the bootnode addresses for the testnet environment.
# Palm Testnet genesis file
genesis-file="genesis.json"
# Network bootnodes
bootnodes=["enode://7c6e935eca89b230002294420c10d645844419ac50c5fc03fa53bf24fd82600508f5a4d5b89f7690c7e8f9c5dc833605d60bb1dd35997669ab7f1fc274683803@54.162.14.76:30303","enode://2f5d0489e2bbbc495e3d38ae3df9cc0a47faf42818057d193f0f4863d44505277c3d1b9a863f7ad961830ef15a8f8b72ec52791f3cca5ef84284a29f82f2dd73@18.235.20.166:30303"]
# Data directory
data-path="<PATH>/palm-node"
# Enable the JSON-RPC service
rpc-http-enabled=true
After applying these modifications to the setup presented in the Mainnet section, you can start the Palm Testnet node with the same flags as in the Mainnet case.
besu --config-file=/path/to/config.toml --sync-mode=FULL --random-peer-priority-enabled=true --rpc-http-enabled=true --rpc-http-api=ETH,NET,WEB3,ADMIN,IBFT,TXPOOL,DEBUG,TRACE --rpc-ws-api=ETH,NET,WEB3,ADMIN,IBFT,TXPOOL,DEBUG,TRACE --rpc-ws-enabled --rpc-http-host=0.0.0.0 --rpc-ws-host=0.0.0.0 --host-allowlist=* --metrics-enabled --metrics-host=0.0.0.0 --rpc-http-cors-origins=* --rpc-http-max-active-connections=10000 --rpc-ws-max-active-connections=10000 --max-peers=100
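Once the node is running, you can confirm that the JSON-RPC endpoint is reachable and check the sync status; the calls below assume the default HTTP RPC port of 8545 on the local machine:
# returns sync progress, or false once the node is fully synced
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8545
# returns the number of connected peers (hex-encoded)
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  http://localhost:8545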
Monitoring Guidelines
In order to maintain a healthy node that passes the Integrity Protocol's checks, you should have a monitoring system in place. Blockchain nodes usually expose metrics regarding their behaviour and health, most commonly in a Prometheus-compatible format. The most popular monitoring stack, which is also open source, consists of:
- Prometheus - scrapes and stores metrics as time series data (blockchain nodes can send their metrics to it);
- Grafana - allows querying, visualization and alerting based on metrics (can use Prometheus as a data source);
- Alertmanager - handles alerting (can use Prometheus metrics as data for creating alerts);
- Node Exporter - exposes hardware and kernel-related metrics (can send the metrics to Prometheus).
We will assume that Prometheus, Grafana, and Alertmanager are already installed (we will provide a detailed guide on how to set up monitoring and alerting with the Prometheus + Grafana stack at a later time; for now, if you do not have the stack already installed, please follow the official basic guide here).
We recommend installing the Node Exporter utility, since it offers valuable information regarding CPU, RAM, and storage. This way, you will be able to monitor possible hardware bottlenecks, or to check whether your node is underutilized - you can use these insights to make decisions about scaling the allocated hardware resources up or down.
Below, you can find a script that installs Node Exporter as a systemd service.
#!/bin/bash
# set the latest version
VERSION=1.3.1
# download and untar the binary
wget https://github.com/prometheus/node_exporter/releases/download/v${VERSION}/node_exporter-${VERSION}.linux-amd64.tar.gz
tar xvf node_exporter-*.tar.gz
sudo cp ./node_exporter-${VERSION}.linux-amd64/node_exporter /usr/local/bin/
# create system user
sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter
# change ownership of node exporter binary
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
# remove temporary files
rm -rf ./node_exporter*
# create systemd service file
sudo tee /etc/systemd/system/node_exporter.service > /dev/null <<EOF
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
EOF
# enable the node exporter service and start it
sudo systemctl daemon-reload
sudo systemctl enable node_exporter.service
sudo systemctl start node_exporter.service
As a reminder, Node Exporter uses port 9100 by default, so be sure to expose this port to the machine that hosts the Prometheus server. The same should be done for the metrics port(s) of the blockchain node (in this case, port 9545 should be exposed for monitoring the Palm node).
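To confirm that both metrics endpoints respond before wiring them into Prometheus, you can query them directly from the node; this assumes Node Exporter's default port 9100 and Besu's default metrics port 9545:
# Node Exporter metrics (hardware and OS)
curl -s http://localhost:9100/metrics | head
# Besu metrics (node health, peers, chain height)
curl -s http://localhost:9545/metrics | head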
Having installed Node Exporter and exposed the node's metrics, add both as targets under the scrape_configs section in your Prometheus configuration file (e.g. /etc/prometheus/prometheus.yml), then load the new config (either by restarting Prometheus or by reloading the config - see the sketch after the configuration below, and please check the official documentation). The result should look similar to this:
scrape_configs:
  - job_name: 'palm-node'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
          - '<NODE0_IP>:9545'
          - '<NODE1_IP>:9545' # you can add any number of nodes as targets
  - job_name: 'palm-node-exporter'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
          - '<NODE0_IP>:9100'
          - '<NODE1_IP>:9100' # you can add any number of nodes as targets
In the configuration file above, please replace:
- <NODE0_IP> - node 0's IP
- <NODE1_IP> - node 1's IP (you can add any number of nodes as targets)
- ...
- <NODEN_IP> - node N's IP (you can add any number of nodes as targets)
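Before loading the new configuration, you may want to validate it and then reload Prometheus. The commands below are a sketch: they assume promtool is installed alongside Prometheus, that Prometheus runs as a systemd service, and that the HTTP reload endpoint is only available if Prometheus was started with --web.enable-lifecycle.
# validate the configuration file
promtool check config /etc/prometheus/prometheus.yml
# reload the configuration, either by restarting the service...
sudo systemctl restart prometheus
# ...or, if the lifecycle API is enabled, via the reload endpoint
curl -X POST http://localhost:9090/-/reload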
That being said, the most important metrics that should be checked are:
- node_cpu_seconds_total - CPU metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
100 - (avg by (instance) (rate(node_cpu_seconds_total{job="palm-node-exporter",mode="idle"}[5m])) * 100)
, which means the average percentage of CPU usage over the last 5 minutes;
- node_memory_MemTotal_bytes/node_memory_MemAvailable_bytes - RAM metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
(node_memory_MemTotal_bytes{job="palm-node-exporter"} - node_memory_MemAvailable_bytes{job="palm-node-exporter"}) / 1073741824
, which means the amount of RAM (in GB) used, excluding cache/buffers;
- node_network_receive_bytes_total - network traffic metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
rate(node_network_receive_bytes_total{job="palm-node-exporter"}[1m])
, which means the average network traffic received, per second, over the last minute (in bytes);
- node_filesystem_avail_bytes - FS metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression:
node_filesystem_avail_bytes{job="palm-node-exporter",device="<DEVICE>"} / 1073741824
, which means the filesystem space available to non-root users (in GB) for a certain device <DEVICE> (e.g. /dev/sda
or wherever the blockchain data is stored) - this can be used to get an alert whenever the available space left is below a certain threshold (please be careful how you choose this threshold: if your storage can easily be increased - for example, EBS storage from AWS - you can set a lower threshold, but if you run your node on a bare-metal machine that is not easily upgradable, you should set a higher threshold, just to be sure you are able to find a solution before it fills up);
- up - a metric automatically generated by Prometheus - for monitoring purposes, you could use the following expression:
up{job="palm-node"}
, which has 2 possible values: 1, if the node is up, or 0, if the node is down - this can be used to get an alert whenever the node goes down (i.e. it can be triggered at each restart of the node);
- ethereum_blockchain_height - a metric exposed by the Palm node - for monitoring purposes, you could use the following expression:
increase(ethereum_blockchain_height{job="palm-node"}[1m])
, which means how many blocks the node has imported over the last minute - this can be used to get an alert whenever the node has fallen behind, by comparing against a certain threshold (you should start worrying if the difference is greater than 2-3 blocks over the last 5 minutes);
- besu_peers_connected_total - a metric exposed by the Palm node - for monitoring purposes, you could use the following expression:
rate(besu_peers_connected_total{job="palm-node"}[5m])
, which means the number of peers connected to the node on the Palm side - this can be used to get an alert whenever there are fewer peers than a certain threshold for a certain period of time (e.g. fewer than 3 peers for 5 minutes).
You can use the above metrics to create both Grafana dashboards and Alertmanager alerts.
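As an illustration, the snippet below sketches how two such alerts could be defined in a Prometheus rules file. The file path, rule names, mountpoint label, and thresholds are only examples to adapt to your setup, and the rules file must also be referenced under rule_files in prometheus.yml.
# write an example alerting rules file (example path and thresholds - adjust to your setup)
sudo tee /etc/prometheus/rules/palm-node.yml > /dev/null <<'EOF'
groups:
  - name: palm-node-alerts
    rules:
      - alert: PalmNodeDown
        # fires if the node's metrics endpoint has been unreachable for 2 minutes
        expr: up{job="palm-node"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Palm node is down"
      - alert: PalmNodeLowDiskSpace
        # fires if less than 50 GB is left on the root filesystem (adjust mountpoint/threshold)
        expr: node_filesystem_avail_bytes{job="palm-node-exporter",mountpoint="/"} / 1073741824 < 50
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Less than 50 GB of disk space left"
EOF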
Please make sure to also check the Official Documentation and the GitHub repository posted above in order to keep your node up to date.
IMPORTANT UPDATE
Palm Network is transitioning to a Proof-of-Stake network. For important details and actions required for developers and RPC node operators, please refer to the Official Docs. Palm Testnet will transition on October 2nd, 2023.
The following changes affect existing and new node operators running their own JSON-RPC API services:
- RPC API node software changes from Hyperledger Besu to Polygon Edge
- JSON-RPC API call differences (https://docs.palm.io/json-rpc-api-changes)
Recommended Hardware Specifications
Bare Metal:
- 16GB RAM
- 8 vCPUs
- At least 200 GB of storage - make sure it's extendable
As of August 2023, Palm Testnet requires about 70 GB of storage, so the minimum storage requirement may need to be adjusted accordingly.
Setup
Installing a polygon-edge node by building from source is the easiest way to spin up your Palm Testnet node. It will take care of installing all the required system dependencies and will set all the configuration files in place.
Building polygon-edge requires both Go (version 1.20) and a C compiler. You can install them using your favourite package manager. Once the dependencies are installed, follow these instructions:
git clone https://github.com/gateway-fm/polygon-edge
cd polygon-edge
git checkout 1.1.33
make build
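If the build succeeds, the polygon-edge binary is placed in the repository root (this is what the commands in the next section invoke as ./polygon-edge); a quick sanity check, assuming the CLI's standard version subcommand, is:
# confirm the binary was built and print its version
./polygon-edge version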
Operating your node
Once you have followed the steps above in the installation section, you can run the following to create keys for your node.
./polygon-edge polybft-secrets --insecure --data-dir="/path/to/your/datadir"
The next step is to copy the genesis file for the relevant network to your data-dir and name it genesis.json. The genesis file can be found in the root folder of the repo, named genesis-testnet.json.
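For example, from the repository root (the data directory path below is a placeholder - adjust it to your setup):
# copy the testnet genesis file into the data directory under the expected name
cp genesis-testnet.json /path/to/your/datadir/genesis.json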
Now run the following to start the node. This will start the server, join the devp2p or libp2p network (depending on whether the chain is pre- or post-fork), and start syncing.
./polygon-edge server --data-dir=/path/to/your/datadir --chain=/path/to/your/genesis.json --libp2p "0.0.0.0:30301" --devp2p "0.0.0.0:30302" --jsonrpc="0.0.0.0:8545" --grpc-address="0.0.0.0:9632"
Monitoring will change because the node software has changed. You can keep the Node Exporter part, though, as it will still help a lot in this case. Our advice is that monitoring the latest block and checking its timestamp to determine how far behind your node is should be enough to see whether block production has halted for any reason and prompt an investigation into the node. There is also a healthcheck endpoint, available at http://your-address:8545/health, to ensure that the node is alive: a GET request to this endpoint will return a status 200 response with a body of OK if the node is healthy.
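Both checks can be performed with simple curl calls; the examples below assume the JSON-RPC port 8545 from the server command above and the healthcheck path mentioned earlier:
# fetch the latest block and inspect its number and timestamp (both hex-encoded)
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest",false],"id":1}' \
  http://localhost:8545
# healthcheck: should return HTTP 200 with a body of OK if the node is healthy
curl -s http://localhost:8545/health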