Mainnet
Recommended Hardware Specifications
AWS:
- m5a.xlarge or any equivalent instance type
Bare Metal:
- 16GB RAM
- 4 vCPUs
- At least 1 TB of storage - make sure it is extendable
Assumptions
We're going to assume you are already logged into your Virtual Machine as a privileged user or as the root user.
Setup
You can use the method described below, which relies on the official Optimism GitHub repository, or follow the instructions at https://github.com/smartcontracts/simple-optimism-node - a well-written guide that also takes care of the monitoring part for you.
This tutorial will guide you through the process of setting up and running an Optimism node, which consists of two main components: op-node and op-geth. The op-node acts as the consensus layer, while op-geth serves as the RPC node.
To start Optimism Bedrock on mainnet, you will first need to download the data directory archive. It can be found at https://community.optimism.io/docs/useful-tools/networks/#api-options under the Bedrock Data Directory section:
wget https://storage.googleapis.com/oplabs-mainnet-data/mainnet-bedrock.tar
tar -xvf mainnet-bedrock.tar
The second step is to make sure we have the packages needed to build the binaries: build-essential and go.
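If build-essential is not already present on the machine, it can be installed through the distribution's package manager. A minimal sketch, assuming a Debian/Ubuntu-based system that uses apt:
# install the compiler toolchain needed to build op-geth and op-node
sudo apt-get update
sudo apt-get install -y build-essential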
wget https://go.dev/dl/go1.19.3.linux-amd64.tar.gz
sudo tar -xvf go1.19.3.linux-amd64.tar.gz
sudo mv go /usr/local
Once the installation is done, we will need to set up the Go paths, so please add the lines below to the ~/.bash_aliases file.
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
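After saving the file, reload it in your current shell and verify that Go is picked up correctly (a quick check based on the paths above):
source ~/.bash_aliases
go version   # should report go1.19.3 linux/amd64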
With the build dependencies in place, we can start building the binaries, beginning with op-geth:
git clone https://github.com/ethereum-optimism/op-geth.git
cd op-geth
git checkout v1.101105.2
make geth
Now that we have the geth binary ready, we will need to create a JWT secret, which op-geth and op-node use to authenticate their connection. You can generate one using the following command:
openssl rand -hex 32 > jwt.txt
Once the JWT secret is generated, you can start the geth node. The binary built above is located at ./build/bin/geth inside the op-geth directory; either add it to your PATH or adjust the command accordingly. Here is an example of how to start it:
geth \
--ws --ws.port=9992 --ws.addr=0.0.0.0 --ws.origins="*" \
--http --http.port=9991 --http.addr=0.0.0.0 \
--http.vhosts="*" \
--http.corsdomain="*" \
--authrpc.addr=localhost \
--authrpc.jwtsecret=<path to the jwt secret> \
--authrpc.port=8551 \
--authrpc.vhosts="*" \
--datadir=<path to point to the bedrock archive> \
--verbosity=3 \
--rollup.disabletxpoolgossip=true \
--rollup.sequencerhttp=https://mainnet-sequencer.optimism.io/ \
--nodiscover \
--syncmode=full \
--maxpeers=0 --http.api=eth,rollup,net,web3,debug --ws.api=eth,rollup,net,web3,debug \
--gcmode=archive
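Before moving on, you can optionally check that op-geth answers on the HTTP port opened above (this assumes you kept port 9991 from the example). Optimism mainnet's chain ID is 10, so the call should return 0xa:
curl -s -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "eth_chainId","params": []}' localhost:9991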
Now we can move on to the second component of the Optimism node, op-node:
git clone https://github.com/ethereum-optimism/optimism.git
cd optimism
git checkout d826cb018955d342779ccc0ebdf878cfa746e765
cd op-node
make op-node
Once you successfully build the binary, you can start op-node. Here is an example of how to start it. Since Optimism is a Layer 2 network, you will need to provide a Layer 1 endpoint; you can get an L1 Ethereum endpoint for free via https://blastapi.io.
op-node --l1.trustrpc=true --l1=<L1 Ethereum Endpoint> \
--l2=http://localhost:8551 --network=mainnet --rpc.addr=0.0.0.0 --rpc.port=9545 --l2.jwt-secret=<path to the jwt secret> --metrics.enabled
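For long-running deployments you will probably want a process supervisor instead of a foreground shell. Below is a minimal systemd unit sketch for op-node; the binary path, user and placeholder values are assumptions, so adjust them to your setup (the same pattern can be applied to op-geth):
[Unit]
Description=Optimism op-node
Wants=network-online.target
After=network-online.target
[Service]
User=optimism
Restart=always
RestartSec=5
ExecStart=/usr/local/bin/op-node --l1.trustrpc=true --l1=<L1 Ethereum Endpoint> \
  --l2=http://localhost:8551 --network=mainnet --rpc.addr=0.0.0.0 --rpc.port=9545 \
  --l2.jwt-secret=<path to the jwt secret> --metrics.enabled
[Install]
WantedBy=multi-user.target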
That's pretty much it. Your Optimism Mainnet node is now up and running. All you need to do now is wait for it to sync. You can check whether the node is synced by running the API call listed below from inside your environment. You will need the curl and jq packages installed for this, so make sure to install them beforehand.
Usually, we could use eth_syncing to check whether a geth-based node (which op-geth is) is synced, but in this case the call returns "false" regardless of whether the node is synced or not. Instead, we are going to use the method below:
curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "eth_blockNumber","params": []}' localhost:9991
The result should be a hex number (e.g. 0x10c5815). If you convert it to a decimal number, you can compare it to the latest block listed on the Optimism Mainnet explorer.
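For convenience, the same call can be piped through jq and converted to decimal in one go (a small sketch using the curl and jq packages mentioned above):
# extract the hex block number and print it as a decimal value
BLOCK_HEX=$(curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "eth_blockNumber","params": []}' \
  localhost:9991 | jq -r .result)
printf "%d\n" "$BLOCK_HEX"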
With this setup, the RPC port of the Optimism node is 9991 and the WS port is 9992.
In order to test the WS endpoint, we will need to install a package called node-ws.
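If the node-ws package is not available on your distribution, the wscat client used below can also be installed globally through npm (this assumes Node.js and npm are already present):
# install the wscat WebSocket client globally
sudo npm install -g wscat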
An example WS call would look like this:
wscat --connect ws://localhost:9992
> {"id":1, "jsonrpc":"2.0", "method": "eth_blockNumber","params": []}
Monitoring guidelines
In order to maintain a healthy node that passes the Integrity Protocol's checks, you should have a monitoring system in place. Blockchain nodes usually offer metrics regarding the node's behaviour and health - a popular way to offer these metrics is Prometheus-like metrics. The most popular monitoring stack, which is also open source, consists of:
- Prometheus - scrapes and stores metrics as time series data (blockchain nodes can send their metrics to it);
- Grafana - allows querying, visualization and alerting based on metrics (can use Prometheus as a data source);
- Alertmanager - handles alerting (can use Prometheus metrics as data for creating alerts);
- Node Exporter - exposes hardware and kernel-related metrics (can send the metrics to Prometheus).
We will assume that Prometheus/Grafana/Alertmanager are already installed (we will provide a detailed guide of how to set up monitoring and alerting with the Prometheus + Grafana stack at a later time; for now, if you do not have the stack already installed, please follow this official basic guide here).
We recommend installing the Node Exporter utility since it offers valuable information regarding CPU, RAM & storage. This way, you will be able to monitor possible hardware bottlenecks, or to check if your node is underutilized - you could use these valuable insights to make decisions regarding scaling up/down the allocated hardware resources.
Below, you can find a script that installs Node Exporter as a systemd service.
#!/bin/bash
# set the latest version
VERSION=1.3.1
# download and untar the binary
wget https://github.com/prometheus/node_exporter/releases/download/v${VERSION}/node_exporter-${VERSION}.linux-amd64.tar.gz
tar xvf node_exporter-*.tar.gz
sudo cp ./node_exporter-${VERSION}.linux-amd64/node_exporter /usr/local/bin/
# create system user
sudo useradd --no-create-home --shell /usr/sbin/nologin node_exporter
# change ownership of node exporter binary
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
# remove temporary files
rm -rf ./node_exporter*
# create systemd service file
sudo tee /etc/systemd/system/node_exporter.service > /dev/null <<EOF
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
EOF
# enable the node exporter service and start it
sudo systemctl daemon-reload
sudo systemctl enable node_exporter.service
sudo systemctl start node_exporter.service
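Once the service is running, you can verify locally that the metrics are exposed on the default port (9100):
curl -s http://localhost:9100/metrics | head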
As a reminder, Node Exporter uses port 9100 by default, so be sure to expose this port to the machine which holds the Prometheus server. The same should be done for the metrics port(s) of the blockchain node (in this case, we should expose port 7878).
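How exactly you expose these ports depends on your environment (security groups, iptables, etc.). As an illustration, assuming ufw is used as the host firewall and <PROMETHEUS_IP> stands for your monitoring machine's address:
# allow only the Prometheus server to scrape Node Exporter and the node's metrics port
sudo ufw allow from <PROMETHEUS_IP> to any port 9100 proto tcp
sudo ufw allow from <PROMETHEUS_IP> to any port 7878 proto tcp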
Unfortunately, the L2Geth component of Optimism does not expose metrics in Prometheus format. The only way to monitor its stats is to host an InfluxDB instance, write the metrics into it, and then configure Grafana to use that InfluxDB instance as a data source. For now, we will only cover the DTL monitoring in this section.
Having installed Node Exporter and having already exposed the node's metrics, these should be added as targets under the scrape_configs section of your Prometheus configuration file (e.g. /etc/prometheus/prometheus.yml), before reloading the new config (either by restarting Prometheus or by reloading the configuration - a short sketch follows the placeholder list below, and please also check the official documentation). The configuration should look similar to this:
scrape_configs:
  - job_name: 'optimism-node-dtl'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
        - '<NODE0_IP>:7878'
        - '<NODE1_IP>:7878' # you can add any number of nodes as targets
  - job_name: 'optimism-node-exporter'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets:
        - '<NODE0_IP>:9100'
        - '<NODE1_IP>:9100' # you can add any number of nodes as targets
In the configuration file above, please replace:
- <NODE0_IP> - node 0's IP
- <NODE1_IP> - node 1's IP (you can add any number of nodes as targets)
- ...
- <NODEN_IP> - node N's IP (you can add any number of nodes as targets)
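After editing the file, Prometheus has to pick up the new configuration. A short sketch, assuming Prometheus runs as a systemd service (the hot-reload variant only works if Prometheus was started with --web.enable-lifecycle):
# option 1: restart the service
sudo systemctl restart prometheus
# option 2: hot-reload the configuration
curl -X POST http://localhost:9090/-/reload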
That being said, the most important metrics that should be checked are:
- node_cpu_seconds_total - CPU metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression: 100 - (avg by (instance) (rate(node_cpu_seconds_total{job="optimism-node-exporter",mode="idle"}[5m])) * 100), which gives the average percentage of CPU usage over the last 5 minutes;
- node_memory_MemTotal_bytes / node_memory_MemAvailable_bytes - RAM metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression: (node_memory_MemTotal_bytes{job="optimism-node-exporter"} - node_memory_MemAvailable_bytes{job="optimism-node-exporter"}) / 1073741824, which gives the amount of RAM (in GB) used, excluding cache/buffers;
- node_network_receive_bytes_total - network traffic metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression: rate(node_network_receive_bytes_total{job="optimism-node-exporter"}[1m]), which gives the average network traffic received, per second, over the last minute (in bytes);
- node_filesystem_avail_bytes - filesystem metrics exposed by Node Exporter - for monitoring purposes, you could use the following expression: node_filesystem_avail_bytes{job="optimism-node-exporter",device="<DEVICE>"} / 1073741824, which gives the filesystem space available to non-root users (in GB) for a certain device <DEVICE> (e.g. /dev/sda or wherever the blockchain data is stored) - this can be used to get an alert whenever the available space drops below a certain threshold (please be careful how you choose this threshold: if you have storage that can easily be increased - for example, EBS storage from AWS - you can set a lower threshold, but if you run your node on a bare metal machine which is not easily upgradable, you should set a higher threshold to be sure you can find a solution before it fills up);
- up - a metric generated automatically by Prometheus - for monitoring purposes, you could use the following expression: up{job="optimism-node-dtl"}, which has 2 possible values: 1 if the node is up, or 0 if the node is down - this can be used to get an alert whenever the node goes down (i.e. it will be triggered at each restart of the node);
- data_transport_layer_highest_synced_l2_block - a metric that can be used to check whether the node is currently syncing with the network - for monitoring purposes, you could use the following expression: increase(data_transport_layer_highest_synced_l2_block{job="optimism-node-dtl"}[1m]), which shows how many new blocks the node has synced over the last minute - this can be used to get an alert whenever the node stops syncing blocks (e.g. fewer than 5 blocks in the past 5 minutes).
You can use the above metrics to create both Grafana dashboards and Alertmanager alerts.
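As an illustration, here is a sketch of a Prometheus alerting rule built on the up metric described above; the file path, rule name and threshold are assumptions, and the rule file still has to be referenced under rule_files in prometheus.yml and routed through Alertmanager:
# e.g. /etc/prometheus/rules/optimism.yml (example path)
groups:
  - name: optimism-node
    rules:
      - alert: OptimismNodeDown
        expr: up{job="optimism-node-dtl"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Optimism node is down on {{ $labels.instance }}"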
Please also check the official documentation and the GitHub repository posted above to make sure you keep your node up to date.