Running a full node lets you query Centrifuge Chain blocks through its RPC endpoints. Whether you are a dapp developer or you simply want to be fully trustless and run your own node, this guide will teach you how to set up your own full or archive node.
Note: Syncing and runtime upgrades might put extra load on the node. It is recommended to increase the resources until the node is fully synced, and to use a process manager that restarts the process if it reaches memory limits, hangs, or crashes.
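The restart behaviour can be sketched as a plain shell loop (a minimal illustration only; the binary path and arguments in the usage comment are placeholders, and the systemd unit later in this guide achieves the same with `Restart=always`):

```sh
# Minimal supervisor sketch: rerun a command until it exits cleanly,
# giving up after a maximum number of restarts.
restart_loop() {
  max="$1"; shift
  n=0
  while [ "$n" -lt "$max" ]; do
    "$@" && return 0                  # clean exit: stop supervising
    n=$((n + 1))
    echo "restart attempt $n/$max after non-zero exit" >&2
  done
  return 1                            # crashed too often: give up
}

# Hypothetical usage:
# restart_loop 1000 /var/lib/centrifuge-data/centrifuge-chain --chain=centrifuge
```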
In this section, we'll go over the recommended arguments for running a full node.
Some of our recommended settings are commented for clarification; the rest can be found in Parity's node documentation.
```
--port=30333                # p2p listening port
--rpc-port=9933             # RPC listening port
--rpc-external              # Listen on public interfaces
--rpc-cors=all              # Adjust depending on your needs
--rpc-max-request-size=40   # These three settings prevent 429: Too Many Requests errors; adjust to your load
--rpc-max-response-size=40
--rpc-max-connections=512
--in-peers=100              # Max ingress connections
--out-peers=100             # Max egress connections
--db-cache=2048             # DB cache in MB of RAM; adjust to your hardware setup
--chain=centrifuge
--parachain-id=2031
--base-path=/data
--log=main,info,xcm=trace,xcm-executor=trace
--database=rocksdb
--execution=wasm
--wasm-execution=compiled
--bootnodes=/ip4/35.198.171.148/tcp/30333/ws/p2p/12D3KooWDXDwSdqi8wB1Vjjs5SVpAfk6neadvNTPAik5mQXqV7jF
--bootnodes=/ip4/34.159.117.205/tcp/30333/ws/p2p/12D3KooWMspZo4aMEXWBH4UXm3gfiVkeu1AE68Y2JDdVzU723QPc
--bootnodes=/dns4/node-7010781199623471104-0.p2p.onfinality.io/tcp/23564/ws/p2p/12D3KooWSN6VXWPvo1hoT5rb5hei5B7YdTWeUyDcc42oTPwLGF2p
--name=YOUR_NODE_NAME
--
--execution=wasm
--wasm-execution=compiled
--chain=polkadot
```
Notes

- The arguments above the `--` are for the parachain; the ones below are for the relay chain.
- Bootnodes, parachain-id, and chain options will change for each network.
- Use a descriptive NODE_NAME.
- Choose log levels based on your setup.
Centrifuge nodes support fast syncing using `--sync=warp` or `--sync=fast` in both the parachain and the relay chain arguments.
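As a sketch (all other arguments elided), warp sync on both sides of the separator would look like:

```
--chain=centrifuge
--parachain-id=2031
--sync=warp
...
--
--chain=polkadot
--sync=warp
...
```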
Everything is the same as above, but add `--pruning=archive` before the `--` in the CLI arguments.
Archive nodes do not support fast syncing, so the `--sync=` options can only be added to the section below the `--`.
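For example (a sketch; all other arguments elided), an archive-node argument list would place the flags like this:

```
--chain=centrifuge
--pruning=archive   # archive the parachain state
...
--
--chain=polkadot
--sync=fast         # fast sync remains allowed on the relay chain side
...
```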
The specific format will depend on how you deploy your node:
Docker/Kubernetes
```
- "--port=30333"
- "--rpc-port=9933"
...
- "--chain=polkadot"
- "--sync=fast"
```
Systemd
```
ExecStart=/var/lib/centrifuge-data/centrifuge-chain \
  --port=30333 \
  --rpc-port=9933 \
  ...
  -- \
  ...
  --sync=fast
```
Bootnodes:
```
--bootnodes=/ip4/35.198.171.148/tcp/30333/ws/p2p/12D3KooWDXDwSdqi8wB1Vjjs5SVpAfk6neadvNTPAik5mQXqV7jF
--bootnodes=/ip4/34.159.117.205/tcp/30333/ws/p2p/12D3KooWMspZo4aMEXWBH4UXm3gfiVkeu1AE68Y2JDdVzU723QPc
--bootnodes=/dns4/node-7010781199623471104-0.p2p.onfinality.io/tcp/23564/ws/p2p/12D3KooWSN6VXWPvo1hoT5rb5hei5B7YdTWeUyDcc42oTPwLGF2p
```
Chain args:
```
--chain=centrifuge
--parachain-id=2031
--
--chain=polkadot
```
Bootnodes:
```
- --bootnodes=/ip4/35.246.168.210/tcp/30333/p2p/12D3KooWCtdW3HWLuxDLD2fuTZfTspCJDHWxnonKCEgT5JfGsoYQ
- --bootnodes=/ip4/34.89.182.4/tcp/30333/p2p/12D3KooWETyS1VZTS4fS7dBZpXbPKMP129dy4KpFSWoErBWJ5i5d
- --bootnodes=/ip4/35.198.144.90/tcp/30333/p2p/12D3KooWMJPzvEp5Jhea8eKsUDufBbAzGrn265GcaCmcnp3koPk4
```
Chain args:
```
--chain=/resources/demo-spec-raw.json
--parachain-id=2031
--
--chain=/resources/westend-alphanet-raw-specs.json
```
`demo-spec-raw.json` and `westend-alphanet-raw-specs.json` can be found either in the path above for the Docker container or in the `node/res/` folder in the codebase.
You can use the container published on the Centrifuge Docker Hub repo, or be fully trustless by cloning the Centrifuge Chain repository and building the image yourself from the Dockerfile (2-4 h build time on an average machine). If you are building the image yourself, make sure you have checked out the latest tag for the most recent release:
```sh
git clone https://github.com/centrifuge/centrifuge-chain.git
git checkout vX.Y.Z
docker buildx build -f docker/centrifuge-chain/Dockerfile . -t YOUR_TAG
```
Create a `docker-compose.yml` file with the contents below, adjusting the following:

- `ports` based on your network setup.
- `/mnt/my_volume/data` with the volume and/or data folder you want to use.
- To run an archive node, add `"--pruning=archive"` before `--name`.
```yaml
version: '3'
services:
  centrifuge:
    container_name: centrifuge-chain
    image: "centrifugeio/centrifuge-chain:[INSERT_RELEASE_HERE]"
    platform: "linux/amd64"
    restart: on-failure
    ports:
      - "30333:30333"
      - "9944:9933"
    volumes:
      # Mount your biggest drive
      - /mnt/my_volume/data:/data
    command:
      - "--port=30333"
      ...
      - "--"
      ...
      - "--chain=polkadot"
      - "--sync=fast"
```
Refer to the CLI arguments in section 1.
Running the container
```sh
docker-compose pull --policy always && docker-compose up -d
```
We recommend using a StatefulSet to run multiple replicas and balance the load between them via an ingress.
WARNING: Using these K8s manifests as-is will not work; they are included in this guide to give experienced Kubernetes operators a starting point. Centrifuge cannot provide Kubernetes support to node operators; use at your own risk.
StatefulSet Example
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: fullnode-cluster
  name: fullnode-cluster
spec:
  serviceName: "fullnode-cluster"
  replicas: 2
  selector:
    matchLabels:
      app: fullnode-cluster
  template:
    metadata:
      labels:
        app: fullnode-cluster
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: fullnodes16
      containers:
        - args:
            - --rpc-cors=all
            - --rpc-methods=unsafe
            ...
            - --execution=wasm
            - --wasm-execution=compiled
            - --
            ...
            - --sync=fast
          image: centrifugeio/centrifuge-chain:[DOCKER_TAG]
          imagePullPolicy: IfNotPresent
          name: fullnodes-cluster
          livenessProbe:
            httpGet:
              path: /health
              port: 9933
            initialDelaySeconds: 60
            periodSeconds: 120
          ports:
            - containerPort: 9933
              protocol: TCP
            - containerPort: 30333
              protocol: TCP
          volumeMounts:
            - mountPath: /data/
              name: storage-volume
        - name: rpc-health
          image: paritytech/ws-health-exporter
          env:
            - name: WSHE_NODE_RPC_URLS
              value: "ws://127.0.0.1:9933"
            - name: WSHE_NODE_MIN_PEERS
              value: "2"
            - name: WSHE_NODE_MAX_UNSYNCHRONIZED_BLOCK_DRIFT
              value: "2"
          ports:
            - containerPort: 8001
              name: http-ws-he
          resources:
            limits:
              cpu: "250m"
              memory: 0.5Gi
            requests:
              cpu: "250m"
              memory: 0.5Gi
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8001
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 3
            failureThreshold: 1
      initContainers:
        - name: fix-permissions
          command:
            - sh
            - -c
            - |
              chown -R 1000:1000 /data
          image: busybox
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /data/
              name: storage-volume
  volumeClaimTemplates:
    - metadata:
        name: storage-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1200G
        storageClassName: standard-rwo
```
NOTE: The example below does not include SSL or any other advanced proxy settings. Adjust to your own needs.

Networking
```yaml
---
# Service to balance traffic between replicas:
apiVersion: v1
kind: Service
metadata:
  name: fullnode-cluster-ha
  namespace: centrifuge
spec:
  selector:
    app: fullnode-cluster
  ports:
    - protocol: TCP
      port: 9933
---
apiVersion: v1
kind: Service
metadata:
  name: fullnode-cluster
  namespace: centrifuge
spec:
  clusterIP: None
  selector:
    app: fullnode-cluster
  ports:
    - name: tcp
      port: 9933
      targetPort: 9933
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    <ADD_YOUR_OWN>
  name: fullnode-ha-proxy
  namespace: centrifuge
spec:
  ingressClassName: nginx-v2
  rules:
    - host: <YOUR_FQDN_HERE>
      http:
        paths:
          - backend:
              service:
                name: fullnode-cluster
                port:
                  number: 9933
            path: /
            pathType: ImplementationSpecific
```
```sh
adduser centrifuge_service --system --no-create-home
mkdir /var/lib/centrifuge-data # Or use a folder of your choosing, but replace all occurrences of /var/lib/centrifuge-data below accordingly
chown -R centrifuge_service /var/lib/centrifuge-data
```
-> Replace [INSERT_RELEASE_HERE] with the latest release vX.Y.Z
```sh
# This dependency install step is only needed on Debian distros:
sudo apt-get install cmake pkg-config libssl-dev git clang libclang-dev protobuf-compiler
git clone https://github.com/centrifuge/centrifuge-chain.git
cd centrifuge-chain
git checkout [INSERT_RELEASE_HERE]
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
./scripts/install_toolchain.sh
cargo build --release
cp ./target/release/centrifuge-chain /var/lib/centrifuge-data
```
Pick the appropriate mainnet image for mainnet binaries. Keep in mind that the retrieved binary is built for Linux.
```sh
docker run --rm --name centrifuge-cp -d centrifugeio/centrifuge-chain:[INSERT_RELEASE_HERE] --chain centrifuge
docker cp centrifuge-cp:/usr/local/bin/centrifuge-chain /var/lib/centrifuge-data
```
We are now ready to start the node, but to ensure it is running in the background and auto-restarts in case of a server failure, we will set up a service file using systemd.
Change the `ports` based on your network setup.
Notes

- It is important to keep each `--bootnodes $ADDR` on a single line; otherwise the arguments are not parsed correctly, making it impossible for the chain to find peers as no bootnodes will be present.
- To run it as an archive node, add `--pruning=archive \` before `--name` below.
```sh
sudo tee <<EOF >/dev/null /etc/systemd/system/centrifuge.service
[Unit]
Description="Centrifuge systemd service"
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=10
User=centrifuge_service
SyslogIdentifier=centrifuge
SyslogFacility=local7
KillSignal=SIGHUP
ExecStart=/var/lib/centrifuge-data/centrifuge-chain \
  --port=30333 \
  --rpc-port=9933 \
  ...
  -- \
  ...
  --sync=fast

[Install]
WantedBy=multi-user.target
EOF
```
Refer to the CLI arguments in section 1.
Enable the previously generated service and start it:
```sh
sudo systemctl enable centrifuge.service
sudo systemctl start centrifuge.service
```
If everything was set up correctly, your node should now start synchronizing. This will take several hours, depending on your hardware. To check the status of the running service or to follow the logs, use:
```sh
sudo systemctl status centrifuge.service
sudo journalctl -u centrifuge.service -f
```
Once your node is fully synced, you can run a cURL request to check its status. If your node is externally available, replace `localhost` with your URL.
```sh
curl -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "eth_syncing", "params":[]}' \
  localhost:9933
```
The expected output if the node is synced is `{"jsonrpc":"2.0","result":false,"id":1}`.
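If you script this check, a small helper can test the response for the synced state; the endpoint in the usage comment is the local default and may differ in your setup:

```sh
# Sketch: succeeds when an eth_syncing JSON-RPC response indicates a fully
# synced node ("result" is false). Reads the response on stdin.
is_synced() {
  grep -q '"result":false'
}

# Hypothetical usage:
# curl -s -H "Content-Type: application/json" \
#   -d '{"id":1, "jsonrpc":"2.0", "method": "eth_syncing", "params":[]}' \
#   localhost:9933 | is_synced && echo "synced"
```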
You can monitor your node to make sure it is ready to serve RPC calls using Parity's ws-health-exporter. More info can be found on Parity's Docker Hub page.
As with any blockchain, the storage will eventually run out. It is recommended to monitor your storage, or to use some kind of auto-scaling storage to account for this. It is also recommended to set up a reverse proxy or an API gateway to monitor the API calls, tracking response rates and response codes to look for errors over time. How to do this is out of the scope of this documentation.
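As a starting point for storage monitoring, a cron-friendly check might look like the following sketch (the mount point and threshold in the usage comment are assumptions; adapt both):

```sh
# Print the fill percentage of the filesystem backing a given path
# (POSIX df output: column 5 is "Use%" on the second line).
disk_used_pct() {
  df -P "$1" | awk 'NR == 2 { gsub("%", "", $5); print $5 }'
}

# Fail (non-zero exit) once usage reaches the threshold, e.g. for cron alerts.
check_storage() {
  pct=$(disk_used_pct "$1")
  [ "$pct" -lt "$2" ] || { echo "WARN: $1 is ${pct}% full" >&2; return 1; }
}

# Hypothetical usage:
# check_storage /data 85 || your-alerting-command
```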
During fast syncing it is expected to see the following error messages on the `[Relaychain]` side:
```
ERROR tokio-runtime-worker sc_service::client::client: [Relaychain] Unable to pin block for finality notification. hash: 0x866f…387c, Error: UnknownBlock: State already discarded [...]
WARN tokio-runtime-worker parachain::runtime-api: [Relaychain] cannot query the runtime API version: Api called for an unknown Block: State already discarded [...]
```
As long as the following logs are seen
```
INFO tokio-runtime-worker substrate: [Relaychain] ⚙️ Syncing, target=#18279012 (9 peers), best: #27674 (0x28a4…6fe6), finalized #27648 (0x406d…b89e), ⬇ 1.1MiB/s ⬆ 34.6kiB/s
INFO tokio-runtime-worker substrate: [Parachain] ⚙️ Syncing 469.4 bps, target=#4306117 (15 peers), best: #33634 (0x79d2…0a45), finalized #0 (0xb3db…9d82), ⬇ 1.3MiB/s ⬆ 2.0kiB/s
```
everything is working correctly. Once the chain is fully synced, the errors are expected to vanish.
If the chain stops syncing, often due to unavailable blocks, please restart your node. In most cases the reason is that your node's p2p view is momentarily incorrect, resulting in your node dropping peers and being unable to sync further. A restart helps in these cases.
Example logs will look like the following:
```
WARN tokio-runtime-worker sync: [Parachain] 💔 Error importing block 0x88591cb0cb4f66474b189a34abab560e335dc508cb8e7926343d6cf8db6840b7: consensus error: Import failed: Database
```
It is common that bootnodes change their p2p-identity, leading to the following logs:
```
WARN tokio-runtime-worker sc_network::service: [Relaychain] 💔 The bootnode you want to connect to at `/dns/polkadot-bootnode.polkadotters.com/tcp/30333/p2p/12D3KooWCgNAXvn3spYBeieVWeZ5V5jcMha5Qq1hLMtGTcFPk93Y` provided a different peer ID `12D3KooWPAVUgBaBk6n8SztLrMk8ESByncbAfRKUdxY1nygb9zG3` than the one you expect `12D3KooWCgNAXvn3spYBeieVWeZ5V5jcMha5Qq1hLMtGTcFPk93Y`.
```
These logs can be safely ignored.