Quick Upgrade

Validators perform critical functions for the network and, as such, have a strict uptime requirement. Validators may have to go offline for short periods to upgrade client software or the host machine. Usually, a standard client upgrade only requires you to stop the service, replace the binary (or the Docker container), and restart the service. This operation can be completed within a session (4 hours) with minimal downtime.

Docker Container

For a node running in a Docker container, follow these steps to upgrade your node client version:

  1. Stop and rename the Docker container so it can be re-created with the latest image.
docker stop <CONTAINER_ID>
docker rename <CONTAINER_ID> prev-<CONTAINER_ID>
  2. Pull the latest Docker image from GitHub:
docker image rm ghcr.io/futureversecom/seed:latest
docker image pull ghcr.io/futureversecom/seed:latest
  3. Re-create the validator container using the latest image:
docker run ...

Info: Re-creating the Docker container will not cause your node to re-sync from block 0, as long as your Docker container is set up with the correct volume mapping as per the instructions.

  4. Confirm your node is working correctly and running the latest version on Telemetry, then remove the old container:
docker rm prev-<CONTAINER_ID>
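The four steps above can be gathered into a single reviewable plan. The sketch below only prints the commands so you can inspect them before running; the default container name `seed-validator` is an assumption, so substitute your own, and note that the re-create step must use the same `docker run` flags (volume mapping, ports) as your original setup.

```shell
#!/usr/bin/env sh
# Hypothetical helper: prints the container upgrade commands for review.
# CONTAINER_ID defaults to an assumed name; override it for your setup.
CONTAINER_ID="${CONTAINER_ID:-seed-validator}"
IMAGE="ghcr.io/futureversecom/seed:latest"

plan_upgrade() {
  cat <<EOF
docker stop ${CONTAINER_ID}
docker rename ${CONTAINER_ID} prev-${CONTAINER_ID}
docker image rm ${IMAGE}
docker image pull ${IMAGE}
# re-create the container with your original 'docker run' flags, verify on
# Telemetry, then remove the old container:
docker rm prev-${CONTAINER_ID}
EOF
}

plan_upgrade
```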

Binary Source

For a node running a binary built from source, follow the steps here to rebuild the binary at the latest version and restart your node.
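For illustration, a source rebuild typically follows the shape below. This is a sketch, not the official procedure: the repository path, the `<LATEST_TAG>` placeholder, the binary name, and the systemd service name are all assumptions to adapt to your own setup. The sketch prints the commands rather than executing them, so you can review before running.

```shell
#!/usr/bin/env sh
# Hypothetical rebuild plan for a source-built node; all names are assumptions.
SEED_DIR="${SEED_DIR:-$HOME/seed}"
SERVICE="${SERVICE:-seed}"

rebuild_plan() {
  cat <<EOF
cd ${SEED_DIR}
git fetch --tags
git checkout <LATEST_TAG>
cargo build --release
sudo systemctl stop ${SERVICE}
sudo cp target/release/seed /usr/local/bin/seed
sudo systemctl start ${SERVICE}
EOF
}

rebuild_plan
```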

Long-lead Upgrade

Validators may also need to perform long-lead maintenance tasks spanning more than one session. Under these circumstances, an active validator may choose to chill their stash, perform the maintenance, and request to validate again. Alternatively, the validator may substitute the active validator server with another, allowing the former to undergo maintenance with zero downtime.

This section describes how to seamlessly substitute Node A, an active validator server, with Node B, a substitute validator node, to allow upgrade/maintenance operations on Node A.

Step 1: At Session N

  1. Follow steps 1 and 2 from the Setup Instructions above to set up the new Node B.
  2. Go to the Portal Extrinsics page and input the values as per the screenshot below, with the session keys value from step 2 of the Setup Instructions.
  3. Go to the Portal Chain State page and query session.currentIndex() to take note of the session in which this extrinsic was executed.
  4. Allow the current session to elapse and then wait for two full sessions.

Info:
You must keep Node A running during this time. session.setKeys does not take effect immediately; the remainder of the current session plus two full sessions must elapse first. If you switch off Node A too early, you risk being removed from the active set.
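To make the waiting period concrete: if session.currentIndex() returned N when the extrinsic was submitted, the remainder of session N plus two full sessions must elapse, so the handover is verified at session N+3. A small sketch of the arithmetic (the example index is arbitrary):

```shell
#!/usr/bin/env sh
# Example only: compute the earliest session at which it is safe to act,
# given the session index recorded when session.setKeys was submitted.
current_index=42                  # example value read from session.currentIndex()
safe_at=$((current_index + 3))    # remainder of session N, plus two full sessions
echo "Keep Node A running until session ${safe_at}"
```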

Step 2: At Session N+3

Verify that the authority has changed from Node A to Node B by inspecting Node A's log for messages like the ones below:

2024-02-29 21:28:26 👴 Applying authority set change scheduled at block #541
2024-02-29 21:28:26 👴 Applying GRANDPA set change to new set...

And in Node B, you should see log messages like these:

2024-02-29 20:32:04 🙌 Starting consensus session on top of parent 0xa692ad56e2fb5601fc04e4e9cd41615b227fef0e93129601c5143ba8c723291c
2024-02-29 20:32:04 🎁 Prepared block for proposing at 12055 (3 ms) [hash: 0x8f1496f960e7e32a952fcbc335eb188c776e147e8facf3b7ca2aeb12f2c2a82c; parent_hash: 0xa692…291c; extrinsics (1): [0x47ac…c7f3]]
2024-02-29 20:32:04 🔖 Pre-sealed block for proposal at 12055. Hash now 0xe3dbc6f48223029d0e0d281c3c05994beebb9fca4b5e8a0befdb4938b36133b2, previously 0x8f1496f960e7e32a952fcbc335eb188c776e147e8facf3b7ca2aeb12f2c2a82c.
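Rather than scrolling full logs, you can filter for the handover lines shown above. The filter pattern below matches those messages; the piped `docker logs` usage assumes the container setup from the Docker section (use `journalctl -u <service>` instead for a binary install).

```shell
#!/usr/bin/env sh
# Matches the authority-change log lines shown above.
handover_filter() {
  grep -E "Applying (authority set change|GRANDPA set change)"
}

# Example usage (assumed container name):
#   docker logs -f <CONTAINER_ID> 2>&1 | handover_filter
```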

Once confirmed, you can safely perform any maintenance operations on Node A and switch back to it by following the steps above with the node roles reversed. Again, don’t forget to keep Node B running until the current session finishes and two additional full sessions have elapsed.