Cloud
NOTE: Eluvio provides the following information to make deployments easier. These steps have not been fully tested and may not be the optimal method for deploying in the targeted cloud.
The preceding documentation provided general steps for deploying a validator on Linux servers. Many cloud providers can run a container image directly, and this document points to methods for taking advantage of that.
Google Cloud Platform (GCP)
Google’s VM-as-a-service offering, Compute Engine, allows running a container directly.
- This section presumes the use of gcloud with the project set accordingly. Console configuration steps may be added at a later date.
- The zone used in these examples is us-central1-a, but any zone should work.
Create the disks
The first step is to create a persistent disk to house the validator configuration and data. If the disk is new and attached without being mounted or manipulated beforehand, it will be formatted to ext4, which is what we want. Since an SSD is desirable for performance, use the gcloud compute disk-types list command to get a list of disk types. For this example, pd-ssd is used. The data resiliency requirements are customer dependent.
$ gcloud compute disk-types list --zones us-central1-a | grep ssd
local-ssd us-central1-a 375GB-375GB
pd-ssd us-central1-a 10GB-65536GB
Now use gcloud to create the disk, named eluvio-validator-1 in this example.
gcloud compute disks create eluvio-validator-1 --type=pd-ssd --size=300GB --zone=us-central1-a
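To confirm the disk was created with the intended size and type, it can be inspected with gcloud. A sketch; the --format projection shown is one option among several:

```shell
# Show just the size and type of the newly created data disk.
gcloud compute disks describe eluvio-validator-1 \
  --zone=us-central1-a \
  --format='value(sizeGb, type)'
```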
A 300GB disk is sufficient for this example. See the Specifications documentation for full guidance.
A second disk is needed for the configuration. 10GB is overkill, but it is the smallest size allowed. This disk uses pd-standard, since the config does not need the speed SSDs provide.
The config disk will store the keys for the validator. It may be advisable to use a Customer-Supplied Encryption Key (CSEK) on this disk.
gcloud compute disks create validator-1-config --type=pd-standard --size=10GB --zone=us-central1-a
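One way to apply a CSEK is to generate a random 256-bit key locally and pass it to gcloud via a key file. A hedged sketch, assuming the raw CSEK key-file format from Google's documentation (PROJECT_ID is a placeholder; verify the format against current GCP docs before relying on it):

```shell
# Generate a random 256-bit key, base64-encoded (raw CSEK keys must be 32 bytes).
KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')

# Write the CSEK key file. The uri must match the exact project/zone/disk
# being created; PROJECT_ID below is a placeholder for your GCP project.
cat > csek-key.json <<EOF
[
  {
    "uri": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-central1-a/disks/validator-1-config",
    "key": "${KEY}",
    "key-type": "raw"
  }
]
EOF

# Then create the disk with the key file (not run here):
# gcloud compute disks create validator-1-config --type=pd-standard --size=10GB \
#   --zone=us-central1-a --csek-key-file=csek-key.json
```

Keep the key file safe; a CSEK-protected disk cannot be attached without it.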
There are now two disks for use in the container:
- validator-1-config, to be mounted as /conf
- eluvio-validator-1, to be mounted as /data
Launching the containers
For full details on the commands and concepts used, see the Containers on Compute Engine documentation, with attention to the persistent disk mounting options.
Two containers will be used. The first is ephemeral and exists only to set up the node. These steps could be combined on the same VM, but two containers are used here to illustrate the different roles. The container names used in this section of the document:
- my-validator-1-setup: used to set up the config file
- my-validator-1-running: used to run the validator. This will be a larger host and should have a static IP
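The static IP mentioned above can be reserved ahead of time and attached to the VM at creation. A hedged sketch; the address name validator-1-ip is an arbitrary choice:

```shell
# Reserve a regional static external IP in the region containing us-central1-a.
gcloud compute addresses create validator-1-ip --region=us-central1

# Show the reserved address so it can be shared with other validators later.
gcloud compute addresses describe validator-1-ip \
  --region=us-central1 --format='value(address)'

# When creating the daemon VM, pass the reserved address:
#   gcloud compute instances create-with-container ... --address=validator-1-ip
```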
The setup container
Since the Eluvio-published container supports both a setup phase and a daemon phase, the first container to start is the one that formats the disks and sets up the configuration.
gcloud compute instances create-with-container my-validator-1-setup --zone us-central1-a \
--container-image us-docker.pkg.dev/eluvio-containers-339519/public/elvmasterd-validator:latest \
--disk name=eluvio-validator-1 \
--disk name=validator-1-config \
--container-mount-disk mount-path="/data",name=eluvio-validator-1,mode=rw \
--container-mount-disk mount-path="/conf",name=validator-1-config,mode=rw \
--container-arg "setup"
Note the flags used to attach the disks and set up the disk mounts in the container.
Setup will only take a few minutes. Because the container shutting down does not shut down the VM, the VM needs to be deleted via gcloud.
gcloud compute instances delete my-validator-1-setup --zone=us-central1-a
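If the setup result should be verified before the VM is deleted, the container's status and logs can be checked over SSH first. A possible sketch; the container name varies per VM:

```shell
# List containers on the setup VM; the setup container should show "Exited (0 ...)".
gcloud compute ssh my-validator-1-setup --zone=us-central1-a \
  --command='docker ps -a'

# Review the setup logs (replace <container-name> with the name from the listing):
# gcloud compute ssh my-validator-1-setup --zone=us-central1-a \
#   --command='docker logs <container-name>'
```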
The “daemon” container
Since the elvmasterd service needs to allow inbound connections, set up a firewall rule to allow this traffic.
gcloud compute firewall-rules create allow-eluvio-chain \
--allow tcp:40304,udp:40304 --target-tags eluvio-blockchain
Note that the previous container ran on the default machine type. When running as a daemon, it should run on a larger server, such as the e2-standard-16 machine shape specified in the Specifications documentation.
The command below looks much the same, but it specifies the larger machine type, applies the firewall tag so the correct ports are reachable, and uses the daemon-start command.
gcloud compute instances create-with-container my-validator-1-running --zone us-central1-a \
--machine-type e2-standard-16 \
--container-image us-docker.pkg.dev/eluvio-containers-339519/public/elvmasterd-validator:latest \
--disk name=eluvio-validator-1 \
--disk name=validator-1-config \
--container-mount-disk mount-path="/data",name=eluvio-validator-1,mode=rw \
--container-mount-disk mount-path="/conf",name=validator-1-config,mode=rw \
--tags eluvio-blockchain \
--container-arg "daemon-start"
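Once the daemon VM is up, reachability of the chain port can be spot-checked from any outside host with a tool such as netcat. A sketch, assuming nc is installed; replace the placeholder with the VM's external address:

```shell
# Check that TCP port 40304 is reachable through the firewall rule created earlier.
nc -vz <external-ip> 40304
```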
Getting container info
Log in to the VM running the container via ssh; gcloud makes this easy.
gcloud compute ssh my-validator-1-running --zone us-central1-a
Once logged in, docker commands can be run to see the logs:
docker ps -a # to get the container name
docker logs -f <container-name> # to see the container logs
Finally, just as in the containers documentation, the /usr/local/bin/print_validator_info script can be run to get the node info:
docker exec -ti <container-name> /usr/local/bin/print_validator_info
The IP will also need to be shared. It can be found via gcloud with:
gcloud compute instances list --filter="name=('my-validator-1-running')" --zones=us-central1-a
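If only the external IP is needed (for scripting, say), gcloud's --format projection can extract it directly. A sketch:

```shell
# Print just the external (NAT) IP of the running validator VM.
gcloud compute instances list \
  --filter="name=('my-validator-1-running')" --zones=us-central1-a \
  --format='value(networkInterfaces[0].accessConfigs[0].natIP)'
```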
joe-user@my-validator-1-running ~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0b2d71bf3f01 us-docker.pkg.dev/eluvio-containers-339519/public/elvmasterd-validator:latest "/usr/local/bin/elvm…" 6 minutes ago Up 6 minutes klt-my-validator-1-running-btat
51cf6fdffdea gcr.io/stackdriver-agents/stackdriver-logging-agent:1.8.9 "/entrypoint.sh /usr…" 6 minutes ago Up 6 minutes stackdriver-logging-agent
joe-user@my-validator-1-running ~ $ docker logs -f klt-my-validator-1-running-btat
DEBUG: /usr/local/bin/elvmasterd-wrapper: arg -> starting daemon with defaults
DEBUG: /usr/local/bin/elvmasterd-wrapper: Starting daemon with config in /conf
...
joe-user@my-validator-1-running ~ $ docker exec -ti klt-my-validator-1-running-btat /usr/local/bin/print_validator_info
Details to provide other validators:
enode: bfeac4...6
address: 0x9...F
Keep this info for your records to recreate wallet:
mnemonic: ichi dul three ... douze
Amazon Web Services (AWS)
TBD