Kubernetes Staking Setup
Hardware Requirements
To ensure optimal performance of your Kubernetes cluster, we recommend the following minimum configurations:
- Nodes: Your Kubernetes cluster should include a minimum of three nodes.
- CPU: Each node should have at least 16 cores running at a speed of 2.8 GHz or more.
- Memory: Each node should be equipped with a minimum of 32GB RAM.
- Storage: SSD storage is required, with a minimum of 1000GB for each execution client and 500GB for each consensus client.
- Network: Broadband connection of at least 100 MBit/sec is required.
- Helm: We recommend using version 3.10 or later. Although our charts may work with earlier versions of Helm, we have only tested them with version 3.10 and above.
- Kubernetes: We recommend using version 1.22 or later. While our charts may work with earlier versions of Kubernetes, we have only tested them with version 1.22 and above.
- PV Provisioner: Ensure that the underlying infrastructure supports PV provisioner.
Please ensure your setup meets these requirements for the best experience.
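As a quick sanity check, the storage figures above can be totalled for a typical three-replica deployment. This is only an illustrative calculation using the per-client minimums quoted in this guide:

```shell
# Rough minimum chain-data budget for a 3x execution + 3x consensus setup,
# using the per-client minimums from the requirements above (illustrative only).
EXEC_REPLICAS=3; CONS_REPLICAS=3
EXEC_DISK_GB=1000; CONS_DISK_GB=500
TOTAL_GB=$(( EXEC_REPLICAS * EXEC_DISK_GB + CONS_REPLICAS * CONS_DISK_GB ))
echo "Minimum chain-data storage: ${TOTAL_GB}GB"
```

Remember this covers chain data only; Prometheus and Grafana volumes (configured below) come on top.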
Setting up Kubernetes
Before proceeding with the setup, ensure you have installed all the necessary Helm repositories:
helm repo add stakewise https://charts.stakewise.io
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
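The Helm and Kubernetes version minimums listed above can be gated with a small comparison helper. This is a sketch assuming GNU `sort -V`; substitute the versions reported by `helm version` and `kubectl version` for the example values:

```shell
# Succeeds when $1 >= $2, comparing dotted version strings (requires GNU sort -V).
version_ge() { [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

version_ge "3.13.2" "3.10" && echo "Helm version OK"        # requirement: Helm >= 3.10
version_ge "1.27"   "1.22" && echo "Kubernetes version OK"  # requirement: Kubernetes >= 1.22
```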
Step 1: Monitoring Configuration
Skip if Already Configured
Note: If Prometheus is already set up in your Kubernetes cluster, feel free to skip this step.
For robust system monitoring and alert management, our supported charts come with built-in capabilities to enable Prometheus, Grafana, and Alertmanager.
To install Prometheus, Grafana, and Alertmanager, follow these instructions:
- Default
- For GKE/EKS
Default:
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
--set='grafana.sidecar.dashboards.enabled=true' \
--set='grafana.sidecar.dashboards.searchNamespace=true' \
--set='prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues=false' \
--set='prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false' \
--set='prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false' \
--set='prometheus.prometheusSpec.probeSelectorNilUsesHelmValues=false' \
--create-namespace \
--namespace monitoring \
--version 52.1.0 \
-f prom.yaml
prom.yaml:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "{REPLACE_ME_WITH_STORAGE_CLASS_NAME}"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
grafana:
  persistence:
    enabled: true
    type: pvc
    storageClassName: "{REPLACE_ME_WITH_STORAGE_CLASS_NAME}"
    accessModes: ["ReadWriteOnce"]
    size: 10Gi
    finalizers:
      - kubernetes.io/pvc-protection
For GKE/EKS:
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
--set='kubeControllerManager.enabled=false' \
--set='kubeEtcd.enabled=false' \
--set='kubeScheduler.enabled=false' \
--set='kubeProxy.enabled=false' \
--set='defaultRules.rules.etcd=false' \
--set='defaultRules.rules.kubernetesSystem=false' \
--set='defaultRules.rules.kubeScheduler=false' \
--set='grafana.sidecar.dashboards.enabled=true' \
--set='grafana.sidecar.dashboards.searchNamespace=true' \
--set='prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues=false' \
--set='prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false' \
--set='prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false' \
--set='prometheus.prometheusSpec.probeSelectorNilUsesHelmValues=false' \
--create-namespace \
--namespace monitoring \
--version 52.1.0 \
-f prom.yaml
prom.yaml:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "gp2"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
grafana:
  persistence:
    enabled: true
    type: pvc
    storageClassName: "gp2"
    accessModes: ["ReadWriteOnce"]
    size: 10Gi
    finalizers:
      - kubernetes.io/pvc-protection
Optional (Grafana Dashboards)
Import dashboards into Grafana manually or automatically with Helm:
helm upgrade --install grafana-stakewise-dashboards stakewise/grafana-stakewise-dashboards \
--namespace monitoring
Step 2: Configuring the Execution Node
Execution nodes play a critical role in the Ethereum ecosystem: validators rely on them when proposing new blocks, so a stable connection to the Ethereum chain is essential for running validator and beacon nodes.
Note that execution nodes must be deployed before the other components. Currently, we support the Geth, Erigon, Besu, and Nethermind execution clients.
Gnosis Chain Support
Only Nethermind and Erigon support Gnosis Chain.
Prior to deployment, you'll need to generate a JSON Web Token (JWT) secret, which secures the communication between the beacon node and the execution client. You can generate one with a command-line tool. For instance:
export JWT=`openssl rand -hex 32`
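The charts expect the secret as raw hex. As a sanity check (a sketch assuming bash), you can verify the generated value is the expected 64 hex characters, i.e. 32 bytes, before passing it to the charts:

```shell
export JWT=$(openssl rand -hex 32)
# The engine-API JWT secret is 32 random bytes, hex-encoded: 64 lowercase hex chars.
if [[ "$JWT" =~ ^[0-9a-f]{64}$ ]]; then
  echo "JWT secret looks valid"
else
  echo "JWT secret malformed: $JWT" >&2
  exit 1
fi
```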
To proceed with the deployment, choose the client you prefer and run the corresponding command:
- Geth
- Erigon
- Besu
- Nethermind
Geth:
helm upgrade --install geth stakewise/geth \
--set="global.replicaCount=3" \
--set="global.network=mainnet" \
--set="global.metrics.enabled=true" \
--set="global.metrics.serviceMonitor.enabled=true" \
--set="global.metrics.prometheusRule.enabled=true" \
--set="global.JWTSecret=${JWT}" \
--create-namespace \
--namespace chain
Erigon:
helm upgrade --install erigon stakewise/erigon \
--set="global.replicaCount=3" \
--set="global.network=mainnet" \
--set="global.metrics.enabled=true" \
--set="global.metrics.serviceMonitor.enabled=true" \
--set="global.metrics.prometheusRule.enabled=true" \
--set="global.JWTSecret=${JWT}" \
--create-namespace \
--namespace chain
Besu:
helm upgrade --install besu stakewise/besu \
--set="replicaCount=3" \
--set="network=mainnet" \
--set="metrics.serviceMonitor.enabled=true" \
--set="metrics.prometheusRule.enabled=true" \
--set="global.JWTSecret=${JWT}" \
--create-namespace \
--namespace chain
Nethermind:
helm upgrade --install nethermind stakewise/nethermind \
--set="global.replicaCount=3" \
--set="global.network=mainnet" \
--set="global.metrics.enabled=true" \
--set="global.metrics.serviceMonitor.enabled=true" \
--set="global.metrics.prometheusRule.enabled=true" \
--set="global.JWTSecret=${JWT}" \
--create-namespace \
--namespace chain
Step 3: Setting Up the Consensus Node
The consensus beacon node is essential for managing the Proof-of-Stake blockchain (Beacon Chain), using distributed consensus to validate and agree on blocks across the network. Validators connect to these beacon nodes to receive their block attestation and proposal assignments.
Since Ethereum's Merge ↗ upgrade, execution clients can no longer operate independently as full nodes. They now require pairing with a consensus client to maintain network synchronization. This creates a clear division of responsibilities:
- Execution client: Handles transaction processing, transaction gossiping, state management, and Ethereum Virtual Machine (EVM) operations
- Consensus client: Manages block building, block gossiping, and consensus logic
Gnosis Network Client Compatibility
For stable Gnosis network support, the compatible clients are as follows:
- For execution: Nethermind ↗ and Erigon ↗
- For consensus: Teku ↗ and Lighthouse ↗
Important Configuration Note
While setting up consensus charts, keep in mind that they don't have a replicaCount parameter. Instead, you specify a list of execution endpoints, and a separate StatefulSet is created for each endpoint.
Choose one or two clients to install and deploy:
- Prysm
- Lighthouse
- Teku
Prysm:
helm upgrade --install prysm stakewise/prysm \
--set="global.network=mainnet" \
--set="global.JWTSecret=${JWT}" \
--set="global.executionEndpoints[0]=http://nethermind-0.nethermind:8545" \
--set="global.executionEndpoints[1]=http://nethermind-1.nethermind:8545" \
--set="global.executionEndpoints[2]=http://nethermind-2.nethermind:8545" \
--set="global.metrics.enabled=true" \
--set="global.metrics.serviceMonitor.enabled=true" \
--set="global.metrics.prometheusRule.enabled=true" \
--create-namespace \
--namespace chain
Lighthouse:
helm upgrade --install lighthouse stakewise/lighthouse \
--set="global.network=mainnet" \
--set="global.JWTSecret=${JWT}" \
--set="global.executionEndpoints[0]=http://geth-0.geth:8551" \
--set="global.executionEndpoints[1]=http://geth-1.geth:8551" \
--set="global.executionEndpoints[2]=http://geth-2.geth:8551" \
--set="global.metrics.enabled=true" \
--set="global.metrics.serviceMonitor.enabled=true" \
--set="global.metrics.prometheusRule.enabled=true" \
--create-namespace \
--namespace chain
Teku:
helm upgrade --install teku stakewise/teku \
--set="global.network=mainnet" \
--set="global.JWTSecret=${JWT}" \
--set="global.executionEndpoints[0]=http://erigon-0.erigon:8551" \
--set="global.executionEndpoints[1]=http://erigon-1.erigon:8551" \
--set="global.executionEndpoints[2]=http://erigon-2.erigon:8551" \
--set="global.metrics.enabled=true" \
--set="global.metrics.serviceMonitor.enabled=true" \
--set="global.metrics.prometheusRule.enabled=true" \
--create-namespace \
--namespace chain
Recommended Configuration
The recommended configuration involves deploying two replicas of the primary consensus client and one replica of the standby consensus client. Validators will establish connections evenly across all primary replicas and will automatically switch to another primary replica if their current connection fails.
In the event that the primary client encounters an issue, validators can transition to the standby client. This ensures a seamless operation as they won't have to wait for the standby client to synchronize with the chain.
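Under this layout, the validators' endpoint list mixes primary and standby replicas. For example, with two Lighthouse replicas as primary and one Teku replica as standby, the endpoint flags might read as follows (hypothetical service names, following the naming pattern used elsewhere in this guide):

```shell
--set='beaconChainRpcEndpoints[0]=http://lighthouse-0.chain:5052' \
--set='beaconChainRpcEndpoints[1]=http://lighthouse-1.chain:5052' \
--set='beaconChainRpcEndpoints[2]=http://teku-0.chain:5052' \
```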
Step 4: Prepare Validator Keys
Deploy PostgreSQL
PostgreSQL Setup
Installing and configuring PostgreSQL is beyond the scope of this guide; operators should choose and implement a reliable solution on their own. PostgreSQL stores the validators' keys in encrypted form, as well as the web3signer slashing-protection history.
After the database is deployed, two databases and two users must be created:
- web3signer - stores web3signer's data
- operator - stores validator keys and configs generated via v3-operator
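As an illustrative bootstrap for those two databases and users (names from this guide; the passwords are placeholders you must replace with your own), the DDL could be written out and applied with psql:

```shell
# Write the bootstrap DDL for the two databases/users described above.
# Passwords are placeholders - substitute your own before applying.
cat > bootstrap.sql <<'SQL'
CREATE USER web3signer WITH PASSWORD 'REPLACE_ME';
CREATE DATABASE web3signer OWNER web3signer;
CREATE USER operator WITH PASSWORD 'REPLACE_ME';
CREATE DATABASE operator OWNER operator;
SQL
# Apply against your server, e.g.: psql -U postgres -f bootstrap.sql
echo "wrote $(grep -c 'CREATE' bootstrap.sql) statements"
```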
Prepare Operator
Complete the following steps before proceeding:
Setup Database
The command creates tables and generates an encryption key for the database:
./v3-operator remote-db \
--db-url=postgresql://postgres:postgres@localhost/operator \
--vault=0x8189aF89A7718C1baB5628399FC0ba50C6949bCc \
setup
Successfully configured remote database.
Encryption key: D/6CbpJen3J0ue0tWcd+d4KKHpT4kaSz3IzG5jz5LFI=
NB! You must store the generated encryption key in secure cold storage. You will have to re-do the setup if you lose it.
Load Keystores to the Database
The command loads encrypted keystores and operator config to the remote DB:
./v3-operator remote-db \
--db-url=postgresql://postgres:postgres@localhost/operator \
--vault=0x8189aF89A7718C1baB5628399FC0ba50C6949bCc \
upload-keypairs \
--encrypt-key=D/6CbpJen3J0ue0tWcd+d4KKHpT4kaSz3IzG5jz5LFI= \
--execution-endpoints=http://localhost:8545
Loading keystores from /Users/user/.stakewise/0x8189af89a7718c1bab5628399fc0ba50c6949bcc/keystores...
Encrypting 10000 keystores...
Uploading updates to the remote db...
Successfully uploaded keypairs for the 0x8189aF89A7718C1baB5628399FC0ba50C6949bCc vault.
Step 5: Web3Signer
Web3Signer is an open-source signing service, written in Java and licensed under Apache 2.0. It can sign on multiple platforms using private keys stored in an external vault or encrypted on disk.
Deploy Web3Signer
Once you've successfully deployed the database, deploy web3signer service:
helm upgrade --install web3signer stakewise/web3signer \
--set='global.network=mainnet' \
--set='global.vault={VAULT_ADDRESS}' \
--set='replicaCount=3' \
--set='dbUrl=jdbc:postgresql://cloudsqlproxy.default/web3signer' \
--set='dbUsername=username' \
--set='dbPassword=password' \
--set='dbKeystoreUrl=postgresql://example:example@cloudsqlproxy.default/operator' \
--set='decryptionKey=<decryption key from the operator CLI>' \
--create-namespace \
--namespace validators
Step 6: Validators
Validators are responsible for storing data, processing transactions, and adding new blocks to the blockchain. This keeps Ethereum secure for everyone and earns new ETH in the process.
Before deploying the validators, make sure you have deployed Web3Signer and synchronized validator keys in the steps above.
Deploy the chart, after specifying all required parameters:
helm upgrade --install validators stakewise/web3signer-validators \
--set='global.network=mainnet' \
--set='global.vault={VAULT_ADDRESS}' \
--set='type=lighthouse' \
--set='validatorsCount=8' \
--set='beaconChainRpcEndpoints[0]=http://lighthouse-0.chain:5052' \
--set='beaconChainRpcEndpoints[1]=http://lighthouse-1.chain:5052' \
--set='beaconChainRpcEndpoints[2]=http://lighthouse-2.chain:5052' \
--set='web3signerEndpoint=http://web3signer:6174' \
--set='dbKeystoreUrl=postgresql://example:example@cloudsqlproxy.default/operator' \
--set='graffiti=StakeWise' \
--set='metrics.enabled=true' \
--set='metrics.serviceMonitor.enabled=true' \
--set='metrics.prometheusRule.enabled=true' \
--set='suggestedFeeRecipient={FEE_RECIPIENT_ADDRESS}' \
--create-namespace \
--namespace validators
Validator Restart Required
Make sure you have the right number of validators running, and restart them so that they pick up the latest changes from Web3Signer.
Understanding validatorsCount
validatorsCount: determines the total number of validators you're going to use. The validator keys are distributed equally among all validators. For example, if you have 1000 keys and 10 validators, each validator will have 100 keys; with 20 validators, each will have 50 keys, and so on.
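The key distribution is simple integer division; using the example figures above:

```shell
KEYS=1000
VALIDATORS=10
# Each validator client receives an equal share of the keys.
echo "keys per validator: $(( KEYS / VALIDATORS ))"   # prints 100
```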
Address Configuration
{FEE_RECIPIENT_ADDRESS} - address from the vault page in the details section: Validator fee recipient
{VAULT_ADDRESS} - address from the vault page in the details section: Contract address
Step 7: Deploy Operator
Before deploying the v3-operator service, create a Kubernetes secret containing the operator wallet:
kubectl create secret --namespace operator generic v3-operator-wallet-data --from-file=/home/username/.stakewise/<vault>/wallet
Create a values.yaml for the operator deployment:
settings:
  verbose: "false"
  network: "mainnet"
  vault: "{VAULT_ADDRESS}"
  executionEndpoints: "https://node.example.com/execution"
  consensusEndpoints: "https://node.example.com/consensus"
  walletSecretName: "v3-operator-wallet-data"
  remoteDbConfig:
    enabled: true
    dbUrl: "postgresql://example:example@cloudsqlproxy.default/operator"
    remoteSignerUrl: "http://web3signer.validators:6174"
helm upgrade --install v3-operator stakewise/v3-operator \
-f values.yaml \
--namespace operator
Final Configuration
{VAULT_ADDRESS} - address from the vault page in the details section: Contract address