The Obol Stack is a framework to make it easier to distribute and run blockchain networks and decentralised applications (dApps) locally. The stack is built on Kubernetes, with Helm as a package management system.
The Obol Stack provides a deployment-centric approach where you can easily install and manage multiple blockchain network instances (Ethereum, Aztec, etc.) with configurable clients and settings. Each network installation creates a unique deployment instance with its own namespace, resources, and configuration - allowing you to run mainnet and testnet side-by-side, or test different client combinations independently.
Important
The Obol Stack is alpha software. It is incomplete and may not always work smoothly. If you encounter an issue that is not documented, please open a GitHub issue if an appropriate one does not already exist.
The Obol Stack requires Docker to run a local Kubernetes cluster. Install Docker:
- Linux: Follow the Docker Engine installation guide
- macOS/Windows: Install Docker Desktop
The easiest way to install the Obol Stack is using the obolup bootstrap installer.
Run the installer with:
```sh
bash <(curl -s https://stack.obol.org)
```

What the installer does:
- Verifies Docker is running
- Installs the `obol` CLI binary to `~/.local/bin/obol`
- Installs required dependencies (kubectl, helm, k3d, helmfile, k9s) to `~/.local/bin/`
- Adds `obol.stack` to your `/etc/hosts` file (requires sudo) to enable local domain access
- Prompts you to add `~/.local/bin` to your PATH by updating your shell profile
- Prompts you to start the cluster and open the Obol application in your browser
PATH Configuration:
The installer will detect your shell (bash/zsh) and ask if you want to automatically add ~/.local/bin to your PATH. If you choose automatic configuration, it will add this line to your shell profile (~/.bashrc or ~/.zshrc):
```sh
export PATH="$HOME/.local/bin:$PATH"
```

After installation, reload your shell configuration:
```sh
# For bash
source ~/.bashrc

# For zsh
source ~/.zshrc
```

Manual PATH Configuration:
If you prefer to configure PATH manually, add this line to your shell profile:
```sh
# Add to ~/.bashrc (bash) or ~/.zshrc (zsh)
export PATH="$HOME/.local/bin:$PATH"
```

Then reload your shell or start a new terminal session.
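If you script this yourself, a small POSIX-sh sketch makes the profile edit idempotent by checking before appending (`~/.bashrc` is just an example target; adjust for your shell):

```sh
# Append the export line only if ~/.local/bin is not already on PATH.
path_contains() {
  case ":$PATH:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

if ! path_contains "$HOME/.local/bin"; then
  echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$HOME/.bashrc"
fi
```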
Using obol without PATH:
If you haven't added ~/.local/bin to your PATH, you can always run commands directly:
```sh
~/.local/bin/obol version
~/.local/bin/obol stack init
```

Verify the installation:

```sh
obol version
```

Once installed, you can start your local Ethereum stack with a few commands:
```sh
# Initialize the stack configuration
obol stack init

# Start the Kubernetes cluster
obol stack up

# Install a blockchain network (creates a unique deployment)
obol network install ethereum
# This creates a deployment like: ethereum-nervous-otter

# Deploy the network to the cluster
obol network sync ethereum/nervous-otter

# Install another network configuration
obol network install ethereum --network=holesky
# This creates a separate deployment like: ethereum-happy-panda

# Deploy the second network
obol network sync ethereum/happy-panda

# View cluster resources (opens interactive terminal UI)
obol k9s
```

The stack will create a local Kubernetes cluster. Each network installation creates a uniquely-namespaced deployment instance, allowing you to run multiple configurations simultaneously.
Tip
Use obol network list to see all available networks. Customize installations with flags (e.g., obol network install ethereum --network=holesky --execution-client=geth) to create different deployment configurations. After installation, deploy to the cluster with obol network sync <network>/<id>.
Tip
You can also install arbitrary Helm charts as applications using obol app install <chart>. Find charts at Artifact Hub.
The Obol Stack uses a deployment-centric architecture where each network installation creates an isolated deployment instance that can be installed, configured, and removed independently. You can run multiple deployments of the same network type with different configurations.
View all network types that can be installed:
```sh
obol network list
```

Available networks:
- ethereum - Full Ethereum node (execution + consensus clients)
- helios - Lightweight Ethereum client
- aztec - Aztec rollup network
View installed deployments:
```sh
# List all network deployment namespaces
obol kubectl get namespaces | grep -E "ethereum|helios|aztec"

# View resources in a specific deployment
obol kubectl get all -n ethereum-nervous-otter
```

Install a network with default configuration:

```sh
obol network install ethereum
```

Each network installation creates a unique deployment instance with:
- Network configuration templated and saved to `~/.config/obol/networks/ethereum/<namespace>/`
- Configuration files ready to be deployed to the cluster
- Isolated persistent storage for blockchain data
After installing a network configuration, deploy it to the cluster:
```sh
# Deploy the network to the cluster
obol network sync ethereum/<namespace>

# Or use the namespace format
obol network sync ethereum-nervous-otter
```

The sync command:

- Reads the configuration from `~/.config/obol/networks/<network>/<namespace>/`
- Executes helmfile to deploy resources to a unique Kubernetes namespace
- Creates isolated persistent storage for blockchain data
Multiple deployments:
You can install the same network type multiple times with different configurations. Each deployment is isolated in its own namespace:
```sh
# Install mainnet with Geth + Prysm
obol network install ethereum --network=mainnet --execution-client=geth --consensus-client=prysm
# Creates configuration: ethereum-nervous-otter

# Deploy to cluster
obol network sync ethereum/nervous-otter

# Install Holesky testnet with Reth + Lighthouse
obol network install ethereum --network=holesky --execution-client=reth --consensus-client=lighthouse
# Creates configuration: ethereum-laughing-elephant

# Deploy Holesky to cluster
obol network sync ethereum/laughing-elephant

# Install another Holesky instance for testing
obol network install ethereum --network=holesky
# Creates configuration: ethereum-happy-panda

# Deploy second Holesky instance
obol network sync ethereum/happy-panda
```

Ethereum configuration options:

- `--network`: Choose the network (mainnet, sepolia, holesky, hoodi)
- `--execution-client`: Choose the execution client (reth, geth, nethermind, besu, erigon, ethereumjs)
- `--consensus-client`: Choose the consensus client (lighthouse, prysm, teku, nimbus, lodestar, grandine)
Tip
Use obol network install <network> --help to see all available options for a specific network
Unique deployment instances:
Each network installation creates an isolated deployment with a unique namespace (e.g., ethereum-nervous-otter). This allows you to:
- Run multiple instances of the same network type (e.g., mainnet and testnet)
- Test different client combinations without conflicts
- Independently manage, update, and delete deployments
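As an aside, the generated namespace suffixes are petname-style word pairs. A toy POSIX-sh sketch of the idea (the CLI's actual word lists and randomness source will differ):

```sh
# Build an ID like "nervous-otter" from two small word lists.
adjectives="nervous happy laughing eager"
animals="otter panda elephant fox"

# nth_word N w1 w2 ... -> prints the Nth word (1-based)
nth_word() { shift "$1"; echo "$1"; }

# Derive two indices in 1..4 from the process ID (illustrative only).
a=$(( $$ % 4 + 1 ))
b=$(( ($$ / 4) % 4 + 1 ))
id="$(nth_word "$a" $adjectives)-$(nth_word "$b" $animals)"
echo "ethereum-$id"
```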
Per-deployment resources:
- Unique namespace: All Kubernetes resources isolated (e.g., `ethereum-nervous-otter`, `ethereum-happy-panda`)
- Configuration files: Templated helmfile saved to `~/.config/obol/networks/ethereum/<namespace>/`
- Persistent volumes: Blockchain data stored in `~/.local/share/obol/<cluster-id>/networks/<network>_<namespace>/`
- Service endpoints: Internal cluster DNS per deployment (e.g., `ethereum-rpc.ethereum-nervous-otter.svc.cluster.local`)
Install and Sync workflow:
- Install: `obol network install` generates configuration files locally from CLI flags
- Customize: Edit values and templates in `~/.config/obol/networks/<network>/<id>/` (optional)
- Sync: `obol network sync` deploys the configuration to the cluster using helmfile
- Update: Modify the configuration and re-sync to update the deployment
Remove a specific network deployment instance and clean up all associated resources:
```sh
# List all deployments to find the namespace
obol kubectl get namespaces | grep ethereum

# Delete a specific deployment
obol network delete ethereum-nervous-otter

# Skip confirmation prompt
obol network delete ethereum-nervous-otter --force
```

This command will:

- Remove the deployment configuration from `~/.config/obol/networks/ethereum/<namespace>/`
- Delete the Kubernetes namespace and all deployed resources
- Clean up associated persistent volumes and blockchain data
Warning
Deleting a deployment removes all associated data and resources. Use with caution.
Note
You can have multiple deployments of the same network type. Deleting one deployment (e.g., ethereum-nervous-otter) does not affect other deployments (e.g., ethereum-happy-panda).
The Obol Stack supports installing arbitrary Helm charts as managed applications. Each application installation creates an isolated deployment with its own namespace, similar to network deployments.
Install any Helm chart using one of the supported reference formats:
```sh
# Install using repo/chart format (resolved via ArtifactHub)
obol app install bitnami/redis
obol app install bitnami/redis@19.0.0

# Install using direct URL
obol app install https://charts.bitnami.com/bitnami/redis-19.0.0.tgz

# Install with custom name and ID
obol app install bitnami/postgresql --name mydb --id production
```

Find charts at Artifact Hub.
Supported chart reference formats:
- `repo/chart` - Resolved via ArtifactHub (e.g., `bitnami/redis`)
- `repo/chart@version` - A specific version (e.g., `bitnami/redis@19.0.0`)
- `https://.../*.tgz` - Direct URL to a chart archive
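The `repo/chart[@version]` form is simple enough to parse with shell parameter expansion. An illustrative parser (not the CLI's actual implementation):

```sh
# Split a repo/chart[@version] reference into its parts.
parse_chart_ref() {
  ref=$1
  version=${ref##*@}           # text after the last '@', if any
  if [ "$version" = "$ref" ]; then
    version="latest"           # no '@' present
  fi
  repo_chart=${ref%@*}         # strip the '@version' suffix, if any
  repo=${repo_chart%%/*}
  chart=${repo_chart#*/}
  echo "repo=$repo chart=$chart version=$version"
}

parse_chart_ref bitnami/redis          # repo=bitnami chart=redis version=latest
parse_chart_ref bitnami/redis@19.0.0   # repo=bitnami chart=redis version=19.0.0
```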
What happens during installation:
- Resolves the chart reference (via ArtifactHub for repo/chart format)
- Fetches default values from the chart
- Generates helmfile.yaml that references the chart remotely
- Saves configuration to `~/.config/obol/applications/<app>/<id>/`
Installation options:
- `--name`: Application name (defaults to chart name)
- `--version`: Chart version (defaults to latest)
- `--id`: Deployment ID (defaults to a generated petname)
- `--force` or `-f`: Overwrite an existing deployment
After installing an application, deploy it to the cluster:
```sh
# Deploy the application
obol app sync postgresql/eager-fox

# Check status
obol kubectl get all -n postgresql-eager-fox
```

The sync command:

- Reads configuration from `~/.config/obol/applications/<app>/<id>/`
- Executes helmfile to deploy resources
- Creates a unique namespace: `<app>-<id>`
View all installed applications:
```sh
# Simple list
obol app list

# Detailed information
obol app list --verbose
```

Remove an application deployment and its cluster resources:

```sh
# Delete with confirmation prompt
obol app delete postgresql/eager-fox

# Skip confirmation
obol app delete postgresql/eager-fox --force
```

This command will:

- Remove the Kubernetes namespace and all deployed resources
- Delete the configuration directory and chart files
After installation, you can modify the values file to customize your deployment:
```sh
# Edit application values
$EDITOR ~/.config/obol/applications/postgresql/eager-fox/values.yaml

# Re-deploy with changes
obol app sync postgresql/eager-fox
```

Local files:

- `helmfile.yaml`: Deployment configuration (references the chart remotely)
- `values.yaml`: Configuration values (edit to customize)
Start the stack:

```sh
obol stack up
```

Stop the stack:

```sh
obol stack down
```

View the cluster (interactive UI):

```sh
obol k9s
```

View cluster logs:

```sh
obol kubectl logs -n <namespace> <pod-name>
```

Remove everything (including data):

```sh
obol stack purge -f
```

Warning
The purge command permanently deletes all cluster data and configuration. The -f flag is required to remove persistent volume claims (PVCs) owned by root. Use with caution.
The obol CLI includes convenient wrappers for common Kubernetes tools. These automatically use the correct cluster configuration:
```sh
# Kubectl (Kubernetes CLI)
obol kubectl get pods --all-namespaces

# Helm (Kubernetes package manager)
obol helm list --all-namespaces

# K9s (interactive cluster manager)
obol k9s

# Helmfile (declarative Helm releases)
obol helmfile list
```

The Obol Stack is configured to run on ports 80 and 443 by default. If another service is using these ports, the cluster may fail to start.
To fix this:
1. Edit the k3d configuration file:

   ```sh
   $EDITOR ~/.config/obol/k3d.yaml
   ```

2. Find the ports section, which looks like this:

   ```yaml
   ports:
     - port: 80:80
       nodeFilters:
         - loadbalancer
     - port: 8080:80
       nodeFilters:
         - loadbalancer
     - port: 443:443
       nodeFilters:
         - loadbalancer
     - port: 8443:443
       nodeFilters:
         - loadbalancer
   ```

3. Remove the `80:80` and `443:443` entries (keep the 8080 and 8443 entries):

   ```yaml
   ports:
     - port: 8080:80
       nodeFilters:
         - loadbalancer
     - port: 8443:443
       nodeFilters:
         - loadbalancer
   ```

4. Restart the cluster:

   ```sh
   obol stack down
   obol stack up
   ```
After this change, access the Obol Stack frontend using port 8080:
- Obol Stack: http://obol.stack:8080 (or http://localhost:8080)
Tip
If ports 8080 or 8443 are also in use, you can change them to any available port. For example, change 8080:80 to 9090:80 and 8443:443 to 9443:443. Then access the application at http://obol.stack:9090 or http://localhost:9090
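Before editing the configuration, you can check which of the default ports are actually taken. A small sketch that assumes bash's `/dev/tcp` feature (a successful connect means something is listening):

```sh
# Return success if something is listening on 127.0.0.1:<port>.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 80 443 8080 8443; do
  if port_in_use "$p"; then
    echo "port $p is in use"
  else
    echo "port $p is free"
  fi
done
```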
The Obol Stack follows the XDG Base Directory specification:
- Configuration: `~/.config/obol/` - Cluster config, kubeconfig, default resources, network deployments
- Data: `~/.local/share/obol/` - Persistent volumes for network blockchain data
- Binaries: `~/.local/bin/` - The `obol` CLI and dependencies
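To see where these resolve on a given machine, the XDG fallback logic can be expressed directly in shell. The defaults shown are the documented ones; whether the CLI honors `XDG_*` overrides is an assumption here:

```sh
# XDG fallbacks: the environment variable wins when set, otherwise the
# spec's default applies.
obol_config="${XDG_CONFIG_HOME:-$HOME/.config}/obol"
obol_data="${XDG_DATA_HOME:-$HOME/.local/share}/obol"
obol_bin="$HOME/.local/bin"

echo "config:   $obol_config"
echo "data:     $obol_data"
echo "binaries: $obol_bin"
```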
Configuration directory structure:
```
~/.config/obol/
├── k3d.yaml                  # Cluster configuration
├── .cluster-id               # Unique cluster identifier
├── kubeconfig.yaml           # Kubernetes access configuration
├── defaults/                 # Default stack resources
│   ├── helmfile.yaml         # Base stack configuration
│   ├── base/                 # Base Kubernetes resources
│   └── values/               # Configuration templates (ERPC, frontend)
├── networks/                 # Installed network deployments
│   ├── ethereum/             # Ethereum network deployments
│   │   ├── <namespace-1>/    # First deployment instance
│   │   └── <namespace-2>/    # Second deployment instance
│   ├── helios/               # Helios network deployments
│   └── aztec/                # Aztec network deployments
└── applications/             # Installed application deployments
    ├── redis/                # Redis deployments
    │   └── <id>/             # Deployment instance
    │       ├── helmfile.yaml # Deployment configuration
    │       └── values.yaml   # Configuration values
    └── postgresql/           # PostgreSQL deployments
        └── <id>/             # Deployment instance
```
Data directory structure:
```
~/.local/share/obol/
└── <cluster-id>/                 # Per-cluster data
    └── networks/                 # Network blockchain data
        ├── ethereum_<namespace>/ # Ethereum deployment instance data
        ├── helios_<namespace>/   # Helios deployment instance data
        └── aztec_<namespace>/    # Aztec deployment instance data
```
To completely remove the Obol Stack from your system:
1. Stop and remove the cluster:
```sh
~/.local/bin/obol stack purge -f
```

Note
The -f flag is required to remove persistent volume claims (PVCs) that are owned by root. Without this flag, data volumes will remain on your system.
2. Remove Obol binaries:
```sh
rm -f ~/.local/bin/obol \
      ~/.local/bin/kubectl \
      ~/.local/bin/helm \
      ~/.local/bin/k3d \
      ~/.local/bin/helmfile \
      ~/.local/bin/k9s \
      ~/.local/bin/obolup.sh
```

3. Remove Obol directories:

```sh
rm -rf ~/.config/obol \
       ~/.local/share/obol \
       ~/.local/state/obol
```

Note
This process removes Obol binaries from ~/.local/bin/. If you installed kubectl, helm, k3d, helmfile, or k9s separately before installing Obol, make sure not to delete those binaries. The PATH configuration in your shell profile is left unchanged.
To update to the latest version, simply run the installer again:
```sh
curl -fsSL https://raw.githubusercontent.com/ObolNetwork/obol-stack/main/obolup.sh | bash
```

The installer will detect your existing installation and upgrade it safely.
If you're contributing to the Obol Stack or want to run it from source, you can use development mode.
Setting up development mode:
1. Clone the repository:

   ```sh
   git clone https://github.com/ObolNetwork/obol-stack.git
   cd obol-stack
   ```

2. Run the installer in development mode:

   ```sh
   OBOL_DEVELOPMENT=true ./obolup.sh
   ```
What development mode does:
- Uses a local `.workspace/` directory instead of XDG directories (`~/.config/obol`, etc.)
- Installs a wrapper script that runs the `obol` CLI using `go run` (no separate build step needed)
- Code changes are immediately reflected when you run `obol` commands
- All cluster data and configuration are stored in `.workspace/`
Development workspace structure:
```
.workspace/
├── bin/                       # obol wrapper script and dependencies
├── config/                    # Cluster configuration
│   ├── k3d.yaml
│   ├── .cluster-id
│   ├── kubeconfig.yaml
│   ├── defaults/              # Default stack resources
│   │   ├── helmfile.yaml
│   │   ├── base/
│   │   └── values/
│   ├── networks/              # Installed network deployments
│   │   ├── ethereum/          # Ethereum network deployments
│   │   │   ├── <namespace-1>/ # First deployment instance
│   │   │   └── <namespace-2>/ # Second deployment instance
│   │   ├── helios/
│   │   └── aztec/
│   └── applications/          # Installed application deployments
│       ├── redis/
│       └── postgresql/
└── data/                      # Persistent volumes (network data)
```
Making code changes:
Simply edit the Go source files and run obol commands as normal. The wrapper script automatically compiles and runs your changes:
```sh
# Edit source files
$EDITOR cmd/obol/main.go

# Run immediately - no build step needed
obol stack up
```

Network development:
Networks are embedded in the binary at internal/embed/networks/. Each network uses a two-stage templating approach:
Stage 1: CLI flag templating (Go templates)
```yaml
# internal/embed/networks/ethereum/helmfile.yaml.gotmpl
values:
  # @enum mainnet,sepolia,holesky,hoodi
  # @default mainnet
  # @description Blockchain network to deploy
  - network: {{.Network}}
    # @enum reth,geth,nethermind,besu,erigon,ethereumjs
    # @default reth
    # @description Execution layer client
    executionClient: {{.ExecutionClient}}
    namespace: {{.Namespace}}
```

Stage 2: Helmfile templating (Helm values)
- CLI flags populate the values section with user choices
- Templated helmfile is saved to `.workspace/config/networks/<network>/<namespace>/`
- Helmfile processes the template and deploys resources to the cluster
Adding a new network:
- Create `internal/embed/networks/<network-name>/helmfile.yaml.gotmpl`
- Add annotations in the values section: `@enum`, `@default`, `@description`
- Use Go template syntax for values: `{{.FlagName}}`
- The CLI automatically generates `obol network install <network-name>` with flags
- Test with `obol network list` and `obol network install <network-name>`
Benefits of two-stage templating:
- Local source of truth: Configuration saved in config directory
- User can edit and re-sync deployments
- Clear separation: CLI flags → values, helmfile → Kubernetes resources
Switching back to production mode:
First, purge the development cluster to remove root-owned PVCs, then remove the .workspace/ directory and reinstall:
```sh
obol stack purge -f
rm -rf .workspace
./obolup.sh
```

Note
The obol stack purge -f command is necessary to remove persistent volume claims (PVCs) owned by root. Without the -f flag, these files will remain and may cause issues.
The Obol Stack follows a deployment-centric architecture where each network installation creates an isolated, uniquely-namespaced deployment instance.
Each network installation creates a unique deployment instance with:
- Unique namespace: Each deployment gets a generated namespace (e.g., `ethereum-nervous-otter`)
- Dedicated resources: CPU, memory, and storage allocated per deployment
- Configuration files: Templated helmfile stored in `~/.config/obol/networks/<network>/<namespace>/`
- Persistent volumes: Blockchain data stored in `~/.local/share/obol/<cluster-id>/networks/<network>_<namespace>/`
- Service endpoints: Internal cluster DNS unique to each deployment
- Independent lifecycle: Deploy, update, and delete each instance independently
Benefits of unique namespaces:
- Run multiple instances of the same network type (mainnet + testnet)
- Test different client combinations without conflicts
- Isolate resources and prevent deployment collisions
- Simple deletion: remove namespace to clean up all resources
The stack includes default resources deployed in the defaults namespace:
- ERPC (planned): Unified RPC load balancer and proxy
- Obol Frontend (planned): Web interface for stack management
- Base resources: Local path storage provisioner and core services
Default resources are configured via ~/.config/obol/defaults/helmfile.yaml and deployed automatically during obol stack up.
The stack will include eRPC, a specialized Ethereum load balancer that:
- Provides unified RPC endpoints for all network deployments
- Automatically discovers and routes requests to deployment endpoints
- Supports failover and load balancing across multiple clients
- Can be configured with 3rd party RPC fallbacks
Network deployments will register their endpoints with ERPC, enabling seamless access to blockchain data across all deployed instances. For example:
- `http://erpc.defaults.svc.cluster.local/ethereum/mainnet` → routes to the mainnet deployment
- `http://erpc.defaults.svc.cluster.local/ethereum/holesky` → routes to the holesky deployment
The obol CLI wraps standard Kubernetes tools for convenience. For advanced use cases, you can use the underlying tools directly:
```sh
# Kubernetes CLI - manage pods, services, deployments
obol kubectl get pods --all-namespaces

# Helm - manage Helm charts
obol helm list --all-namespaces

# K9s - interactive cluster manager
obol k9s

# Helmfile - declarative Helm releases
obol helmfile list
```

All commands automatically use the correct cluster configuration (KUBECONFIG).
This project is currently in alpha, and should not be used in production.
The stack aims to support all popular Kubernetes backends and all Ethereum client types, with a developer experience that scales from local app development through to production deployment and management.
Please see CONTRIBUTING.md for details.
This project is licensed under the Apache License 2.0.

