Today we’re excited to announce the release of DoltLab v2.5.0, which supports DoltLab Enterprise deployments on Kubernetes!
For this initial release, we targeted a single Kubernetes (K8s) namespace deployment with single-replica services by default. However, if you’re interested in alternative or additional support for DoltLab on K8s, please reach out to us on Discord; we’d love to chat with you.
In today’s blog we’ll cover how you can deploy a DoltLab Enterprise instance to your existing K8s cluster. To do so, you’ll generate K8s manifests for DoltLab using the installer,
and apply everything with a single kubectl command.
Overview#
DoltLab Enterprise enables Kubernetes deployments by generating static manifests that you apply with kubectl apply. To generate these static assets, you run DoltLab’s installer tool
with runtime: k8s defined in the installer_config.yaml. Running the installer with this setting writes a ./k8s directory containing all required resources, plus a convenience ./k8s/all.yaml you can use to apply everything at once.
At the time of this writing, the generated resource files are the following, grouped by DoltLab service:
doltlabdb related resources:
- k8s/doltlabdb-pvc-1.yaml, the Persistent Volume Claim for doltlabdb-data.
- k8s/doltlabdb-pvc-2.yaml, the Persistent Volume Claim for doltlabdb-root.
- k8s/doltlabdb-pvc-3.yaml, the Persistent Volume Claim for doltlabdb-backups.
- k8s/passwords-secret.yaml, the Secrets for doltlabdb passwords and the default user password used by doltlabapi.
- k8s/doltlabdb-statefulset.yaml, the doltlabdb Stateful Set.
- k8s/doltlabdb-config-configmap.yaml, the doltlabdb server config file Config Map.
- k8s/doltlabdb-service.yaml, the doltlabdb Service.
doltlabremoteapi related resources:
- k8s/doltlabremoteapi-pvc-1.yaml, the Persistent Volume Claim for doltlabremoteapi-data.
- k8s/doltlabremoteapi-token-secret.yaml, the Secret for the encryption token used by doltlabremoteapi.
- k8s/doltlabremoteapi-deployment.yaml, the Deployment for doltlabremoteapi.
- k8s/doltlabremoteapi-service.yaml, the doltlabremoteapi Service.
doltlabfileserviceapi related resources:
- k8s/doltlabfileserviceapi-pvc-1.yaml, the Persistent Volume Claim for doltlabfileserviceapi-data.
- k8s/doltlabfileserviceapi-token-secret.yaml, the Secret for the encryption token used by doltlabfileserviceapi.
- k8s/doltlabfileserviceapi-deployment.yaml, the Deployment for doltlabfileserviceapi.
- k8s/doltlabfileserviceapi-service.yaml, the doltlabfileserviceapi Service.
doltlabapi related resources:
- k8s/doltlab-enterprise-online-secret.yaml, the DoltLab Enterprise licensing Secrets.
- k8s/doltlabapi-token-secret.yaml, the Secret for the encryption token used by doltlabapi.
- k8s/rbac-serviceaccount.yaml, the Service Account for doltlabapi, to enable K8s Job deployments.
- k8s/rbac-clusterrole.yaml, the Cluster Role for doltlabapi-jobs, to enable K8s Job deployments.
- k8s/rbac-rolebinding.yaml, the Role Binding for doltlabapi-jobs-binding, to enable K8s Job deployments.
- k8s/doltlabapi-deployment.yaml, the Deployment for doltlabapi.
- k8s/doltlabapi-service.yaml, the doltlabapi Service.
doltlabgraphql related resources:
- k8s/doltlabgraphql-deployment.yaml, the Deployment for doltlabgraphql.
- k8s/doltlabgraphql-service.yaml, the doltlabgraphql Service.
doltlabui related resources:
- k8s/doltlabui-deployment.yaml, the Deployment for doltlabui.
- k8s/doltlabui-service.yaml, the doltlabui Service.
doltlabenvoy related resources:
- k8s/envoy-config-configmap.yaml, the Envoy Proxy config Config Map.
- k8s/doltlabenvoy-deployment.yaml, the Deployment for doltlabenvoy.
- k8s/doltlabenvoy-service.yaml, the doltlabenvoy Service, of type Load Balancer.
In addition to these service-specific resources, the installer also produces:
- k8s/namespace.yaml, the Namespace DoltLab Enterprise will be deployed in, defaulting to doltlab.
- k8s/all.yaml, the combination of all static resources to enable single-apply deployment.
- k8s/admin-templates/doltlab-job-overrides.yaml, a template used to allow custom doltlabapi K8s Job spec overrides.
- k8s/admin-templates/doltlabapi-config-rbac.yaml, the Role and Role Binding for enabling doltlabapi K8s Job spec overrides.
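Put together, the generated output looks roughly like this (an abridged sketch; file names are taken from the lists above):

k8s/
├── namespace.yaml
├── all.yaml
├── doltlabdb-statefulset.yaml
├── doltlabapi-deployment.yaml
├── doltlabenvoy-service.yaml
├── ...
└── admin-templates/
    ├── doltlab-job-overrides.yaml
    └── doltlabapi-config-rbac.yaml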
Though you can apply the individual resource files independently, for simplicity we recommend applying the k8s/all.yaml file when you’re ready.
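Concretely, both approaches use kubectl apply:

# Recommended: apply everything at once
kubectl apply -f k8s/all.yaml

# Or apply resources individually, starting with the namespace, e.g.:
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/doltlabdb-statefulset.yaml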
Let’s dive into an example deployment.
Prerequisites#
Before you begin, ensure you have:
- An active DoltLab Enterprise license.
- The latest DoltLab release (zip).
- An existing K8s cluster and kubectl access.
- (Recommended) The external-dns K8s controller configured in your cluster.
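You can quickly sanity-check the cluster and kubectl prerequisites with standard commands:

# Verify kubectl can reach your cluster
kubectl cluster-info
kubectl get nodes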
For our example, we’ll deploy a basic DoltLab Enterprise instance to our managed AWS EKS cluster,
which has external-dns configured. It is important that external-dns is set up so that you can route external traffic to DoltLab’s doltlabenvoy edge proxy,
which is a Service of type Load Balancer. This allows you to use a stable IP or DNS name for your DoltLab Enterprise instance, which must be supplied as the host
field in the installer_config.yaml.
If you do not have external-dns configured, you should provision the desired IP or DNS name for your DoltLab Enterprise deployment and use this value as the host field in the installer_config.yaml file. After generating the K8s
DoltLab assets and deploying the doltlabenvoy service to your K8s cluster, you may have to add an A record for this DNS name that maps it to the IP of the doltlabenvoy Load Balancer. For this reason, it is best to
simply use external-dns, which handles this automatically; you only need to add an external-dns annotation to the doltlabenvoy Load Balancer, which we’ll cover below in our example.
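If you do end up managing DNS by hand, the external address to point your record at can be read off the Service once it’s deployed. A minimal sketch, assuming the default doltlab namespace:

# Print the doltlabenvoy Load Balancer's external address
# (on some providers the address is under .ip instead of .hostname)
kubectl get service doltlabenvoy -n doltlab \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'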
Generating K8s assets#
Download and unzip the latest DoltLab release to access its installer tool and installer_config.yaml. To generate the assets you need for a K8s deployment, edit the installer_config.yaml and specify runtime: k8s, along with a value for host,
your DoltLab Enterprise license credentials, and the other required fields. At minimum, your configuration file should look something like this:
version: v2.5.0
host: my.doltlab.com
runtime: k8s
# 'network' becomes the Kubernetes namespace name under k8s runtime (defaults to 'doltlab')
# network: doltlab
services:
  doltlabdb:
    admin_password: some-admin-pass
    dolthubapi_password: some-admin-pass
whitelist_all_users: true
default_user:
  email: admin@localhost
  password: some-default-pass
enterprise:
  online_product_code: XXXXXX-XXX-XXXXXX
  online_shared_key: XXXXXX-XXX-XXXXXX
  online_api_key: XXXXXX-XXX-XXXXXX
  online_license_key: XXXXXX-XXX-XXXXXX
For this example deployment, we’ll use the DNS name my.doltlab.com for our instance, which will be provisioned and configured automatically by external-dns running in our cluster. For now, we only need to generate the DoltLab assets
to use this DNS name; we’ll add the external-dns annotation to our generated output afterward.
After editing the config, we save our changes and run the installer binary, which will generate the ./k8s directory with our static assets.
➜ ./installer
2025-12-01T12:51:47.818-0800 INFO metrics/emitter.go:111 Successfully sent DoltLab usage metrics
2025-12-01T12:51:47.818-0800 INFO cmd/main.go:736 Successfully configured DoltLab Enterprise {"version": "v2.5.0"}
2025-12-01T12:51:47.818-0800 INFO cmd/main.go:746 To create DoltLab, run: {"cmd": "kubectl apply -f /home/ubuntu/doltlab/k8s/all.yaml"}
2025-12-01T12:51:47.818-0800 INFO cmd/main.go:748 To destroy DoltLab, run: {"cmd": "kubectl scale deploy,statefulset --all -n doltlab --replicas=0"}
Deploying DoltLab Enterprise#
If you plan for your DoltLab instance to land on any available host within your cluster, you only need to make one additional edit to k8s/all.yaml so that external-dns will provision and route your selected DNS name or IP address.
In our example that would be my.doltlab.com, so we need to add the following annotation to the doltlabenvoy Load Balancer Service definition:
apiVersion: v1
kind: Service
metadata:
  # add external-dns annotation
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my.doltlab.com
  creationTimestamp: null
  labels:
    app: doltlabenvoy
    app.kubernetes.io/instance: doltlab
    app.kubernetes.io/managed-by: doltlab-installer
    app.kubernetes.io/name: doltlab
    app.kubernetes.io/part-of: doltlab
  name: doltlabenvoy
  namespace: doltlab
spec:
  # ... ports and selector elided ...
  type: LoadBalancer
status:
  loadBalancer: {}
Save this edit, then run kubectl apply -f k8s/all.yaml. Your DoltLab Enterprise deployment will come up, and you’ll be able to use your instance at my.doltlab.com.
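If you’d like to block until everything is healthy before visiting the instance, a sketch like the following works; the timeout value is just an assumption you can tune:

kubectl apply -f k8s/all.yaml
# Wait for every Deployment in the doltlab namespace to report Available
kubectl wait --for=condition=Available deployment --all -n doltlab --timeout=300s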
Optional: Pin workloads to a node#
If you instead want your instance to land on a specific node, then in addition to adding the external-dns annotation to the Load Balancer Service, you’ll need to make further edits to the generated K8s files.
For our example deployment we’ve tainted a node in our cluster with dedicated=doltlab-worker:NoSchedule, so that it only accepts deployments with the matching toleration, and labeled it doltlab-worker=true. As a result, we’ll need to edit
the definitions of the Deployments and Stateful Sets in k8s/all.yaml so that they have the proper toleration and node selector to land on this specific host.
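For reference, the taint and matching label from our example can be applied like so (my-node is a hypothetical node name):

# Taint the node so only pods with the matching toleration schedule onto it
kubectl taint nodes my-node dedicated=doltlab-worker:NoSchedule
# Label the node so the nodeSelector added below matches it
kubectl label nodes my-node doltlab-worker=true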
To make those edits, we can use the following helper bash script, add_node_selector.sh, which updates the resources in ./k8s with the correct node selector and toleration:
#!/usr/bin/env bash
set -euo pipefail

# Requires mikefarah yq v4+.
# Default directory, can be overridden by passing a path as $1
TARGET_DIR="${1:-./k8s}"

for f in "$TARGET_DIR"/*.yaml; do
  # Skip if no files match
  [ -e "$f" ] || continue
  # For every Deployment and StatefulSet document, add the node selector and
  # toleration, deduplicating tolerations so re-runs are idempotent.
  yq eval -i '(. | select(.kind == "Deployment" or .kind == "StatefulSet")) |= (
    .spec.template.spec.nodeSelector = (.spec.template.spec.nodeSelector // {}) |
    .spec.template.spec.nodeSelector."doltlab-worker" = "true" |
    .spec.template.spec.tolerations = (
      ((.spec.template.spec.tolerations // []) +
        [{"key":"dedicated","operator":"Equal","value":"doltlab-worker","effect":"NoSchedule"}])
      | unique
    )
  )' "$f"
  echo "Updated: $f"
done
Running this script will produce the following output and update all the relevant files in ./k8s:
➜ ./add_node_selector.sh
Updated: /home/ubuntu/doltlab/k8s/all.yaml
...
This ensures our Deployments and Stateful Sets will land on the specified host.
After this step we can deploy our DoltLab Enterprise instance by running: kubectl apply -f k8s/all.yaml.
You can view the running services by using the kubectl get pods -n doltlab command:
➜ kubectl get pods -n doltlab
NAME READY STATUS RESTARTS AGE
doltlabapi-574f594747-t9n52 1/1 Running 0 1d
doltlabdb-0 1/1 Running 0 1d
doltlabenvoy-5c498d7bf5-764cq 1/1 Running 0 1d
doltlabfileserviceapi-5fbcf5946-8zqhq 1/1 Running 0 1d
doltlabgraphql-6f49b8c647-lzb5p 1/1 Running 0 1d
doltlabremoteapi-854d9bdbf7-tghdv 1/1 Running 0 1d
doltlabui-7c75f8c574-qnchg 1/1 Running 0 1d
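To confirm the pods actually landed on the dedicated node, kubectl’s wide output adds a NODE column:

kubectl get pods -n doltlab -o wide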
Lastly, because doltlabapi also deploys K8s Jobs within the doltlab namespace, we need to ensure it adds the correct node selector and toleration
to the Jobs it deploys. To do this, we’ll edit k8s/admin-templates/doltlab-job-overrides.yaml, uncomment the all.yaml section, and define the node selector and toleration
portion of the Job spec we want doltlabapi to apply to all of its Jobs. Once we’ve made this edit, the file will look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: doltlab-job-overrides
  namespace: doltlab
data:
  # Applied to all DoltLab jobs (import/merge/sqlread) by default.
  # Uncomment and customize as needed.
  all.yaml: |
    nodeSelector:
      doltlab-worker: "true"
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "doltlab-worker"
        effect: "NoSchedule"
Next we save these changes, then apply both files in k8s/admin-templates. This ensures doltlabapi receives the updated ConfigMap to merge into Job specs, and that
it has the permissions to do so. After this, your DoltLab Enterprise instance will be able to successfully run Jobs on its dedicated host!
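For completeness, applying both admin templates looks like this:

kubectl apply -f k8s/admin-templates/doltlab-job-overrides.yaml
kubectl apply -f k8s/admin-templates/doltlabapi-config-rbac.yaml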
Conclusion#
We hope this new DoltLab release and initial support for K8s deployments gets you excited to try DoltLab. If you’re interested in a free DoltLab Enterprise trial, come by our Discord and ask; we’ll get you set up.
