Kubernetes Deployment

Deploy Libre WebUI on Kubernetes using Helm.

Quick Start

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui

This deploys:

  • Libre WebUI
  • Bundled Ollama instance
  • PersistentVolumeClaims for data
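
A dedicated namespace keeps the resources grouped; the namespace name below is just an example:

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --namespace libre-webui --create-namespace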

Access the Application

After installation, follow the instructions printed in the chart's NOTES output. For the default ClusterIP service:

kubectl port-forward svc/libre-webui 8080:8080

Then open http://localhost:8080
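
If your cluster supports external load balancers, you can instead expose the service directly by overriding service.type (documented in the Values Reference below):

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --set service.type=LoadBalancer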

Configuration

External Ollama

Connect to an existing Ollama instance instead of deploying the bundled one:

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --set ollama.bundled.enabled=false \
  --set ollama.external.enabled=true \
  --set ollama.external.url=http://my-ollama:11434
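
To confirm the external instance is reachable from inside the cluster, you can run the same version check used in the Troubleshooting section, pointed at your URL:

kubectl exec -it deployment/libre-webui -- wget -qO- http://my-ollama:11434/api/version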

NVIDIA GPU Support

Enable GPU acceleration for Ollama:

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --set ollama.bundled.gpu.enabled=true
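
GPU scheduling also requires the NVIDIA device plugin on the cluster. As a sanity check once the pod is running on a GPU node (this assumes the NVIDIA container runtime injects nvidia-smi into the container, which it does by default):

kubectl exec -it deployment/libre-webui-ollama -- nvidia-smi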

Ingress

Expose via Ingress:

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --set ingress.enabled=true \
  --set ingress.hosts[0].host=chat.example.com

With TLS:

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --set ingress.enabled=true \
  --set ingress.hosts[0].host=chat.example.com \
  --set ingress.tls[0].secretName=chat-tls \
  --set ingress.tls[0].hosts[0]=chat.example.com
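
Once you pass more than a couple of --set flags, a values file is easier to maintain. This is the same TLS configuration expressed as YAML (my-values.yaml is an example filename):

ingress:
  enabled: true
  hosts:
    - host: chat.example.com
  tls:
    - secretName: chat-tls
      hosts:
        - chat.example.com

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui -f my-values.yaml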

API Keys

Set cloud provider API keys:

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --set secrets.openaiApiKey=sk-... \
  --set secrets.anthropicApiKey=sk-ant-...

Or create a secret manually:

apiVersion: v1
kind: Secret
metadata:
  name: libre-webui-secrets
type: Opaque
stringData:
  OPENAI_API_KEY: sk-...
  ANTHROPIC_API_KEY: sk-ant-...
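
Apply the manifest before installing the chart (the filename is an example):

kubectl apply -f libre-webui-secrets.yaml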

Autoscaling

Enable HorizontalPodAutoscaler:

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --set autoscaling.enabled=true \
  --set autoscaling.minReplicas=2 \
  --set autoscaling.maxReplicas=10
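
The HPA needs a metrics source such as metrics-server to read resource usage; without one it cannot scale. Confirm it is active after install:

kubectl get hpa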

Values Reference

Key configuration options:

| Value | Default | Description |
|-------|---------|-------------|
| replicaCount | 1 | Number of replicas |
| image.repository | librewebui/libre-webui | Image repository |
| image.tag | latest | Image tag |
| service.type | ClusterIP | Service type |
| service.port | 8080 | Service port |
| ingress.enabled | false | Enable Ingress |
| ollama.bundled.enabled | true | Deploy bundled Ollama |
| ollama.bundled.gpu.enabled | false | Enable GPU for Ollama |
| ollama.external.enabled | false | Use external Ollama |
| ollama.external.url | "" | External Ollama URL |
| persistence.enabled | true | Enable persistence |
| persistence.size | 5Gi | PVC size |
| autoscaling.enabled | false | Enable HPA |

See values.yaml for all options.
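
To inspect the full default values.yaml locally:

helm show values oci://ghcr.io/libre-webui/charts/libre-webui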

Upgrading

helm upgrade libre-webui oci://ghcr.io/libre-webui/charts/libre-webui
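
To keep the values you set at install time, add --reuse-values; to pin a specific chart release, add --version (1.2.3 below is a placeholder):

helm upgrade libre-webui oci://ghcr.io/libre-webui/charts/libre-webui \
  --reuse-values --version 1.2.3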

Uninstalling

helm uninstall libre-webui

Note: PersistentVolumeClaims are not deleted by default. Remove manually if needed:

kubectl delete pvc -l app.kubernetes.io/instance=libre-webui

Pulling Models

After deployment, pull models into the bundled Ollama:

kubectl exec -it deployment/libre-webui-ollama -- ollama pull llama3.2
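
To list the models already available in the instance:

kubectl exec -it deployment/libre-webui-ollama -- ollama list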

Troubleshooting

Check pod status

kubectl get pods -l app.kubernetes.io/name=libre-webui
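
If a pod is stuck Pending or crash-looping, describe it to see scheduling and image-pull events:

kubectl describe pod -l app.kubernetes.io/name=libre-webui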

View logs

kubectl logs -l app.kubernetes.io/name=libre-webui -f
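
For the bundled Ollama, use its deployment name instead:

kubectl logs deployment/libre-webui-ollama -f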

Check Ollama connection

kubectl exec -it deployment/libre-webui -- wget -qO- http://libre-webui-ollama:11434/api/version