# Quick Start

## Requirements
Before you begin, make sure you have:
| Requirement | Minimum | Recommended |
|---|---|---|
| RAM | 8GB | 16GB+ |
| GPU VRAM | None (CPU works) | 8GB+ for speed |
| Disk Space | 5GB | 20GB+ for models |
| Node.js | v18+ | v20+ |
### No GPU? No Problem
Libre WebUI works with CPU-only inference. Expect 5-15 tokens/second with smaller models (3-7B). For faster performance, see Hardware Requirements.
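Once Ollama is installed (see below), you can measure what your own hardware actually delivers: `ollama run --verbose` prints timing stats, including an "eval rate" in tokens/second, after each response. A minimal check using the smaller model suggested later in this guide:

```bash
# --verbose appends timing stats to the reply, including eval rate (tokens/s)
ollama run llama3.2:3b "Explain RAM vs VRAM in one sentence." --verbose
```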
## Install

```bash
npx libre-webui
```

Opens at http://localhost:8080.

**Prerequisite:** Ollama must be installed and running.
## Install Ollama

**macOS / Linux:**

```bash
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1:8b
```

**Windows:** Download from ollama.ai, then:

```bash
ollama pull llama3.1:8b
```
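On either platform, a quick way to confirm the daemon is up and the model pulled (Ollama listens on port 11434 by default):

```bash
curl http://localhost:11434/api/tags   # lists pulled models as JSON
ollama run llama3.1:8b "Say hello."    # end-to-end smoke test
```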
### Model Size
llama3.1:8b downloads ~5GB and needs ~5GB VRAM (GPU) or ~8GB RAM (CPU). For smaller systems, try llama3.2:3b (~2GB).
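Switching to the smaller model is one command:

```bash
ollama pull llama3.2:3b
```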
## Alternative: Docker

Everything included (Libre WebUI + Ollama):

```bash
git clone https://github.com/libre-webui/libre-webui
cd libre-webui
docker-compose up -d
```
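To verify that both containers started cleanly (standard docker-compose commands, nothing Libre WebUI-specific):

```bash
docker-compose ps        # both services should report "Up"
docker-compose logs -f   # follow startup logs; Ctrl+C to stop
```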
**With NVIDIA GPU:**

```bash
docker-compose -f docker-compose.gpu.yml up -d
```
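To confirm the container actually sees the GPU, you can run `nvidia-smi` inside it. This assumes the NVIDIA Container Toolkit is installed on the host; the service name `ollama` below is a guess, so check `docker-compose.gpu.yml` for the real one:

```bash
# "ollama" is an assumed service name; adjust to match docker-compose.gpu.yml
docker-compose -f docker-compose.gpu.yml exec ollama nvidia-smi
```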
If you already have Ollama running:
```bash
docker-compose -f docker-compose.external-ollama.yml up -d
```
## Alternative: Kubernetes

```bash
helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui
```
See Kubernetes docs for configuration options.
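Before overriding anything, you can inspect which values the chart exposes; `helm show values` works with OCI references on Helm 3.8+:

```bash
helm show values oci://ghcr.io/libre-webui/charts/libre-webui
```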
## Add Cloud Providers
Optional: connect OpenAI, Anthropic, and other providers.
Add your keys to `backend/.env`:

```env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
Enable in Settings → Plugins.
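If a provider doesn't show up, a bad or expired key is the usual culprit. You can rule that out by calling the provider's API directly; these are the vendors' own endpoints, independent of Libre WebUI:

```bash
# OpenAI: a JSON model list means the key is valid
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head

# Anthropic: the anthropic-version header is required
curl -s https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" | head
```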
## Keyboard Shortcuts

| Shortcut | Action |
|---|---|
| `Cmd/Ctrl + K` | New chat |
| `Cmd/Ctrl + B` | Toggle sidebar |
| `Cmd/Ctrl + ,` | Settings |
| `Cmd/Ctrl + D` | Toggle dark mode |
| `?` | Show all shortcuts |
## Next Steps
- Hardware Requirements - GPU and memory guide
- Working with Models - Model selection and optimization
- Plugins - Cloud providers (OpenAI, Anthropic, etc.)
- Docker - Docker Compose options
- Kubernetes - Helm chart configuration