Quick Start

Requirements

Before you begin, make sure you have:

| Requirement | Minimum | Recommended |
|---|---|---|
| RAM | 8GB | 16GB+ |
| GPU VRAM | None (CPU works) | 8GB+ for speed |
| Disk Space | 5GB | 20GB+ for models |
| Node.js | v18+ | v20+ |
No GPU? No Problem

Libre WebUI works with CPU-only inference. Expect 5-15 tokens/second with smaller models (3-7B). For faster performance, see Hardware Requirements.

Install

npx libre-webui

Opens at http://localhost:8080.

Prerequisite: Ollama must be installed and running.
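One way to confirm the prerequisite is met: Ollama exposes a small HTTP API on port 11434 by default, so a quick probe like this (a sketch, assuming the default port) tells you whether it is up:

```shell
# Probe Ollama's default API port; prints a clear message either way
if curl -sf http://localhost:11434/api/version > /dev/null 2>&1; then
  echo "Ollama is running"
else
  echo "Ollama is not reachable on localhost:11434"
fi
```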

Install Ollama

macOS / Linux:

curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1:8b

Windows: Download from ollama.ai, then:

ollama pull llama3.1:8b

Model Size

llama3.1:8b downloads ~5GB and needs ~5GB VRAM (GPU) or ~8GB RAM (CPU). For smaller systems, try llama3.2:3b (~2GB).
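If you're unsure which model fits your machine, a rough heuristic based on total RAM can guide the choice. This sketch assumes a Linux-style `/proc/meminfo` and uses the 8GB threshold from the note above; adjust the cutoff for your own setup:

```shell
# Rough heuristic: pick a model size from total system RAM
# (assumes Linux /proc/meminfo; falls back to 8GB when it's unavailable, e.g. on macOS)
if [ -r /proc/meminfo ]; then
  mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
else
  mem_gb=8
fi
if [ "$mem_gb" -ge 8 ]; then
  echo "suggested model: llama3.1:8b"
else
  echo "suggested model: llama3.2:3b"
fi
```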

Alternative: Docker

Everything included (Libre WebUI + Ollama):

git clone https://github.com/libre-webui/libre-webui
cd libre-webui
docker-compose up -d

With NVIDIA GPU:

docker-compose -f docker-compose.gpu.yml up -d

If you already have Ollama running:

docker-compose -f docker-compose.external-ollama.yml up -d
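After any of the `docker-compose up -d` variants, you can check that the containers actually came up. This check degrades gracefully if Docker isn't installed in the current shell:

```shell
# List the stack's containers; prints a note instead of erroring if docker-compose is missing
docker-compose ps 2>/dev/null || echo "docker-compose is not available in this shell"
```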

Alternative: Kubernetes

helm install libre-webui oci://ghcr.io/libre-webui/charts/libre-webui

See Kubernetes docs for configuration options.
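To confirm the Helm release was created, `helm status` with the release name used above is one option (again with a graceful fallback if `helm` isn't on your PATH):

```shell
# Show the status of the libre-webui release; prints a note if helm is missing
helm status libre-webui 2>/dev/null || echo "helm is not available in this shell"
```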

Add Cloud Providers

Optional: connect OpenAI, Anthropic, and other providers.

Add to backend/.env:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

Enable in Settings → Plugins.
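Before toggling the plugins on, a quick sanity check that the keys actually landed in `backend/.env` can save a debugging round trip (this only greps for the variable names shown above; it never prints the key values):

```shell
# Confirm provider key entries exist in backend/.env without printing the secrets
grep -cE '^(OPENAI|ANTHROPIC)_API_KEY=' backend/.env 2>/dev/null \
  || echo "no provider keys found in backend/.env"
```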

Keyboard Shortcuts

| Shortcut | Action |
|---|---|
| Cmd/Ctrl + K | New chat |
| Cmd/Ctrl + B | Toggle sidebar |
| Cmd/Ctrl + , | Settings |
| Cmd/Ctrl + D | Toggle dark mode |
| ? | Show all shortcuts |

Next Steps