
# OpenClaw Self-Hosting Guide: GDPR-Compliant in 30 Minutes

**TL;DR:** Self-host OpenClaw in 30 minutes: Docker Compose, persistent volumes, local LLM via Ollama – fully GDPR-compliant because no data leaves your network.

— Till Freitag

## Why Self-Hosting?
Cloud AI is convenient – but not always an option. The moment you process personal data (emails, customer records, internal documents), GDPR applies. And GDPR requires a legal basis for every data transfer to third parties.
Self-hosting solves the problem at its root: Your data never leaves your network. No data processing agreement with OpenAI needed, no debates about third-country transfers.
30-second version: Spin up Docker Compose, configure Ollama for local LLMs, done. No data sent to external servers, full GDPR compliance.
## Prerequisites
Before you start – here's what you need:
| Component | Minimum | Recommended |
|---|---|---|
| OS | Linux (Ubuntu 22.04+) | Ubuntu 24.04 LTS |
| RAM | 8 GB | 16–32 GB |
| CPU | 4 Cores | 8+ Cores |
| GPU | – | NVIDIA RTX 3060+ |
| Storage | 20 GB | 100 GB SSD |
| Software | Docker Engine + Compose v2 | + Ollama |
**Apple Silicon:** M1/M2/M3/M4 Macs work great – Ollama uses the GPU natively via Metal.
## Step 1: Docker Compose Setup

Create a directory and the `docker-compose.yml`:

```shell
mkdir -p ~/openclaw && cd ~/openclaw
```

```yaml
# docker-compose.yml
services:
  openclaw-gateway:
    image: ${OPENCLAW_IMAGE:-openclaw:local}
    restart: unless-stopped
    environment:
      HOME: /home/node
      TERM: xterm-256color
      OPENCLAW_GATEWAY_TOKEN: ${OPENCLAW_GATEWAY_TOKEN}
    volumes:
      - openclaw-config:/home/node/.openclaw/config
      - ${OPENCLAW_WORKSPACE_DIR:-./workspace}:/home/node/.openclaw/workspace
    ports:
      - "127.0.0.1:8000:8000" # Localhost only – not public!
    security_opt:
      - no-new-privileges:true
    deploy:
      resources:
        limits:
          memory: 4G

volumes:
  openclaw-config:
```
### Key Security Details

- `127.0.0.1:8000:8000`: Port bound to localhost only – no external access
- `no-new-privileges`: Container cannot escalate to root privileges
- `memory: 4G`: Resource limit prevents OOM on the host
- Persistent volumes: Config and workspace survive container restarts
## Step 2: Environment Variables

Create a `.env` file in the same directory:

```env
# .env
OPENCLAW_GATEWAY_TOKEN=your-secure-token-here
OPENCLAW_WORKSPACE_DIR=./workspace
OPENCLAW_IMAGE=openclaw:local
```

**Tip:** Generate a secure token with `openssl rand -hex 32`.
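If you prefer to script this step, token generation and `.env` creation can be combined – a sketch using the same `openssl` command as above:

```shell
# Generate a random gateway token and write the .env file in one go
TOKEN=$(openssl rand -hex 32)
cat > .env <<EOF
OPENCLAW_GATEWAY_TOKEN=${TOKEN}
OPENCLAW_WORKSPACE_DIR=./workspace
OPENCLAW_IMAGE=openclaw:local
EOF
echo "wrote .env with a ${#TOKEN}-character token"
```

Keep the `.env` file out of version control – it contains the gateway secret.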
## Step 3: Start the Container

```shell
# Start the stack in the background
docker compose up -d

# Check logs
docker compose logs -f openclaw-gateway

# Check status
docker compose ps
```

After a few seconds, OpenClaw is running at `http://localhost:8000`.
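To verify without opening a browser, a quick reachability probe – assuming the gateway answers plain HTTP on port 8000 (adjust if your build serves a different path):

```shell
# Probe the gateway on localhost; succeeds only once the container is up
if curl -fsS -o /dev/null http://127.0.0.1:8000; then
  echo "gateway: up"
else
  echo "gateway: not reachable yet – check 'docker compose logs -f openclaw-gateway'"
fi
```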
## Step 4: Local LLM with Ollama

For full GDPR compliance you need a local LLM – no API calls to OpenAI or Anthropic. That's where Ollama comes in.
### Install Ollama

```shell
curl -fsSL https://ollama.com/install.sh | sh
```
### Download a Model

```shell
# Recommended for a good balance of quality and speed
ollama pull llama3.3

# For code tasks
ollama pull qwen2.5-coder:7b

# For lower-end hardware
ollama pull phi3:mini
```
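Before wiring a model into OpenClaw, it's worth a quick smoke test directly in Ollama – a sketch that degrades gracefully when Ollama isn't installed yet:

```shell
# Smoke-test the pulled model via the ollama CLI (skips if ollama is absent)
if command -v ollama > /dev/null 2>&1; then
  ollama run llama3.3 "Say hello in one short sentence."
else
  echo "ollama not installed – run the install script above first"
fi
```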
### Connect OpenClaw to Ollama

```shell
# Set API key (Ollama doesn't need a real key – any string works)
openclaw config set models.providers.ollama.apiKey "ollama-local"

# Set default model
openclaw config set agents.defaults.model.primary "ollama/llama3.3"
```
Or manually in `~/.openclaw/openclaw.json`:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.3",
        "fallbacks": ["ollama/qwen2.5-coder:7b"]
      }
    }
  }
}
```
### Verify

```shell
# Check if OpenClaw detects your Ollama models
openclaw models list
```

You should see your local models in the list. From now on, not a single token leaves your network.
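If the list comes up empty, check the Ollama side directly – its HTTP API listens on port 11434 by default, and `/api/tags` returns the locally available models:

```shell
# Ask the Ollama API which models are available locally
if curl -fsS http://localhost:11434/api/tags > /dev/null 2>&1; then
  curl -fsS http://localhost:11434/api/tags
else
  echo "ollama API not reachable on 11434 – is 'ollama serve' running?"
fi
```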
## Step 5: Hybrid Setup (Optional)

Not every task needs a cloud model, but some benefit from one. OpenClaw supports multiple LLM providers side by side: you can choose the model per task type.
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.3"
      }
    },
    "email-triage": {
      "model": {
        "primary": "ollama/llama3.3"
      }
    },
    "complex-analysis": {
      "model": {
        "primary": "anthropic/claude-3.5-sonnet"
      }
    }
  }
}
```
**GDPR tip:** Use cloud models only for non-personal data. For email triage and customer data → always local.
## GDPR Compliance Checklist
To ensure your self-hosting setup is truly GDPR-compliant:
| Requirement | Implementation |
|---|---|
| No third-country transfer | Local LLM via Ollama, no API calls to US servers |
| Data minimization | Only pass necessary data to the agent |
| Deletion concept | Regularly clean workspace data, retention policy |
| Access control | Gateway token, localhost-only binding, firewall |
| Encryption | LUKS/dm-crypt for host disk, TLS for internal communication |
| Logging | Docker logs for audit trail, docker compose logs |
| Records of processing | Document OpenClaw as a processing system |
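The deletion-concept row can be automated with a small cleanup script run from cron – a sketch assuming workspace files may be deleted after 30 days (adjust the retention period to your own policy):

```shell
# Delete workspace files older than the retention period
WORKSPACE=./workspace
RETENTION_DAYS=30
mkdir -p "$WORKSPACE"
find "$WORKSPACE" -type f -mtime +"$RETENTION_DAYS" -print -delete
```

Run it daily via cron or a systemd timer, and document the retention period in your records of processing.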
## Backup & Restore

Back up persistent data:

```shell
# Backup
docker compose stop
tar -czf openclaw-backup-$(date +%Y%m%d).tar.gz \
  ./workspace \
  $(docker volume inspect openclaw-config --format '{{ .Mountpoint }}')
docker compose start

# Restore (tar strips the leading "/" from the volume path;
# extract that part as root with -C / to put it back in place)
docker compose stop
tar -xzf openclaw-backup-YYYYMMDD.tar.gz
docker compose start
```
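Before trusting a backup, verify the archive is actually readable. The demo below builds a throwaway workspace so it runs standalone:

```shell
# Create a dummy workspace, archive it, and verify the archive is intact
mkdir -p workspace && echo "demo" > workspace/demo.txt
BACKUP="openclaw-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$BACKUP" ./workspace
tar -tzf "$BACKUP" > /dev/null && echo "archive OK: $BACKUP"
```

`tar -tzf` lists the archive contents without extracting; a non-zero exit means the backup is corrupt.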
## Troubleshooting

| Problem | Solution |
|---|---|
| Ollama models not detected | Is `ollama serve` running? Is port 11434 reachable? |
| Container won't start | Check `docker compose logs`, increase the RAM limit |
| Slow responses | Larger models need more RAM/GPU – or choose a smaller model |
| Permission denied | Check volume permissions: `chown -R 1000:1000 ./workspace` |
## Recommended Models by Use Case
| Use Case | Model | RAM Required | Quality |
|---|---|---|---|
| Email triage | Llama 3.1 (8B) | 8 GB | ★★★★☆ |
| Code analysis | Qwen 2.5 Coder (7B) | 8 GB | ★★★★★ |
| Summarization | Llama 3.3 (70B) | 48 GB | ★★★★★ |
| Quick replies | Phi-3 Mini (3.8B) | 4 GB | ★★★☆☆ |
| Offline on RPi | Phi-3 Mini (3.8B) | 4 GB | ★★★☆☆ |
## Conclusion

Self-hosting OpenClaw isn't rocket science – with Docker Compose and Ollama, the stack is up and running in 30 minutes. The key advantage: full data sovereignty and full GDPR compliance. No data shared with third parties, no DPA debates, no vendor lock-in.

For teams that want to boost productivity with AI without giving up control, self-hosting is the only way.
Want to set up OpenClaw on your infrastructure? Talk to us – we help with setup, security hardening, and team training.
More on this topic: What is OpenClaw? · NanoClaw: The lean successor · Our tool philosophy