OpenClaw Self-Hosting Guide: GDPR-Compliant in 30 Minutes


    Till Freitag · 28 February 2026 · 4 min read

    TL;DR: "Self-host OpenClaw in 30 minutes: Docker Compose, persistent volumes, local LLM via Ollama – fully GDPR-compliant because no data leaves your network."


    Why Self-Hosting?

    Cloud AI is convenient – but not always an option. The moment you process personal data (emails, customer records, internal documents), GDPR applies. And GDPR requires a legal basis for every data transfer to third parties.

    Self-hosting solves the problem at its root: Your data never leaves your network. No data processing agreement with OpenAI needed, no debates about third-country transfers.

    30-second version: Spin up Docker Compose, configure Ollama for local LLMs, done. No data sent to external servers, full GDPR compliance.

    Prerequisites

    Before you start – here's what you need:

    Component   Minimum                          Recommended
    OS          Linux (Ubuntu 22.04+)            Ubuntu 24.04 LTS
    RAM         8 GB                             16–32 GB
    CPU         4 cores                          8+ cores
    GPU         optional (CPU-only works)        NVIDIA RTX 3060+
    Storage     20 GB                            100 GB SSD
    Software    Docker Engine + Compose v2 + Ollama

    Apple Silicon: M1/M2/M3/M4 Macs work great – Ollama uses the GPU natively via Apple's Metal API.
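    Before moving on, you can sanity-check a Linux host against the table above. This is only a sketch – the 8 GB threshold comes from the table, and `ollama` will simply report as missing if you install it in Step 4:

```shell
#!/bin/sh
# Check that the required binaries are on PATH
for bin in docker ollama; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: missing"
  fi
done

# Total RAM in GB (Linux; falls back to 0 where /proc/meminfo is absent)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null)
mem_gb=$(( ${mem_kb:-0} / 1024 / 1024 ))
echo "RAM: ${mem_gb} GB (minimum: 8 GB)"
```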

    Step 1: Docker Compose Setup

    Create a directory and the docker-compose.yml:

    mkdir -p ~/openclaw && cd ~/openclaw
    # docker-compose.yml
    services:
      openclaw-gateway:
        image: ${OPENCLAW_IMAGE:-openclaw:local}
        restart: unless-stopped
        environment:
          HOME: /home/node
          TERM: xterm-256color
          OPENCLAW_GATEWAY_TOKEN: ${OPENCLAW_GATEWAY_TOKEN}
        volumes:
          - openclaw-config:/home/node/.openclaw/config
          - ${OPENCLAW_WORKSPACE_DIR:-./workspace}:/home/node/.openclaw/workspace
        ports:
          - "127.0.0.1:8000:8000"  # Localhost only – not public!
        security_opt:
          - no-new-privileges:true
        deploy:
          resources:
            limits:
              memory: 4G
    
    volumes:
      openclaw-config:

    Key Security Details

    • 127.0.0.1:8000:8000: Port bound to localhost only – no external access
    • no-new-privileges: Container cannot escalate to root privileges
    • memory: 4G: Resource limit prevents OOM on the host
    • Persistent volumes: Config and workspace survive container restarts

    Step 2: Environment Variables

    Create a .env file in the same directory:

    # .env
    OPENCLAW_GATEWAY_TOKEN=your-secure-token-here
    OPENCLAW_WORKSPACE_DIR=./workspace
    OPENCLAW_IMAGE=openclaw:local

    Tip: Generate a secure token with openssl rand -hex 32.
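    Putting both steps together, here is a one-shot sketch that generates the token and writes the complete .env from the template above:

```shell
# Generate a 64-character hex token and write the .env file in one go
TOKEN=$(openssl rand -hex 32)
{
  printf 'OPENCLAW_GATEWAY_TOKEN=%s\n' "$TOKEN"
  printf 'OPENCLAW_WORKSPACE_DIR=./workspace\n'
  printf 'OPENCLAW_IMAGE=openclaw:local\n'
} > .env
chmod 600 .env   # the token is a secret - keep the file readable only by you
echo "wrote .env with a ${#TOKEN}-character token"
```

    Restricting the file to mode 600 matters: anyone who can read .env can talk to the gateway.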

    Step 3: Start the Container

    # Build image (or use the official image)
    docker compose up -d
    
    # Check logs
    docker compose logs -f openclaw-gateway
    
    # Check status
    docker compose ps

    After a few seconds, OpenClaw is running at http://localhost:8000.
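    A quick liveness probe confirms this from the host. Note that the root path "/" is an assumption here – your build may expose a dedicated health endpoint instead:

```shell
# Probe the gateway on its localhost-only port
if curl -fsS -m 5 http://127.0.0.1:8000/ >/dev/null 2>&1; then
  GATEWAY_STATUS="up"
else
  GATEWAY_STATUS="down - check 'docker compose logs openclaw-gateway'"
fi
echo "openclaw-gateway: $GATEWAY_STATUS"
```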

    Step 4: Local LLM with Ollama

    For full GDPR compliance you need a local LLM – no API calls to OpenAI or Anthropic. That's where Ollama comes in.

    Install Ollama

    curl -fsSL https://ollama.com/install.sh | sh

    Download a Model

    # Recommended for a good balance of quality and speed
    ollama pull llama3.3
    
    # For code tasks
    ollama pull qwen2.5-coder:7b
    
    # For lower-end hardware
    ollama pull phi3:mini

    Connect OpenClaw to Ollama

    # Set API key (Ollama doesn't need a real key – any string works)
    openclaw config set models.providers.ollama.apiKey "ollama-local"
    
    # Set default model
    openclaw config set agents.defaults.model.primary "ollama/llama3.3"

    Or manually in ~/.openclaw/openclaw.json:

    {
      "agents": {
        "defaults": {
          "model": {
            "primary": "ollama/llama3.3",
            "fallbacks": ["ollama/qwen2.5-coder:7b"]
          }
        }
      }
    }

    Verify

    # Check if OpenClaw detects your Ollama models
    openclaw models list

    You should see your local models in the list. From now on, not a single token leaves your network.
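    If the list looks wrong, you can take OpenClaw out of the equation and talk to Ollama directly – /api/generate on port 11434 is Ollama's standard completion endpoint:

```shell
# Ask Ollama for a completion without going through OpenClaw
RESPONSE=$(curl -s -m 30 http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3.3", "prompt": "Reply with OK", "stream": false}' 2>/dev/null)
if [ -n "$RESPONSE" ]; then
  RESULT="ok"
  echo "Ollama answered: $RESPONSE"
else
  RESULT="unreachable"
  echo "No answer - is 'ollama serve' running?"
fi
```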

    Step 5: Hybrid Setup (Optional)

    Not every task needs a cloud model, but some benefit from one. OpenClaw supports multiple model providers side by side: you can choose the model per task type.

    {
      "agents": {
        "defaults": {
          "model": {
            "primary": "ollama/llama3.3"
          }
        },
        "email-triage": {
          "model": {
            "primary": "ollama/llama3.3"
          }
        },
        "complex-analysis": {
          "model": {
            "primary": "anthropic/claude-3.5-sonnet"
          }
        }
      }
    }

    GDPR tip: Use cloud models only for non-personal data. For email triage and customer data → always local.

    GDPR Compliance Checklist

    To ensure your self-hosting setup is truly GDPR-compliant:

    Requirement                 Implementation
    No third-country transfer   Local LLM via Ollama, no API calls to US servers
    Data minimization           Only pass necessary data to the agent
    Deletion concept            Regularly clean workspace data, retention policy
    Access control              Gateway token, localhost-only binding, firewall
    Encryption                  LUKS/dm-crypt for the host disk, TLS for internal communication
    Logging                     Docker logs as audit trail (docker compose logs)
    Records of processing       Document OpenClaw as a processing system
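    The deletion concept is the row most setups forget. A minimal retention sketch – WORKSPACE and the 30-day window are assumptions, so align them with your own retention policy:

```shell
# Delete workspace files older than RETENTION_DAYS (values are assumptions - adjust!)
WORKSPACE="${OPENCLAW_WORKSPACE_DIR:-./workspace}"
RETENTION_DAYS=30
mkdir -p "$WORKSPACE"
# -print lists every file removed, so each run is auditable in your logs
find "$WORKSPACE" -type f -mtime +"$RETENTION_DAYS" -print -delete
```

    Run it from cron (e.g. daily at 03:00) and keep the output alongside your Docker logs as part of the audit trail.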

    Backup & Restore

    Back up persistent data:

    # Backup (reading the named volume's mountpoint usually requires root)
    docker compose stop
    sudo tar -czf openclaw-backup-$(date +%Y%m%d).tar.gz \
      ./workspace \
      "$(docker volume inspect openclaw-config --format '{{ .Mountpoint }}')"
    docker compose start
    
    # Restore: ./workspace unpacks into the project directory; the volume path
    # is stored without its leading slash, so extract it relative to /
    docker compose stop
    tar -xzf openclaw-backup-YYYYMMDD.tar.gz ./workspace
    sudo tar -xzf openclaw-backup-YYYYMMDD.tar.gz -C / --exclude='./workspace'
    docker compose start

    Troubleshooting

    Problem                      Solution
    Ollama models not detected   Is ollama serve running? Is port 11434 reachable?
    Container won't start        Check docker compose logs, increase the RAM limit
    Slow responses               Larger models need more RAM/GPU – or choose a smaller model
    Permission denied            Fix volume permissions: chown -R 1000:1000 ./workspace
    Which Model for Which Task?

    Use Case         Model                  RAM Required   Quality
    Email triage     Llama 3.3 (8B)         8 GB           ★★★★☆
    Code analysis    Qwen 2.5 Coder (7B)    8 GB           ★★★★★
    Summarization    Llama 3.3 (70B)        48 GB          ★★★★★
    Quick replies    Phi-3 Mini (3.8B)      4 GB           ★★★☆☆
    Offline on RPi   Phi-3 Mini (3.8B)      4 GB           ★★★☆☆
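    When Ollama models are not detected, a direct check against Ollama's API shows whether the server is up at all – /api/tags is its model-list endpoint:

```shell
# Is Ollama's API answering on its default port?
if curl -fsS -m 5 http://127.0.0.1:11434/api/tags >/dev/null 2>&1; then
  OLLAMA_STATUS="reachable - run 'ollama list' to see installed models"
else
  OLLAMA_STATUS="unreachable - start 'ollama serve' and check that port 11434 is open"
fi
echo "ollama: $OLLAMA_STATUS"
```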

    Conclusion

    Self-hosting OpenClaw isn't rocket science – with Docker Compose and Ollama, the stack is up and running in 30 minutes. The key advantage: Full data sovereignty, full GDPR compliance. No data shared with third parties, no DPA debates, no vendor lock-in.

    For teams that want to boost productivity with AI without giving up control, self-hosting is the only way.


    Want to set up OpenClaw on your infrastructure? Talk to us – we help with setup, security hardening, and team training.

    More on this topic: What is OpenClaw? · NanoClaw: The lean successor · Pricing Shock: Anthropic's Change · Our tool philosophy

