
Local Models


Run OpenClaw completely offline using local LLMs (LM Studio, Ollama).

OpenClaw supports running with local Large Language Models (LLMs), allowing for:

  • Privacy: Data never leaves your machine.
  • Offline Use: Run without internet.
  • Cost: No API fees.

LM Studio is the easiest way to serve OpenAI-compatible local models.

  1. Install LM Studio: download it from lmstudio.ai.

  2. Load a Model: search for and download a model.

    • Recommendation: MiniMax-Text-01 or Llama-3-70B (if hardware permits).
    • Avoid small (<7B) quantized models for complex agent tasks.
  3. Start Server: in LM Studio, go to the Local Server tab and click Start Server.

    • Ensure it’s running on http://127.0.0.1:1234.
  4. Configure OpenClaw: add the local provider to ~/.openclaw/openclaw.json.

{
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lm-studio", // Value doesn't matter for a local server
        models: [
          {
            id: "local-model", // Match the model ID shown in LM Studio
            name: "My Local Model",
            contextWindow: 32000
          }
        ]
      }
    }
  },
  agents: {
    defaults: {
      model: { primary: "lmstudio/local-model" }
    }
  }
}
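
Ollama can be wired up the same way through its OpenAI-compatible endpoint. The entry below is a sketch, not a verified configuration: it assumes the providers map accepts any OpenAI-compatible server, uses Ollama's default port 11434 with the /v1 path, and references a hypothetical model ID (llama3); substitute whatever `ollama list` reports on your machine.

{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://127.0.0.1:11434/v1", // Ollama's default OpenAI-compatible endpoint (assumption)
        apiKey: "ollama", // Ignored by Ollama; any non-empty value works
        models: [
          {
            id: "llama3", // Hypothetical; use a model name from `ollama list`
            name: "Llama 3 (Ollama)",
            contextWindow: 8192
          }
        ]
      }
    }
  }
}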

You can also mix providers: use a local model for lightweight tasks (chat) and a hosted model (Claude/GPT-4) for complex reasoning, or keep the local model as a fallback when the hosted provider is unavailable. For example:

{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-3-5-sonnet", // Smart cloud model
        fallbacks: ["lmstudio/local-model"] // Local fallback
      }
    }
  }
}
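
The inverse arrangement works too if you want to stay local by default and only reach for the cloud when needed. This is a sketch reusing the model IDs from the examples above:

{
  agents: {
    defaults: {
      model: {
        primary: "lmstudio/local-model", // Local model handles everyday chat
        fallbacks: ["anthropic/claude-3-5-sonnet"] // Cloud model as a backstop
      }
    }
  }
}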