Ollama

Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. OpenClaw integrates with Ollama’s OpenAI-compatible API and can auto-discover tool-capable models when you opt in with OLLAMA_API_KEY (or an auth profile) and do not define an explicit models.providers.ollama entry.

  1. Install Ollama: https://ollama.ai

  2. Pull a model:

ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-r1:32b

  3. Enable Ollama for OpenClaw (any value works; Ollama doesn’t require a real key):
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file
openclaw config set models.providers.ollama.apiKey "ollama-local"

  4. Use Ollama models:
{
  agents: {
    defaults: {
      model: { primary: "ollama/llama3.3" },
    },
  },
}

When you set OLLAMA_API_KEY (or an auth profile) and do not define models.providers.ollama, OpenClaw discovers models from the local Ollama instance at http://127.0.0.1:11434:

  • Queries /api/tags and /api/show
  • Keeps only models that report tools capability
  • Marks reasoning when the model reports thinking
  • Reads contextWindow from model_info["<arch>.context_length"] when available
  • Sets maxTokens to 10× the context window
  • Sets all costs to 0

This avoids manual model entries while keeping the catalog aligned with Ollama’s capabilities.
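
To see the raw data discovery works from, you can query the same endpoints yourself (the exact fields, such as the capabilities list, vary with your Ollama version):

# List installed models (what OpenClaw enumerates via /api/tags)
curl http://127.0.0.1:11434/api/tags
# Per-model details: model_info carries "<arch>.context_length", and newer
# Ollama versions also return a capabilities list (e.g. "tools", "thinking")
curl http://127.0.0.1:11434/api/show -d '{"model": "llama3.3"}'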

To see what models are available:

ollama list
openclaw models list

To add a new model, simply pull it with Ollama:

ollama pull mistral

The new model will be automatically discovered and available to use.
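
For example, after pulling mistral you can reference it as ollama/mistral in agent config (assuming the model reports tool support, as noted above):

{
  agents: {
    defaults: {
      model: { primary: "ollama/mistral" },
    },
  },
}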

If you set models.providers.ollama explicitly, auto-discovery is skipped and you must define models manually (see below).

The simplest way to enable Ollama is via an environment variable:

export OLLAMA_API_KEY="ollama-local"

Use explicit config when:

  • Ollama runs on another host/port.
  • You want to force specific context windows or model lists.
  • You want to include models that do not report tool support.
{
  models: {
    providers: {
      ollama: {
        // Use a host that includes /v1 for OpenAI-compatible APIs
        baseUrl: "http://ollama-host:11434/v1",
        apiKey: "ollama-local",
        api: "openai-completions",
        models: [
          {
            id: "llama3.3",
            name: "Llama 3.3",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192 * 10
          }
        ]
      }
    }
  }
}

If OLLAMA_API_KEY is set, you can omit apiKey in the provider entry; OpenClaw fills it in for availability checks.

If Ollama is running on a different host or port, point baseUrl at it (remember that explicit config disables auto-discovery, so define models manually as shown above):

{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://ollama-host:11434/v1",
      },
    },
  },
}

Once configured, all your Ollama models are available:

{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallback: ["ollama/qwen2.5-coder:32b"],
      },
    },
  },
}

OpenClaw marks models as reasoning-capable when Ollama reports thinking in /api/show:

ollama pull deepseek-r1:32b

Ollama is free and runs locally, so all model costs are set to $0.

For auto-discovered models, OpenClaw uses the context window reported by Ollama when available, otherwise it defaults to 8192. You can override contextWindow and maxTokens in explicit provider config.
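
A sketch of such an override in explicit provider config, with illustrative values (adjust the model id and sizes to your setup):

{
  models: {
    providers: {
      ollama: {
        baseUrl: "http://127.0.0.1:11434/v1",
        apiKey: "ollama-local",
        api: "openai-completions",
        models: [
          {
            id: "qwen2.5-coder:32b",
            name: "Qwen 2.5 Coder 32B",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            // override the defaults used by discovery
            contextWindow: 32768,
            maxTokens: 32768 * 10
          }
        ]
      }
    }
  }
}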

If auto-discovered models aren’t showing up, make sure Ollama is running, that OLLAMA_API_KEY (or an auth profile) is set, and that you did not define an explicit models.providers.ollama entry:

ollama serve

Then check that the API is reachable:

curl http://localhost:11434/api/tags

OpenClaw only auto-discovers models that report tool support. If your model isn’t listed, either:

  • Pull a tool-capable model, or
  • Define the model explicitly in models.providers.ollama.
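
A quick way to check what a model reports (recent Ollama versions return a capabilities field from /api/show; the jq call is optional):

# Look for "tools" (and "thinking") in the model's reported capabilities
curl -s http://127.0.0.1:11434/api/show -d '{"model": "mistral"}' | jq '.capabilities'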

To add models:

ollama list # See what's installed
ollama pull llama3.3 # Pull a model

Check that Ollama is running on the correct port:

# Check if Ollama is running
ps aux | grep ollama
# Or restart Ollama
ollama serve
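
If Ollama needs to listen on a different address or port, Ollama’s own OLLAMA_HOST variable controls where ollama serve binds (and where the ollama CLI connects); point baseUrl in explicit provider config at the same address. For example:

# Bind Ollama to all interfaces on the default port
OLLAMA_HOST=0.0.0.0:11434 ollama serve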