Ollama
Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. OpenClaw integrates with Ollama’s OpenAI-compatible API and can auto-discover tool-capable models when you opt in with OLLAMA_API_KEY (or an auth profile) and do not define an explicit models.providers.ollama entry.
Quick start
- Install Ollama: https://ollama.ai
- Pull a model:
```
ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-r1:32b
```
- Enable Ollama for OpenClaw (any value works; Ollama doesn’t require a real key):
```
# Set environment variable
export OLLAMA_API_KEY="ollama-local"

# Or configure in your config file
openclaw config set models.providers.ollama.apiKey "ollama-local"
```
- Use Ollama models:
```
{
  agents: {
    defaults: {
      model: { primary: "ollama/llama3.3" },
    },
  },
}
```
Model discovery (implicit provider)
When you set OLLAMA_API_KEY (or an auth profile) and do not define models.providers.ollama, OpenClaw discovers models from the local Ollama instance at http://127.0.0.1:11434:
- Queries /api/tags and /api/show
- Keeps only models that report the tools capability
- Marks reasoning when the model reports thinking
- Reads contextWindow from model_info["<arch>.context_length"] when available
- Sets maxTokens to 10× the context window
- Sets all costs to 0
This avoids manual model entries while keeping the catalog aligned with Ollama’s capabilities.
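To see why a particular model was or wasn’t picked up, you can ask Ollama directly what it reports. A quick check, assuming a recent Ollama release that includes the capabilities field in /api/show and that jq is installed (the model name is just an example):
```
# Show what Ollama reports for a model you have pulled
curl -s http://127.0.0.1:11434/api/show -d '{"model": "llama3.3"}' | jq '.capabilities'
# Models that include "tools" here are auto-discovered;
# "thinking" marks them as reasoning-capable
```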
To see what models are available:
```
ollama list
openclaw models list
```
To add a new model, simply pull it with Ollama:
```
ollama pull mistral
```
The new model will be automatically discovered and available to use.
If you set models.providers.ollama explicitly, auto-discovery is skipped and you must define models manually (see below).
Configuration
Basic setup (implicit discovery)
The simplest way to enable Ollama is via an environment variable:
```
export OLLAMA_API_KEY="ollama-local"
```
Explicit setup (manual models)
Use explicit config when:
- Ollama runs on another host/port.
- You want to force specific context windows or model lists.
- You want to include models that do not report tool support.
```
{
  models: {
    providers: {
      ollama: {
        // Use a host that includes /v1 for OpenAI-compatible APIs
        baseUrl: "http://ollama-host:11434/v1",
        apiKey: "ollama-local",
        api: "openai-completions",
        models: [
          {
            id: "llama3.3",
            name: "Llama 3.3",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192 * 10,
          },
        ],
      },
    },
  },
}
```
If OLLAMA_API_KEY is set, you can omit apiKey in the provider entry and OpenClaw will fill it in for availability checks.
Custom base URL (explicit config)
If Ollama is running on a different host or port (explicit config disables auto-discovery, so define models manually):
```
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://ollama-host:11434/v1",
      },
    },
  },
}
```
Model selection
Once configured, all your Ollama models are available:
```
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallback: ["ollama/qwen2.5-coder:32b"],
      },
    },
  },
}
```
Advanced
Reasoning models
OpenClaw marks models as reasoning-capable when Ollama reports thinking in /api/show:
```
ollama pull deepseek-r1:32b
```
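Once pulled and discovered, a reasoning model is selected like any other Ollama model. A sketch, assuming the model was auto-discovered under this ID:
```
{
  agents: {
    defaults: {
      // deepseek-r1:32b reports "thinking", so OpenClaw treats it as reasoning-capable
      model: { primary: "ollama/deepseek-r1:32b" },
    },
  },
}
```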
Model Costs
Ollama is free and runs locally, so all model costs are set to $0.
Context windows
For auto-discovered models, OpenClaw uses the context window reported by Ollama when available; otherwise it defaults to 8192. You can override contextWindow and maxTokens in explicit provider config.
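A minimal override sketch, reusing the explicit provider shape shown above (the model ID and window size are illustrative; pick values your hardware and model actually support):
```
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://127.0.0.1:11434/v1",
        api: "openai-completions",
        models: [
          {
            id: "qwen2.5-coder:32b",
            name: "Qwen 2.5 Coder 32B",
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            // Force a larger window than the 8192 default
            contextWindow: 32768,
            maxTokens: 32768 * 10,
          },
        ],
      },
    },
  },
}
```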
Troubleshooting
Ollama not detected
Make sure Ollama is running, that you have set OLLAMA_API_KEY (or an auth profile), and that you have not defined an explicit models.providers.ollama entry:
```
ollama serve
```
Then check that the API is accessible:
```
curl http://localhost:11434/api/tags
```
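If the API responds but OpenClaw still doesn’t list any Ollama models, confirm the opt-in key is actually visible to OpenClaw and re-check the catalog (a quick sanity check, not an exhaustive diagnosis):
```
# The variable must be set in the environment OpenClaw runs in
echo $OLLAMA_API_KEY

# Then re-list models
openclaw models list
```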
No models available
OpenClaw only auto-discovers models that report tool support. If your model isn’t listed, either:
- Pull a tool-capable model, or
- Define the model explicitly in models.providers.ollama.
To add models:
```
ollama list           # See what's installed
ollama pull llama3.3  # Pull a model
```
Connection refused
Check that Ollama is running on the correct port:
```
# Check if Ollama is running
ps aux | grep ollama

# Or restart Ollama
ollama serve
```
See Also
- Model Providers - Overview of all providers
- Model Selection - How to choose models
- Configuration - Full config reference