# Model Providers
OpenClaw can use many LLM providers. Pick a provider, authenticate, then set the default model as `provider/model`.
Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See Channels.
## Highlight: Venice (Venice AI)

Venice AI is our recommended setup for privacy-first inference, with the option to use Opus for hard tasks.
- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-4-5` (Opus remains the strongest)
See Venice AI.
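To make Venice the default, the same `agents.defaults.model.primary` key from the quick start config can point at a Venice model. A minimal sketch (the surrounding config shape is taken from the quick start example; the model name is the Venice default listed above):

```json5
// Minimal sketch: make Venice's default model the primary model.
// Same config shape as the quick start example.
{
  agents: {
    defaults: {
      model: {
        primary: "venice/llama-3.3-70b",
      },
    },
  },
}
```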
## Quick start

1. Authenticate with the provider (usually via `openclaw onboard`).
2. Set the default model:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-opus-4-5",
      },
    },
  },
}
```

## Provider docs
- OpenAI (API + Codex)
- Anthropic (API + Claude Code CLI)
- Qwen (OAuth)
- OpenRouter
- Vercel AI Gateway
- Moonshot AI (Kimi + Kimi Coding)
- OpenCode Zen
- Amazon Bedrock
- Z.AI
- Xiaomi
- GLM models
- MiniMax
- Venice (Venice AI, privacy-focused)
- Ollama (local models)
## Transcription providers

## Community tools
- Claude Max API Proxy: use a Claude Max/Pro subscription as an OpenAI-compatible API endpoint
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration, see Model providers.