# LLM Providers

## Overview

Wilson supports 9 LLM providers. Provider resolution is prefix-based: the model name's prefix determines which provider handles the request.
## Provider Table

| Provider | Prefix | API Key Env Var | Example Model |
|---|---|---|---|
| OpenAI | gpt- | OPENAI_API_KEY | gpt-4o |
| Anthropic | claude- | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 |
| Google | gemini- | GOOGLE_API_KEY | gemini-2.0-flash |
| xAI | grok- | XAI_API_KEY | grok-2 |
| Moonshot | kimi- | MOONSHOT_API_KEY | kimi-chat |
| DeepSeek | deepseek- | DEEPSEEK_API_KEY | deepseek-chat |
| OpenRouter | openrouter: | OPENROUTER_API_KEY | openrouter:anthropic/claude-3.5-sonnet |
| LiteLLM | litellm: | LITELLM_API_KEY | litellm:gpt-4o |
| Ollama | ollama: | None (local) | ollama:llama3.1 |
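The prefix-based resolution described above can be sketched as an ordered lookup. This is an illustrative sketch only: the function name `resolveProvider` and the provider id strings are assumptions, not Wilson's actual internals.

```typescript
// Hypothetical sketch of prefix-based provider resolution.
// The colon-style prefixes are checked first so that a model like
// "ollama:llama3.1" is never misrouted by a shorter match.
const PROVIDER_PREFIXES: [string, string][] = [
  ["openrouter:", "openrouter"],
  ["litellm:", "litellm"],
  ["ollama:", "ollama"],
  ["gpt-", "openai"],
  ["claude-", "anthropic"],
  ["gemini-", "google"],
  ["grok-", "xai"],
  ["kimi-", "moonshot"],
  ["deepseek-", "deepseek"],
];

function resolveProvider(model: string): string {
  for (const [prefix, provider] of PROVIDER_PREFIXES) {
    if (model.startsWith(prefix)) return provider;
  }
  throw new Error(`No provider matches model "${model}"`);
}
```

For example, `resolveProvider("openrouter:anthropic/claude-3.5-sonnet")` resolves to the OpenRouter provider even though the model id itself mentions Anthropic.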
## Switching Providers

### Interactive Mode

Use the /model command:

```
/model ollama:llama3.1
/model gpt-4o
/model claude-sonnet-4-20250514
```

Or run /model with no arguments to open the model selector.
### Headless Mode

Set the DEFAULT_MODEL environment variable in .env:

```sh
DEFAULT_MODEL=ollama:llama3.1
```

## Provider Details
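In headless mode the model choice comes from the environment. A minimal sketch of that lookup, where the function name and the fallback value are assumptions rather than Wilson's actual defaults:

```typescript
// Hypothetical sketch: pick the model from DEFAULT_MODEL,
// falling back to an assumed default when it is unset.
function defaultModel(env: Record<string, string | undefined>): string {
  return env.DEFAULT_MODEL ?? "gpt-4o";
}
```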
### Ollama (Local)

The privacy-first option. Runs entirely on your machine with no API calls.

```sh
# Install
brew install ollama

# Start server
ollama serve

# Pull a model
ollama pull llama3.1
```

No API key needed. Set OLLAMA_BASE_URL if Ollama is running on a non-default host or port:

```sh
OLLAMA_BASE_URL=http://localhost:11434
```

Recommended models for financial tasks:
| Model | Size | Quality | Speed |
|---|---|---|---|
| llama3.1:70b | 40GB | Best | Slow |
| llama3.1:8b | 4.7GB | Good | Fast |
| mistral | 4.1GB | Good | Fast |
| deepseek-coder-v2 | 8.9GB | Good for structured data | Medium |
### OpenAI

```sh
OPENAI_API_KEY=sk-...
```

Best overall quality for complex financial reasoning. gpt-4o recommended.
### Anthropic

```sh
ANTHROPIC_API_KEY=sk-ant-...
```

Strong for nuanced analysis and following detailed skill instructions.
### Google

```sh
GOOGLE_API_KEY=AIza...
```

gemini-2.0-flash offers good quality at low cost.
### OpenRouter

Access 100+ models through a single API key:

```sh
OPENROUTER_API_KEY=sk-or-...
```

Prefix model names with openrouter:, e.g. openrouter:anthropic/claude-3.5-sonnet.
### LiteLLM

A proxy server that provides a unified interface to 100+ LLM providers:

```sh
LITELLM_API_KEY=sk-...
LITELLM_BASE_URL=http://localhost:4000
```

## Architecture
Wilson uses a central callLlm() facade with retry logic that routes requests to provider-specific adapters. Each adapter normalizes the provider's API into Wilson's internal message format.
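The facade-with-retry shape can be sketched as follows. Only the callLlm name comes from the text above; the adapter signature, the registry, the message type, and the backoff schedule are all hypothetical:

```typescript
// Hypothetical sketch of a callLlm facade: look up the adapter for
// the resolved provider, retry transient failures with exponential
// backoff, and rethrow the last error once retries are exhausted.
type Message = { role: "system" | "user" | "assistant"; content: string };
type Adapter = (model: string, messages: Message[]) => Promise<string>;

const adapters: Record<string, Adapter> = {}; // registered at startup

async function callLlm(
  provider: string,
  model: string,
  messages: Message[],
  maxRetries = 3,
): Promise<string> {
  const adapter = adapters[provider];
  if (!adapter) throw new Error(`No adapter for provider "${provider}"`);
  let lastError: unknown;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await adapter(model, messages);
    } catch (err) {
      lastError = err;
      // Exponential backoff between attempts: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Callers never see which adapter handled the request; they pass messages in one internal format and get text back regardless of provider.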
Providers that support the OpenAI-compatible API (xAI, Moonshot, DeepSeek, OpenRouter, LiteLLM) share a single adapter implementation, reducing maintenance overhead.
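A shared OpenAI-compatible adapter mostly comes down to parameterizing one request builder by base URL and API key. The sketch below uses the real OpenAI-style /chat/completions path, but the function name and config shape are assumptions, not Wilson's code:

```typescript
// Hypothetical sketch of the single OpenAI-compatible request
// builder shared by xAI, Moonshot, DeepSeek, OpenRouter, and
// LiteLLM: only the base URL and API key differ per provider.
type ChatMessage = { role: string; content: string };

function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[],
) {
  return {
    url: `${baseUrl}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      } as Record<string, string>,
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

Because the request shape is identical across these providers, adding a new OpenAI-compatible provider is a configuration change (base URL plus env var) rather than a new adapter.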