
LLM Providers

Wilson supports 9 LLM providers. Provider resolution is prefix-based — the model name prefix determines which provider handles the request.

| Provider | Prefix | API Key Env Var | Example Model |
| --- | --- | --- | --- |
| OpenAI | `gpt-` | `OPENAI_API_KEY` | `gpt-4o` |
| Anthropic | `claude-` | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514` |
| Google | `gemini-` | `GOOGLE_API_KEY` | `gemini-2.0-flash` |
| xAI | `grok-` | `XAI_API_KEY` | `grok-2` |
| Moonshot | `kimi-` | `MOONSHOT_API_KEY` | `kimi-chat` |
| DeepSeek | `deepseek-` | `DEEPSEEK_API_KEY` | `deepseek-chat` |
| OpenRouter | `openrouter:` | `OPENROUTER_API_KEY` | `openrouter:anthropic/claude-3.5-sonnet` |
| LiteLLM | `litellm:` | `LITELLM_API_KEY` | `litellm:gpt-4o` |
| Ollama | `ollama:` | None (local) | `ollama:llama3.1` |
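Prefix-based resolution can be sketched as a simple first-match lookup. This is an illustrative example, not Wilson's actual internals; the type and function names are hypothetical.

```typescript
// Hypothetical sketch of prefix-based provider resolution.
type Provider =
  | "openai" | "anthropic" | "google" | "xai" | "moonshot"
  | "deepseek" | "openrouter" | "litellm" | "ollama";

// Ordered list of (prefix, provider) pairs from the table above.
const PREFIXES: Array<[string, Provider]> = [
  ["gpt-", "openai"],
  ["claude-", "anthropic"],
  ["gemini-", "google"],
  ["grok-", "xai"],
  ["kimi-", "moonshot"],
  ["deepseek-", "deepseek"],
  ["openrouter:", "openrouter"],
  ["litellm:", "litellm"],
  ["ollama:", "ollama"],
];

// Return the first provider whose prefix matches the model name.
function resolveProvider(model: string): Provider {
  for (const [prefix, provider] of PREFIXES) {
    if (model.startsWith(prefix)) return provider;
  }
  throw new Error(`Unknown model prefix: ${model}`);
}
```

For example, `resolveProvider("ollama:llama3.1")` would select the Ollama provider, while `resolveProvider("gpt-4o")` would select OpenAI.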

Switching models

To switch models at runtime, use the /model command:

/model ollama:llama3.1
/model gpt-4o
/model claude-sonnet-4-20250514

Or just /model to open the model selector.

Default model

To set a default model, add the DEFAULT_MODEL environment variable to .env:

DEFAULT_MODEL=ollama:llama3.1

Ollama

Ollama is the privacy-first option: it runs entirely on your machine and makes no API calls.

# Install
brew install ollama
# Start server
ollama serve
# Pull a model
ollama pull llama3.1

No API key is needed. If the Ollama server runs on a non-default host or port, set OLLAMA_BASE_URL (the default is http://localhost:11434):

OLLAMA_BASE_URL=http://localhost:11434

Recommended models for financial tasks:

| Model | Size | Quality | Speed |
| --- | --- | --- | --- |
| `llama3.1:70b` | 40 GB | Best | Slow |
| `llama3.1:8b` | 4.7 GB | Good | Fast |
| `mistral` | 4.1 GB | Good | Fast |
| `deepseek-coder-v2` | 8.9 GB | Good for structured data | Medium |

OpenAI
OPENAI_API_KEY=sk-...

OpenAI offers the best overall quality for complex financial reasoning; gpt-4o is recommended.

Anthropic
ANTHROPIC_API_KEY=sk-ant-...

Claude models are strong for nuanced analysis and for following detailed skill instructions.

Google
GOOGLE_API_KEY=AIza...

gemini-2.0-flash offers good quality at low cost.

OpenRouter

OpenRouter gives access to 100+ models through a single API key:

OPENROUTER_API_KEY=sk-or-...

Prefix models with openrouter: — e.g., openrouter:anthropic/claude-3.5-sonnet.
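For colon-style prefixes like `openrouter:`, a plausible sketch of the split (assumed behavior; the function name is hypothetical, not Wilson's actual code) is that the prefix selects the provider and the remainder is passed through as the provider's model ID:

```typescript
// Hypothetical: split "provider:model" names at the first colon, so the
// rest of the string (which may itself contain slashes) stays intact.
function splitPrefixedModel(name: string): { provider: string; model: string } {
  const idx = name.indexOf(":");
  if (idx === -1) throw new Error(`Expected provider:model, got ${name}`);
  return { provider: name.slice(0, idx), model: name.slice(idx + 1) };
}
```

Under this assumption, `openrouter:anthropic/claude-3.5-sonnet` would send `anthropic/claude-3.5-sonnet` as the model ID to OpenRouter.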

LiteLLM

LiteLLM is a proxy server that provides a unified interface to 100+ LLM providers:

LITELLM_API_KEY=sk-...
LITELLM_BASE_URL=http://localhost:4000

Architecture

Wilson routes every request through a central callLlm() facade with retry logic, which dispatches to a provider-specific adapter. Each adapter normalizes its provider's API into Wilson's internal message format.
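A facade with retry logic can be sketched as below. This is a minimal illustration with exponential backoff, not Wilson's actual implementation; the types, the backoff schedule, and the retry count are all assumptions.

```typescript
// Hypothetical message and adapter shapes for the sketch.
interface LlmMessage { role: "system" | "user" | "assistant"; content: string; }
type Adapter = (model: string, messages: LlmMessage[]) => Promise<string>;

// Call the adapter, retrying transient failures with exponential backoff
// (assumed schedule: 500ms, 1s, 2s, ...).
async function callLlm(
  adapter: Adapter,
  model: string,
  messages: LlmMessage[],
  maxRetries = 3,
): Promise<string> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await adapter(model, messages);
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break; // out of retries
      await new Promise((r) => setTimeout(r, 500 * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Keeping retries in the facade rather than in each adapter means every provider gets the same failure handling for free.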

Providers that support the OpenAI-compatible API (xAI, Moonshot, DeepSeek, OpenRouter, LiteLLM) share a single adapter implementation, reducing maintenance overhead.
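Such a shared adapter can be parameterized by just a base URL and an API-key variable, since the request/response shape follows the OpenAI `/chat/completions` contract. The sketch below is illustrative, not Wilson's actual adapter:

```typescript
// Hypothetical config: each OpenAI-compatible provider differs only in
// where requests go and which env var holds its key.
interface CompatConfig { baseUrl: string; apiKeyEnv: string; }

// One adapter body shared by xAI, Moonshot, DeepSeek, OpenRouter, and
// LiteLLM, following the OpenAI chat-completions request/response shape.
async function openAiCompatChat(
  cfg: CompatConfig,
  model: string,
  messages: { role: string; content: string }[],
): Promise<string> {
  const res = await fetch(`${cfg.baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env[cfg.apiKeyEnv] ?? ""}`,
    },
    body: JSON.stringify({ model, messages }),
  });
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses carry the reply in choices[0].message.content.
  return data.choices[0].message.content;
}
```

Adding a new OpenAI-compatible provider then only requires a new `CompatConfig` entry, not a new adapter.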