Documentation Index
Fetch the complete documentation index at: https://docs.wolffi.sh/llms.txt
Use this file to discover all available pages before exploring further.
Providers
Wolffish communicates with LLMs via three providers using plain fetch() — no SDKs. Each provider has its own streaming format and tool-calling convention, which wernicke.ts normalizes into a single interface.
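As a rough illustration, the normalized interface might look like the sketch below. The type and function names here are assumptions for illustration only; the actual shape inside wernicke.ts is not documented on this page.

```typescript
// Hypothetical sketch of a normalized streaming interface.
// None of these names come from the real wernicke.ts.
type ProviderName = "anthropic" | "openai" | "ollama";

interface ToolCall {
  name: string;                  // tool the model wants to invoke
  args: Record<string, unknown>; // parsed arguments
}

interface NormalizedChunk {
  provider: ProviderName;
  text?: string;       // streamed text delta, if any
  toolCall?: ToolCall; // normalized from tool_use / function_call
  done: boolean;       // true on the final chunk of a stream
}

// Example: each provider adapter maps its native streaming event
// onto NormalizedChunk; here, a plain text delta.
function textChunk(
  provider: ProviderName,
  text: string,
  done = false
): NormalizedChunk {
  return { provider, text, done };
}
```

The point of the normalization is that downstream code consumes one chunk shape regardless of whether the upstream event was an Anthropic tool_use block or an OpenAI function_call object.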
Anthropic (Claude)
Tool calls arrive as tool_use content blocks. Configure your API key in Settings or directly in config.json.
Best for: Complex reasoning, detailed instruction following, nuanced tool use.
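A config.json entry for an API key might look like the fragment below. The key names here are guesses for illustration; check the actual config.json schema, and the key value is a placeholder, not a real credential.

```json
{
  "primaryProvider": "anthropic",
  "providers": {
    "anthropic": { "apiKey": "YOUR-ANTHROPIC-KEY" },
    "openai": { "apiKey": "YOUR-OPENAI-KEY" }
  }
}
```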
OpenAI (GPT)
Tool calls arrive as function_call objects. Configure your API key in Settings or directly in config.json.
Best for: General-purpose tasks, broad knowledge, fast responses.
Ollama (Local)
Health Tracking
The thalamus tracks each provider’s health independently:
- Failure count — Incremented on each failed request
- Backoff cooldown — Exponential backoff after failures
- Online check — net.isOnline() for instant offline detection
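The three checks above can be sketched as a small state record per provider. This is a minimal illustration of failure counting with exponential backoff, assuming hypothetical field names, base delay, and cap; it is not the real thalamus implementation.

```typescript
// Illustrative per-provider health state; names and constants are
// assumptions, not Wolffish's actual values.
interface ProviderHealth {
  failures: number;      // incremented on each failed request
  cooldownUntil: number; // epoch ms; provider is skipped until then
}

const BASE_MS = 1_000;  // assumed first backoff delay
const MAX_MS = 60_000;  // assumed backoff cap

function recordFailure(h: ProviderHealth, now: number): ProviderHealth {
  const failures = h.failures + 1;
  // Exponential backoff: 1s, 2s, 4s, ... capped at MAX_MS
  const delay = Math.min(BASE_MS * 2 ** (failures - 1), MAX_MS);
  return { failures, cooldownUntil: now + delay };
}

function isAvailable(
  h: ProviderHealth,
  now: number,
  online: boolean // e.g. the result of an offline check
): boolean {
  return online && now >= h.cooldownUntil;
}
```

Tracking each provider independently means one flaky provider backs off on its own schedule without delaying requests to the healthy ones.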
Choosing a Primary Provider
Set your primary provider in Settings or config.json. The cascade always falls back in order: Claude → OpenAI → Ollama. Your primary provider is tried first, then the cascade takes over on failure.
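The ordering rule above can be sketched in a few lines: the configured primary goes first, followed by the remaining providers in the fixed fallback order. Function and type names are illustrative, not Wolffish's actual API.

```typescript
// Illustrative sketch of the cascade ordering described above.
type Provider = "anthropic" | "openai" | "ollama";

// Fixed fallback order: Claude → OpenAI → Ollama
const CASCADE: Provider[] = ["anthropic", "openai", "ollama"];

function tryOrder(primary: Provider): Provider[] {
  // Primary first, then the rest of the cascade in its fixed order
  return [primary, ...CASCADE.filter((p) => p !== primary)];
}
```

For example, with OpenAI as primary, the order becomes OpenAI → Claude → Ollama; a failure simply advances to the next provider in the list.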