# The Agent Pipeline

Every message follows this exact path through the system. No exceptions, no shortcuts.

## Step-by-Step
### 1. Message Received

The user sends a message from the chat UI. It arrives in the main process via IPC and enters the agent loop.

### 2. Context Assembly (prefrontal)
This is the most important step. The prefrontal cortex:

- Reads all workspace markdown files (identity, memory, skills)
- Calls `cerebellum` for tool definitions from loaded capabilities
- Calls `cortex` for memory search results (SQLite FTS5)
- Passes all candidates through `ras` for relevance scoring
- Applies token budget allocation (15% identity / 10% prefrontal / 30% memory / 20% skills / 25% history)
- Assembles the final system prompt with XML tags
- Writes a debug snapshot to `brain/prefrontal/.debug/`
You can inspect exactly what the LLM received by reading the debug snapshot files. This is how you debug “why did Wolffish do that?” questions.
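The token budget split above can be sketched as a simple proportional allocator. This is an illustration only, not Wolffish's actual code: the function name, rounding behavior, and section keys are assumptions based on the percentages listed.

```typescript
// Hypothetical sketch of the 15/10/30/20/25 budget split described above.
type BudgetSection = "identity" | "prefrontal" | "memory" | "skills" | "history";

const BUDGET_SHARES: Record<BudgetSection, number> = {
  identity: 0.15,
  prefrontal: 0.10,
  memory: 0.30,
  skills: 0.20,
  history: 0.25,
};

// Split a total context window into per-section token budgets.
function allocateBudget(totalTokens: number): Record<BudgetSection, number> {
  const out = {} as Record<BudgetSection, number>;
  for (const section of Object.keys(BUDGET_SHARES) as BudgetSection[]) {
    out[section] = Math.floor(totalTokens * BUDGET_SHARES[section]);
  }
  return out;
}
```

With a 100,000-token window this gives 15,000 tokens to identity, 30,000 to memory, 25,000 to history, and so on.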
### 3. LLM Call (thalamus)

The assembled context goes to `thalamus.stream()`, which:

- Checks `net.isOnline()` for instant offline detection
- Tries the primary provider (Claude, OpenAI, or Ollama, depending on config)
- If the primary fails, cascades to the next healthy provider
- Returns a unified `StreamChunk` async generator
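The cascade behavior can be sketched as a first-success loop over configured providers. The `Provider` interface and its `call()` shape are assumptions for illustration, not Wolffish's real interfaces (which stream rather than return a single string).

```typescript
// Hedged sketch of provider fallback: try each provider in order,
// return the first success, and surface the last error if all fail.
interface Provider {
  name: string;
  call: (prompt: string) => Promise<string>;
}

async function cascade(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown = new Error("no providers configured");
  for (const provider of providers) {
    try {
      return await provider.call(prompt);
    } catch (err) {
      lastError = err; // this provider is unhealthy; fall through to the next
    }
  }
  throw lastError;
}
```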
### 4. Response Streaming (broca)

`broca` receives the stream chunks and pipes them to the renderer via IPC for real-time display in the chat UI.
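In outline, this step consumes the async generator and forwards each chunk over an IPC channel. A minimal sketch, with a stand-in `sendToRenderer` callback in place of Electron's `webContents.send` and an assumed `StreamChunk` shape and channel name:

```typescript
// Illustrative sketch of broca's piping role; names are assumptions.
interface StreamChunk {
  type: "text" | "tool_call" | "done";
  text?: string;
}

async function pipeToRenderer(
  stream: AsyncIterable<StreamChunk>,
  sendToRenderer: (channel: string, chunk: StreamChunk) => void,
): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    if (chunk.type === "text" && chunk.text) full += chunk.text;
    sendToRenderer("chat:chunk", chunk); // renderer appends to the chat UI in real time
  }
  return full; // accumulated text is kept for the parsing step
}
```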
### 5. Response Parsing (wernicke)

`wernicke` parses the streamed response, normalizing across provider formats:

- Anthropic: `tool_use` content blocks
- OpenAI: `function_call` objects
- Ollama: structured JSON in the response

All formats normalize to a single `ToolCall` type: `{ name, args, id }`.
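The normalization can be sketched as small adapters from each provider's shape into the unified `ToolCall` type. The raw input interfaces below are simplified approximations of the Anthropic and OpenAI wire formats, not Wolffish's actual parser:

```typescript
// The unified type named above.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
  id: string;
}

// Anthropic-style tool_use content block (simplified).
interface AnthropicToolUse {
  type: "tool_use";
  id: string;
  name: string;
  input: Record<string, unknown>;
}

// OpenAI-style function call (simplified; arguments arrive as a JSON string).
interface OpenAIFunctionCall {
  id: string;
  function: { name: string; arguments: string };
}

function fromAnthropic(block: AnthropicToolUse): ToolCall {
  return { name: block.name, args: block.input, id: block.id };
}

function fromOpenAI(call: OpenAIFunctionCall): ToolCall {
  return { name: call.function.name, args: JSON.parse(call.function.arguments), id: call.id };
}
```

The key difference the adapters absorb: Anthropic delivers arguments as an already-parsed object, while OpenAI delivers them as a JSON string.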
### 6. Tool Execution Loop (if tool calls detected)

If `wernicke` finds tool calls, the loop begins (max 8 iterations):

- `amygdala.classify()` — checks the tool call against danger patterns loaded from SKILL.md files. Three outcomes: `safe` (proceed), `confirm` (show approval dialog), `block` (deny).
- `motor.execute()` — creates a `TASK-{id}.md` file, logs the step, and calls the plugin with retry logic (3x with 2s/6s/18s backoff).
- `cerebellum.executeTool()` — routes the call to the correct capability plugin.
- Results go back to the LLM for the next iteration.
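The retry policy in `motor.execute()` (three retries with 2s/6s/18s backoff) can be sketched as a generic helper. This is a hypothetical reading of the policy: whether the delays apply between three total attempts or before three retries after an initial attempt is an assumption here.

```typescript
// Sketch of retry with the tripling backoff described: one initial
// attempt plus up to three retries, waiting 2s, 6s, then 18s.
const RETRY_DELAYS_MS = [2_000, 6_000, 18_000];

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(
  fn: () => Promise<T>,
  delays: number[] = RETRY_DELAYS_MS,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= delays.length; attempt++) {
    if (attempt > 0) await sleep(delays[attempt - 1]); // back off before each retry
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure; retry if attempts remain
    }
  }
  throw lastError;
}
```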
### 7. Memory (hippocampus + basalganglia)

After the response is complete:

- `hippocampus` appends a summary of the turn to today's episode file (`brain/hippocampus/episodes/YYYY-MM-DD.md`)
- `basalganglia` records the outcome (success/failure/denial) to today's feedback file
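Deriving today's episode path from the `YYYY-MM-DD` convention can be sketched in a few lines. The function name is illustrative, and the use of UTC for the date stamp is an assumption (the real code may use local time).

```typescript
// Sketch: build the episode path brain/hippocampus/episodes/YYYY-MM-DD.md
// for a given date, using the UTC date stamp.
function episodePath(date: Date): string {
  const stamp = date.toISOString().slice(0, 10); // "YYYY-MM-DD"
  return `brain/hippocampus/episodes/${stamp}.md`;
}
```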