Documentation Index

Fetch the complete documentation index at: https://docs.wolffi.sh/llms.txt

Use this file to discover all available pages before exploring further.

Installation

Prerequisites

Node.js (Required)

Wolffish capability plugins are .mjs files that execute in a Node.js runtime. Install Node.js 20+ from nodejs.org or via a version manager like nvm or fnm.
node --version   # Must be v20.0.0 or higher
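If your system Node.js is older than 20, a version manager makes the upgrade painless. A minimal sketch using fnm (the fnm commands are standard; the check simply falls through if fnm is not installed):

```shell
# Install and activate Node 20 via fnm, if fnm is available on this machine.
if command -v fnm >/dev/null 2>&1; then
  fnm install 20
  fnm use 20
  NODE_SETUP_MSG="Node 20 active via fnm"
else
  NODE_SETUP_MSG="fnm not found; install Node 20 directly from nodejs.org"
fi
echo "$NODE_SETUP_MSG"
```

The same flow works with nvm (`nvm install 20 && nvm use 20`) if that is your manager of choice.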

Ollama (Required)

Wolffish depends on Ollama for local LLM inference; it is not optional. Install it from ollama.com. After installing, pull a model:
ollama pull gemma3:4b
Wolffish detects Ollama automatically on launch and won’t function without it.
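You can also confirm by hand that Ollama is up before launching Wolffish: Ollama serves a local HTTP API on port 11434 by default. A quick check, assuming the default port and that curl is installed:

```shell
# Probe Ollama's local API; /api/tags lists the models you have pulled.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  OLLAMA_STATUS="running"
else
  OLLAMA_STATUS="not running"
fi
echo "Ollama is $OLLAMA_STATUS"
```

If this reports "not running", start Ollama (or run `ollama serve`) before launching Wolffish.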
For higher-quality responses, you can configure API keys for one or both cloud providers. You’ll configure these in Wolffish’s settings after installation. Cloud providers improve response quality, and Wolffish falls back to Ollama if they fail.

Download

Download the latest .dmg from the releases page.
  1. Open the .dmg file
  2. Drag Wolffish to your Applications folder
  3. Launch Wolffish from Applications
On first launch, macOS may warn about an unidentified developer. Right-click the app and select “Open” to bypass this.
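If you prefer the Terminal, the same Gatekeeper warning can be cleared by removing the quarantine attribute macOS attaches to downloaded apps (macOS only; the path below assumes the standard Applications install):

```shell
# Remove the quarantine flag so Gatekeeper stops warning about the app.
APP="/Applications/Wolffish.app"
if [ -d "$APP" ]; then
  xattr -dr com.apple.quarantine "$APP"
  GATEKEEPER_MSG="quarantine flag cleared"
else
  GATEKEEPER_MSG="Wolffish.app not found; nothing to do"
fi
echo "$GATEKEEPER_MSG"
```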

Build from Source

git clone https://github.com/younesalturkey/wolffish.git
cd wolffish
npm install
npm run dev
Building from source requires Node.js 20+ and npm. Native modules (better-sqlite3) will be compiled during npm install via electron-rebuild.
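If better-sqlite3 fails to load after switching Node or Electron versions, rebuilding it against the current Electron headers usually resolves it. A sketch, assuming electron-rebuild is reachable via npx as the build setup suggests:

```shell
# Rebuild only the native better-sqlite3 module against Electron's headers.
if [ -d node_modules ]; then
  # -f forces a rebuild; -w limits it to the named module.
  npx --yes electron-rebuild -f -w better-sqlite3 || true
  REBUILD_MSG="rebuild attempted"
else
  REBUILD_MSG="no node_modules directory; run 'npm install' first"
fi
echo "$REBUILD_MSG"
```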

Verify Installation

On first launch, Wolffish creates ~/.wolffish/workspace/ with default configuration files. You should see the onboarding screen prompting you to select an Ollama model. To verify everything is working:
  1. Select a model (e.g., gemma3:4b)
  2. Send a test message like “Hello, what can you do?”
  3. You should see a streaming response
If Ollama is not detected, Wolffish will guide you through the setup process.
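The first-launch setup can also be verified from a shell by checking for the workspace directory mentioned above:

```shell
# ~/.wolffish/workspace/ is created on Wolffish's first launch.
if [ -d "$HOME/.wolffish/workspace" ]; then
  WS_STATUS="workspace present"
else
  WS_STATUS="workspace missing; launch Wolffish once to create it"
fi
echo "$WS_STATUS"
```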