Overview
Before diving deeper, let’s define the key concepts you’ll encounter throughout Stoneforge.
Elements
An element is the atomic unit of data in Stoneforge. Everything in the system — tasks, messages, documents, entities, plans, workflows — is an element. All elements share a common base:
- `id` — unique identifier (e.g. `el-3a8f`)
- `type` — one of: `task`, `message`, `document`, `entity`, `plan`, `workflow`, `playbook`, `channel`, `library`, `team`
- `createdBy` — the entity that created this element
- `createdAt` / `updatedAt` — timestamps
- `tags` — freeform labels for filtering and organization
- `metadata` — extensible key-value data
Each element type adds its own fields on top of this base. Tasks have statuses and priorities. Messages have content and channels. Documents have titles and version history.
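As a rough sketch, the shared base could be expressed as a TypeScript interface. The field names come from the list above; the concrete types (and the `Task` fields shown) are assumptions:

```typescript
// Sketch of the common element base; field names are from the docs,
// the concrete TypeScript types are assumptions.
type ElementType =
  | "task" | "message" | "document" | "entity" | "plan"
  | "workflow" | "playbook" | "channel" | "library" | "team";

interface BaseElement {
  id: string;                        // unique identifier, e.g. "el-3a8f"
  type: ElementType;
  createdBy: string;                 // id of the creating entity
  createdAt: string;                 // timestamps, ISO-8601 assumed
  updatedAt: string;
  tags: string[];                    // freeform labels
  metadata: Record<string, unknown>; // extensible key-value data
}

// Each element type layers its own fields on top, e.g. a task
// (status/priority representation here is a guess):
interface Task extends BaseElement {
  type: "task";
  status: string;
  priority: number;
}
```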
Tasks
Tasks are the primary work unit in Stoneforge. They track what needs to be done, who’s doing it, and what’s blocking progress.
Status lifecycle
```
            ┌──────────┐
            │ backlog  │
            └────┬─────┘
                 │ triaged
                 ▼
            ┌──────────┐
     ┌──────│   open   │──────┐
     │      └────┬─────┘      │
     │           │ assigned   │
     │           ▼            │
     │      ┌───────────┐     │
     ├──────│in_progress│─────┤
     │      └────┬──────┘     │
     │           │ completed  │
     │           ▼            │
     │      ┌──────────┐      │
     │      │  review  │      │
     │      └────┬─────┘      │
     │           │ merged     │
     │           ▼            │
┌────▼────┐ ┌──────────┐      │
│deferred │ │  closed  │      │
└─────────┘ └──────────┘      │
                              │
                         ┌────▼────┐
                         │ blocked │
                         └─────────┘
```

Valid statuses: backlog, open, in_progress, blocked, deferred, review, closed, tombstone.
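Read as a transition table, the lifecycle above might look like the sketch below. The table is inferred from the diagram; the recovery transitions out of `blocked` and `deferred`, and the terminal `tombstone` state, are assumptions:

```typescript
// Hypothetical transition table inferred from the lifecycle diagram;
// Stoneforge's actual rules may differ.
const transitions: Record<string, string[]> = {
  backlog:     ["open"],                               // triaged
  open:        ["in_progress", "deferred", "blocked"], // assigned, or parked
  in_progress: ["review", "deferred", "blocked"],      // completed, or parked
  review:      ["closed"],                             // merged
  blocked:     ["open"],      // assumption: unblocking reopens the task
  deferred:    ["open"],      // assumption: deferred tasks can be revived
  closed:      ["tombstone"], // assumption: closed tasks can be tombstoned
  tombstone:   [],            // terminal
};

function canTransition(from: string, to: string): boolean {
  return (transitions[from] ?? []).includes(to);
}
```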
Agents
Agents are AI coding assistants that execute tasks. Each agent has a role that determines its behavior:
| Role | Description |
|---|---|
| Director | Strategic planner. Receives your goal, breaks it into tasks with priorities and dependencies. Runs as a persistent session. |
| Ephemeral Worker | Auto-dispatched by the daemon to complete a specific task. Executes in an isolated worktree, commits, pushes, then completes or hands off. |
| Persistent Worker | Manually started for one-off or exploratory work. Not auto-dispatched. |
| Steward | Handles maintenance — merge review, documentation scanning, recovery of stuck tasks, custom workflows. Runs on triggers or schedules. |
Agents can use Claude Code (default), OpenCode, or OpenAI Codex as their underlying provider. Authentication is configured within the provider CLI and passes through automatically — no API keys needed in Stoneforge.
Agent pools
Agent pools provide concurrency control over agent spawning. A pool defines a maximum number of agents that can run simultaneously, with optional per-type limits and priorities. Without pools, the dispatch daemon spawns agents for every available task with no cap.
Pools are especially useful when you need to:
- Limit total resource usage (CPU, memory, API costs)
- Reserve capacity for merge stewards so merges don’t wait behind workers
- Distribute rate limits across providers
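A pool's admission check could look like the following sketch. The field names (`maxConcurrent`, `perType`) are illustrative, not Stoneforge's actual config schema:

```typescript
// Hypothetical pool definition illustrating the limits described above.
interface AgentPool {
  maxConcurrent: number;             // hard cap across all agent types
  perType?: Record<string, number>;  // optional per-type limits
}

// Decide whether one more agent of `type` fits, given counts of
// currently running agents per type.
function canSpawn(
  pool: AgentPool,
  running: Record<string, number>,
  type: string
): boolean {
  const total = Object.values(running).reduce((a, b) => a + b, 0);
  if (total >= pool.maxConcurrent) return false;          // total cap hit
  const typeCap = pool.perType?.[type];
  if (typeCap !== undefined && (running[type] ?? 0) >= typeCap) {
    return false;                                         // per-type cap hit
  }
  return true;
}
```

This is also how capacity can be reserved for merge stewards: give workers a `perType` cap below `maxConcurrent`, leaving slots that only stewards can fill.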
Worktrees
Stoneforge uses git worktrees to give each worker an isolated copy of your repository. This is the key to safe parallel execution — multiple agents can make changes simultaneously without interfering with each other.
Each worker’s worktree lives on a branch named:
`agent/{worker-name}/{task-id}-{slug}`

For example: `agent/Worker1/el-3a8f-add-login-form`
Stoneforge manages the full worktree lifecycle: creation, branch setup, and cleanup after merge.
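A sketch of how such a branch name could be assembled (the slug rules here are an assumption, not Stoneforge's actual slugifier):

```typescript
// Lowercase the task title and collapse non-alphanumeric runs into
// hyphens. Assumed behavior; Stoneforge's real slugifier may differ.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Build the branch name following the documented convention.
function worktreeBranch(workerName: string, taskId: string, title: string): string {
  return `agent/${workerName}/${taskId}-${slugify(title)}`;
}
```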
The orchestration loop
The orchestration loop is the core engine that drives Stoneforge:
- You communicate your goal to the Director via the Director Panel
- Director creates a plan with tasks, priorities, and dependencies
- Dispatch daemon detects ready (unblocked, unassigned) tasks
- Daemon assigns tasks to idle workers from the pool
- Each worker spawns in an isolated git worktree
- Worker executes, commits, pushes, then completes or hands off
- Merge steward reviews — runs tests, squash-merges on pass, creates fix task on fail
- Loop repeats for remaining tasks
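Steps 3 and 4 above, detecting ready tasks for dispatch, can be sketched as a filter over the task list (all names are illustrative):

```typescript
// A task is "ready" when it is not finished, not claimed by a worker,
// and every dependency it is blocked by is done. Illustrative shape.
interface DispatchTask {
  id: string;
  blockedBy: string[];  // ids of tasks that must finish first
  assignee?: string;    // set once a worker claims the task
}

function readyTasks(tasks: DispatchTask[], done: Set<string>): DispatchTask[] {
  return tasks.filter(
    t =>
      !done.has(t.id) &&                       // not already finished
      t.assignee === undefined &&              // not claimed by a worker
      t.blockedBy.every(dep => done.has(dep))  // all dependencies closed
  );
}
```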
Merge process
When a worker finishes a task:
- Worker commits and pushes changes on its worktree branch
- A merge request is created automatically
- The merge steward picks up the merge request
- Steward runs your test command against the branch
- Tests pass — squash-merge into main, clean up worktree
- Tests fail — create a fix task, assign to next available worker
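The steward's pass/fail branch (the last two steps above) amounts to a two-way decision. A minimal sketch, with illustrative action names rather than Stoneforge's real API:

```typescript
// Outcome of a merge review: either squash-merge the branch, or spin
// up a fix task carrying the failure details. Illustrative shapes.
type MergeAction =
  | { kind: "squash_merge"; branch: string }
  | { kind: "create_fix_task"; branch: string; reason: string };

function decideMerge(
  branch: string,
  testsPassed: boolean,
  failureLog = ""
): MergeAction {
  return testsPassed
    ? { kind: "squash_merge", branch }                    // clean up worktree after
    : { kind: "create_fix_task", branch, reason: failureLog };
}
```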
Handoff
When a worker can’t complete a task (hits an obstacle, runs out of context), it hands off — the task returns to the pool with context notes describing what was tried and where it got stuck. The next available worker picks it up from the existing branch and worktree, continuing where the previous worker left off.
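In effect, a handoff resets assignment while preserving context. A minimal sketch, assuming handed-off tasks return to `open` and that context notes are a list of strings:

```typescript
// Return a task to the pool with a context note. The status value and
// note representation are assumptions, not Stoneforge's actual model.
interface HandoffTask {
  id: string;
  status: string;
  assignee?: string;
  notes: string[];  // what was tried, where it got stuck
}

function handOff(task: HandoffTask, note: string): HandoffTask {
  task.status = "open";       // assumption: back to dispatchable
  task.assignee = undefined;  // back to the pool
  task.notes.push(note);      // context for the next worker
  return task;                // branch and worktree are kept as-is
}
```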
Dual storage
Stoneforge uses a dual storage model — SQLite for fast queries, JSONL files for durable, git-trackable persistence:
```
┌─────────────────────────────────────────────────────────────┐
│ SQLite                                                      │
│ • Fast queries with indexes                                 │
│ • Full-text search (FTS5)                                   │
│ • Materialized views (blocked cache)                        │
│ • Ephemeral — rebuilt from JSONL on sync                    │
└────────────────────────────┬────────────────────────────────┘
                             │ sync
┌────────────────────────────▼────────────────────────────────┐
│ JSONL                                                       │
│ • Git-tracked, append-only                                  │
│ • Source of truth for all durable data                      │
│ • Human-readable, diff-friendly                             │
│ • Mergeable across branches                                 │
└─────────────────────────────────────────────────────────────┘
```

External sync
Stoneforge can sync elements bidirectionally with the tools your team already uses. Tasks sync with issue trackers, and documents sync with knowledge bases:
| Provider | Element type | External system |
|---|---|---|
| GitHub | Tasks | GitHub Issues |
| Linear | Tasks | Linear Issues |
| Notion | Documents | Notion Pages |
| Folder | Documents | Local Markdown files |
Each synced element tracks its link to the external system (provider, external ID, content hashes). The sync engine detects changes on either side using content hashing and supports configurable conflict resolution when both sides change simultaneously.
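Hash-based change detection can be sketched as comparing each side's current content hash against the hashes recorded at the last sync. SHA-256 and the function names here are assumptions; Stoneforge's actual hashing may differ:

```typescript
import { createHash } from "node:crypto";

// Hash the serialized content of one side of a synced element.
function contentHash(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

type SyncState = "in_sync" | "local_changed" | "remote_changed" | "conflict";

// Compare current content on both sides against the hashes stored at
// the last successful sync. Both sides changed => conflict, which is
// where configurable conflict resolution kicks in.
function detectChange(
  local: string,
  remote: string,
  lastLocalHash: string,
  lastRemoteHash: string
): SyncState {
  const localChanged = contentHash(local) !== lastLocalHash;
  const remoteChanged = contentHash(remote) !== lastRemoteHash;
  if (localChanged && remoteChanged) return "conflict";
  if (localChanged) return "local_changed";
  if (remoteChanged) return "remote_changed";
  return "in_sync";
}
```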
External sync runs as a background daemon alongside the orchestrator, or on-demand via the `sf external-sync push`, `pull`, and `sync` commands.