Agentic apparatuses
On designing systems for AI operators.
There’s an emerging flavor of software system that needs a name. I propose: agentic apparatus.
@Steve__Yegge’s Beads task system is one. @arscontexta’s agentic note-taking plugin is another. @github repos are big ones hiding in plain sight.
To understand what I mean, let’s look at the humble abacus.
I. The abacus
An abacus is not a calculator. A calculator takes input and produces output — you give it a problem, it gives you an answer. An abacus is different. It’s a mechanism: it must be operated. The beads sit in positions that encode the current state of a computation. The operator looks at those positions, applies rules, and moves beads. Each move advances the computation. The state persists between moves. The progress is visible. And critically, a skilled operator can glance at the beads and know exactly what to do next.
Three things make this work. First, state: the positions of the beads, which are persistent, inspectable, and meaningful. Second, operations: the moves — slide a bead up, carry to the next rod — each one small, discrete, and irreversible within the computation. Third, skill: the knowledge of arithmetic that tells the operator which moves to make, in what order, given what the beads currently show.
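To make the three ingredients concrete, here is a toy model in code (purely illustrative, not a faithful soroban): state as digits on rods, operations as moves with carries, skill as the rule for which moves to make.

```python
# Toy model of the abacus's three ingredients (illustrative only).

# State: persistent, inspectable, meaningful.
rods = [0, 0, 0, 0]  # one decimal digit per rod, most significant first

# Operations: small, discrete moves that advance the computation.
def add_at(rods, i, digit):
    """Add `digit` to rod i, carrying to the next rod on overflow."""
    rods[i] += digit
    while rods[i] > 9:      # carry: one bead-move at a time
        rods[i] -= 10
        rods[i - 1] += 1
        i -= 1

# Skill: the arithmetic rules that tell the operator which moves to make.
def add_number(rods, n):
    for i, ch in enumerate(str(n).rjust(len(rods), "0")):
        add_at(rods, i, int(ch))

add_number(rods, 478)
add_number(rods, 256)
# rods now read [0, 7, 3, 4] -- the state *is* the answer
```

Note that the state persists between operations, and anyone who knows the rules can read the rods mid-computation and continue.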
Take away any one of these and the abacus stops being useful. Beads without rules are decoration. Rules without beads are theory. And beads plus rules without a skilled operator are just sitting there.
Now replace “skilled human” with “AI agent.”
II. Agentic operators
An agentic apparatus is one meant to be operated by an AI. When the operator is an LLM — tireless, fast, capable of following complex procedures exactly — the design space for apparatuses explodes. A dependency-aware task system tracking hundreds of concurrent work streams. A note-taking engine that builds individualized cognitive scaffolds from conversation. A knowledge graph extracting structured claims from ten thousand documents. These are systems whose state exceeds any human’s patience to maintain, but not an agent’s.
The constraint that makes apparatuses necessary is the same one that made the abacus necessary: the work exceeds what the operator can hold in their head at once. An abacus externalizes arithmetic into physical state. An agentic apparatus externalizes complex work into persistent data that an agent can inspect, reason about, and advance — one operation at a time, across many sessions, without losing the thread.
The key property: apparatuses ratchet. Each operation moves state forward in a way that accumulates. A note gets added. A task gets completed. A tension gets identified. Like a ratchet mechanism, each click is a small advance that can’t slip backward. The skill tells the agent which direction is forward. This is what distinguishes an apparatus from a tool. A tool amplifies a single action. An apparatus structures a sequence of actions into cumulative progress.
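The ratchet can be sketched as a loop: read state, derive the next operation, apply it, persist. This is a minimal sketch with hypothetical names and in-memory state; a real apparatus would persist the state (a JSON file, a database, an issue tracker) so any future session, or a different agent, can resume.

```python
# Minimal sketch of an apparatus loop. Names and state shape are
# illustrative; a real apparatus persists this state between sessions.
state = {"todo": ["triage", "draft", "review"], "done": []}

def next_operation(state):
    """Skill: inspect the state and derive the next move, if any."""
    return state["todo"][0] if state["todo"] else None

def apply(state, op):
    """Operation: one discrete, forward-only click of the ratchet."""
    state["todo"].remove(op)
    state["done"].append(op)  # accumulated progress never slips back

while (op := next_operation(state)) is not None:
    apply(state, op)
```

The operator holds nothing in its head: everything it needs to decide the next move is in the state, which is exactly what lets a new session pick up where the last one stopped.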
III. Examples in the wild
Beads is a dependency-aware task system for coding agents. The state is an issue graph with dependencies, priorities, and completion status. The operations are creating issues, claiming them, resolving them, discovering new work. The skill lives in the instructions that tell the agent to check the graph, find unblocked issues, and work them. The name is not accidental — beads on a wire, positions encoding progress. It’s an abacus for project management.
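Beads' internal schema is its own; a generic sketch of the same dependency-aware idea might look like this, where "find unblocked issues" is a pure function of the state:

```python
# Generic sketch of a dependency-aware issue graph (not Beads' actual schema).
issues = {
    "design-api":  {"deps": [],              "status": "done"},
    "impl-server": {"deps": ["design-api"],  "status": "open"},
    "write-docs":  {"deps": ["impl-server"], "status": "open"},
}

def unblocked(issues):
    """Open issues whose dependencies are all done: what to work on next."""
    return [
        name for name, issue in issues.items()
        if issue["status"] == "open"
        and all(issues[dep]["status"] == "done" for dep in issue["deps"])
    ]

unblocked(issues)  # ["impl-server"]; "write-docs" is still blocked
```

An agent operating this graph never has to be told what to do: resolving "impl-server" mechanically unblocks "write-docs" on the next read.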
Ars Contexta generates individualized knowledge management scaffolds from conversation. You describe how you think; the engine produces a folder structure, context files, and processing pipelines tailored to your cognitive style. It’s an apparatus generator — it creates a bespoke apparatus and hands it to the agent to operate.
A git repository with an issue tracker is an apparatus hiding in plain sight. Code, commits, and branches are the state. Staging, committing, and merging are the operations. Contributing guidelines and triage labels are the skill. But it’s the issues that make it apparatus-shaped: they signal to an outside observer — human or agent — “here’s what needs doing to move the whole system toward a desirable state.”
IV. What changes
Agents are a fundamentally different kind of operator, and most of our software isn’t designed for them. Yet.
First, agents can do tedious work that humans won’t. A human could, in principle, read ten thousand documents and extract every factual claim into a structured graph. No one is going to do that. An agent will, if you give it the right structure to operate on. Entire categories of systems that were theoretically possible but practically absurd become viable when the operator doesn’t get bored or lose focus.
Second, unlike traditional software, agents are stochastic — brilliant one turn, confused the next. Without structure, their output drifts. An apparatus solves this by externalizing progress into persistent state. The agent doesn’t need to remember where it was — it reads the state, sees what’s been done, and picks up from there. The ratchet ensures that good work stays done, even across sessions, even across different agents. The apparatus is what turns a series of probabilistic guesses into durable, cumulative forward progress.
Third, an apparatus tells the agent what to do. This is the part that changes the human’s role. Today, most agent workflows require a human in the loop — prompting, directing, babysitting. An apparatus inverts that. Because the state is legible and meaningful, the agent can inspect it and derive the next action on its own. Open issues in a repo. Unprocessed notes in a knowledge base. Unresolved dependencies in a task graph. The state itself is the prompt. The human sets the apparatus in motion and defines the goal; the apparatus tells the agent, step by step, how to get there.
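One way to picture “the state itself is the prompt” (all names and the state schema here are hypothetical): a function that renders legible apparatus state into the agent’s next instructions, with no human in the loop.

```python
# Hypothetical sketch: the agent's next prompt is derived from apparatus
# state rather than written by a human. Schema is illustrative.
state = {
    "unprocessed_notes": ["2024-03-01.md"],
    "open_issues": ["fix-login-bug", "add-retry-logic"],
}

def state_as_prompt(state):
    """Render inspectable state into instructions for the operator."""
    lines = ["Apparatus state -- derive your next action from it:"]
    for category, items in state.items():
        for item in items:
            lines.append(f"- {category}: {item}")
    return "\n".join(lines)

prompt = state_as_prompt(state)  # feed this to the agent each session
```

The human’s job reduces to defining the goal and the schema; every subsequent prompt is just a read of the current state.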
This is why I think the concept deserves a name. Not because the examples I’ve mentioned are the only ones — but because once you see the pattern, you start seeing it everywhere, and you start asking different design questions. Not “what UI does the human need?” but “what state, operations, and skill does the agent need?”