Chapter 0

The Prologue — Why This Exists


“What kind of society will AI agents form?”

The Question

In 2026, AI agents write code, review pull requests, fix bugs, deploy software, and run experiments — often without a human touching the keyboard. The question is no longer whether agents will build our products. The question is what kind of system will govern them.

Today’s answer is: barely any system at all. Most AI-assisted development operates in one of two modes:

  1. The Co-Pilot: A human writes code with AI suggestions. Faster, but still human-bottlenecked. The human does the thinking; the AI fills in the syntax.
  2. The Swarm: Multiple agents work in parallel, supervised by a human who reviews their output. Faster still, but the human becomes the bottleneck — drowning in review requests, context-switching between agent threads, making rapid-fire decisions with mounting cognitive load.

Neither mode is sustainable. The Co-Pilot doesn’t scale. The Swarm burns out the human.

Steve Yegge calls this the AI Vampire Effect: the productivity gains from AI come with hidden human costs. The more agents you deploy, the more decisions flow to the human. The more decisions the human makes, the faster they drain. Three to four hours of deep decision-making is the realistic daily limit — not eight to ten hours of unrelenting review.

And yet, the work doesn’t stop when the human does.

The Town, Not the Factory

StrongDM’s Software Factory demonstrated that a team of three could build a system where no human writes code and no human reviews code. Specifications and scenarios drive agents that write, test, and converge on working software autonomously. Dan Shapiro calls this Level 5: the Dark Factory.

The Dark Factory is instructive. It proves that specification-driven autonomous development works. But it’s also incomplete. It answers “can agents build software alone?” with yes. It doesn’t answer “should they?”

The Anokye System says: not alone. Not because agents aren’t capable — they are. But because products aren’t just code. Products are expressions of human values, judgment, creativity, and intent. A product that no human understands is a product that no human can direct. And a product that no human can direct is a product that drifts.

The Anokye System models product building not as a factory — where finished goods roll off an assembly line — but as a town: a living settlement with citizens, governance, institutions, infrastructure, and continuous evolution. The product is not manufactured and shipped; it is cultivated and inhabited.

The name comes from Okomfo Anokye (c. 1655–1717), the priest-statesman who co-founded the Ashanti Empire. Anokye unified fragmented Akan states into a cohesive nation through shared laws, rituals, a constitution, and the Golden Stool — a symbol that made the abstract concept of unity tangible and sacred. The Anokye System aspires to do the same: unify fragmented product-building concerns under a shared conceptual architecture, with named agents and concrete processes that make the complex system graspable.

“Krom” is Twi for “town.” A factory does one thing. A town has commerce, governance, infrastructure, education, defense, and culture — analogous to development, operations, research, design, feedback, and experimentation. A factory has workers. A town has citizens with ongoing roles, institutional memory, and accountability. A factory shuts down between shifts. A town never sleeps.

The Core Hypothesis

If you provide a well-structured system of roles, rhythms, and rules, a constellation of AI agents can maintain continuous progress on complex projects with minimal human intervention — and the human’s role shifts from doing the work to directing and supervising the system that does the work.

This is not a hypothesis about replacing humans. It is a hypothesis about amplifying human agency through a system that never sleeps, never forgets context, and never loses momentum.

The Landscape That Informs Us

The Anokye System doesn’t emerge from theory alone. It synthesizes lessons from the most advanced AI orchestration systems publicly documented as of early 2026:

GasTown (Steve Yegge)

A multi-agent orchestration system that coordinates 20-30 parallel Claude Code instances. GasTown’s critical insight: “You don’t make agents better by giving them more freedom. You make them better by constraining their workflow and externalizing their state.” All task state lives in Beads (JSON records in Git), immune to agent crashes, context exhaustion, or hallucination. Work flows through Formulas (workflow templates) and Molecules (chained tasks with verification gates). Agents cannot self-certify completion.

GasTown taught us that externalized state and constrained workflows are non-negotiable.

StrongDM Software Factory

The most radical approach: a non-interactive development system where specs and scenarios drive agents that write, test, and converge code without any human review. Key innovations include Digital Twins (behavioral clones of external services for unlimited testing), Satisfaction Testing (probabilistic LLM-judged validation replacing boolean tests), and the Attractor — a coding agent whose repository contains no code, just three markdown specification files.

StrongDM taught us that specification primacy works and that digital twins collapse the cost of validation infrastructure.
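Satisfaction Testing can be sketched as a pattern: rather than a single boolean assertion, a judge scores many trials and the test passes when the aggregate clears a threshold. The toy heuristic judge and all names below are stand-ins; a real system would call an LLM, and this is not StrongDM's actual API.

```python
import statistics

# Stand-in judge: a real satisfaction test would ask an LLM to score the
# output against the criterion. This keyword heuristic is illustrative only.
def llm_judge(output: str, criterion: str) -> float:
    return 1.0 if criterion.lower() in output.lower() else 0.0

def satisfaction_test(outputs: list[str], criterion: str, threshold: float = 0.8) -> bool:
    # Probabilistic validation: aggregate judged scores across many trials
    # instead of demanding that every single output pass a rigid assertion.
    scores = [llm_judge(o, criterion) for o in outputs]
    return statistics.mean(scores) >= threshold

trials = [
    "Password reset email sent to the user",
    "Sent a password reset email",
    "Reset link emailed to the user's address",  # scored 0 by the toy judge
    "Password reset email dispatched",
    "Password reset email queued for delivery",
]
print(satisfaction_test(trials, "password reset email"))  # 4/5 = 0.8 passes at threshold 0.8
```

The design choice worth noting: the threshold turns validation into a tunable dial, which is what lets agents converge on "good enough to ship" without a human adjudicating each edge case.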

Grant Harvey’s Enterprise AI Analysis

The sobering counterpoint: 80-95% of enterprise AI initiatives fail. The single best predictor of success is codebase quality. The gap between AI benchmarks and real-world utility remains enormous. Human-in-the-loop by default is the pattern that survives.

Harvey taught us that governance isn’t optional — it’s the difference between the 5% that succeed and the 95% that don’t.

Dan Shapiro’s Five Levels

A maturity model for AI-assisted development: from Spicy Autocomplete (Level 0) through Collaborative Partner (Level 2) to Dark Factory (Level 5). The critical insight: the gap between Level 2 and Level 4 is a phase transition — “the middle class of coding evaporates.” At Level 4, “one human can direct multiple agents. You’re only limited by how well you can articulate requirements.”

Shapiro taught us that the human’s primary output becomes specifications, validation, and ideation — not code.

The AI Vampire Effect (Steve Yegge)

AI productivity gains come with hidden human costs. The more agents you deploy, the more context-switching the human endures. Three to four hours of deep decision-making is the realistic daily limit. Systems must be designed for rhythm over throughput — sustainable cadence, not sprint-and-stall.

The Vampire taught us that the human’s well-being is a system design constraint, not an afterthought.

What the Anokye System Adds

Every exemplar above covers only the Decide/Act phases of the product lifecycle — the “build” part. None covers:

  • Systematic observation: Agent-driven monitoring of telemetry, user behavior, feedback, market signals
  • Automated orientation: Synthesizing observations into actionable understanding
  • Full OODA loop integration: Closing the loop from production telemetry back through specifications
  • Multi-domain coverage: Extending beyond software to design, content, experiments, operations
  • Governance as a living agent: Not just static policies, but a persistent governance daemon with its own reasoning loop
  • Cross-domain coordination: One person’s Okyeame communicating with multiple domains simultaneously

The Anokye System fills these gaps by applying the OODA loop (Observe-Orient-Decide-Act) to the entire product lifecycle, running continuously and asynchronously across multiple timescales.

The Akan Model: Why This Architecture Works

The organizational model is drawn from the Akan political structure of West Africa — one of the most sophisticated pre-colonial governance systems ever developed. This is not mere metaphor. The Akan system solved the same fundamental problem: How do you coordinate many autonomous actors toward a shared objective while maintaining both cohesion and adaptability?

Key principles from the Akan model that the Anokye System adopts:

  1. Authority is earned and maintained through results — The Ohene (chief) could be destooled. Agents that consistently fail are replaced.
  2. The Queen Mother has veto power — The Ohemaa could override the chief on matters of community welfare. Governance policies can halt agent actions.
  3. The Linguist is not a mere translator — The Okyeame was a diplomat, advisor, and protocol expert. The system’s personal agent is similarly sophisticated.
  4. The warriors are self-organizing — Asafo companies had internal structure and operated independently within mission parameters. Implementation agents self-select work.
  5. Decisions are made in council — The Ohene didn’t act alone. Complex decisions involve multiple agents contributing analysis.
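Principle 2 above, the Ohemaa's veto, translates directly into a behavioral contract: every agent action passes through registered governance policies before executing. The policy shape and field names below are illustrative assumptions, not part of the specification.

```python
from typing import Callable, Optional

# A governance policy inspects a proposed action and returns a reason to
# veto it, or None to let it proceed. Action fields here are hypothetical.
Policy = Callable[[dict], Optional[str]]

def budget_policy(action: dict) -> Optional[str]:
    # Example welfare check: halt actions that exceed a spending limit.
    if action.get("cost", 0) > 100:
        return "exceeds spending limit"
    return None

def execute(action: dict, policies: list[Policy]) -> str:
    # Governance overrides the agent: any single veto halts the action.
    for policy in policies:
        reason = policy(action)
        if reason is not None:
            return f"vetoed: {reason}"
    return f"executed: {action['name']}"

print(execute({"name": "deploy", "cost": 10}, [budget_policy]))
print(execute({"name": "retrain", "cost": 500}, [budget_policy]))
```

The essential property is that policies run outside the acting agent, so no amount of agent reasoning can talk its way past the veto.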

The Invitation

This specification is designed to be:

  • Read by humans who want to understand a new model for AI-driven product building
  • Consumed by AI agents who will implement the system on whatever platform they have access to
  • Built incrementally — from a single agent on a simple work tracker to a civilization of specialized agents maintaining continuous progress across multiple domains

The Anokye System is not a product. It is a pattern language — a set of named concepts, structural relationships, and behavioral contracts that any team (human or AI) can use to build their own implementation.

The question is not whether AI agents will build our products. The question is: what kind of society will those agents form?

The Anokye System answers: one modeled on millennia of human organizational wisdom, adapted for the age of artificial intelligence.
