“What kind of society will AI agents form?”
In 2026, AI agents write code, review pull requests, fix bugs, deploy software, and run experiments — often without a human touching the keyboard. The question is no longer whether agents will build our products. The question is what kind of system will govern them.
Today’s answer is: barely any system at all. Most AI-assisted development operates in one of two modes: the Co-Pilot, where one human closely supervises a single agent, or the Swarm, where many agents funnel their output to one overloaded human.
Neither mode is sustainable. The Co-Pilot doesn’t scale. The Swarm burns out the human.
Steve Yegge calls this the AI Vampire Effect: the productivity gains from AI come with hidden human costs. The more agents you deploy, the more decisions flow to the human. The more decisions the human makes, the faster they drain. Three to four hours of deep decision-making is the realistic daily limit — not eight to ten hours of unrelenting review.
And yet, the work doesn’t stop when the human does.
StrongDM’s Software Factory demonstrated that a team of three could build a system where no human writes code and no human reviews code. Specifications and scenarios drive agents that write, test, and converge on working software autonomously. Dan Shapiro calls this Level 5: the Dark Factory.
The Dark Factory is instructive. It proves that specification-driven autonomous development works. But it’s also incomplete. It answers “can agents build software alone?” with yes. It doesn’t answer “should they?”
The Anokye System says: not alone. Not because agents aren’t capable — they are. But because products aren’t just code. Products are expressions of human values, judgment, creativity, and intent. A product that no human understands is a product that no human can direct. And a product that no human can direct is a product that drifts.
The Anokye System models product building not as a factory — where finished goods roll off an assembly line — but as a town: a living settlement with citizens, governance, institutions, infrastructure, and continuous evolution. The product is not manufactured and shipped; it is cultivated and inhabited.
The name comes from Okomfo Anokye (c. 1655–1717), the priest-statesman who co-founded the Ashanti Empire. Anokye unified fragmented Akan states into a cohesive nation through shared laws, rituals, a constitution, and the Golden Stool — a symbol that made the abstract concept of unity tangible and sacred. The Anokye System aspires to do the same: unify fragmented product-building concerns under a shared conceptual architecture, with named agents and concrete processes that make the complex system graspable.
“Krom” is Twi for “town.” A factory does one thing. A town has commerce, governance, infrastructure, education, defense, and culture — analogous to development, operations, research, design, feedback, and experimentation. A factory has workers. A town has citizens with ongoing roles, institutional memory, and accountability. A factory shuts down between shifts. A town never sleeps.
If you provide a well-structured system of roles, rhythms, and rules, a constellation of AI agents can maintain continuous progress on complex projects with minimal human intervention — and the human’s role shifts from doing the work to directing and supervising the system that does the work.
This is not a hypothesis about replacing humans. It is a hypothesis about amplifying human agency through a system that never sleeps, never forgets context, and never loses momentum.
The Anokye System doesn’t emerge from theory alone. It synthesizes lessons from the most advanced AI orchestration systems publicly documented as of early 2026:
A multi-agent orchestration system that coordinates 20-30 parallel Claude Code instances. GasTown’s critical insight: “You don’t make agents better by giving them more freedom. You make them better by constraining their workflow and externalizing their state.” All task state lives in Beads (JSON records in Git), immune to agent crashes, context exhaustion, or hallucination. Work flows through Formulas (workflow templates) and Molecules (chained tasks with verification gates). Agents cannot self-certify completion.
GasTown taught us that externalized state and constrained workflows are non-negotiable.
The most radical approach: a non-interactive development system where specs and scenarios drive agents that write, test, and converge code without any human review. Key innovations include Digital Twins (behavioral clones of external services for unlimited testing), Satisfaction Testing (probabilistic LLM-judged validation replacing boolean tests), and the Attractor — a coding agent whose repository contains no code, just three markdown specification files.
StrongDM taught us that specification primacy works and that digital twins collapse the cost of validation infrastructure.
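The shape of Satisfaction Testing can be sketched as follows. This is an assumption-laden illustration: a keyword check stands in for the LLM judge (a real system would prompt a model to score how well an output satisfies a criterion), and the threshold and sampling scheme are invented for the example. What it shows is the structural shift from a single boolean assert to a probabilistic pass over many judged samples.

```python
import statistics

# Stand-in for an LLM judge: in a real system this would ask a model to
# rate how well `output` satisfies `criterion`, returning a score in [0, 1].
def llm_judge(output: str, criterion: str) -> float:
    return 1.0 if criterion.lower() in output.lower() else 0.2

def satisfaction_test(outputs, criterion, threshold=0.8):
    # Probabilistic validation: judge several sampled outputs and compare
    # the mean score to a threshold, instead of asserting one boolean.
    scores = [llm_judge(o, criterion) for o in outputs]
    return statistics.mean(scores) >= threshold, scores

samples = [
    "Login succeeded and the session token was issued.",
    "Login succeeded; token issued and logged.",
    "Login succeeded with a fresh token.",
]
passed, scores = satisfaction_test(samples, "token")
print(passed)  # prints True
```

The design choice worth noting: because agent output is nondeterministic, the unit of validation becomes a distribution of judged runs rather than a single exact comparison.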
The sobering counterpoint: 80-95% of enterprise AI initiatives fail. The single best predictor of success is codebase quality. The gap between AI benchmarks and real-world utility remains enormous. Human-in-the-loop by default is the pattern that survives.
Harvey taught us that governance isn’t optional — it’s the difference between the 5% that succeed and the 95% that don’t.
A maturity model for AI-assisted development: from Spicy Autocomplete (Level 0) through Collaborative Partner (Level 2) to Dark Factory (Level 5). The critical insight: the gap between Level 2 and Level 4 is a phase transition — “the middle class of coding evaporates.” At Level 4, “one human can direct multiple agents. You’re only limited by how well you can articulate requirements.”
Shapiro taught us that the human’s primary output becomes specifications, validation, and ideation — not code.
AI productivity gains come with hidden human costs. The more agents you deploy, the more context-switching the human endures. Three to four hours of deep decision-making is the realistic daily limit. Systems must be designed for rhythm over throughput — sustainable cadence, not sprint-and-stall.
The Vampire taught us that the human’s well-being is a system design constraint, not an afterthought.
Every exemplar above covers only the Decide/Act phases of the product lifecycle — the “build” part. None covers the Observe and Orient phases: watching the product in use, absorbing feedback and research, and reorienting strategy before the next build.
The Anokye System fills these gaps by applying the OODA loop (Observe-Orient-Decide-Act) to the entire product lifecycle, running continuously and asynchronously across multiple timescales.
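One way to picture a continuous, multi-timescale OODA loop is a tick-based scheduler in which each phase fires on its own cadence. The phase cadences below are illustrative assumptions, not part of the Anokye specification; the sketch only shows that fast phases keep running while slower ones mature asynchronously.

```python
# Each OODA phase runs on its own cadence, measured in ticks. The
# specific intervals here are invented for illustration.
CADENCE = {"observe": 1, "orient": 2, "decide": 4, "act": 1}

def run(ticks: int):
    log = []
    for t in range(1, ticks + 1):
        for phase in ("observe", "orient", "decide", "act"):
            if t % CADENCE[phase] == 0:
                log.append((t, phase))  # a real system would dispatch agents
    return log

history = run(4)
# Fast phases (observe, act) fire every tick; slower phases fire less
# often, so the loop never blocks on its slowest concern.
print(sum(1 for _, p in history if p == "observe"))  # prints 4
print(sum(1 for _, p in history if p == "decide"))   # prints 1
```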
The organizational model is drawn from the Akan political structure of West Africa — one of the most sophisticated pre-colonial governance systems ever developed. This is not mere metaphor. The Akan system solved the same fundamental problem: How do you coordinate many autonomous actors toward a shared objective while maintaining both cohesion and adaptability?
Key principles from the Akan model that the Anokye System adopts:
This specification is designed to be:
The Anokye System is not a product. It is a pattern language — a set of named concepts, structural relationships, and behavioral contracts that any team (human or AI) can use to build their own implementation.
The question is not whether AI agents will build our products. The question is: what kind of society will those agents form?
The Anokye System answers: one modeled on millennia of human organizational wisdom, adapted for the age of artificial intelligence.