The Bureaucracy of Artificial Intelligence
We wanted a digital god. We're building a firm instead.

The Pin Factory Paradox
In 1776, Adam Smith described a pin factory to explain the efficiency of the division of labor. He noted a skilled generalist working alone could make ~20 pins a day. However, by dividing the labor so that one man draws the wire, another straightens it, a third cuts it, and so forth, ten men could produce 48,000 pins a day.1
Smith wasn’t just describing a factory. He was describing a patch for the human mind. He recognized that while humans are “general intelligences,” we suffer from high “switching costs”. The metabolic tax of shifting our attention from one context to another is so high that we must organize ourselves into rigid, specialized slots to get anything done.
For the last three centuries, we have viewed this bureaucracy as a necessary evil. It was a biological limitation we hoped to eventually transcend.
Then we built the “God Model.”
The Death of the Monolith
When GPT-4 arrived, the tacit assumption in Silicon Valley was that Artificial Intelligence would be the ultimate generalist. We imagined a single, monolithic neural network that could do it all. We thought it would write the code, design the UI, and manage the database without the messy overhead of departments or managers.
But look at the state-of-the-art in 2026. The “God Model” dream is dead. Instead of a single digital genius, the industry has converged on architectures that look suspiciously like the very thing we tried to escape: bureaucracy. We are building “Orchestrators” (managers), “Workers” (specialists), and “Gateways” (compliance officers).
It turns out that intelligence, whether biological or silicon, is subject to the same laws of physics.
Lessons from the Flat Organization
I’ve been tracking the “organizational design” of AI agents since the chaotic days of 2023. Back then, we had the “ReAct” paradigm (chain-of-thought reasoning with external actions) with simple loops like AutoGPT. These were the AI equivalent of a “flat organization”. Every agent could talk to every other agent. There was no hierarchy, and there were no managers.
Just like the famous “flat” experiments at Zappos2 or Valve3, it resulted in chaos. These flat swarms suffered from “loops of death” and massive hallucination spirals. Without a hierarchy to filter information, the noise amplified until the system crashed.
By late 2025, the trend line shifted aggressively toward structure. The release of Claude Opus 4.5 and GPT-5.2-Codex didn’t just give us smarter models. It gave us models capable of submitting to a “boss”.
We aren’t seeing the liberation of intelligence. We are seeing the industrialization of it. The cutting-edge AI architecture of 2026 is essentially a digital org chart. It consists of containerized pods of specialized agents strictly governed by an orchestration layer.
The Transaction Costs of Thought
Why is this happening? Why can’t a super-intelligent model just “figure it out”?
To understand this, we have to look at the Transaction Cost Theory of the firm, introduced by Ronald Coase in 1937.
Coase asked why companies exist at all. Why don’t we just contract everything out in a free market? His answer was that coordination is expensive. There are costs to finding the right person, negotiating the price, and enforcing the contract. When those costs are high, you build a firm (a hierarchy) to reduce the friction.
AI is facing its own “Coasean Moment.”
The Cognitive Transaction Cost. Just as humans have “Bounded Rationality” (a limit on how much we can process), AI models have “Context Windows”. Even with “Context Compaction”, dumping every piece of information into a single model creates “Context Drift”. The model gets confused. Specialization is the fix. By breaking a complex objective into small, specialized “MCP Servers” (Model Context Protocol), we lower the cognitive load on any single agent. We have one server for the database, one for Slack, and one for the file system. We are essentially creating “departments” to handle the information overload.
The Principal-Agent Problem. In economics, the “Principal-Agent Problem” occurs when a worker (the Agent) doesn’t perfectly align with the owner’s (the Principal) goals. In AI, we call this “Alignment” or “Safety.” A rogue agent with root access is a security nightmare. The solution in 2026 mirrors the solution in 1920: Middle Management. We now use “Orchestration Layers” to act as the digital middle manager. This layer doesn’t do the work. It audits the work. It ensures the “Worker Agent” is not hallucinating or trying to execute a malicious command. We’ve reinvented the supervisor because trust is not scalable.
The Scaling Laws of Agency. Research from DeepMind in 2026 formalized this with the “Scaling Laws of Agency”. They found that adding more agents to a flat swarm does not linearly increase performance. Instead, it exponentially increases coordination friction.
This is mathematically identical to “Dunbar’s Number” in sociology, which suggests human groups fall apart without structure once they exceed roughly 150 members. The optimal topology for AI, it turns out, is a hierarchy.
Structure is a Feature
There’s a profound irony here.
For decades, technologists have viewed the corporation with its org charts, memos, and managers as a relic of the past. We thought silicon would liberate us from structure.
But it seems that structure is not a bug of human biology. It is a feature of general intelligence.
As we scale AI toward AGI, we are not building a god. We are building a firm. The limiting factor of the future will not be raw compute. It will be organizational design.
The question for us, as leaders, shifts from “How do I prompt this model?” to a much older, more familiar question:
“How do I structure this team?”



Congratulations Matt, on breaking your duck. I look forward to many more. I’m going to restack for distribution :-)
Give me a call this week if you can on cell if you get the chance.
Cheers
SM