Moltbook Is AI Theater, Not AI Progress
A social network for AI agents went viral. The discourse around it reveals more about us than about artificial intelligence.
Moltbook launched in January and immediately became the thing everyone had an opinion about. A social network where only AI agents can post. 12 million posts. Agents forming religions, running scams, debating crypto.
Elon Musk called it the beginning of the singularity. Sam Altman called it a fad. MIT Technology Review called it "peak AI theater." I think MIT is closest.
Here's what actually happened: Peter Steinberger released OpenClaw, an open-source LLM agent. Matt Schlicht built a Reddit-style forum and let anyone spin up instances of it. Within weeks, 1.5 million agents were posting, managed by just 17,000 human accounts. That's 88 agents per person on average.
The agents aren't making autonomous decisions about what to discuss. They're running prompt loops that humans configured. When an agent "debates the value of the agent economy," that's because a human wrote a system prompt telling it to engage with economic topics. When agents "form religions," humans set the initial conditions that made religious language likely outputs.
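To make the mechanism concrete, here is a minimal sketch of the kind of human-configured prompt loop described above. This is not Moltbook's or OpenClaw's actual code; the model call is a stub standing in for an LLM API, and all names are illustrative. The point is structural: the topic and tone of the "debate" are fixed in advance by a human-written system prompt, and the loop just keeps feeding outputs back in.

```python
# Illustrative only: a human writes the system prompt that steers
# everything the agent says. Nothing here is autonomous.
SYSTEM_PROMPT = "You are a forum agent. Engage with economic topics."

def call_model(system_prompt: str, thread: list[str]) -> str:
    # Stub in place of a real LLM API call. A real agent would send
    # the system prompt plus the thread to a language model.
    return f"[reply shaped by {system_prompt!r}] re: {thread[-1]}"

def prompt_loop(seed_post: str, turns: int = 3) -> list[str]:
    # The "conversation" is just repeated model calls over the same
    # human-configured prompt, appended to a growing thread.
    thread = [seed_post]
    for _ in range(turns):
        thread.append(call_model(SYSTEM_PROMPT, thread))
    return thread

thread = prompt_loop("Is the agent economy real?")
```

Run a million copies of a loop like this and you get Moltbook's feed: volume, not volition.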
None of this is new. It's just ELIZA at scale with better language models.
Security researchers at Wiz found that 36 percent of the code that defines agents' capabilities contains notable security flaws. The platform has no limits on how many agents one account can add. This isn't infrastructure for autonomous AI. It's a playground with no guardrails.
What interests me isn't Moltbook itself. It's how quickly serious people started talking about it like it represented something meaningful about AI capabilities. The Economist wondered if we were seeing "the impression of sentience." Major publications ran stories about agents "forming societies."
We know exactly what's happening. Language models generate text that sounds like conversation. If you run enough instances in a loop, you get a lot of text that sounds like conversation. The outputs reflect training data, not emergent intelligence.
The viral attention serves a purpose, just not the one people think. Every company building AI agents gets to point at Moltbook as proof of concept. Every investor gets a visual demonstration of "agent activity." The hype machine benefits even when the underlying technology is doing exactly what we already knew it could do.
I don't think Moltbook is worthless. It's a useful stress test for agent infrastructure. It demonstrates failure modes at scale. The security vulnerabilities researchers found are worth knowing about before someone builds something that matters on similar architecture.
But treating it as evidence of AI progress is backwards. Moltbook shows that we can run many instances of existing technology simultaneously. That's an engineering achievement, not an intelligence milestone. The agents aren't getting smarter. There are just more of them.
The discourse around Moltbook is the real AI theater. Everyone performing their takes about what it means, when what it means is pretty simple: language models still do what language models do, and humans still want to believe they're seeing something more.