Most AI Agents Aren't Agents
The industry calls everything an "agent" now. Most of it is prompt chaining with extra steps. Here's what actually qualifies.
Every AI company's landing page has the word "agent" somewhere. Autonomous agents. Agentic workflows. AI agents for this, agents for that.
Most of what's being sold as "agents" are just prompt chains with a for-loop.
What makes something an agent
Three things:
- It decides what to do next (not a script)
- It can take actions—APIs, files, databases
- It can look at what happened and try something else
Most "agents" fail on the first one. They're workflows. The path is fixed; the LLM just fills in blanks.
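The distinction is easiest to see in code. Here's a minimal sketch, with a hypothetical `call_llm` stub standing in for a real model so it runs: in the workflow, the sequence of steps is hard-coded and the model only fills in text; in the agent loop, the model's output decides which action runs next.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (hypothetical, deterministic so this runs)."""
    if prompt.startswith("next action"):
        # "Decide": search first, stop once there's an observation.
        return "search" if "observation:" not in prompt else "done"
    return f"text for: {prompt[:30]}"

def search(query: str) -> str:
    """Stand-in for a tool call."""
    return f"results for {query!r}"

# --- Workflow: the path is fixed; the LLM just fills in blanks. ---
def workflow(topic: str) -> str:
    summary = call_llm(f"summarize {topic}")   # step 1, always runs
    return call_llm(f"draft from {summary}")   # step 2, always runs

# --- Agent: the model's own output picks the next step. ---
def agent(goal: str, max_steps: int = 5) -> list[str]:
    observations: list[str] = []
    for _ in range(max_steps):                 # bounded loop, not a fixed path
        prompt = f"next action for {goal}; " + "; ".join(
            f"observation: {o}" for o in observations
        )
        action = call_llm(prompt)              # model decides what happens next
        if action == "done":
            break
        observations.append(search(goal))      # act, then feed the result back
    return observations
```

Swap in different stub behavior and `workflow` still runs the same two steps in the same order; `agent` takes a different path entirely. That's the whole difference.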
Why I care about the distinction
It changes everything about how you build.
Workflows are predictable. You know what's going to happen, and when something breaks, you know where to look. Agents give you none of that. When an agent fails, you're reading logs for an hour trying to figure out what it was even attempting.
Cost is different too. Agents explore. They burn tokens trying things. A workflow does exactly what you told it to do, so its spend is roughly the same every run.
If you need reliability—and in production, you do—you probably want a workflow.
Where agents actually make sense
I've seen agents work when:
- The problem is too open-ended to predefine
- Exploration has value (research, discovery tasks)
- Humans review before anything real happens
For everything else, a workflow wins. And "everything else" is most enterprise use cases.
What I actually see shipping
The pattern that works in production:
Structured workflows handle the predictable 80% of cases. Agent-like flexibility shows up only at specific decision points. Humans step in when confidence is low.
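A rough sketch of that hybrid shape. Everything here is a hypothetical stub (`classify`, the handler table, the 0.8 threshold are illustrative, not from any real system): the routing paths are fixed and auditable, one decision point lets a model pick among them, and anything below the confidence threshold goes to a person.

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff, tune per task

def classify(ticket: str) -> tuple[str, float]:
    """Stand-in for a model call returning (label, confidence)."""
    if "refund" in ticket:
        return "billing", 0.95
    return "unknown", 0.40

# The predictable 80%: fixed, auditable handler paths.
HANDLERS = {
    "billing": lambda t: f"routed to billing: {t}",
    "shipping": lambda t: f"routed to shipping: {t}",
}

def handle(ticket: str) -> str:
    label, confidence = classify(ticket)  # the one agent-like decision point
    if confidence < CONFIDENCE_THRESHOLD or label not in HANDLERS:
        return f"escalated to human: {ticket}"  # low confidence -> a person
    return HANDLERS[label](ticket)
```

The model gets exactly one bounded choice, and the fallback is a human queue rather than more autonomy. When it fails, you know which of three branches it took.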
The fully autonomous agent that handles everything? Haven't seen one work reliably. Not yet. Maybe next year, but I've been saying that for a while now.
When someone shows you an "agent," ask: is this making decisions, or is it filling in a template? The answer matters more than the marketing.