LearnAIAgents
🎨 Design

Do you actually need an agent?

Direct call → augmented LLM → workflow → autonomous agent.

The four levels of "AI thing"

Before you decide you need an agent, check whether something simpler will do. Anthropic's formulation is clearest: four levels of capability, and you should pick the lowest one that solves the problem.

| Level | What it is | When it fits |
| --- | --- | --- |
| Direct LLM call | One prompt, one response. No tools, no loop. | Summarising, rewriting, classifying, short-form Q&A. Solves 60%+ of requests that sound like "use AI for X". |
| Augmented LLM | LLM + retrieval + (maybe) one-shot tool use. | Grounded Q&A, lookup-and-answer, draft-with-sources. |
| Workflow | Multiple LLM calls in predefined paths. Deterministic, auditable. | Multi-step processes with known structure: extract → validate → summarise → route. |
| Autonomous agent | LLM directs its own process. Dynamic, adaptive. | Complex, judgement-driven tasks where the path is not known up front. |
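The four levels can be sketched side by side. This is a minimal illustration, not a real SDK: `llm()` and `retrieve()` are hypothetical stand-ins for a model call and an index lookup.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"response to: {prompt}"

def retrieve(query: str) -> list[str]:
    """Hypothetical stand-in for an index or help-centre lookup."""
    return [f"snippet about {query}"]

# Level 1: direct LLM call -- one prompt, one response.
def direct_call(question: str) -> str:
    return llm(question)

# Level 2: augmented LLM -- retrieval grounds the answer.
def augmented(question: str) -> str:
    context = "\n".join(retrieve(question))
    return llm(f"Answer from this context only:\n{context}\n\nQ: {question}")

# Level 3: workflow -- a fixed, auditable sequence of steps.
def workflow(document: str) -> dict:
    extracted = llm(f"Extract key fields: {document}")
    summary = llm(f"Summarise: {extracted}")
    return {"fields": extracted, "summary": summary, "route": "review-queue"}

# Level 4: autonomous agent -- the model chooses the next step in a loop.
def agent(task: str, max_steps: int = 5) -> list[str]:
    steps: list[str] = []
    for _ in range(max_steps):
        action = llm(f"Task: {task}. Steps so far: {steps}. Next action?")
        steps.append(action)
        if action.strip().upper() == "DONE":  # stop when the model says so
            break
    return steps
```

Note where control flow lives at each level: in levels 1–3 the code decides what happens next; only in level 4 does the model's output steer the loop.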

The jump from workflow to autonomous agent is the biggest. You are trading determinism for adaptability and paying for it in cost, latency, and governance overhead.

When is an agent actually the right answer?

OpenAI's *A Practical Guide to Building Agents* gives three signals that point toward an agent. If none of these apply, you probably do not need one.

  • Complex decision-making. The task requires nuanced judgement, not rule-following. Multiple factors, ambiguity, context-dependence.
  • Difficult-to-maintain rules. There are so many exceptions and edge cases that hardcoded logic becomes brittle. The rules change faster than you can update them.
  • Heavy unstructured data. Emails, documents, conversations, images. Data that does not fit neatly into spreadsheets or databases.
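The three signals above can be read as a simple gate. This is a hypothetical checklist, not code from the OpenAI guide: an agent is only on the table if at least one signal applies.

```python
def agent_warranted(complex_judgement: bool,
                    brittle_rules: bool,
                    heavy_unstructured_data: bool) -> str:
    """Hypothetical gate: an agent is worth considering only when
    at least one of the three signals applies."""
    if any((complex_judgement, brittle_rules, heavy_unstructured_data)):
        return "agent is on the table"
    return "pick a lower level"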

The cost of picking the wrong level

Over-complicating kills projects more often than under-complicating them. An autonomous agent for a task a workflow could handle is not merely expensive: it is harder to evaluate, harder to monitor, harder to govern, and harder to debug. You are paying a capability tax for power you do not need.

Under-complicating is recoverable: you can always wrap a direct call in more structure later. Over-complicating tends to lock in architectural choices and runtime costs that are hard to back out.

Worked example

"Our support team is drowning in Tier-1 questions. Should we build an agent?"

  • Direct call? Could a fine-tuned model answer Tier-1 directly? Probably, but without a tool to open tickets or update records, it would just produce text.
  • Augmented LLM? Yes — retrieve from the help centre and a recent-tickets index, then respond. This likely handles most of the volume.
  • Workflow? If you also need to classify, route, draft a reply, and log an action, a workflow fits.
  • Autonomous agent? Only if the agent should decide whether to refund, escalate, or call another system — and those decisions need to be made dynamically.
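The workflow option from this example can be sketched as a fixed pipeline: classify → retrieve → draft → log, with one hardcoded escalation branch. Everything here is a hypothetical placeholder (`llm`, `search_help_centre`, `log_ticket`); the point is that every branch is decided by code, not by the model.

```python
def llm(prompt: str) -> str:
    """Hypothetical model stub: classifies or drafts, depending on the prompt."""
    return "tier1" if "Classify" in prompt else "drafted reply"

def search_help_centre(question: str) -> list[str]:
    """Hypothetical help-centre / recent-tickets retrieval."""
    return [f"article about {question}"]

TICKET_LOG: list[dict] = []

def log_ticket(question: str, reply: str, route: str) -> None:
    TICKET_LOG.append({"q": question, "reply": reply, "route": route})

def handle(question: str) -> str:
    # Step 1: classify. The branch below is fixed code, not an agent decision.
    tier = llm(f"Classify as tier1/tier2: {question}")
    if tier != "tier1":
        log_ticket(question, reply="", route="escalate")
        return "escalated to a human"
    # Steps 2-4: retrieve, draft, log -- always in this order.
    context = search_help_centre(question)
    reply = llm(f"Draft a reply using {context}: {question}")
    log_ticket(question, reply, route="tier1")
    return reply
```

Only if the `handle` function needed to decide at runtime whether to refund, escalate, or call another system would the autonomous-agent level start to pay for itself.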

The guidance is: start with an augmented LLM, graduate to a workflow once you know the shape of the process, and add autonomy only to the parts of the flow where the rules keep changing.