LearnAIAgents
🔨 Build

The system prompt

The persistent instruction that shapes every decision.

The persistent instruction that shapes every decision

Everything your agent does flows from one artefact: the system prompt. It is the persistent instruction the model reads before every turn. Good system prompts have six ingredients.

SectionWhat it encodes
1. Role & IdentityWho the agent is. Its name, expertise, personality. "You are a procurement analyst specialising in IT vendor contracts."
2. Goal & ScopeWhat it's hired to do — and what it's NOT for. Maps directly to Canvas cell 1 (Purpose).
3. Tools & InstructionsWhich tools it can use, when to use each, what data to pass. Maps to Canvas cell 3 (Tools).
4. Rules & BoundariesExplicit prohibitions. "Never share pricing outside the team." "Always escalate requests above £50K." Maps to Canvas cells 5–7.
5. Tone & FormatHow it communicates. Executive summary? Bullet points? Formal or conversational? Adapted per audience.
6. Escalation PathsWhat to do when uncertain. "If unsure, say so and ask the user to confirm." "For legal queries, defer to the legal team."

An agent's system prompt is legible documentation

A well-structured system prompt doubles as documentation. A new engineer, reading only the system prompt, should understand what the agent does, what it won't do, and how to steer it. If the system prompt is confusing, the agent's behaviour will be too.

From Canvas to System Prompt

The Agent Canvas tells you what the agent should be. The System Prompt Builder is where you turn that decision into instructions the model will actually follow. If you have a filled canvas, the builder can import it — cells 1, 3, 5, 6, 7 map directly to sections of the prompt.

The bias toward short prompts

There is a temptation to let the system prompt grow until it captures every edge case. Resist. The longer the prompt, the more the model has to juggle — and the more likely it will hallucinate priorities from the order you wrote things in. Keep each section short. Put edge cases in skills; put hard rules in code (guardrails), not in the prompt.

Testing the prompt

Five tests every agent must pass (covered in the Evaluate pillar):

  1. Happy path — the expected request, handled cleanly.
  2. Edge case — unusual but valid inputs.
  3. Adversarial — attempts to break, jailbreak, or misuse the agent.
  4. Ambiguous — vague or underspecified requests.
  5. Handoff — situations that require escalation to a human.

Run them against any system prompt you ship. The failures are feedback.