EU AI Act
Risk tiers, GPAI obligations, 2025-27 enforcement timeline.
EU AI Act — horizontal regulation with extraterritorial bite
The EU AI Act is the first major horizontal regulation of AI. If you put an AI system on the EU market, or if the output of your AI system is used in the EU, the Act applies to you — regardless of where you are based.
This lesson summarises what a product/engineering team actually needs to know. It is not legal advice. For compliance, engage counsel.
The four risk tiers
The Act classifies AI systems by risk.
- Prohibited. Social scoring, subliminal manipulation, real-time remote biometric ID in public spaces (with narrow exceptions), emotion recognition in workplaces and schools. Banned since February 2025.
- High-risk. Systems used in critical infrastructure, education, employment, essential services, law enforcement, migration, justice, democratic processes. Heavy obligations: risk management, data governance, transparency, human oversight, accuracy, robustness, cybersecurity, post-market monitoring.
- Limited risk. Chatbots, emotion-recognition systems, biometric categorisation, deepfakes — transparency obligations apply (Article 50).
- Minimal risk. Everything else — no specific obligations, but voluntary codes of conduct are encouraged.
General-Purpose AI (GPAI) — the rules for model providers
GPAI models (e.g., foundation models) have their own regime, separate from the four risk tiers above.
- Transparency, copyright compliance, and technical documentation are baseline obligations for all GPAI providers.
- Systemic-risk GPAI — models trained with more than 10²⁵ FLOPs of cumulative compute are presumed to pose systemic risk — carry additional obligations: model evaluations, systemic risk assessments, adversarial testing, serious-incident reporting, and cybersecurity protections.
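As a rough back-of-the-envelope check (an assumption for illustration, not a method prescribed by the Act), training compute is often approximated as 6 × parameters × training tokens:

```python
# Rough estimate of cumulative training compute, using the common
# approximation FLOPs ~= 6 * N_params * N_tokens. Illustrative only --
# the Act's 10^25 FLOP presumption is assessed on actual compute used.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training FLOPs via the 6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24, below the threshold
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

Note that a model can also be designated as systemic-risk by the Commission regardless of compute, so this estimate is a screening heuristic at best.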
The enforcement timeline that matters now
- 2 February 2025 — prohibited-AI bans and AI-literacy requirements in force.
- 2 August 2025 — GPAI obligations became applicable. Providers placing new GPAI models on the market must comply.
- 2 August 2026 — the Commission's full enforcement powers begin, and most remaining obligations (including those for high-risk systems) apply. The AI Office may request information, order recalls, mandate mitigations, and impose fines.
- 2 August 2027 — legacy GPAI models (placed on market before 2 August 2025) must comply.
The deadlines cascade: if you are planning any new GPAI-tier or high-risk deployment in 2026, the work is now.
Provider vs deployer
Two distinct legal roles, two distinct sets of obligations.
- Provider — develops an AI system and places it on the market or puts it into service. Carries most of the technical compliance burden: documentation, conformity assessment, CE marking for high-risk systems, and GPAI obligations.
- Deployer — uses a provider's AI system under their own authority. Carries operational obligations: human oversight, use within intended purpose, logging, data-subject information.
If you build and deploy your own agent, you are both. If you embed a third-party LLM in your product, you may be both a provider (of your agent) and a deployer (of the LLM).
Six steps to take before 2 August 2026
The Orrick and DLA Piper analyses of the Act converge on roughly this shortlist for a product team:
- Inventory all AI systems and determine classification (prohibited / high-risk / limited / minimal / GPAI).
- Map providers, deployers, and affected users for each system.
- Run a DPIA-style impact assessment for any high-risk system, documenting residual risk.
- Set up human oversight in accordance with Article 14 — meaningful, not nominal.
- Wire transparency disclosures per Article 50 (so users know they are interacting with an AI).
- Establish incident reporting, post-market monitoring, and audit trails sufficient to defend your classification to a supervising authority.
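The first two steps above can be sketched as a simple inventory record. This is an illustrative data model under our own assumptions — the field names, tiers, and obligation checks are simplifications, not terms of art from the Act:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row in an AI-system inventory (illustrative sketch)."""
    name: str
    tier: RiskTier
    is_gpai: bool = False                  # GPAI regime applies on top of tiers
    provider: str = ""                     # who places it on the market
    deployers: list[str] = field(default_factory=list)
    transparency_disclosed: bool = False   # Article 50 disclosure wired in?
    human_oversight: bool = False          # Article 14 oversight (high-risk)

    def open_obligations(self) -> list[str]:
        """Flag obvious compliance gaps; a real review needs counsel."""
        gaps = []
        if self.tier is RiskTier.PROHIBITED:
            gaps.append("prohibited practice: must be discontinued")
        if self.tier is RiskTier.HIGH and not self.human_oversight:
            gaps.append("set up Article 14 human oversight")
        if self.tier is RiskTier.LIMITED and not self.transparency_disclosed:
            gaps.append("add Article 50 transparency disclosure")
        return gaps

chatbot = AISystemRecord("support-chatbot", RiskTier.LIMITED, provider="Acme")
print(chatbot.open_obligations())  # -> ['add Article 50 transparency disclosure']
```

Even a crude inventory like this makes the later steps (impact assessments, oversight, audit trails) much easier to scope, because every system already has a tier and an owner on record.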
Extraterritorial scope
The Act applies if:
- The provider places the AI system on the EU market, wherever the provider is based, or
- The provider or deployer is established in the EU, or
- The AI system's output is used in the EU.
US or UK teams are not exempt.