The EU AI Act, plainly
Regulation (EU) 2024/1689 — the EU AI Act — is the world's first horizontal regulation of artificial-intelligence systems. It was signed on 13 June 2024 and entered into force on 1 August 2024. It applies to you if you place an AI system on the EU market, put one into service in the EU, or if the output your system produces is used in the EU, regardless of where your company is headquartered (Art. 2).
This page walks you through the structure. Then the articles page covers the specific obligations you owe as a provider or deployer of a high-risk system.
The four risk tiers
The Act classifies AI systems into four buckets. The obligations scale with the bucket.
1. Prohibited practices (Art. 5)
Already in force since 2 February 2025. Banned outright:
- Subliminal or manipulative techniques that distort behaviour causing harm.
- Exploiting vulnerabilities of specific groups (age, disability, socio-economic).
- Social scoring that leads to detrimental or unjustified treatment in unrelated contexts (the ban covers both public authorities and private actors).
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (narrow exceptions: terrorism, missing children, suspect of serious crime).
- Individual predictive policing based solely on profiling.
- Untargeted facial-recognition database scraping.
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation inferring race, political opinions, trade union membership, religion, sex life, sexual orientation.
If your product does any of this, stop shipping it to the EU.
2. High-risk AI systems (Art. 6 + Annex I / Annex III)
The bulk of the regulation. Two routes into high-risk:
- Annex I: AI system that is a safety component of a product already regulated by other EU product-safety law (medical devices, machinery, toys, vehicles…).
- Annex III: an eight-category list of use-cases the Act considers high-risk by intended purpose. See Is my system high-risk?.
High-risk obligations kick in on 2 August 2026 for most categories, and on 2 August 2027 for Annex I systems already regulated under existing product-safety law.
The rest of this documentation is mostly about these systems.
3. Limited risk — transparency obligations (Art. 50)
Systems that interact with natural persons must disclose that they are AI. AI-generated or manipulated content (deep-fakes, synthetic text, audio, images) must be marked as artificially generated in a machine-readable, detectable way.
Applies from 2 August 2026. Lex Custis renders the "AI-generated" marker on every model output and can embed it in exports.
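As a sketch of what a machine-readable marker can look like, here is a hypothetical JSON wrapper for a model output. The function name and field names are illustrative, not the Lex Custis API or a format the Act prescribes; real marking schemes (C2PA manifests, watermarks) depend on the media type and export pipeline.

```python
import json
from datetime import datetime, timezone

def mark_ai_generated(text: str, model: str) -> str:
    """Wrap a model output in an Art. 50-style machine-readable marker.

    Illustrative sketch only: names and fields are hypothetical.
    """
    return json.dumps({
        "content": text,
        "ai_generated": True,          # the machine-readable flag
        "generator": model,
        "marked_at": datetime.now(timezone.utc).isoformat(),
    })
```

Tooling in this spirit would attach such a record alongside every rendered output so that downstream systems can detect the marker without parsing the content itself.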
4. Minimal risk
Everything else. No obligations. Most LLM-powered features that just summarise or draft text fall here — until they start influencing a decision about a natural person, at which point they may cross into Annex III.
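The tier structure above can be summarised as a simple lookup — a simplified sketch of tiers mapped to obligation summaries, not a legal classification:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Art. 5: banned since 2 Feb 2025
    HIGH_RISK = "high_risk"     # Art. 6 + Annex I / Annex III
    LIMITED = "limited"         # Art. 50 transparency duties
    MINIMAL = "minimal"         # no obligations

# Simplified tier -> obligations summary (illustrative, not exhaustive)
TIER_OBLIGATIONS = {
    RiskTier.PROHIBITED: "banned outright (Art. 5)",
    RiskTier.HIGH_RISK: "full compliance regime (Arts. 8–49)",
    RiskTier.LIMITED: "transparency duties only (Art. 50)",
    RiskTier.MINIMAL: "none",
}

def obligations(tier: RiskTier) -> str:
    """Look up the (simplified) obligation summary for a tier."""
    return TIER_OBLIGATIONS[tier]
```

Note the asymmetry this encodes: a feature can move between tiers as its intended purpose changes, which is why the minimal-risk bucket is a starting point, not a permanent home.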
Who the Act talks about
Four roles:
| Role | Definition (Art. 3) | Your reality |
|---|---|---|
| Provider | Whoever develops or has developed an AI system / GPAI model and places it on the market | You, if you ship a SaaS with AI |
| Deployer | Whoever uses an AI system under their authority in a professional capacity | Your customer who enables the AI feature; sometimes also you |
| Authorised representative | Natural or legal person in the EU with written mandate from a non-EU provider | Matters if you HQ outside the EU |
| Importer / distributor | Those who put foreign AI systems on the EU market | Rare for pure SaaS |
Key point: a single product can make you a provider (of the feature) and put your customer in the deployer seat simultaneously, and each of you owes different obligations. Art. 13 governs the information a provider must hand to deployers; Art. 26 sets out the deployer's duties, including monitoring the system in use and reporting problems back to the provider.
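A rough sketch of how the roles combine, with hypothetical parameter names — the real Art. 3 tests turn on placing on the market, putting into service, and professional use under one's own authority, so treat this as an illustration, not legal advice:

```python
def roles(develops_system: bool, uses_under_own_authority: bool,
          in_eu: bool) -> set[str]:
    """Rough sketch of Art. 3 role assignment for one product."""
    held = set()
    if develops_system:
        held.add("provider")
        if not in_eu:
            # Non-EU providers must appoint an EU authorised representative
            held.add("authorised representative needed")
    if uses_under_own_authority:
        held.add("deployer")
    return held

# A SaaS vendor that also runs its own AI feature internally holds both roles
assert roles(develops_system=True, uses_under_own_authority=True,
             in_eu=True) == {"provider", "deployer"}
```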
Timeline that matters
| Date | What applies |
|---|---|
| 1 August 2024 | Regulation enters into force |
| 2 February 2025 | Art. 5 prohibitions + Art. 4 AI literacy — already in force |
| 2 August 2025 | GPAI obligations (Art. 53/55), notified bodies, governance chapters |
| 2 August 2026 | Most high-risk obligations (Arts. 6–49, 73) — this is the headline deadline |
| 2 August 2027 | High-risk systems embedded in Annex I product-safety regimes |
| 2 August 2030 | High-risk AI systems already in use by public authorities before 2 Aug 2026 must be brought into compliance |
If you ship any Annex III AI today, plan against 2 August 2026.
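A small planning helper, assuming the application dates from the table above (the milestone labels are illustrative):

```python
from datetime import date

# Application dates from the Act's transition schedule
DEADLINES = {
    "Art. 5 prohibitions": date(2025, 2, 2),
    "GPAI obligations": date(2025, 8, 2),
    "high-risk (Annex III)": date(2026, 8, 2),
    "high-risk (Annex I)": date(2027, 8, 2),
}

def days_remaining(milestone: str, today: date) -> int:
    """Days until a milestone applies; negative once it is in force."""
    return (DEADLINES[milestone] - today).days

# e.g. seen from 1 Jan 2026, the headline deadline is 213 days out
assert days_remaining("high-risk (Annex III)", date(2026, 1, 1)) == 213
```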
Penalties (Art. 99)
- Up to €35 million or 7 % of global annual turnover (whichever is higher) for prohibited-practice violations.
- Up to €15 million or 3 % for most high-risk obligation failures (Arts. 9–17, 19, 23, 26, 27, 48, 49, 72, 73).
- Up to €7.5 million or 1 % for providing incorrect or misleading information to authorities.
For SMEs and start-ups, each tier caps at the lower of the fixed amount and the percentage (Art. 99(6)); the Regulation is slightly kinder to SMEs than to large enterprises.
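The cap arithmetic, sketched (amounts in euro; the `sme` flag switches between the higher-of and lower-of rules):

```python
def fine_ceiling(turnover_eur: int, fixed_cap_eur: int, pct: int,
                 sme: bool = False) -> float:
    """Art. 99 ceiling: the higher of the fixed cap and pct % of worldwide
    annual turnover; for SMEs the *lower* of the two applies."""
    pct_cap = turnover_eur * pct / 100
    return min(fixed_cap_eur, pct_cap) if sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice tier: €35 M or 7 % of turnover, whichever is higher
assert fine_ceiling(1_000_000_000, 35_000_000, 7) == 70_000_000
# Same violation by an SME with €10 M turnover: the lower figure applies
assert fine_ceiling(10_000_000, 35_000_000, 7, sme=True) == 700_000
```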
Governance
Each Member State designates at least one national Market Surveillance Authority (MSA). They run investigations, can order corrective measures, and can require that a non-compliant system be withdrawn from the market (Art. 79). Examples:
- Germany: Bundesnetzagentur (BNetzA) + sectoral regulators (BaFin for finance, BfDI for data protection, BfArM for medical devices).
- France: Direction Générale des Entreprises + CNIL for data-protection aspects.
- Netherlands: RDI / Autoriteit Persoonsgegevens.
- Italy: MIMIT.
- Ireland: DPC for data-protection angles, the CCPC for market surveillance.
The European Commission's AI Office coordinates across Member States and enforces GPAI obligations directly.
What the Act does not do
- Replace GDPR. If your AI processes personal data, GDPR still applies on top. The two regulations interlock but do not merge.
- Create a private right of action automatically. Most remedies are public-law (MSA corrective measures, fines). National law may still create private claims (anti-discrimination, product liability).
- Pre-approve your system. There is no FDA-style pre-market approval — you self-certify conformity (Arts. 43, 47, 48), register in the EU database (Arts. 49, 71), and face audits post-hoc.
Where Lex Custis fits
Lex Custis is not a lawyer and not a regulator. It is an engineering substrate that produces the evidence artefacts the regulation requires — logs, dossiers, attestations, incidents, metrics — in a form a regulator can ingest and a DPO can verify.
Next: Is my system high-risk?.