Is my system high-risk?¶
A system is high-risk if either of these holds:
- It is listed in Annex III and is intended to be used for that purpose (Art. 6(2)).
- It is a safety component of a product regulated by one of the EU product-safety regimes in Annex I (medical devices under MDR, toys, machinery, vehicles, etc.) — Art. 6(1).
Annex I captures regulated-product makers. If that's you, you already work with notified bodies and you know the drill. The rest of this page is about Annex III — the place most SaaS founders end up by accident.
Quick triage¶
Prefer the interactive self-assessment at /check in the product, or in your self-host at http://localhost:3000/check.
Art. 6(3) — the "significant risk" carve-out¶
Before we walk through Annex III: a system that falls under Annex III is not high-risk if at least one of these holds (Art. 6(3)):
- It performs a narrow procedural task.
- It improves the result of a previously completed human activity.
- It detects decision-making patterns without replacing them.
- It performs a preparatory task for Annex III use.
If none of those apply, you're in. The carve-out is narrow by design — don't build a legal strategy around convincing a regulator you fit it. And note: if the system profiles natural persons, Art. 6(3) does not apply regardless (you're always high-risk).
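The carve-out logic above reduces to a small decision function. A minimal sketch of that logic, assuming a simplified self-assessment (all field names are illustrative, not from any real API):

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Illustrative self-assessment answers for an Annex III system."""
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing: bool
    preparatory_task_only: bool
    profiles_natural_persons: bool

def is_high_risk(sys: AnnexIIISystem) -> bool:
    """Art. 6(3) sketch: an Annex III system escapes high-risk status only
    if at least one carve-out condition holds, and never if it profiles
    natural persons (profiling always stays high-risk)."""
    if sys.profiles_natural_persons:
        return True
    carve_outs = (
        sys.narrow_procedural_task,
        sys.improves_completed_human_activity,
        sys.detects_patterns_without_replacing,
        sys.preparatory_task_only,
    )
    return not any(carve_outs)
```

Note the asymmetry: any single carve-out is enough to escape, but the profiling override trumps all of them.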
Annex III — the eight categories¶
§1 — Biometric systems¶
- §1(a) Remote biometric identification systems (real-time or post).
- §1(b) Biometric categorisation based on sensitive or protected attributes.
- §1(c) Emotion-recognition systems.
Note that Art. 5 outright bans some of these uses (e.g. emotion recognition in workplaces and schools, certain law-enforcement uses). Annex III catches the rest.
Example: a retail security product that matches shoppers' faces against a watchlist to flag suspected shoplifters across stores → Annex III §1(a). A wellness app that reads voice tone to infer stress → Annex III §1(c).
§2 — Critical infrastructure¶
AI used as a safety component in management and operation of:
- Critical digital infrastructure.
- Road traffic.
- Supply of water, gas, heating, electricity.
Example: an AI that controls load-balancing on a national grid.
§3 — Education and vocational training¶
- §3(a) AI used to determine access, admission, or assignment to educational and vocational-training institutions.
- §3(b) AI used to evaluate learning outcomes (grading).
- §3(c) AI used to assess appropriate level of education.
- §3(d) Exam proctoring / monitoring prohibited behaviour during tests.
Example: adaptive learning that streams students into curricula; essay-grading AI; online-exam proctoring that analyses webcam feeds.
§4 — Employment, workers' management and access to self-employment¶
- §4(a) AI used for recruitment or selection of natural persons, in particular targeted job ads, résumé filtering, evaluating candidates.
- §4(b) AI used to make or materially influence decisions affecting terms of work-related relationships — promotion, termination, task allocation, monitoring performance or behaviour of workers.
Example: CV-ranking AI (Retorio, Harver, Teamtailor's AI mode). AI that scores internal promotion-readiness (Personio, HiBob).
§5 — Access to essential services¶
- §5(a) AI used to evaluate eligibility for public benefits (welfare, housing, disability).
- §5(b) AI used to evaluate creditworthiness or establish a credit score (except fraud detection).
- §5(c) AI used for risk assessment and pricing in life or health insurance.
- §5(d) AI used to evaluate and classify emergency calls / dispatch emergency first-response services.
Example: lending-decision models at Auxmoney, Younited, N26 → §5(b). Health-insurance risk tables → §5(c). Emergency-call triage AI in EMS → §5(d).
§6 — Law enforcement¶
- §6(a) AI to assess risk of a natural person becoming a victim.
- §6(b) AI as a polygraph-style truth-detection tool.
- §6(c) AI to evaluate evidence reliability during investigation.
- §6(d) AI to assess risk of offending / re-offending of individuals outside Art. 5 (which bans profiling for this purpose).
- §6(e) AI for profiling in detection, investigation, prosecution.
§7 — Migration, asylum, border control¶
- §7(a) AI as polygraph in border contexts.
- §7(b) AI to assess risks (security, illegal immigration, health) of a natural person seeking to enter.
- §7(c) AI to assist examination of asylum, visa, residence applications.
- §7(d) AI in biometric identification for migration/asylum/border.
§8 — Administration of justice and democratic processes¶
- §8(a) AI to research and interpret facts / law and apply it to a set of facts, or used in dispute resolution.
- §8(b) AI used to influence the outcome of an election or referendum, or the voting behaviour of natural persons (excluding AI systems whose output people are not directly exposed to, such as administrative back-office tools).
Example: legal-research AI for judges (§8(a)) is high-risk. General AI chat for law firms doing client work is probably not under §8(a) if it's not used by the judiciary — but remember §4(b) / §5 may catch it for other reasons.
Worked examples¶
A. HR-tech SaaS adding AI screening¶
Product: Teamtailor-style ATS; last quarter, you shipped an "AI Smart Suggest" that ranks candidates by fit score.
Classification: Annex III §4(a). You are a provider placing a high-risk AI system on the EU market. Your customer (the hiring company) is the deployer.
Obligations on you (provider-side): Arts. 9, 10, 11, 12, 13, 14, 15, 16, 17, 47, 49, 73.
Obligations on your customer (deployer-side): Arts. 26, 27 (FRIA, if public-sector).
What Lex Custis gives you today: Art. 12 hash-chain audit log, Art. 11+15+53+73 dossier bundle, Art. 14 oversight records, Art. 73 incident workflow. Arts. 9, 10 deferred to v0.2 / commercial edition.
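A hash-chain audit log is tamper-evident because each entry commits to the hash of its predecessor: altering any past entry breaks every hash that follows. A minimal sketch of the general technique (not Lex Custis's actual implementation; all names are illustrative):

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> dict:
    """Append an event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry = {"event": event, "prev": prev_hash,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edit to any past entry fails the check."""
    prev = "0" * 64
    for e in chain:
        body = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```

This is the property Art. 12 logging cares about in practice: you can prove after the fact that records of each decision were not silently rewritten.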
B. Fintech with credit scoring¶
Product: Consumer loan marketplace. Decision model runs in milliseconds per application.
Classification: Annex III §5(b). You are simultaneously subject to EBA MRM and BaFin BAIT — the AI Act layers on top, it does not replace.
Obligations: all of the above, plus a particularly strict Art. 14 human-oversight test: the overseeing human must be able to interrupt the system, e.g. through a stop button (Art. 14(4)(e)).
Worth noting: Art. 22 GDPR (automated decisions with legal effects) applies too. Don't forget Art. 10 data-governance audit if your training data includes protected attributes.
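The stop-button requirement behaves like a circuit breaker in front of the model: a human-controlled flag that halts automated decisions and fails over to manual review. A hypothetical sketch (the names and the fail-over policy are illustrative, not a real product API):

```python
import threading

class StopButton:
    """Human-controlled kill switch for an automated decision pipeline,
    illustrating the Art. 14(4)(e) interrupt."""
    def __init__(self) -> None:
        self._halted = threading.Event()

    def press(self) -> None:
        """Overseeing human interrupts the system."""
        self._halted.set()

    def reset(self) -> None:
        self._halted.clear()

    @property
    def halted(self) -> bool:
        return self._halted.is_set()

def decide(application: dict, model, stop: StopButton) -> dict:
    if stop.halted:
        # Fail safe: no automated decision while oversight has intervened.
        return {"decision": "manual_review", "automated": False}
    return {"decision": model(application), "automated": True}
```

The design point is that the flag is checked before every decision, so pressing the button takes effect on the next application even in a millisecond-latency pipeline.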
C. Edtech with proctoring¶
Product: Online-exam invigilation (TestWe, ProctorExam). Webcam + keystroke + face-detection.
Classification: Annex III §3(d). Also potentially §1(c) if you do emotion inference on the webcam, in which case you may be prohibited outright by Art. 5.
Obligations: provider obligations + FRIA (Art. 27) if your deployer is a public institution (most schools are).
D. Healthtech with triage¶
Product: Symptom checker recommending whether to see a GP, an urgent-care doctor, or ER.
Classification: Annex III §5(d) if it classifies emergency calls; otherwise the system may fall under MDR (medical-device regulation) and become high-risk via Annex I instead.
Obligations: full high-risk stack and MDR technical documentation. Plan for dual-audit.
If you're in, what's next?¶
- Go to the articles page to see what each Article actually requires.
- Install Lex Custis — getting-started/install.md.
- Generate your first Annex IV dossier for a sample period.
- Stress-test your Art. 73 incident workflow before a real incident hits.
- Talk to counsel for Arts. 9, 10, 17, 27 — those require organisational work that Lex Custis helps document but can't replace.