Articles that bind you¶
This page walks through each article a high-risk AI provider (or deployer) is likely to be asked about, in plain English, with the mapping to what Lex Custis ships in the OSS v0.1 release.
Scope¶
We cover the articles most people actually touch in day-to-day engineering. Arts. 40–49 (standards, conformity assessment, notified bodies, registration) are important but outside engineering scope — they're compliance-officer / legal work. We link to them at the bottom.
Art. 9 — Risk management system¶
What it says. A continuously updated risk-management system must be established, documented, maintained, and reviewed throughout the whole lifecycle of the AI system. Identify + estimate + evaluate risks reasonably foreseeable under intended use and reasonably foreseeable misuse, and adopt mitigation measures (Art. 9(2)).
What it means in practice. A live risk register, reviewed on a cadence (quarterly / per-release), per deployed system. Not a one-off doc.
Lex Custis v0.1. Not yet. The compliance report's "Risk Assessment" table is boilerplate. Full risk-register UI is commercial (Sprint 4).
Art. 10 — Data and data governance¶
What it says. Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete (Art. 10(3)); have appropriate statistical properties (also 10(3)); and the provider must apply governance practices covering data collection, labelling, bias examination, cleaning, etc. (10(2)).
What it means. Your training data needs a written provenance and bias profile. For systems like CV rankers and credit scorers, this is where disparate-impact analysis lives.
Lex Custis v0.1. Partial. The dossier
bundle includes a provider_manifest.json
listing which LLM snapshot served each inference (a tiny piece of
Art. 10 for the inference data path). A full dataset registry +
representativeness reports is commercial (Sprint 5).
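For illustration, a per-period provider manifest of this kind might look like the following. Every field name here is hypothetical, chosen to show the idea; the shipped schema is whatever the dossier bundle actually emits:

```json
{
  "period": "2025-06",
  "providers": [
    {
      "vendor": "openai",
      "model_snapshot": "gpt-4o-2024-08-06",
      "first_used": "2025-06-01T00:00:00Z",
      "last_used": "2025-06-30T23:59:59Z",
      "inference_count": 1423
    }
  ]
}
```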
Art. 11 — Technical documentation¶
What it says. Every high-risk provider must draw up and keep up to date technical documentation before placing the system on the market. The minimum contents are set out in Annex IV: intended purpose, system description, development process, monitoring and control measures, validation and testing data, performance metrics, the risk-management system, and changes over the lifecycle.
What it means. An engineering dossier you can hand to a regulator on request. Reviewed and regenerated at every material release.
Lex Custis v0.1. The /compliance → Download Annex IV dossier
button bundles this into a zip with a verifiable manifest. See
architecture/dossier.md. The PDF contents
are currently a first pass — they're Annex IV–structured but still need
customer-specific intended-purpose and deployment-context details
which you supply via config (commercial UI in Sprint 5).
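Offline verification of such a zip reduces to recomputing each file's digest against the manifest. A minimal sketch, assuming a `manifest.json` that maps archive paths to SHA-256 hex digests (the shipped manifest layout is documented in architecture/dossier.md and may differ):

```python
import hashlib
import json
import zipfile

def verify_dossier(zip_path: str, manifest_name: str = "manifest.json") -> bool:
    """Recompute the SHA-256 of every file listed in the manifest.

    Returns False on the first mismatch, so a tampered or truncated
    dossier fails fast.
    """
    with zipfile.ZipFile(zip_path) as zf:
        manifest = json.loads(zf.read(manifest_name))
        for name, expected in manifest["sha256"].items():
            actual = hashlib.sha256(zf.read(name)).hexdigest()
            if actual != expected:
                return False
    return True
```

The point of the design is that this check needs nothing from the running system: the zip alone is enough evidence.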
Art. 12 — Record-keeping (automatic logs)¶
What it says. High-risk AI systems must technically allow for automatic recording of events over the system's lifetime. Logs must enable (a) identification of situations that may cause the system to present a risk, (b) facilitation of post-market monitoring, (c) monitoring by deployers (Art. 12(2)).
The Act specifies log contents for remote biometric identification systems (Art. 12(3)). For other high-risk systems, the log scope is defined by the provider's risk-management process under Art. 9.
Retention. Art. 19: at least six months, or the period required by sector-specific Union law (up to 10 years in finance / insurance). Lex Custis retains indefinitely (append-only at DB role level) — over-provisioned is safer than under.
What it means. The core evidence the regulation expects. Tamper-evident. Queryable. Exportable.
Lex Custis v0.1. Core feature. HMAC-SHA-256 per-org chain with HKDF-derived subkey held outside Postgres. REVOKE-based append-only. Integrity verifiable offline from the dossier zip. See architecture/hash-chain.md.
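To make the chain idea concrete, here is a minimal verification sketch. It assumes a simple `tag = HMAC(key, prev_tag || entry)` chaining rule with an all-zero genesis value; the actual construction (and the HKDF derivation of the per-org subkey held outside Postgres) is specified in architecture/hash-chain.md:

```python
import hashlib
import hmac

def verify_chain(entries: list[bytes], tags: list[str], key: bytes) -> bool:
    """Recompute the HMAC-SHA-256 chain and compare to the stored tags.

    Each tag covers the previous tag plus the entry payload, so any
    insertion, deletion, or in-place edit breaks every tag downstream.
    """
    if len(entries) != len(tags):
        return False
    prev = b"\x00" * 32  # genesis value for the first link (assumption)
    for entry, stored in zip(entries, tags):
        tag = hmac.new(key, prev + entry, hashlib.sha256).digest()
        if not hmac.compare_digest(tag.hex(), stored):
            return False
        prev = tag
    return True
```

Because the key never lives in Postgres, a database role that can only INSERT cannot forge a valid continuation of the chain.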
Art. 13 — Transparency and provision of information to deployers¶
What it says. The provider must ship instructions for use (IFU) with every high-risk system. The IFU contains the intended purpose, characteristics, capabilities, limitations, expected accuracy / bias behaviour, human-oversight measures, expected lifetime.
What it means. Your customers (the deployers) need a living document that keeps pace with your releases.
Lex Custis v0.1. Surfaces intended purpose + limitations + model snapshot per-release in the dossier. A full versioned IFU generator (one per deployer, one per release) is commercial (Sprint 5).
Art. 14 — Human oversight¶
What it says. The system must be designed so that the humans overseeing it can (Art. 14(4)):
- understand capabilities + limitations of the system
- remain aware of automation bias
- correctly interpret outputs
- decide not to use an output or override it
- intervene on the operation of the AI system or interrupt the system through a "stop" button or similar procedure
For remote biometric ID, additionally: no action/decision based on identification unless verified by at least two natural persons with necessary competence.
What it means. A meaningful human in the loop, with an off-switch. UX-level "you can reject this" is not enough — the deployer's operator must be able to pause the system itself.
Lex Custis v0.1. Per-output accept / modify / reject recorded
in a separate audit_log_oversight table so the main chain stays
append-only. Org-level "pause the AI" toggle is commercial (Sprint 2).
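Conceptually, the oversight table is just an append-only record of per-output decisions, kept apart from the hash-chained log. A minimal SQLite sketch of that shape (column names here are hypothetical; the real schema ships in the Lex Custis migrations, and production uses Postgres):

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical column set, for illustration only.
SCHEMA = """
CREATE TABLE IF NOT EXISTS audit_log_oversight (
    id INTEGER PRIMARY KEY,
    output_id TEXT NOT NULL,
    reviewer TEXT NOT NULL,
    decision TEXT NOT NULL CHECK (decision IN ('accept', 'modify', 'reject')),
    decided_at TEXT NOT NULL
)
"""

def record_decision(conn: sqlite3.Connection, output_id: str,
                    reviewer: str, decision: str) -> None:
    """Append one human-oversight decision; rows are never updated."""
    conn.execute(
        "INSERT INTO audit_log_oversight"
        " (output_id, reviewer, decision, decided_at) VALUES (?, ?, ?, ?)",
        (output_id, reviewer, decision,
         datetime.now(timezone.utc).isoformat()),
    )
```

Keeping these rows out of the main chain means a burst of reviewer activity never forces re-verification of the inference log itself.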
Art. 15 — Accuracy, robustness, cybersecurity¶
What it says. Declare accuracy + robustness metrics, achieve appropriate levels through the lifecycle, and ensure technical robustness against errors, inconsistencies, adversarial inputs (15(4)). Resilience-tested against attempts to alter the system's use, outputs, or performance.
What it means. Published accuracy / drift metrics that a regulator can reference. Ongoing monitoring for drift.
Lex Custis v0.1. Rolling aggregates of confidence / grounding /
bias-flag rate in the dossier metrics.json. Statistical drift
detection (KS / PSI) with alerts is commercial (Sprint 4).
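For readers wondering what a PSI check involves, here is a minimal, generic implementation over pre-binned counts. This is a sketch of the standard formula, not the commercial detector:

```python
import math

def psi(expected: list[int], actual: list[int], eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift worth alerting on. Both count lists
    must use the same bins in the same order.
    """
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)  # floor avoids log(0) on empty bins
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score
```

Run against, say, weekly confidence-score histograms versus the baseline period, a rising PSI is the signal that Art. 15's "appropriate levels through the lifecycle" is slipping.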
Art. 16 — Provider obligations (summary)¶
Art. 16 is the one-line summary that points at 9–15, 17, 19, 47, 49, 72, 73, etc. If you ticked high-risk, you owe this whole list as a provider.
Art. 17 — Quality management system (QMS)¶
What it says. Providers establish a QMS covering regulatory compliance, design / development, testing, post-market monitoring, data management, incident management.
What it means. Documented engineering SOPs. Change-control, release gates, testing policies, incident-response playbooks.
Lex Custis v0.1. We ship a QMS posture (CI with security + test gates, SECURITY.md, CONTRIBUTING.md, semver commitments). The customer still needs to stand up their own QMS on top.
Art. 19 — Record-keeping retention¶
Providers keep logs for at least six months, or longer under sectoral law (10 years for finance/insurance is common). Lex Custis's append-only design keeps logs indefinitely by default; retention-policy configuration is commercial.
Art. 26 — Obligations of deployers¶
What it says. Deployers (your customers) must (summarised):
- Use the system according to the IFU (Art. 26(1))
- Ensure human oversight (Art. 26(2))
- Monitor operation + inform the provider of serious incidents (Art. 26(5))
- Retain automatically generated logs for 6 months minimum (Art. 26(6))
- For public-sector deployers and credit/insurance risk evaluation: conduct a Fundamental Rights Impact Assessment (FRIA) under Art. 27 before putting the system into use.
What it means. Your deployers will want an API or dashboard to see their logs, a channel to report incidents back to you, and maybe a FRIA template.
Lex Custis v0.1. Every user has a /compliance dashboard scoped to
their organisation and a /incidents UI with SLA tracking; Art. 73
file-incident button on every chat message routes back to the provider
(you). FRIA wizard is commercial (Sprint 5).
Art. 27 — Fundamental Rights Impact Assessment (FRIA)¶
What it says. Before deploying an Annex III high-risk system in certain contexts (public bodies, essential services, credit scoring outside fraud detection, insurance risk evaluation), the deployer carries out an FRIA describing:
- Deployment context + purpose + duration
- Categories of natural persons likely to be affected
- Specific risks of harm
- Description of human oversight
- Measures taken if risks materialise
What it means. For many SMB deployers this is the scariest new work. They'll ask you for a template.
Lex Custis v0.1. Not yet. FRIA wizard is commercial (Sprint 5).
Art. 50 — Transparency obligations for certain AI systems¶
What it says. Systems intended to interact with natural persons must disclose that the person is interacting with AI, unless obvious. AI-generated content (image, audio, video, text) must be marked in a machine-readable way.
Lex Custis v0.1. "AI-generated" marker is rendered on every assistant message in the chat UI and included as metadata in exports.
Art. 53 — Obligations of GPAI providers¶
What it says. Providers of general-purpose AI models (Mistral, Meta, OpenAI, Anthropic as model providers) must:
- Maintain up-to-date technical documentation (Annex XI).
- Make information available to downstream providers integrating the GPAI (Annex XII).
- Have a policy to respect EU copyright law including Art. 4(3) opt-out of the DSM Directive.
- Publish a summary of content used for training (template by the AI Office).
What it means for you. If you're not training a GPAI, Art. 53 doesn't bind you. But you are downstream of a GPAI provider, so you need to reference their disclosure in your own Annex IV dossier.
Lex Custis v0.1. The dossier's provider_manifest.json names every
LLM provider/model used per period. You're responsible for linking to
the upstream disclosure; we'll support automatic pinning of that URL
per provider in v0.2.
Art. 55 — Systemic-risk GPAI¶
Extra obligations for GPAI models with "systemic risk" (>10^25 FLOPs training compute). Not your concern unless you're training a frontier model — which, if you're reading this, you are not.
Art. 72 — Post-market monitoring¶
What it says. Providers establish a post-market-monitoring (PMM) system, documented in a plan, to actively and systematically collect data on the system's performance throughout its lifetime and evaluate continuous compliance with Chapter III Section 2 obligations.
What it means. The aggregates + drift + incident data you already collect, but written down as a living plan you revise per release.
Lex Custis v0.1. Aggregate metrics live in every dossier. A full PMM plan document + scheduled review workflow is commercial.
Art. 73 — Reporting of serious incidents¶
What it says. The provider reports any "serious incident" (Art. 3(49)) to the Market Surveillance Authority of the Member State where the incident occurred, within:
- 15 days by default (Art. 73(2)).
- 10 days in the event of the death of a person (Art. 73(4)).
- 2 days for a widespread infringement, or a serious and irreversible disruption of critical infrastructure (Art. 73(3)).
The provider investigates, co-operates with the authority, and takes corrective action.
Lex Custis v0.1. Full /incidents UI: Art. 3(49) classification
dropdown, SLA countdown per category, status machine (open → under
review → reported → resolved → closed), and a regulator-ready JSON
export on each incident. See first-incident.md.
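The SLA countdown boils down to a lookup from incident classification to reporting window. A simplified sketch; the category names are hypothetical and the mapping is one reading of Art. 73, not legal advice:

```python
from datetime import datetime, timedelta, timezone

# Reporting windows per Art. 73 (simplified reading; category names invented).
SLA_WINDOWS = {
    "default": timedelta(days=15),                  # Art. 73(2)
    "death": timedelta(days=10),                    # Art. 73(4)
    "critical_infrastructure": timedelta(days=2),   # Art. 73(3)
    "widespread_infringement": timedelta(days=2),   # Art. 73(3)
}

def report_deadline(occurred_at: datetime, category: str) -> datetime:
    """Latest permissible report time for an incident of this category."""
    return occurred_at + SLA_WINDOWS.get(category, SLA_WINDOWS["default"])
```

Note the Act also says "immediately" once causality is established or suspected; the deadline above is the outer bound the countdown enforces, not a target.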
Arts. 40–49 — standards, conformity, notified bodies, registration¶
These are the compliance-officer side of the Act — adopting harmonised standards, self-certifying, registering in the EU database, etc. They're out of engineering scope, but we link them:
- Art. 40 — Harmonised standards and common specifications.
- Art. 43 — Conformity-assessment procedure (self-certification or, for certain Annex III point 1 biometric systems, third-party notified-body audit).
- Art. 47 — Declaration of conformity.
- Art. 48 — CE marking.
- Art. 49 — Registration in the EU database (to be operated by the Commission; registration endpoint in Art. 71).
Your compliance officer owns these. We'll add a one-shot JSON export matching the EU database template once the Commission publishes the implementing act (Art. 71 tooling expected 2026).
TL;DR — what Lex Custis directly ships for each Article today¶
| Article | Ships in v0.1 OSS | Where |
|---|---|---|
| Art. 9 | ❌ (boilerplate text only) | Commercial edition |
| Art. 10 | ⚠ Partial (provider manifest) | Commercial edition completes it |
| Art. 11 | ✅ (Annex IV dossier) | /compliance → Download dossier |
| Art. 12 | ✅ (HMAC hash chain) | audit_service.py |
| Art. 13 | ⚠ Partial | Commercial edition completes IFU per deployer |
| Art. 14 | ✅ (oversight records) | OversightControls component |
| Art. 15 | ⚠ Aggregates only | Commercial edition adds drift detection |
| Art. 17 | ⚠ (posture, not software) | Self-document on top |
| Art. 26 | ✅ (deployer dashboard) | /compliance, /incidents |
| Art. 27 | ❌ (no FRIA wizard) | Commercial edition |
| Art. 50 | ✅ (AI-generated marker) | ChatMessage component |
| Art. 53 | ⚠ (pin upstream disclosure) | v0.2 |
| Art. 72 | ⚠ Partial | Commercial edition |
| Art. 73 | ✅ (full workflow) | /incidents |
Legend: ✅ first-class, ⚠ partial, ❌ not yet in OSS.
See the full compliance matrix for file-level links.