Your first compliant chat¶
Once the stack is up, log in at http://localhost:3000/login as
admin@acme-hr.demo / demo-demo-demo-demo.
Step 1 — send a message¶
Click Chat in the sidebar and ask a legal/compliance question:
"Is our CV-screening AI classified as high-risk under Annex III?"
The pipeline that runs on every message:
- Pre-check (never blocks): PII detection, topic classification, prompt-injection flag.
- RAG retrieval: searches your uploaded documents (Tier 1) and the shared legal knowledge base (Tier 2).
- LLM stream: tokens flow to the UI via SSE.
- Post-check (never modifies output): confidence score, source grounding, bias flags.
- Append-only audit entry: written with the correct HMAC hash-chain linkage.
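The five stages above can be sketched as a single orchestration function. This is a hypothetical illustration, not the app's actual API: every function name, heuristic, and field here is a stand-in for the real service.

```python
def pre_check(prompt):
    # Never blocks -- only annotates the message.
    return {"pii_detected": "@" in prompt,  # crude stand-in for real PII detection
            "injection_flag": "ignore previous" in prompt.lower()}

def rag_retrieve(prompt, client_docs, legal_kb):
    # Tier 1 (your uploaded docs) before Tier 2 (shared legal KB);
    # naive keyword overlap stands in for real retrieval.
    words = set(prompt.lower().split())
    return [d for d in client_docs + legal_kb
            if words & set(d.lower().split())]

def post_check(response):
    # Never modifies the output -- only scores it.
    return {"confidence": 0.87, "bias_flags": []}

def handle_message(prompt, client_docs, legal_kb, audit_log):
    flags = pre_check(prompt)
    context = rag_retrieve(prompt, client_docs, legal_kb)
    response = f"[LLM answer grounded in {len(context)} source(s)]"  # stand-in for the SSE stream
    checks = post_check(response)
    audit_log.append({"prompt": prompt, "response": response,
                      **flags, **checks})  # the append-only audit entry
    return response

log = []
handle_message("Is our CV-screening AI high-risk under Annex III?",
               ["Our CV-screening AI scores applicants."],
               ["Annex III lists high-risk AI systems."],
               log)
```

Note the ordering: the checks bracket the LLM call, and the audit append happens last so the entry can record everything the earlier stages produced.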
Step 2 — the badges¶
Under each AI response you'll see a row of small badges:
- AI-generated — always on. Art. 50 marker.
- Confidence % — green ≥ 80, grey ≥ 50, faint < 50.
- Client Docs / Legal KB / Live Search — the RAG tiers that contributed context.
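The confidence badge is a pure presentation rule over the stored 0.0–1.0 score. A minimal sketch (the function name is illustrative; the thresholds mirror the rule above):

```python
def confidence_badge(score):
    """Map a stored 0.0-1.0 confidence score to its badge rendering."""
    if score >= 0.80:
        return "green"
    if score >= 0.50:
        return "grey"
    return "faint"
```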
Below the badges, three action buttons: Accept, Modify, Reject.
These are the Art. 14 human-oversight inputs. Picking one writes a row
to audit_log_oversight, which joins back to the audit entry but lives
in its own table (so audit_logs stays strictly append-only at the DB
role level).
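The split between the two tables can be demonstrated in miniature. This SQLite sketch uses a hypothetical, simplified schema; the real app enforces append-only via DB role grants rather than the trigger used here to emulate them:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE audit_logs (id INTEGER PRIMARY KEY, current_hash TEXT);
CREATE TABLE audit_log_oversight (
    audit_log_id INTEGER REFERENCES audit_logs(id),
    human_action TEXT CHECK (human_action IN ('accept','modify','reject'))
);
-- Emulate the append-only guarantee (real deployments use role grants):
CREATE TRIGGER no_update BEFORE UPDATE ON audit_logs
BEGIN SELECT RAISE(ABORT, 'audit_logs is append-only'); END;
""")

# Oversight lives in its own table and joins back by id,
# so recording a human action never touches audit_logs.
db.execute("INSERT INTO audit_logs VALUES (1, 'abc123')")
db.execute("INSERT INTO audit_log_oversight VALUES (1, 'accept')")
row = db.execute("""SELECT a.current_hash, o.human_action
                    FROM audit_logs a
                    JOIN audit_log_oversight o ON o.audit_log_id = a.id""").fetchone()
```

An `UPDATE` against `audit_logs` fails, while oversight rows accumulate freely beside it.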
A fourth button next to the actions — Report as incident — files an Art. 73 serious incident linked to the specific message. That flow is covered in Your first incident.
Step 3 — what got logged¶
Visit /compliance and click the Audit Log tab. Your new entry is
at the top:
- sequence_number — monotonically increasing per org.
- previous_hash — the current_hash of the prior entry (or GENESIS_HASH for the first).
- current_hash — the HMAC of this entry. Verified offline using the per-org HKDF subkey.
- llm_provider + llm_model — so a regulator can cross-reference with the provider's Art. 53 disclosures.
- prompt_pii_detected + prompt_pii_types — what PII was flagged.
- response_confidence_score — your aggregate confidence (0.0–1.0).
- response_bias_flags — any bias patterns detected.
- human_action — the oversight action you just took (from the join).
Click a row to see the full prompt + response.
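The "per-org HKDF subkey" means each tenant's chain is signed with its own key derived from a platform secret, so one leaked subkey cannot forge another org's entries. A minimal RFC 5869 sketch (the secret and context strings are placeholders, not the app's real values):

```python
import hmac
import hashlib

def hkdf_sha256(master, salt, info, length=32):
    """RFC 5869 extract-then-expand (single block, so length <= 32 here)."""
    prk = hmac.new(salt, master, hashlib.sha256).digest()         # extract
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand, T(1)
    return okm[:length]

org_subkey = hkdf_sha256(b"platform-master-secret",  # placeholder secret
                         salt=b"audit-hmac-v1",      # illustrative context strings
                         info=b"org:acme-hr")
```

The derivation is deterministic, so an offline verifier holding the master secret can re-derive any org's subkey without the database storing it.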
Step 4 — integrity check¶
Still on /compliance, open the Integrity tab and click Run
integrity check. You'll see the animated stepper walk through:
- Loading audit entries.
- Computing the hash chain.
- Verifying sequence integrity.
- Validating HMAC signatures.
- Finalising.
Result: "Integrity Verified · All N entries verified successfully." If
somebody with UPDATE privileges on audit_logs were to rewrite a
prompt, this check would break at the mutated sequence number with an
HMAC-mismatch message — exactly the property the EU AI Act Art. 12
record-keeping obligation demands.
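The checker's core reduces to one pass over the ordered entries. A minimal sketch, assuming each entry carries sequence_number, previous_hash, and current_hash, and that the HMAC covers every field except current_hash (the real entry layout may differ):

```python
import hmac
import hashlib
import json

GENESIS_HASH = "0" * 64  # assumed sentinel for the first link

def entry_hmac(org_key, entry):
    """HMAC over every field except current_hash, canonically serialised."""
    body = {k: v for k, v in entry.items() if k != "current_hash"}
    return hmac.new(org_key, json.dumps(body, sort_keys=True).encode(),
                    hashlib.sha256).hexdigest()

def verify_chain(org_key, entries):
    """Return (True, None) if intact, else (False, first bad sequence_number)."""
    prev = GENESIS_HASH
    for expected_seq, e in enumerate(entries, start=1):
        if e["sequence_number"] != expected_seq or e["previous_hash"] != prev:
            return False, e["sequence_number"]   # broken linkage
        if not hmac.compare_digest(entry_hmac(org_key, e), e["current_hash"]):
            return False, e["sequence_number"]   # HMAC mismatch
        prev = e["current_hash"]
    return True, None

def append_entry(org_key, chain, prompt):
    """Helper to build a well-formed chain for the demo below."""
    prev = chain[-1]["current_hash"] if chain else GENESIS_HASH
    e = {"sequence_number": len(chain) + 1, "previous_hash": prev,
         "prompt": prompt}
    e["current_hash"] = entry_hmac(org_key, e)
    chain.append(e)
    return chain

key = b"per-org-hkdf-subkey"  # placeholder key
chain = []
append_entry(key, chain, "Is our CV-screening AI high-risk?")
append_entry(key, chain, "Summarise Annex III for me.")
```

Rewriting a prompt in any entry leaves its stored current_hash stale, so the walk stops at exactly that sequence number; rewriting the hash too breaks the next entry's previous_hash link instead.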
What just happened (in EU AI Act terms)¶
You just generated evidence for:
- Art. 12 (automatic event log) — one hash-chained entry.
- Art. 13 (transparency) — the "AI-generated" marker was shown.
- Art. 14 (human oversight) — if you clicked accept/modify/reject.
- Art. 15 (accuracy) — confidence metric stored for aggregation.
- Art. 26 (deployer monitoring) — dashboard statistics updated.
Aggregate this over a quarter and you have the raw material for your Annex IV dossier. That's the next step.