FRIA Wizard
A Fundamental Rights Impact Assessment is what EU AI Act Article 27
requires before a deployer in a public-mission role — a bank
running credit scoring, a public authority running benefits triage,
a hospital running diagnostic triage — puts a high-risk AI
system into operation. Article 27 names the artefact (FRIA),
names the contents (purpose, affected persons, frequency and
duration of use, categories of impacted persons, foreseeable risks
to fundamental rights, human-oversight measures, complaint
mechanism), and requires the deployer to notify the
market-surveillance authority. It does not specify the format.
Adjudon's FRIA Wizard supplies the format.
This page explains the seven sections the Wizard collects, the draft → review → approved state machine the document moves through, the completeness score that gates submission, and the chain anchor that turns an approved FRIA into a tamper-evident artefact a regulator can verify. The integration surface is documented at POST /fria; the regulatory context is in compliance/eu-ai-act.
What an FRIA is
An FRIA is a structured document, not a free-form policy memo. The
schema (FRIA) names every field Article 27 expects:
- systemName — the high-risk AI system this FRIA covers. One FRIA per system; not per agent, not per workspace.
- purpose — the specific purpose for which the deployer uses the system. Article 27(1)(a).
- affectedPersons — categories of natural persons or groups likely to be impacted. Article 27(1)(b).
- usageContext — the foreseen periods and frequency of use, plus the operational and geographic context. Article 27(1)(c).
- durationOfUse — the duration over which the deployer intends to use the system. The Adjudon Wizard treats duration as a separate field from usage context because regulators read them separately; an open-ended pilot reads differently from a 12-month deployment.
- identifiedRisks[] — an array of structured risk entries, each carrying title, description, likelihood (low/medium/high), severity (low/medium/high), affectedRights (free-text array), mitigation, and a residualRisk post-mitigation rating. Article 27(1)(d).
- humanOversightSummary — the oversight measures put in place, mapped to Article 14. Article 27(1)(e).
- complaintsMechanism — the channel through which affected persons can lodge a complaint. Article 27(1)(f).
The Wizard collects each field through a guided UI; the API accepts the same fields directly for organisations that prefer to drive the artefact from their own GRC tooling.
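As a sketch, the schema above maps onto a TypeScript shape like the following. The field names come from this page; the exact types, the Rating union, and the status field's placement are assumptions, not the actual Adjudon type definitions:

```typescript
// Illustrative shape for the FRIA document. Field names follow the
// schema described above; type details are assumptions.
type Rating = 'low' | 'medium' | 'high';

interface RiskEntry {
  title: string;
  description: string;
  likelihood: Rating;
  severity: Rating;
  affectedRights: string[];   // free-text array of impacted rights
  mitigation: string;
  residualRisk: Rating;       // post-mitigation rating
}

interface FRIA {
  systemName: string;            // one FRIA per system
  purpose: string;               // Art. 27(1)(a)
  affectedPersons: string;       // Art. 27(1)(b)
  usageContext: string;          // Art. 27(1)(c)
  durationOfUse: string;         // Art. 27(1)(c)
  identifiedRisks: RiskEntry[];  // Art. 27(1)(d)
  humanOversightSummary: string; // Art. 27(1)(e)
  complaintsMechanism: string;   // Art. 27(1)(f)
  status: 'draft' | 'review' | 'approved' | 'archived';
  chainHash?: string;            // populated on transition to approved
}
```

The same shape works as the request body for organisations driving the artefact through the API instead of the guided UI.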
The state machine
An FRIA lives in one of four states with a strict allowed-transitions map:
┌─────────────────────────────────────────────────────────┐
│ FRIA State Machine │
├─────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ submit ┌──────────┐ │
│ │ draft │ ───────────────→ │ review │ │
│ │ │ ←─────────────── │ │ │
│ └──────────┘ return-to- └──────────┘ │
│ ▲ draft │ │
│ │ │ approve │
│ │ (no path back from ▼ │
│ │ approved) ┌──────────┐ │
│ │ │ approved │ │
│ │ └────┬─────┘ │
│ │ │ │
│ │ │ archive │
│ │ ▼ │
│ │ ┌──────────┐ │
│ │ │ archived │ │
│ │ └──────────┘ │
│ │
│ allowed = { │
│ draft: ['review'], │
│ review: ['draft', 'approved'], │
│ approved: ['archived'], │
│ archived: [] │
│ } │
└─────────────────────────────────────────────────────────┘
Two properties of the state machine matter for compliance posture.
First, an approved FRIA cannot return to draft. The only
forward path is archived. If the underlying AI system changes
materially, the deployer creates a new FRIA — archiving the
previous one preserves the historical attestation. Re-opening a
previously approved FRIA would let an operator silently rewrite
history; the schema does not allow it.
Second, edits are blocked once the FRIA leaves draft. The
update() service throws on any mutation against an approved or
archived document, and the workflow forbids in-place edits under
review: a reviewer who finds a flaw must return-to-draft (the only
legal back-edge from review) before anything changes. This is how
the document earns its evidentiary weight.
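The allowed-transitions map from the diagram and its enforcement can be sketched as follows; the transition function is illustrative, not the actual Adjudon service API:

```typescript
// Sketch of the FRIA transition guard, using the allowed-transitions
// map shown in the diagram. The function name is an assumption.
type FriaState = 'draft' | 'review' | 'approved' | 'archived';

const allowed: Record<FriaState, FriaState[]> = {
  draft: ['review'],
  review: ['draft', 'approved'],
  approved: ['archived'],
  archived: [],
};

function transition(from: FriaState, to: FriaState): FriaState {
  if (!allowed[from].includes(to)) {
    // e.g. approved -> draft is rejected: no path back from approved
    throw new Error(`illegal transition ${from} -> ${to}`);
  }
  return to;
}
```

Because `allowed.approved` contains only `archived` and `allowed.archived` is empty, the guard mechanically enforces both compliance properties: no re-opening an approved FRIA, and no exit from the archive.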
Completeness scoring
The Wizard exposes a completeness percentage on every read. The score is the ratio of populated required slots to total required slots:
| Slot | Counted? |
|---|---|
| systemName | ✓ |
| purpose | ✓ |
| affectedPersons | ✓ |
| usageContext | ✓ |
| durationOfUse | ✓ |
| humanOversightSummary | ✓ |
| complaintsMechanism | ✓ |
| identifiedRisks (count ≥ 1) | ✓ |
Eight slots; each contributes 12.5% to the score. The Wizard
surfaces the percentage on the dashboard so a reviewer can see at a
glance which sections still need work; submission to review does
not require 100%, but a regulator reviewing an approved FRIA at
60% completeness will read it as evidence the deployer rushed the
artefact. Use the percentage as a soft gate.
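The scoring rule can be sketched as a small function. The input shape and function name are illustrative, and the floor rounding is an assumption that matches the 87% reading reported for seven of eight slots:

```typescript
// Sketch of the completeness score: eight required slots, each worth
// 12.5%. Rounding down is an assumption (7/8 reads as 87%, not 88%).
interface FriaFields {
  systemName?: string;
  purpose?: string;
  affectedPersons?: string;
  usageContext?: string;
  durationOfUse?: string;
  humanOversightSummary?: string;
  complaintsMechanism?: string;
  identifiedRisks?: unknown[];
}

function completeness(f: FriaFields): number {
  const textSlots = [
    f.systemName, f.purpose, f.affectedPersons, f.usageContext,
    f.durationOfUse, f.humanOversightSummary, f.complaintsMechanism,
  ];
  let filled = textSlots.filter((s) => !!s && s.trim() !== '').length;
  // The risks slot counts only once at least one entry exists.
  if ((f.identifiedRisks?.length ?? 0) >= 1) filled += 1;
  return Math.floor((filled / 8) * 100);
}
```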
The FRIA chain — separate from the Decision Hash Chain
Adjudon runs three independent SHA-256 hash chains, and they do not
mix. The Decision Hash Chain anchors trace decisions in
/api-reference/hash-chain. The Operations Audit Log chain
anchors admin events in /api-reference/audit. The FRIA chain
— surfaced as the chainHash field on the FRIA document
itself — anchors Article-27 attestations on transition to
approved.
Each chain has its own genesis, its own append rules, and its own verification path. Cross-mixing would mean that disclosure obligation A leaks into evidence pack B; treating each as a separate chain keeps the regulatory surfaces independent. An EU AI Act Art. 27 review reads the FRIA chain. An EU AI Act Art. 13 transparency review reads the Decision Hash Chain. A SOC 2 admin-event review reads the Operations Audit Log. The same hash function, the same tamper-evident property, three different evidentiary frames.
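The hash-linking principle shared by all three chains can be sketched as follows (Node.js crypto). The genesis value and the payload serialisation are assumptions; only the link-the-previous-hash structure is from the text:

```typescript
// Sketch of an append-only SHA-256 chain: each link hashes the
// previous chain hash together with the document payload.
import { createHash } from 'node:crypto';

const GENESIS = '0'.repeat(64); // assumed genesis value

function chainHash(prevHash: string, payload: string): string {
  return createHash('sha256')
    .update(prevHash)
    .update(payload)
    .digest('hex');
}

// Verification replays the chain from genesis. Any edited payload
// changes every subsequent hash, which is what makes the artefact
// tamper-evident.
function verifyChain(payloads: string[], hashes: string[]): boolean {
  let prev = GENESIS;
  return payloads.every((p, i) => {
    prev = chainHash(prev, p);
    return prev === hashes[i];
  });
}
```

Keeping three chains with three independent genesis values is what lets each regulatory surface be disclosed on its own: handing over the FRIA chain reveals nothing about trace decisions or admin events.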
What this is NOT
- Not a substitute for legal counsel. The Wizard structures the Article 27 artefact and gates the workflow; it does not opine on whether your specific deployment qualifies as a high-risk use case under Annex III, whether your affectedPersons definition is exhaustive, or whether your mitigation is legally adequate. That judgment is the deployer's, supported by their counsel.
- Not a per-trace artefact. One FRIA covers a system. A trace is a record of one decision the system made. Confusing the two produces ten thousand FRIAs and zero evidence.
- Not a Data Protection Impact Assessment. A DPIA under GDPR Article 35 and an FRIA under EU AI Act Article 27 overlap on affected-persons fields but answer different regulators with different remedies. The Wizard's free-text fields can be reused, but the artefacts must be filed independently.
- Not auto-generated. The Wizard prompts; it does not draft. An LLM-drafted FRIA that the deployer copy-pastes is the same legal artefact as a hand-typed one — the deployer signs it. Adjudon does not run an LLM on the FRIA text.
- Not editable after approval. This is the second-most-important property on the page (the most important is the chain anchor). Re-opening an approved FRIA is forbidden; an out-of-date FRIA is archived and a new one is created.
Regulator mapping
| Regulator surface | What this concept satisfies |
|---|---|
| EU AI Act Art. 27(1)(a) | Purpose — purpose field |
| EU AI Act Art. 27(1)(b) | Categories of affected persons — affectedPersons field |
| EU AI Act Art. 27(1)(c) | Frequency, duration, context of use — usageContext + durationOfUse fields |
| EU AI Act Art. 27(1)(d) | Specific risks to fundamental rights — identifiedRisks[] array with structured likelihood/severity/mitigation/residual fields |
| EU AI Act Art. 27(1)(e) | Human-oversight measures — humanOversightSummary, cross-referenced to Art. 14 |
| EU AI Act Art. 27(1)(f) | Complaints mechanism — complaintsMechanism field |
| EU AI Act Art. 27(3) | Notify market-surveillance authority — the approved-FRIA PDF export is the artefact the deployer transmits |
| GDPR Art. 35 | DPIA overlap — affected-persons fields can be reused, but the DPIA artefact files separately |
How the Wizard plays in production
A bank's Head of AI Governance opens the Wizard on Monday morning,
creates a new FRIA for the credit-scoring model, fills in purpose
and affectedPersons over the morning, lets it sit at 25%
completeness through Tuesday while she chases the model owner for
usageContext. By Thursday she has filled in all seven required
text fields and the dashboard reads 87% — one risk entry is
still missing. She adds two identifiedRisks (one for
protected-characteristic disparate impact, one for explanation
legibility),
the score climbs to 100%, she clicks Submit, the FRIA transitions
to review. Her colleague at the second-line Risk function reads
through, requests one mitigation tightening, the FRIA returns to
draft for the edit, comes back to review Friday morning,
gets approved Friday afternoon. The chainHash populates on
approve; the FRIA's PDF export now carries a verifiable anchor that
the bank can hand to BaFin alongside its market-notification
filing. The artefact is locked. If the credit-scoring model gets
retrained on a new feature set six months later, the bank archives
this FRIA and starts a new one.
See also
- FRIA API — the integration surface for the wizard CRUD and approval transitions
- EU AI Act Compliance — the regulator-specific evidence pack
- Hash Chain — the per-trace Decision Hash Chain (separate from the FRIA chain)
- Audit Log — the per-admin-event Operations Audit Log (separate from the FRIA chain)
- Multi-Clock Incidents — the deadline hub a regulator-reported incident lights up