Architecture Overview
Adjudon is a Decision Audit Layer for regulated AI deployments. Three concerns sit on the customer side of an HTTPS call — agents, decisions, identifiers — and three sit on ours: a confidence engine, a policy engine, and a tamper-evident SHA-256 hash chain. The seam between them is the only contract that matters.
The shape of the system
Three production surfaces:
| Surface | URL | Stack |
|---|---|---|
| Landing site | adjudon.com | Next.js 14, Cloudflare Pages |
| Dashboard | app.adjudon.com | React 18 + Vite, Cloudflare Pages |
| API | api.adjudon.com | Express 5, Fly.io Frankfurt |
The API hosts every customer-facing endpoint — about 42 route files,
roughly 260 endpoints. The dashboard reads from those same endpoints; the
landing site has no access to customer data. Every customer integration
crosses one HTTPS call: POST /api/v1/traces records an AI decision, the
chain hashes it, and the response returns the row's chainHash. Reviews,
alerts, webhooks, and exports are all downstream of that single ingestion.
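That single ingestion call can be sketched as below. The endpoint path, Bearer auth, the 201/202/403 outcomes, and the returned chainHash are from this page; the payload field names (agentId, decision, context) are illustrative assumptions, not the documented trace schema.

```typescript
// Hypothetical payload shape -- field names are illustrative, not the
// documented trace schema.
interface TracePayload {
  agentId: string;
  decision: string;
  context: Record<string, unknown>;
}

// 201 / 202 are the accept statuses shown in the pipeline; 403 is a
// policy block.
function isAccepted(status: number): boolean {
  return status === 201 || status === 202;
}

// Sketch of the one HTTPS call every integration crosses. The response
// echoes the chain row's chainHash, as described above.
async function recordTrace(apiKey: string, trace: TracePayload): Promise<string> {
  const res = await fetch("https://api.adjudon.com/api/v1/traces", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(trace),
  });
  if (!isAccepted(res.status)) {
    throw new Error(`Trace rejected with status ${res.status}`);
  }
  const body = (await res.json()) as { chainHash: string };
  return body.chainHash;
}
```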
The full request path:
┌─────────────────────────────────────────────┐
│ api.adjudon.com · Express 5 · Frankfurt │
│ │
[Customer agent]│ PII scrub │
──── POST ───▶│ ↓ │ async
/api/v1/traces │ Confidence Engine (3 pillars) │ ────────▶ Webhook dispatch
│ ↓ │
│ Policy Engine ── 201 / 202 / 403 │
│ ↓ │
│ HashChainEntry append │
│ ↓ │
│ MongoDB Atlas Frankfurt ◀────┐ │
│ │ │
│ GET /api/v1/hash-chain/ │ │
│ export ───────────────────┘ self- │
│ contained │
│ JSON bundle │
│ (offline │
│ replay) │
└─────────────────────────────────────────────┘
The pipeline holds tight latency SLOs: p50 < 10ms, p95 < 25ms,
p99 < 45ms end-to-end through ingestion, scrubbing, confidence,
policy, and chain append. The Express 5 server runs on Fly.io Frankfurt
with health-check responses below 100ms independent of database load.
Where data lives
| Service | Region | Purpose |
|---|---|---|
| MongoDB Atlas | Frankfurt (eu-central-1) | Primary database |
| Fly.io | Frankfurt | API server |
| Cloudflare Pages | EU edge | Landing + dashboard + docs |
| Stripe | Ireland | Billing + metered usage |
| Resend | EU | Transactional email |
| OpenAI | USA, under SCCs | Embedding generation (one documented exception) |
Frankfurt eu-central-1 is the rule. The one exception is OpenAI, used by
the Confidence Engine's third pillar for embedding generation under GDPR
Chapter V Standard Contractual Clauses; no customer trace payload is sent,
and the embedding vector returns to MongoDB Atlas Frankfurt. We document
this honestly so procurement does not arrive expecting a fully air-gapped
EU stack that does not exist. Cross-organization isolation is enforced in
code: every DecisionTrace query filters by organizationId, no
exceptions.
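A minimal sketch of that isolation rule, assuming a Mongo-style filter object; the helper name and filter shape are illustrative, not Adjudon's actual code:

```typescript
type TraceFilter = Record<string, unknown>;

// Tenancy guard: every DecisionTrace query is scoped to exactly one
// organizationId, with no way for a caller-supplied filter to widen it.
function scopedFilter(organizationId: string, filter: TraceFilter = {}): TraceFilter {
  if (!organizationId) {
    throw new Error("organizationId is required on every DecisionTrace query");
  }
  // Spread the caller's filter first so organizationId can never be overridden.
  return { ...filter, organizationId };
}
```

Centralizing the filter in one helper turns "no exceptions" from a code-review habit into a code path.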
The Confidence Engine
Confidence is triangulated from three independent signals:
- Base probability, derived from the decision context, not from the model's own self-reported softmax score
- Variance, computed across a small ensemble of model calls; high variance means the model disagrees with itself
- Historical precedent, comparing the current decision against similar decisions in the organization's past trace history
The engine emits a single confidenceScore in the range 0.0–1.0, plus
tags: LOW_CONFIDENCE when the score falls below 0.6, HIGH_AMBIGUITY when
ensemble variance is high. Tags drive routing (review queue, policy gates, alerts);
the score appears in the chain row. An MDCG 2019-11 notified body or a
GDPR Art. 22 reviewer will not accept the model's own self-reported
softmax as evidence; triangulation is the operational answer.
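The triangulation can be sketched as follows, assuming an equal-weight average of the three pillars. The 0.6 LOW_CONFIDENCE threshold is from this page; the weighting and the 0.05 variance cutoff for HIGH_AMBIGUITY are illustrative assumptions:

```typescript
interface ConfidenceResult {
  confidenceScore: number; // 0.0 - 1.0
  tags: string[];
}

function triangulate(
  baseProbability: number,    // derived from decision context, not model softmax
  ensembleScores: number[],   // scores from a small ensemble of model calls
  historicalPrecedent: number // agreement with similar past traces, 0.0 - 1.0
): ConfidenceResult {
  const n = ensembleScores.length;
  const mean = ensembleScores.reduce((a, b) => a + b, 0) / n;
  // High variance means the model disagrees with itself across the ensemble.
  const variance = ensembleScores.reduce((a, b) => a + (b - mean) ** 2, 0) / n;

  // Assumed equal-weight combination of the three pillars.
  const confidenceScore = (baseProbability + mean + historicalPrecedent) / 3;

  const tags: string[] = [];
  if (confidenceScore < 0.6) tags.push("LOW_CONFIDENCE");  // threshold from the docs
  if (variance > 0.05) tags.push("HIGH_AMBIGUITY");        // illustrative cutoff
  return { confidenceScore, tags };
}
```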
The Multi-Clock Incident Hub
A single AI-related incident at a German bank rarely triggers just one regulator. It can trigger up to five at once, each with its own clock:
| Regulation | Article | Early | Intermediate | Final |
|---|---|---|---|---|
| GDPR | Art. 33 | — | — | 72 h |
| EU AI Act | Art. 73 | 2 d | 10 d | 15 d |
| DORA | Art. 19 | 4 h | 72 h | 30 d |
| NIS2 | Art. 23 | 24 h | 72 h | 30 d |
| CRA | Art. 11 | 24 h | 72 h | 14 d |
The Multi-Clock Hub runs all five as parallel countdowns off one detection
event. Each IncidentClock carries regulator, articleRef,
clockStartedAt, an array of checkpoints with evidenceTraceId, and a
status of active, paused, completed, or breached. A breached
clock is not deleted — it stays in the index with status: 'breached'
so the post-incident audit can replay which deadlines were missed and when.
Replay without our login
Every DecisionTrace is hashed into a per-organization SHA-256 chain row:
{
"sequence": 17493,
"prevHash": "a3c1...0f9b",
"payloadDigest": "9e2d...4c7a",
"chainHash": "f1b0...2dde",
"createdAt": "2026-05-05T10:14:22.317Z"
}
The chain formula is published:
chainHash = sha256(prevHash || payloadDigest || sequence || createdAt)
payloadDigest = sha256(canonicalJson(traceView))
The chain is tamper-evident, not tamper-proof: a hostile insider who edits
a stored entry breaks the next entry's prevHash link. Tampering is loud.
When audit is scheduled, the bank's compliance team requests the bundle:
curl https://api.adjudon.com/api/v1/hash-chain/export \
-H "Authorization: Bearer $ADJUDON_API_KEY"
The response is a self-contained JSON document: every entry, every hash,
every sequence number. The auditor recomputes each chainHash and compares
to the stored value. Result: verified: true or brokenAt: <sequence>.
No third state. The verification runs against the downloaded bundle. No
Adjudon login, no Adjudon endpoint, no Adjudon network. The chain is
valid evidence even if Adjudon disappears between the export and the audit.
What we do not do
Adjudon is SaaS-only: no white-label, no customer-cloud, no on-premise. We do not sit in the request path between your application and your LLM — that category (Lakera, CalypsoAI, NeMo Guardrails) is a separate vendor type, and a regulated stack typically wants both layers.
We do not store raw trace payloads in logs (Cardinal Rule 4) or send
customer data to external ML training (Cardinal Rule 6). We do not modify
chain entries (Cardinal Rule 5) — GDPR right-to-erasure nullifies
the source DecisionTrace payload fields, but the chain shell and its
payloadDigest (computed before erasure) remain intact.
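The erasure semantics above can be sketched in a few lines; the DecisionTrace field names here are illustrative:

```typescript
interface ErasableTrace {
  organizationId: string;
  chainSequence: number;                   // pointer into the hash chain
  payload: Record<string, unknown> | null; // the only part erasure touches
}

// Right-to-erasure nullifies the source payload fields. The chain row is
// never modified: its payloadDigest was computed before erasure, so the
// chain still verifies offline after the personal data is gone.
function erasePayload(trace: ErasableTrace): ErasableTrace {
  return { ...trace, payload: null };
}
```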
See also
- Audit Log & Security — chain formula, two parallel chains, four-step verify algorithm
- Multi-Clock Incidents — the five-regulator concurrent countdown model
- Data Residency & GDPR — full residency picture and the documented OpenAI exception
- Sub-Processors — complete vendor list with geography per row