Routing flagged decisions to Slack
Goal
Wire an Adjudon alert rule that fires on every flagged or escalated decision and posts a structured message to a Slack channel. The Compliance Officer wakes up to a Slack thread per review item, not a 200-row email digest.
You'll need
- An Adjudon Scale plan or above (the slackAlerts feature gate)
- A workspace API key (adj_live_*) with the admin or owner role
- A Slack incoming-webhook URL for the destination channel
- 4 minutes
export ADJUDON_API_KEY="adj_live_..."
export SLACK_WEBHOOK="https://hooks.slack.com/services/T0/B0/XXXX"
If you don't have a Slack incoming webhook yet: in your Slack workspace, App Directory → Incoming Webhooks → Add to Slack → pick the channel → copy the webhook URL. Adjudon never sees your Slack token; the webhook URL is the delivery surface.
Code
curl -X POST https://api.adjudon.com/api/v1/alerts \
-H "Authorization: Bearer $ADJUDON_API_KEY" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"Slack — flagged decisions\",
\"severity\": \"warning\",
\"condition\": {
\"field\": \"status\",
\"operator\": \"eq\",
\"value\": \"flagged\"
},
\"actions\": [
{ \"type\": \"slack\", \"target\": \"$SLACK_WEBHOOK\" }
]
}"
That single POST creates the rule. Every trace ingestion that
resolves to status: 'flagged' from this point on triggers a
Slack message. Note that eq matches a single value, so to also
catch the escalated decisions named in the goal, create a
parallel rule with "value": "escalated".
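The test call below needs the new rule's ID. Assuming the creation response echoes the created rule as JSON with an id field (the Alerts API reference has the exact shape), wrap the POST in a command substitution and extract the ID with jq:

ALERT_ID=$(curl -s -X POST https://api.adjudon.com/api/v1/alerts \
  -H "Authorization: Bearer $ADJUDON_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"name\": \"Slack — flagged decisions\",
    \"severity\": \"warning\",
    \"condition\": { \"field\": \"status\", \"operator\": \"eq\", \"value\": \"flagged\" },
    \"actions\": [ { \"type\": \"slack\", \"target\": \"$SLACK_WEBHOOK\" } ]
  }" | jq -r '.id')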
To verify before letting it fire on real traffic:
curl -X POST "https://api.adjudon.com/api/v1/alerts/$ALERT_ID/test" \
-H "Authorization: Bearer $ADJUDON_API_KEY"
A test payload lands in the configured channel. If it doesn't
arrive within a few seconds, the response body carries the
per-action failure reason (HTTP status from Slack, timeout,
malformed URL); the endpoint returns 400 TEST_FAILED rather
than a silent success.
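The exact error schema lives in the Alerts API reference; as an illustration only, a 400 TEST_FAILED body plausibly carries something like:

{
  "error": "TEST_FAILED",
  "actions": [
    {
      "type": "slack",
      "ok": false,
      "reason": "Slack returned HTTP 404 (revoked or malformed webhook URL)"
    }
  ]
}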
What just happened
Adjudon's alert engine watches every trace as it lands. The
rule above — condition.field = status, operator = eq, value = flagged — matches any trace whose final status
is flagged. On match, the engine evaluates each actions[]
entry:
- type: 'slack' → POST a JSON body to target (the Slack webhook URL) with the trace's identifier, agent name, matched policy, and confidence score formatted as Slack message blocks.
- type: 'webhook' (alternative) → POST the same payload to a custom HTTPS endpoint of your choice.
- type: 'email' (alternative) → route via the Resend EU sub-processor to the configured email address.
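One rule can carry several of these at once, and the engine fans out to each entry on every match. A sketch of a mixed actions[] array; the targets are illustrative, and the assumption that email reuses the target field for the address is ours:

"actions": [
  { "type": "slack",   "target": "https://hooks.slack.com/services/T0/B0/XXXX" },
  { "type": "webhook", "target": "https://alerts.internal.example.com/adjudon" },
  { "type": "email",   "target": "compliance-review@example.com" }
]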
Slack webhook delivery is asynchronous, backed by a durable retry queue: if Slack returns a 5xx or times out, the engine retries with back-off at 1 minute, 5 minutes, 30 minutes, 2 hours, and 8 hours. The trace ingestion itself never waits on Slack; the performance contract protects the agent's loop from any downstream slowness.
The same trigger also fires the global
alert.triggered webhook event. If you
want PagerDuty paging or a custom internal SIEM ingest in
addition to Slack, subscribe a webhook to that event rather
than adding it to every rule's actions[].
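As a sketch of that subscription, assuming a POST /api/v1/webhooks endpoint that takes a delivery URL and an events[] filter (the exact shape is in the Webhook Events reference):

curl -X POST https://api.adjudon.com/api/v1/webhooks \
  -H "Authorization: Bearer $ADJUDON_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://siem.internal.example.com/adjudon",
    "events": ["alert.triggered"]
  }'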
What the Slack message looks like
The webhook delivers a single Slack message per match, with the trace's identifying fields rendered as a header block plus a fields section. A compliance officer reading the channel sees:
🟡 Adjudon — Decision flagged for review
Agent: customer-support-bot
Trace ID: trace_8f7d6c5b...
Status: flagged
Confidence: 72%
Matched policy: "High-value refund — needs human review"
[Open in Adjudon] [Resolve]
The two action buttons link to the Review Queue entry for the trace. Clicking through arrives at the dashboard view with the full prompt, rationale, alternatives, and the resolve / reject / escalate controls in one place. The Slack thread is the notification; the resolution still happens in Adjudon.
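Under the hood this is ordinary Slack Block Kit. A plausible shape for the delivered payload, where the block layout, field labels, and dashboard URL are illustrative rather than confirmed:

{
  "blocks": [
    {
      "type": "header",
      "text": { "type": "plain_text", "text": "🟡 Adjudon — Decision flagged for review" }
    },
    {
      "type": "section",
      "fields": [
        { "type": "mrkdwn", "text": "*Agent:* customer-support-bot" },
        { "type": "mrkdwn", "text": "*Trace ID:* trace_8f7d6c5b..." },
        { "type": "mrkdwn", "text": "*Status:* flagged" },
        { "type": "mrkdwn", "text": "*Confidence:* 72%" }
      ]
    },
    {
      "type": "actions",
      "elements": [
        { "type": "button", "text": { "type": "plain_text", "text": "Open in Adjudon" }, "url": "https://app.adjudon.com/..." }
      ]
    }
  ]
}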
Common gotchas
- Plan-gated. slackAlerts is Scale and above. Sandbox receives 403 UPGRADE_REQUIRED on any alert mutation.
- Webhook URL is the secret. Slack incoming-webhook URLs grant write access to the configured channel. Treat them the same way you treat an adj_live_* key: rotate through Slack's app management when team membership changes.
- No retry-storm guarantee on bulk-flag events. A policy change that suddenly flags thousands of traces fires thousands of Slack messages. Tighten the condition before flipping the rule active in production.
Tightening the rule
The example above fires on every flagged status. Most teams
narrow further:
{
"name": "Slack — low-confidence flags",
"condition": {
"field": "confidenceScore",
"operator": "lt",
"value": 0.6
},
"actions": [
{ "type": "slack", "target": "https://hooks.slack.com/..." }
]
}
Only three condition fields are accepted at the schema level:
confidenceScore, status, and cpiScore (the org-level
Compliance Performance Index). Custom fields are not supported.
If you need to alert on a trace metric that isn't surfaced
here, route the trace through a webhook and run the rule on
your side.
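On the receiving side, that custom rule can be as small as a jq predicate over the delivered trace JSON. A sketch, where .metadata.riskTier is a made-up field standing in for whatever custom metric you need:

# Exit 0 only if the hypothetical custom field matches, then notify Slack directly.
jq -e '.metadata.riskTier == "high"' trace.json \
  && curl -X POST "$SLACK_WEBHOOK" \
       -H "Content-Type: application/json" \
       -d '{"text": "Custom-rule match on incoming Adjudon trace"}'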
Going further
- Pause without losing. PATCH the rule with { "isActive": false } to stop firing without deleting the configuration. Useful during a maintenance window or when iterating on conditions; see the sketch after this list.
- Multi-channel routing. Add multiple actions[] entries to one rule (one Slack channel for compliance, one email distro for legal, one webhook for your internal ticketing) and the engine fans out on every match.
- Severity-based routing. Use the severity field (critical, warning, info) to drive different downstream behaviour: a Slack workflow that pages on-call only when severity is critical, and posts a thread reply otherwise. The severity travels in the message payload.
- Delete is owner-only. The Alerts API intentionally restricts DELETE to the owner role. Use PATCH isActive: false to pause; reserve delete for actual permanent removal.
- CPI-band rule. A rule on field: 'cpiScore', operator: 'lt', value: 70 fires on the org-level Compliance Performance Index, not per-trace. Useful for posting "compliance posture dropped below 70 this week" to a leadership channel.
See also
- Alerts API — the full alert-rule CRUD surface
- Webhook Events — the alert.triggered event payload schema
- Plans & Features — the slackAlerts feature gate
- Performance SLOs — why the Slack delivery never blocks trace ingestion