AIUC-1 — AI Agent Certification Standard
The first independent security, safety, and reliability certification for enterprise AI agents — positioned by its publisher as “SOC 2 for AI agents.” Created by the Artificial Intelligence Underwriting Company (AIUC) and audited by accredited third parties. The wiki’s CMM cites AIUC-1 readiness as D1 L4 evidence and AIUC-1 certification as D1 L5 evidence.
What it is
AIUC-1 is structured as six pillars with more than 50 underlying safeguards (the standard publishes individual safeguards but no single canonical total, so the wiki should not invent one):
| Pillar | Focus |
|---|---|
| A. Data & Privacy | Lawful basis, data minimization, retention, cross-border |
| B. Security | Authentication, secrets, network, infrastructure |
| C. Safety | Harm prevention, refusal, content boundaries |
| D. Reliability | Failure modes, degradation behavior, observability |
| E. Accountability | Logging, traceability, incident response, audit trail |
| F. Society | Catastrophic-misuse / national-security externalities |
Per the validation page's §2 AIUC-1 row, the Society pillar is the only one with no analogue in the wiki's CMM.
Update cadence — a moving target
AIUC-1 is updated formally each quarter. The Q1-2026 update modified 26 requirements and added evidence-category labels (legal / technical / operational / third-party) plus a capability-specific scoping questionnaire. The Q2-2026 update is themed “Strengthening MCP security, agent permissions & third-party risk” — directly relevant to the wiki’s MCP Security and NHI coverage.
Implication for the CMM: a D1 L5 "AIUC-1 certified" claim implicitly means certified against the most recent quarterly refresh, not merely "ever certified." The CMM's L5 evidence requirement makes this explicit: "AIUC-1 certified against the most recent quarterly refresh."
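The freshness rule above can be sketched as a simple quarter comparison. This is an illustrative helper, not anything AIUC publishes; it assumes the "most recent quarterly refresh" coincides with the current calendar quarter, which an assessor would need to confirm against AIUC's actual release dates:

```python
from datetime import date

def quarter(d: date) -> tuple[int, int]:
    """Return (year, quarter) for a date, e.g. 2026-02-03 -> (2026, 1)."""
    return (d.year, (d.month - 1) // 3 + 1)

def is_current_certification(cert_date: date, today: date) -> bool:
    """Treat a certification as L5-fresh only if it was issued in the
    same quarter as the assumed most recent quarterly refresh
    (approximated here as the current quarter)."""
    return quarter(cert_date) == quarter(today)

# A March 2026 certification is fresh within Q1 2026 but stale by Q2.
print(is_current_certification(date(2026, 3, 15), date(2026, 3, 31)))
print(is_current_certification(date(2026, 3, 15), date(2026, 4, 1)))
```

Under this sketch, "ever certified" claims fail the check as soon as the quarter rolls over, which is the behavior the CMM's L5 evidence wording requires.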
Standards crosswalks
AIUC publishes crosswalks against ISO 42001, NIST AI RMF, the EU AI Act, MITRE ATLAS, the OWASP LLM Top 10, OWASP AIVSS, the IBM AI Risk Atlas, Cisco AI Security & Safety, and CSA AICM. AIUC-1 is the only certification standard that maintains a current map across all of these, which makes it the anchoring artifact for the wiki's standards crosswalk at L4+.
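One way an assessor might use this crosswalk list is as a coverage check: given the frameworks an organization already maps its controls to, compute which AIUC-1 crosswalk targets remain unmapped. The target set below comes from the list in this section; the function name and workflow are illustrative assumptions, not part of AIUC-1:

```python
# Crosswalk targets published by AIUC (per the list in this section).
AIUC1_CROSSWALK_TARGETS = {
    "ISO 42001", "NIST AI RMF", "EU AI Act", "MITRE ATLAS",
    "OWASP LLM Top 10", "OWASP AIVSS", "IBM AI Risk Atlas",
    "Cisco AI Security & Safety", "CSA AICM",
}

def crosswalk_gaps(mapped: set[str]) -> set[str]:
    """Crosswalk targets the organization has not yet mapped."""
    return AIUC1_CROSSWALK_TARGETS - mapped

# An org that already maps ISO 42001 and NIST AI RMF still has
# seven crosswalk targets left to cover.
print(sorted(crosswalk_gaps({"ISO 42001", "NIST AI RMF"})))
```

The set difference makes gap analysis mechanical: a crosswalk anchor is only useful at L4+ if the uncovered set is empty or each remaining gap is explicitly accepted.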
Accreditation status (May 2026)
- Schellman — first ANAB-accredited AIUC-1 auditor (Feb 3, 2026); previously the first ANAB-accredited ISO 42001 certification body.
- LRQA — pilot stage as of 2026; not yet accredited.
Two-actor audit model (unusual): AIUC issues the certification based on technical evaluation, while the accredited auditor (Schellman) provides independent evidence collection and reporting. This split differs from ISO 27001 and SOC 2, where the auditing body issues the report directly. A peer reviewer should understand the model before accepting "AIUC-1 certified" as L5 evidence.
Certified organizations (confirmed, as of 2026)
| Org | When | Notes |
|---|---|---|
| UiPath | March 2026 | First enterprise-automation cert; covered “more than 2,000 enterprise risk scenarios” |
| Intercom | 2026 | Certification covers Fin, Intercom's customer-service AI agent |
| ElevenLabs | 2026 | First voice-AI certification; also a Technical Contributor to AIUC-1 |
Direct quotes
- “AIUC-1 is updated formally each quarter to ensure that the standard evolves as technology, risk, and regulation evolves.” — aiuc-1.com
- “The first security, safety, and reliability standard for AI agents.” — Schellman press release, Feb 3 2026
- “More than 2,000 enterprise risk scenarios.” — UiPath certification announcement
How the wiki uses it
| Use | Where |
|---|---|
| D1 L4 evidence | Agentic AI Security Capability Maturity Model — A 2026 Practical Proposal — “AIUC-1 readiness assessment complete” |
| D1 L5 evidence | Agentic AI Security Capability Maturity Model — A 2026 Practical Proposal — “AIUC-1 certified against the most recent quarterly refresh” |
| Standards crosswalk anchor | Agentic AI Security CMM — Standards Crosswalk Matrix — six-pillar map |
| Validation comparator | Validation: Agentic AI Security CMM vs Widely Adopted Standards §2 |
Caveats — what a peer reviewer would surface
Known concerns to flag
- A single accredited auditor (Schellman) is a capacity constraint. LRQA's pilot is the only known second auditor; an enterprise that adopts the CMM L5 path is dependent on Schellman's queue.
- The two-actor audit model is unusual: issuer and auditor are different entities, and the peer-review question is whether the accreditation regime (ANAB) and the issuer (AIUC) maintain independence.
- No single canonical safeguard count published. “50+” is the public framing; the wiki should not invent a fixed number.
- Quarterly update cadence means audit findings can age out fast — D1 L5 evidence has a freshness requirement that auditors and assessors need to enforce.
- AIUC is both standard-setter and certification issuer, with Schellman as the evidence-collecting auditor. This deliberately splits a role that ISO and SOC 2 keep unified; reviewers may push on whether the split improves or weakens independence.
See Also
- Agentic AI Security CMM 2026 — D1 L4/L5 evidence anchor
- Agentic AI Security CMM — Standards Crosswalk Matrix — six-pillar mapping
- Validation: Agentic AI CMM vs Widely Adopted Standards — §2 AIUC-1 row
- ISO/IEC 42001 — paired certification target