# EU AI Act

The EU AI Act (Regulation (EU) 2024/1689, published July 12, 2024) is the world's first comprehensive, binding legal framework for AI. It establishes a risk-based regime: bans on prohibited AI practices, requirements for high-risk AI systems, and transparency obligations for general-purpose AI (GPAI) models.
## Enforcement Timeline

| Date | Obligation |
|---|---|
| February 2, 2025 | Prohibited AI practice bans (Article 5) |
| August 2, 2025 | GPAI model transparency obligations |
| August 2, 2026 | High-risk AI system obligations (Article 11, Annex IV documentation, etc.); the primary enterprise deadline |
| December 2027 (proposed) | High-risk obligations for stand-alone systems, if the Digital Omnibus delay passes (see below) |
## Q1 2026 Status

The August 2, 2026 deadline for high-risk AI system obligations is approaching, but the situation is complex:
Digital Omnibus Delay Proposal: The European Commission’s Digital Omnibus package (November 2025) proposes delaying enforcement of high-risk AI obligations by up to 16 months. If passed:
- Stand-alone AI systems: enforcement shifts from August 2026 to December 2027
- Embedded AI systems: enforcement shifts to August 2028
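The scenarios above can be sketched as a small decision helper. This is a planning aid, not legal advice: the dates on the "Omnibus passes" branch come from a proposal that has not been adopted, and the exact days in December 2027 and August 2028 are illustrative.

```python
from datetime import date

def high_risk_enforcement_date(embedded: bool, omnibus_passed: bool) -> date:
    """Applicable enforcement date for high-risk AI obligations.

    Current law: August 2, 2026 for the primary high-risk deadline.
    Under the Digital Omnibus proposal (not yet adopted), stand-alone
    systems would shift to December 2027 and embedded systems to
    August 2028; the specific days below are illustrative.
    """
    if not omnibus_passed:
        return date(2026, 8, 2)  # current timeline
    # Proposed delayed timeline
    return date(2028, 8, 2) if embedded else date(2027, 12, 2)
```

Until the trilogue concludes, a compliance program should plan against the earlier date in each pair.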
Legislative progress (as of Q1 2026):
- Council agreed negotiating mandate: March 13, 2026
- IMCO/LIBE committees adopted joint position: March 18, 2026 (101 votes in favour, 9 against, 8 abstentions)
- Trilogue negotiations pending
Readiness gap: Only 8 of 27 EU Member States are assessed as ready for full enforcement.
Harmonized standards: CEN and CENELEC missed their 2025 deadline for the harmonized technical standards needed to demonstrate AI Act conformity; they now target end of 2026. The standards under development, prEN 18282 and ETSI prEN 304 223, do not map directly to ISO/IEC 42001.
## Key Requirements (High-Risk Systems)
- Article 11 + Annex IV — Technical documentation requirements effectively mandate AI-BOM-like disclosures: training data, validation data, model architecture, intended purpose, performance metrics, risk management system documentation
- Article 9 — Risk management system throughout lifecycle
- Article 10 — Data governance requirements
- Article 12 — Logging and record-keeping
- Article 14 — Human oversight measures
- Article 50 — Transparency obligations for certain AI systems (disclosing AI interaction, labelling synthetic content and deepfakes); applies August 2026 regardless of the Omnibus delay. GPAI model obligations are set out separately in Articles 53–55 and applied from August 2025
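Article 12's logging duty is schema-free: it requires automatic recording of events over the system's lifetime but prescribes no format. A minimal sketch of an append-only traceability record follows; all field names are assumptions, not mandated by the Act.

```python
import json
from datetime import datetime, timezone

def log_event(system_id: str, event: str, operator: str, **context) -> str:
    """Emit one traceability record as a JSON Lines entry.

    Field names are illustrative, not an official Article 12 schema.
    Recording the operator also supports Article 14 human-oversight
    review of who intervened and when.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        "operator": operator,
        **context,  # arbitrary event-specific details
    }
    return json.dumps(record)

# Example: record a human override of a model decision
line = log_event("credit-scoring-v3", "human_override", "analyst-17",
                 original_decision="deny", final_decision="approve")
```

Appending such lines to write-once storage gives the tamper-evident audit trail that Article 12 record-keeping implies.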
## AI-BOM Implications
EU AI Act Article 11 and Annex IV effectively mandate AI-BOM-like documentation for high-risk systems. However, without a ratified AI-BOM standard, compliance interpretations will vary. This creates near-term demand for AI-BOM tooling even in the absence of a formal standard.
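Pending a ratified AI-BOM standard, one pragmatic interim step is to track Annex IV-style disclosure items internally and flag gaps before audit. A minimal sketch, assuming a field set distilled from the Annex IV items named above (not an official schema):

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class AnnexIVRecord:
    """AI-BOM-like documentation record for one high-risk system.

    Fields mirror the Annex IV categories discussed above; the exact
    schema is an assumption pending harmonized standards.
    """
    intended_purpose: Optional[str] = None
    model_architecture: Optional[str] = None
    training_data: Optional[str] = None
    validation_data: Optional[str] = None
    performance_metrics: Optional[str] = None
    risk_management_doc: Optional[str] = None

    def missing_items(self) -> list[str]:
        """Names of disclosure items not yet documented."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

record = AnnexIVRecord(
    intended_purpose="Resume screening assistant",
    model_architecture="Fine-tuned transformer classifier",
)
gaps = record.missing_items()  # e.g. includes "training_data"
```

A gap report like this does not prove conformity, but it makes the documentation debt visible long before a notified body asks for it.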
## Compliance Pathways

- ISO/IEC 42001 certification — widely positioned as the primary certifiable pathway, but the harmonized standards (prEN 18282) are distinct from ISO/IEC 42001 and still in development
- NIST AI RMF alignment — acceptable for non-EU markets but does not satisfy EU Act requirements directly
- CEN/CENELEC harmonized standards — when finalized (end 2026), will provide presumption of conformity
## Watch Items (2026)
- Digital Omnibus trilogue outcome — determines whether August 2026 or December 2027 deadline applies
- CEN/CENELEC harmonized standards — targeting end 2026; critical for certification pathway
- National AI Authority establishment across EU Member States
- AI Sandbox requirements — applicable August 2, 2026 regardless of Omnibus delay
- Article 50 transparency obligations — apply August 2026
## See Also

- ISO/IEC 42001 — AI Management Systems — primary certifiable compliance pathway for the EU AI Act
- NIST AI Risk Management Framework (AI RMF) — U.S. voluntary counterpart; alignment gap with EU requirements
- Agentic AI Security Capability Maturity Model — A 2026 Practical Proposal — article-to-domain mapping:
  - Art. 9 Risk Mgmt → D1
  - Art. 10 Data → D6
  - Art. 11 + Annex IV (technical documentation) → D8 (full Annex IV item-by-item map in Agentic AI Security CMM — Standards Crosswalk Matrix)
  - Art. 12 Logging → D7 + D9
  - Art. 14 Human oversight → D3 + D9
  - Art. 15 Cybersecurity → D4 + D5
  - Art. 72 Post-market monitoring → D9