The world's first comprehensive AI regulation. Risk-based rather than control-based: prohibitions for unacceptable risks, strict obligations for high-risk systems, transparency for limited-risk, freedom for minimal-risk. Prohibitions and AI literacy obligations have been in force since February 2, 2025; GPAI obligations since August 2, 2025. The high-risk obligations are in transition — Regulation (EU) 2024/1689 sets August 2, 2026 as the application date, but the Digital Omnibus political agreement reached by the Council and Parliament on May 7, 2026 postpones standalone Annex III high-risk obligations to December 2, 2027 and Annex I-product obligations to August 2, 2028. The original dates remain legally in force until the amendment is published in the Official Journal.
Enforced through CE marking + market surveillance. Most high-risk AI uses internal conformity assessment (Annex VI); biometric ID requires notified-body assessment (Annex VII). Penalties scale with the severity of the violation — up to €35M / 7% of global turnover for prohibited practices.
On May 7, 2026, the Council of the EU and the European Parliament reached provisional political agreement on the Digital Omnibus on AI, which postpones key application dates of Regulation (EU) 2024/1689.
| Obligation set | Original (Reg. 2024/1689, in force) | Digital Omnibus (politically agreed) |
|---|---|---|
| Annex III high-risk | 2 Aug 2026 | 2 Dec 2027 |
| Annex I product AI | 2 Aug 2027 | 2 Aug 2028 |
| Art. 50(2) watermark grace | 6 months | 3 months (eff. 2 Dec 2026) |
The Omnibus also adds a new Article 5 prohibition covering AI systems used to generate non-consensual intimate imagery and child sexual abuse material, and modifies Article 4's AI literacy duty. The amendment must still be formally adopted and published in the Official Journal before the new dates take legal effect. Until then, the original Regulation timeline remains the binding reference. The comparison table above shows the original (legally-in-force) dates alongside the politically-agreed shifts.
It's a regulation, not a standard. Unlike SOC 2 (criteria), ISO 27001 (standard), HITRUST (framework) or even FedRAMP (federal program), the EU AI Act is binding EU law applicable to anyone placing AI on the EU market or operating it in the EU — regardless of where the provider is based. The closest comparison in the Atlas is HIPAA (also law, also enforced by regulators rather than auditors), but HIPAA's scope is industry-specific while the AI Act applies to any AI system in EU markets.
The risk-tier model is the Act's defining architecture. Rather than a single control catalog applied uniformly, the Act assigns obligations based on the AI system's risk to fundamental rights, health, and safety. A spam filter has near-zero obligations. A CV-screening AI used for hiring sits in Annex III as high-risk and triggers the full Articles 8–17 obligation set. An AI system designed to manipulate behaviour subliminally is prohibited outright. Same regulation; radically different obligations.
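A minimal sketch of how that tier logic might live in a classification register (Python as notation only; the system names and mappings are hypothetical illustrations, not from the Act's text):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Art. 5 practice - banned outright"
    HIGH = "Annex III / Annex I - full Arts. 8-17 obligations"
    LIMITED = "Art. 50 transparency duties"
    MINIMAL = "no Act-specific obligations"

# Illustrative entries only; real classification is the legal/product/
# engineering workshop described later in this section.
SYSTEM_REGISTER = {
    "spam_filter":        RiskTier.MINIMAL,
    "cv_screening_tool":  RiskTier.HIGH,        # Annex III point 4, employment
    "support_chatbot":    RiskTier.LIMITED,     # Art. 50 disclosure duty
    "subliminal_nudger":  RiskTier.PROHIBITED,  # Art. 5 manipulation ban
}

def tier_for(system_id: str) -> RiskTier:
    """Unknown systems must be classified before deployment."""
    if system_id not in SYSTEM_REGISTER:
        raise KeyError(f"{system_id}: not yet classified against Annex III")
    return SYSTEM_REGISTER[system_id]
```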
Conformity assessment looks like product safety law for a reason. The Act borrows mechanics from EU product safety regulation — CE marking, technical documentation, conformity assessment procedures (Annex VI internal vs Annex VII notified body), market surveillance authorities, EU declarations of conformity. If you've worked with CE-marked medical devices or machinery, the procedural shape is familiar. The novelty is applying it to AI rather than physical products.
The Digital Omnibus on AI was politically agreed May 7, 2026. The Council of the EU and the European Parliament reached provisional agreement on the amendment package, which postpones standalone Annex III high-risk obligations from August 2, 2026 to December 2, 2027, and Annex I-product obligations from August 2, 2027 to August 2, 2028. The Omnibus also adds a new Article 5 prohibition on AI systems used to generate non-consensual intimate imagery and child sexual abuse material, reduces the Article 50(2) watermarking grace period from 6 months to 3 months (effective December 2, 2026), and modifies Article 4's AI literacy duty. Important: the amendment must still be formally adopted and published in the Official Journal before the new dates take legal effect. Until publication, Regulation (EU) 2024/1689's original dates remain legally binding. Most legal practitioners are advising clients to prepare against the original deadlines until publication, while planning program continuity around the agreed later dates.
Penalties are GDPR-scale. Article 99 sets penalty ceilings: €35M or 7% of global turnover for prohibited practices; €15M or 3% for most other violations including high-risk obligations; €7.5M or 1% for incorrect, incomplete, or misleading information to authorities. The "or" is "whichever is higher" — for a large multinational, percentage of turnover dominates. Member States set the actual penalty levels within these ceilings. Like GDPR, regulator discretion will define how these ceilings translate to practice.
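The "whichever is higher" mechanics fit in a few lines. The amounts below are the Article 99 ceilings quoted above; the function itself is a hypothetical illustration, not legal advice:

```python
def penalty_ceiling(violation: str, global_turnover_eur: int) -> int:
    """Art. 99 ceiling: the higher of a fixed amount and a percentage of
    worldwide annual turnover. Member States set actual fines beneath it."""
    tiers = {
        "prohibited_practice": (35_000_000, 7),  # Art. 5 violations
        "other_violation":     (15_000_000, 3),  # incl. high-risk obligations
        "misleading_info":     (7_500_000,  1),  # to authorities
    }
    fixed, pct = tiers[violation]
    return max(fixed, global_turnover_eur * pct // 100)

# For a multinational with EUR 2bn turnover, the percentage dominates:
assert penalty_ceiling("prohibited_practice", 2_000_000_000) == 140_000_000
```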
Annex III is updatable by the European Commission. The Act includes mechanisms for adding new categories and removing categories that no longer warrant high-risk status. Expect Annex III to grow over time as new use cases reveal risks. Programs that classify once and never revisit will fall behind. Mature programs revisit classification at least annually and trigger reclassification reviews when a system's purpose changes meaningfully.
The Article 6(3) escape hatch matters. Even when an AI system fits an Annex III category, providers can exempt it from high-risk classification if the system "does not pose a significant risk of harm" — for instance, when it performs a narrow procedural task, only improves the outcome of a previously completed human activity, or detects decision-making patterns without replacing human assessment. This requires documented assessment by the provider, registered in the EU database. It's a real escape hatch but heavily scrutinized; misuse triggers reclassification by authorities and penalties.
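A hedged sketch of that screen as a decision function. The three conditions are the ones listed above; the profiling carve-out reflects Article 6(3)'s rule that Annex III systems performing profiling of natural persons always remain high-risk:

```python
def article_6_3_exempt(narrow_procedural_task: bool,
                       improves_completed_human_activity: bool,
                       detects_patterns_without_replacing_human: bool,
                       profiles_natural_persons: bool) -> bool:
    """Provider-side derogation screen. The assessment behind each flag
    must be documented and the system registered in the EU database."""
    if profiles_natural_persons:
        return False  # profiling of natural persons is always high-risk
    return (narrow_procedural_task
            or improves_completed_human_activity
            or detects_patterns_without_replacing_human)
```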
Most organizations have at least one Annex III system without realizing it. Common surprises: HR using AI for resume screening (Annex III §4 — employment), customer service chatbots that decide service tier or eligibility (potentially §5 — essential services), security tools that rank users by risk score (potentially §1 — biometrics if biometric data involved). The classification exercise is not a quick checkbox; it's a workshop with legal, product, and engineering.
Limited-risk transparency obligations are easy to underestimate. Article 50 covers chatbots ("inform users they're talking to AI"), AI-generated content (label as artificial), deepfakes (label as artificially generated/manipulated), emotion recognition (inform subjects), and biometric categorization in non-prohibited contexts (inform subjects). These obligations apply to limited-risk and high-risk systems alike. The mechanism is usually simple — a notice, label, or disclosure — but the audit trail and consistency of application can become substantial.
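A hypothetical helper shows how the per-system duties enumerate. The disclosure strings paraphrase Article 50; the boolean flags are assumptions about how an internal register might tag systems:

```python
def article_50_duties(chatbot: bool, generates_content: bool,
                      deepfake: bool, emotion_recognition: bool,
                      biometric_categorisation: bool) -> list[str]:
    """Collect the user-facing disclosures a system owes under Art. 50."""
    duties = []
    if chatbot:
        duties.append("inform users they are interacting with an AI system")
    if generates_content:
        duties.append("mark output as artificially generated")
    if deepfake:
        duties.append("disclose content as artificially generated/manipulated")
    if emotion_recognition:
        duties.append("inform exposed persons that the system is in operation")
    if biometric_categorisation:
        duties.append("inform exposed persons that the system is in operation")
    return duties
```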
Minimal-risk freedom is real. The vast majority of AI systems in use today — recommendation engines, spam filters, fraud detection, inventory optimization, in-game NPCs — fall in minimal-risk and have no specific Act obligations. Article 95 encourages voluntary codes of conduct, and many providers will adopt them as differentiators or for parent-company alignment, but they're not required. The Act explicitly preserves freedom to develop and deploy minimal-risk AI.
The QMS is the spine. Article 17 requires a documented Quality Management System that ties together compliance strategy, design verification, data management, post-market monitoring, incident reporting, and recordkeeping. Crucially, compliance with ISO 9001 or ISO/IEC 42001 satisfies most QMS requirements. Organizations already pursuing 42001 certification (Volume IX) get most of the way to Article 17 conformity for free.
The RMS (Article 9) is the governance loop. A continuous, iterative risk management system across the full lifecycle. It maps almost exactly onto NIST AI RMF's MAP/MEASURE/MANAGE functions and ISO 42001 Cl. 6.1.2 / 6.1.4. Practitioners building a single AI risk function across all three frameworks tend to use AI RMF as the methodology, 42001 as the management-system shell, and Article 9 as the legally binding obligation.
Annex IV is the documentation menu. Article 11 references Annex IV, which lists everything technical documentation must contain: system description, design choices, computational/HW requirements, training methodology, datasets used (sources, scope, characteristics), performance metrics including bias-related metrics, validation/testing procedures, post-market monitoring plan. Annex IV is roughly 25-50 pages of disciplined documentation; the full technical-documentation file for a high-risk system typically runs 100-300 pages.
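That menu translates naturally into a completeness checklist. The field names below paraphrase the Annex IV items just listed; this is a tracking sketch, not the legal structure of the file:

```python
from dataclasses import dataclass, field

@dataclass
class AnnexIVFile:
    system_description: str = ""
    design_choices: str = ""
    compute_hw_requirements: str = ""
    training_methodology: str = ""
    datasets: list[str] = field(default_factory=list)  # sources, scope, characteristics
    performance_metrics: dict[str, float] = field(default_factory=dict)  # incl. bias metrics
    validation_testing_procedures: str = ""
    post_market_monitoring_plan: str = ""

    def missing_sections(self) -> list[str]:
        """Empty sections to close before conformity assessment."""
        return [name for name, value in vars(self).items() if not value]
```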
The deployer obligations are often missed. Article 26 places real duties on deployers — assign human oversight, follow instructions for use, monitor operation, inform workers and other affected persons, cooperate with authorities, retain logs. Article 27 adds FRIA (Fundamental Rights Impact Assessment) for public-sector deployers and certain private deployers providing public services. Many enterprises deploying third-party AI think they're off the hook because someone else is the provider — they're not.
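A simplified trigger check for the Article 27 duty, using the two deployer categories named above (the flags are hypothetical, and the sketch compresses Article 27's full scope):

```python
def fria_required(deploys_annex_iii_system: bool,
                  is_public_body: bool,
                  private_provider_of_public_services: bool) -> bool:
    """Simplified Art. 27 trigger: FRIA before first use for public-sector
    deployers and certain private deployers providing public services."""
    return deploys_annex_iii_system and (
        is_public_body or private_provider_of_public_services)
```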
Conformity assessment is the audit step. Most Annex III high-risk systems use Annex VI internal conformity assessment — the provider self-assesses against the requirements, drafts the EU declaration, applies CE marking. Only Annex III §1 (biometric ID) requires Annex VII third-party assessment by a notified body. This is lighter than expected for most providers. The catch: the provider remains fully liable; misclassification or substandard self-assessment surfaces during market surveillance investigations or after incidents, with penalties applied retrospectively.
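As a routing rule, the paragraph above compresses to the sketch below (Article 43 adds nuance around harmonised standards that this deliberately omits):

```python
def conformity_route(annex_iii_point: int) -> str:
    """Which conformity assessment procedure applies to an Annex III system."""
    if annex_iii_point == 1:  # biometric identification
        return "Annex VII: third-party assessment by a notified body"
    return "Annex VI: internal conformity assessment by the provider"
```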
| EU AI Act | ISO 42001:2023 | NIST AI RMF 1.0 | ISO 27001:2022 | SOC 2 (TSC) | NIST CSF 2.0 | HITRUST v11 | Shared evidence |
|---|---|---|---|---|---|---|---|
| Art. 5 — Prohibited | A.5 AIIA | MAP 5.1 | — | — | — | — | Use-case classification register, prohibition screening, AIIA |
| Art. 6 + Annex III — High-risk class | Cl. 6.1.4 AIIA | MAP 1 · MAP 5 | — | — | — | — | AI system inventory, classification rationale, Annex III mapping |
| Art. 9 — RMS (high-risk) | Cl. 6.1 | MAP · MEASURE · MANAGE | Cl. 6.1 | CC3.1 – CC3.4 | GV.RM · ID.RA | 03.b | Risk register, methodology, treatment plan, lifecycle integration |
| Art. 10 — Data governance | A.7 | MAP 4 · MEASURE 4 | A.5.12 | C1.1 | PR.DS-01 | 06.c | Datasheets, lineage docs, bias analysis, data quality reports |
| Art. 11 + Annex IV — Tech docs | A.5.4 · A.6.2.5 | GOVERN 5 · MAP 1.6 | A.5.34 | — | — | 10.b | Tech doc file, model cards, datasheets, performance reports |
| Art. 12 — Logging | A.6.2.6 | MEASURE 3 | A.8.15 | CC7.1 | DE.CM-01 | 09.aa | Event logs, retention policy, traceability mappings |
| Art. 13 — Transparency to deployer | A.8 | GOVERN 5 | — | — | — | — | Instructions for use, model cards, capability/limitation docs |
| Art. 14 — Human oversight | A.4.6 · A.6.2.7 | MANAGE 1 · MANAGE 2 | — | CC4.1 | — | 02.a | Oversight design docs, override procedures, training records |
| Art. 15 — Acc., robust., security | A.6.2.8 | MEASURE 2 · MEASURE 3 | A.8.7 · A.8.8 | CC7.2 | PR.DS-01 | 09.j | Eval reports, robustness tests, security tests, red-team |
| Art. 17 — QMS | Cl. 4–10 (full) | GOVERN | Cl. 4–10 | CC1.1 – CC9.2 | GV.OC · GV.PO | 00.a · 04.a | QMS manual, policies, internal audit, mgmt review |
| Art. 26 — Deployer use | A.9 | MANAGE 1 | — | — | — | — | Instructions adherence, deployment controls, ongoing monitoring |
| Art. 27 — FRIA (deployer) | Cl. 6.1.4 AIIA | MAP 5 | — | — | — | — | FRIA per system, affected-rights analysis, mitigation register |
| Art. 50 — Transparency | A.8 | GOVERN 5 | — | — | — | — | User-facing disclosures, AI labels, deepfake markers, chatbot notice |
| Art. 53–55 — GPAI | A.10.3 | GOVERN 6 | — | — | — | — | Tech doc, training summary, copyright policy, eval results |
| Art. 72 — Post-market monitoring | Cl. 9.1 | MEASURE 3 | Cl. 9.1 | CC4.1 | DE.CM | 09.aa | Monitoring plan, drift metrics, fairness metrics, dashboards |
| Art. 73 — Serious incidents | Cl. 10.2 | MANAGE 4 | A.5.24 – A.5.27 | CC7.3 | RS.MA | 11 | Incident log, RCA, corrective actions, MSA reports |
The Act is the regulatory layer, sitting on top of existing standards rather than replacing them. ISO 42001 is the management-system layer that satisfies Article 17 (QMS) and provides the structural shell. NIST AI RMF is the methodology layer that satisfies Article 9 (RMS) and informs MAP/MEASURE/MANAGE practices. ISO 27001 is the security layer that satisfies Article 15's cybersecurity requirement. SOC 2 / HITRUST add commercial assurance.
The dual-cert pattern emerging in 2025-2026 is ISO 42001 + EU AI Act conformity. Organizations pursuing both run unified evidence collection — every artifact serving 42001 also serves the Act, with the FRIA typically the only net-new Act-specific artifact. Add ISO 27001 for the security layer (Article 15), and the three frameworks together cover almost everything.
The crosswalk gaps tell the story. Many EU AI Act articles have no equivalent in older frameworks (SOC 2, ISO 27001, NIST CSF, PCI, HIPAA, HITRUST) — particularly the AI-specific ones (Art. 10 data, Art. 14 oversight, Art. 27 FRIA, Art. 50 transparency, Art. 73 incident reporting). This isn't a flaw in the older frameworks; it's a sign that AI governance is genuinely new content that the older frameworks couldn't have anticipated. Programs running only those older frameworks will discover the gaps the moment they engage with the Act.
For non-EU organizations the Act still applies. Like GDPR, extraterritorial scope catches anyone whose AI is placed on the EU market or whose output is used in the EU. A U.S.-based SaaS deploying an AI hiring tool used by an EU customer's HR team is, as a provider, subject to the Act for that system. The Authorized Representative requirement (Art. 22) means non-EU providers must designate an in-EU contact. Compliance planning starts with classification, not geography.
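The two-trigger scope test and its Article 22 consequence are mechanical enough to sketch (simplified from Article 2; the flags are assumptions about how a scoping questionnaire might be encoded):

```python
def in_scope(placed_on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """Simplified Art. 2 territorial test: provider location is irrelevant
    if either trigger fires."""
    return placed_on_eu_market or output_used_in_eu

def needs_authorised_representative(system_in_scope: bool,
                                    provider_established_in_eu: bool) -> bool:
    """Art. 22: non-EU providers of in-scope high-risk systems must appoint
    an authorised representative established in the EU."""
    return system_in_scope and not provider_established_in_eu
```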
The convergence trajectory is clear. EU AI Act → ISO 42001 + NIST AI RMF mappings are being formalized through 2026. Standardization bodies are publishing harmonized standards (CEN/CENELEC) under Article 40 to provide presumption of conformity. The Code of Practice for GPAI is operational. Member states are designating notifying and market-surveillance authorities. By late 2026, the early scramble will be replaced by routine compliance procedures — though Annex III will continue to expand and harmonized standards will continue to evolve.