Volume X · EU AI Act · Edition 2026.1

The Compliance Atlas

Authoritative refs
Regulation (EU) 2024/1689
EU AI Office · NCAs
Verified May 12, 2026

The world's first comprehensive AI regulation. Risk-based rather than control-based: prohibitions for unacceptable risk, strict obligations for high-risk systems, transparency duties for limited-risk, and freedom for minimal-risk. Prohibitions and AI literacy obligations have been in force since February 2, 2025; GPAI obligations since August 2, 2025. The high-risk obligations are in transition: Regulation (EU) 2024/1689 sets August 2, 2026 as the application date, but the Digital Omnibus political agreement reached by the Council and Parliament on May 7, 2026 would postpone standalone Annex III high-risk obligations to December 2, 2027 and Annex I product obligations to August 2, 2028. The original dates remain legally in force until the amendment is published in the Official Journal.

Reading the Atlas

Internal — provider / deployer
External — notified body · authority
Bridge / hand-off

Enforced through CE marking + market surveillance. Most high-risk AI uses internal conformity assessment (Annex VI); biometric ID requires notified-body assessment (Annex VII). Penalties scale with culpability — up to €35M / 7% global turnover for prohibited practices.

Editorial note · Timeline update · May 2026

On May 7, 2026, the Council of the EU and the European Parliament reached provisional political agreement on the Digital Omnibus on AI, which postpones key application dates of Regulation (EU) 2024/1689.

ORIGINAL — STILL LEGALLY IN FORCE

Annex III high-risk: 2 Aug 2026
Annex I product AI: 2 Aug 2027
Art. 50(2) watermark grace: 6 months

POLITICALLY AGREED — PENDING ADOPTION

Annex III high-risk: 2 Dec 2027
Annex I product AI: 2 Aug 2028
Art. 50(2) watermark grace: 3 months (eff. 2 Dec 2026)

The Omnibus also adds a new Article 5 prohibition covering AI systems used to generate non-consensual intimate imagery and child sexual abuse material, and modifies Article 4's AI literacy duty. The amendment must still be formally adopted and published in the Official Journal before the new dates take legal effect. Until then, the original Regulation timeline below remains the binding reference. The phased-timeline diagrams that follow this note show the original (legally-in-force) dates with the politically-agreed shifts annotated where critical.

I.
Layer 01 — Lifecycle

Conformity assessment, CE marking, post-market surveillance

Articles 6, 16, 43, 49, 72
Annex VI · Annex VII

From classification through CE marking to ongoing surveillance

PROVIDER LIFECYCLE → Classify → RMS & QMS → Tech docs → Conformity → CE marking → Post-market → Incidents

INTERNAL — PROVIDER OR DEPLOYER
— Classify AI system · prohibited / high / low · Art. 5–6 + Annex III
— Risk Mgmt System · Art. 9 · iterative across lifecycle
— Technical documentation · Annex IV · to authority on request
— Conformity assessment · Annex VI or VII · internal vs notified body
— EU declaration · Art. 47 · Annex V + EU database registry
— CE marking affixed · Art. 48 · enables EU market access
— Post-market monitoring · Art. 72 · Annex IV · data analysis · trend detection
— FRIA (deployer) · Art. 27 · public sector + essential services
— QMS · Art. 17 · documented · ~ISO 9001 / 42001 fits
— Data governance · Art. 10 · training/test data quality
— Human oversight · Art. 14 · design + measures
— Transparency · Art. 13 · Art. 50 · user info + AI labels
— Authorized rep · non-EU providers · Art. 22 · in-EU contact
— Serious incident report · Art. 73 · 15 days to MSA · 2–72 hr if critical
— Annex III check · 8 high-risk areas + Annex I products

EXTERNAL — NOTIFIED BODY · AI OFFICE · MARKET SURVEILLANCE
— Notified Body selection · if Annex VII required · designated by NCA
— Tech doc review · Annex IV against Art. 8–15 · conformity check
— QMS audit · if Annex VII · on-site / remote
— Certificate issued · 5-year validity · if Annex VII path
— EU database entry · public register · Art. 71 · provider info
— Market surveillance · national MSA + AI Office · access to docs
— EU AI Office (EC) · central oversight · GPAI direct supervision
— National Competent Authorities · notifying + market surveillance · designated by Member States
— Harmonized standards · CEN/CENELEC development · presumption of conformity
— AI Board · EU coordination · advisory + harmonization
— Sandboxes · Art. 57 · ≥1 per Member State
— Penalty enforcement · Art. 99 · MS-defined · €7.5M–€35M / % turnover
— Cross-border coordination · EU-wide investigations via AI Board + Office

CE MARKING IS THE OUTPUT — no CE = no market access in the EU.
Bridges: classification → Annex III check · QMS docs → NB audits · incident → MSA review · CE + EU DB

Phased timeline — what's already in force, what comes August 2, 2026

PHASE 1 · ALREADY IN FORCE — Feb 2, 2025 → prohibitions + AI literacy
— Article 5 prohibitions enforced
— Subliminal manipulation
— Exploitation of vulnerabilities
— Social scoring by public authorities
— Real-time biometric ID (law enforcement)
— Workplace/education emotion AI
— Article 4 AI literacy duty
— €35M / 7% turnover penalty

PHASE 2 · IN FORCE — Aug 2, 2025 → GPAI obligations + governance
— Articles 53–55 GPAI duties
— Tech docs for foundation models
— Copyright compliance policies
— Training data summaries
— Systemic-risk evaluations
— EU AI Office operational
— National authorities designated
— GPAI Code of Practice live

PHASE 3 · IMMINENT — Aug 2, 2026 → general application begins
— Annex III high-risk obligations apply
— Articles 8–17 obligations
— Article 50 transparency
— Penalty regime fully active
— National sandboxes mandatory
— Conformity assessments due
— EU database registration
→ shifts to Dec 2, 2027 under the Omnibus (May 2026)

PHASE 4 · LATER — Aug 2, 2027 → Annex I product AI applies
— Article 6(1) high-risk AI as safety components in regulated products
— Medical devices
— Toys, machinery, vehicles
— Pre-existing GPAI compliance
→ shifts to Aug 2, 2028 under the Omnibus (May 2026), pending Official Journal publication
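The phased gating above can be sketched as a small date check. A minimal illustration, not legal advice: the milestone names are ours, the dates are the Act's, and the Omnibus shifts are treated as pending until adoption.

```python
from datetime import date

# Application dates per Regulation (EU) 2024/1689, plus the politically
# agreed (not yet published) Digital Omnibus shifts.
MILESTONES = {
    "prohibitions_and_ai_literacy": date(2025, 2, 2),
    "gpai_obligations":             date(2025, 8, 2),
    "annex_iii_high_risk":          date(2026, 8, 2),
    "annex_i_product_ai":           date(2027, 8, 2),
}
OMNIBUS_SHIFTS = {  # pending Official Journal publication
    "annex_iii_high_risk": date(2027, 12, 2),
    "annex_i_product_ai":  date(2028, 8, 2),
}

def applicable(on: date, omnibus_adopted: bool = False) -> set[str]:
    """Return the obligation sets applicable on a given date."""
    dates = {**MILESTONES, **(OMNIBUS_SHIFTS if omnibus_adopted else {})}
    return {name for name, start in dates.items() if on >= start}
```

Under the legally-in-force timeline, Annex III obligations apply from August 2, 2026; if the Omnibus is adopted, not until December 2, 2027.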

Why the Act reads differently from everything else in the Atlas

It's a regulation, not a standard. Unlike SOC 2 (criteria), ISO 27001 (standard), HITRUST (framework) or even FedRAMP (federal program), the EU AI Act is binding EU law applicable to anyone placing AI on the EU market or operating it in the EU — regardless of where the provider is based. The closest comparison in the Atlas is HIPAA (also law, also enforced by regulators rather than auditors), but HIPAA's scope is industry-specific while the AI Act applies to any AI system in EU markets.

The risk-tier model is the Act's defining architecture. Rather than a single control catalog applied uniformly, the Act assigns obligations based on the AI system's risk to fundamental rights, health, and safety. A spam filter has near-zero obligations. A CV-screening AI used for hiring sits in Annex III as high-risk and triggers the full Articles 8–17 obligation set. An AI system designed to manipulate behaviour subliminally is prohibited outright. Same regulation; radically different obligations.
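The tier logic can be sketched as a lookup. A toy screen only — real classification is a legal assessment with counsel, not string matching, and the example use strings are hypothetical:

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = "Article 5"
    HIGH = "Annex III / Article 6"
    LIMITED = "Article 50"
    MINIMAL = "no specific obligations"

# Illustrative examples per tier, echoing the pyramid above.
PROHIBITED_USES = {"subliminal manipulation", "social scoring (public)"}
ANNEX_III_USES = {"cv screening", "credit scoring", "exam proctoring"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def classify(intended_use: str) -> Tier:
    """Map an intended use to its risk tier; checked top of pyramid first."""
    use = intended_use.lower()
    if use in PROHIBITED_USES:
        return Tier.PROHIBITED
    if use in ANNEX_III_USES:
        return Tier.HIGH
    if use in TRANSPARENCY_USES:
        return Tier.LIMITED
    return Tier.MINIMAL
```

The same function applied to a spam filter and a CV screener returns opposite ends of the pyramid — which is exactly the "same regulation, radically different obligations" point.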

Conformity assessment looks like product safety law for a reason. The Act borrows mechanics from EU product safety regulation — CE marking, technical documentation, conformity assessment procedures (Annex VI internal vs Annex VII notified body), market surveillance authorities, EU declarations of conformity. If you've worked with CE-marked medical devices or machinery, the procedural shape is familiar. The novelty is applying it to AI rather than physical products.

The Act enforces through market access. Without CE marking and EU declaration, you cannot sell or operate a high-risk AI system in the EU. That's the lever.

The Digital Omnibus on AI was politically agreed May 7, 2026. The Council of the EU and the European Parliament reached provisional agreement on the amendment package, which postpones standalone Annex III high-risk obligations from August 2, 2026 to December 2, 2027, and Annex I-product obligations from August 2, 2027 to August 2, 2028. The Omnibus also adds a new Article 5 prohibition on AI systems used to generate non-consensual intimate imagery and child sexual abuse material, reduces the Article 50(2) watermarking grace period from 6 months to 3 months (effective December 2, 2026), and modifies Article 4's AI literacy duty. Important: the amendment must still be formally adopted and published in the Official Journal before the new dates take legal effect. Until publication, Regulation (EU) 2024/1689's original dates remain legally binding. Most legal practitioners are advising clients to prepare against the original deadlines until publication, while planning continuity work toward the agreed shifted dates.

Penalties are GDPR-scale. Article 99 sets penalty ceilings: €35M or 7% of global turnover for prohibited practices; €15M or 3% for most other violations including high-risk obligations; €7.5M or 1% for incorrect, incomplete, or misleading information to authorities. The "or" is "whichever is higher" — for a large multinational, percentage of turnover dominates. Member States set the actual penalty levels within these ceilings. Like GDPR, regulator discretion will define how these ceilings translate to practice.
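The "whichever is higher" mechanics are worth making concrete. A sketch of the Article 99 ceilings as summarized above — Member States set actual levels within them:

```python
def penalty_ceiling(violation: str, global_turnover_eur: float) -> float:
    """Article 99 ceilings: the higher of a fixed amount or a turnover share."""
    tiers = {  # (fixed EUR cap, share of worldwide annual turnover)
        "prohibited_practice": (35_000_000, 0.07),
        "other_violation":     (15_000_000, 0.03),
        "misleading_info":     (7_500_000,  0.01),
    }
    fixed, share = tiers[violation]
    return max(fixed, share * global_turnover_eur)
```

For a firm with €1B turnover, a prohibited practice tops out at €70M (7% dominates); for a €100M firm, the €35M fixed cap dominates.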

II.
Layer 02 — Risk tiers

Four tiers of risk, four worlds of obligation

Articles 5, 6, 50, 51, 95
Annex III

The pyramid — prohibited at top, minimal-risk at base

PROHIBITED — Article 5
Examples: subliminal manipulation · exploiting vulnerable groups · social scoring (public) · real-time biometric ID (law enforcement, except narrow exceptions)
Obligation: cannot place on market, cannot put into service, cannot use. Penalty: €35M / 7%

HIGH-RISK — Articles 6–7 + Annex III
Examples (Annex III): biometric categorization · critical infrastructure · education access decisions · employment selection · essential services access
Full obligations: RMS · QMS · tech docs · data governance · human oversight · conformity assessment · CE marking · post-market

LIMITED-RISK — Article 50 (transparency)
Examples: chatbots · emotion-recognition systems · biometric categorization (allowed) · generative AI content · deepfakes
Transparency: inform user it's AI · label AI-generated content · label deepfakes · disclose emotion AI use

MINIMAL-RISK — no specific obligations
Examples: spam filters · AI in video games · inventory management · recommendation engines — the vast majority of AI today
No obligations: voluntary codes encouraged (Article 95 codes of conduct) · internal best practices · free to develop & deploy

Annex III — eight categories of high-risk AI systems

ANNEX III · 8 HIGH-RISK CATEGORIES (UPDATABLE BY EC)

1 — BIOMETRICS · biometric ID & categorization
— Remote biometric ID (allowed)
— Sensitive-attribute categorization
— Emotion recognition (workplace & education = prohibited; other contexts = high-risk)

2 — CRITICAL INFRA · critical infrastructure management
— Road traffic safety
— Water, gas, heating, electricity
— Digital infrastructure (intersection with NIS2)

3 — EDUCATION · education & vocational training
— Admissions / placement decisions
— Evaluation of learning outcomes
— Detection of prohibited behavior in tests & exams

4 — EMPLOYMENT · employment & workforce (most-used Annex III category)
— Recruitment / selection
— Promotion / termination
— Task allocation
— Performance monitoring

5 — ESSENTIAL SERVICES · access to essential services
— Public benefits eligibility
— Creditworthiness scoring
— Insurance pricing (life, health)
— Emergency call dispatch

6 — LAW ENFORCEMENT
— Risk assessment of individuals
— Polygraph-equivalent tools
— Reliability of evidence
— Profiling for criminal investigation

7 — MIGRATION · migration, asylum, border
— Risk profiling at borders
— Visa application assessment
— Eligibility for asylum / status
— Identity verification at borders

8 — JUSTICE & DEMOCRACY · justice & democratic processes
— Judicial decision support
— Alternative dispute resolution
— Election / referendum influence on voting behavior

Why classification is the entire game

Annex III is updatable by the European Commission. The Act includes mechanisms for adding new categories and removing categories that no longer warrant high-risk status. Expect Annex III to grow over time as new use cases reveal risks. Programs that classify once and never revisit will fall behind. Mature programs revisit classification at least annually and trigger reclassification reviews when a system's purpose changes meaningfully.

The Article 6(3) escape hatch matters. Even when an AI system fits an Annex III category, providers can exempt it from high-risk classification if the system "does not pose a significant risk of harm" — for instance, when it performs a narrow procedural task, only improves the outcome of a previously completed human activity, or detects decision-making patterns without replacing human assessment. This requires documented assessment by the provider, registered in the EU database. It's a real escape hatch but heavily scrutinized; misuse triggers reclassification by authorities and penalties.

Most organizations have at least one Annex III system without realizing it. Common surprises: HR using AI for resume screening (Annex III §4 — employment), customer service chatbots that decide service tier or eligibility (potentially §5 — essential services), security tools that rank users by risk score (potentially §1 — biometrics if biometric data involved). The classification exercise is not a quick checkbox; it's a workshop with legal, product, and engineering.

The risk classification step takes longer than most teams expect — and it's the foundation everything else rests on. Get it wrong and your entire compliance program addresses the wrong system.

Limited-risk transparency obligations are easy to underestimate. Article 50 covers chatbots ("inform users they're talking to AI"), AI-generated content (label as artificial), deepfakes (label as artificially generated/manipulated), emotion recognition (inform subjects), and biometric categorization in non-prohibited contexts (inform subjects). These obligations apply to Limited-risk and high-risk systems alike. The mechanism is usually simple — a notice, label, or disclosure — but the audit trail and consistency-of-application can become substantial.
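The consistency-of-application problem is exactly what a small disclosure map helps with. A hypothetical helper — the notice wording is illustrative, not mandated text:

```python
# Map of Article 50 disclosure duties to example notice text.
ARTICLE_50_NOTICES = {
    "chatbot":                  "You are interacting with an AI system.",
    "generated_content":        "This content was generated by AI.",
    "deepfake":                 "This content was artificially generated or manipulated.",
    "emotion_recognition":      "An emotion recognition system is in operation.",
    "biometric_categorization": "A biometric categorization system is in operation.",
}

def required_notices(system_kinds: set[str]) -> list[str]:
    """Return the disclosures a deployment must surface, in stable order."""
    missing = system_kinds - ARTICLE_50_NOTICES.keys()
    if missing:
        raise ValueError(f"unmapped system kinds: {sorted(missing)}")
    return [ARTICLE_50_NOTICES[k] for k in sorted(system_kinds)]
```

Raising on unmapped kinds is the point: an unmapped deployment is a gap in the audit trail, not a system with no duty.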

Minimal-risk freedom is real. The vast majority of AI systems in use today — recommendation engines, spam filters, fraud detection, inventory optimization, in-game NPCs — fall in minimal-risk and have no specific Act obligations. Article 95 encourages voluntary codes of conduct, and many providers will adopt them as differentiators or for parent-company alignment, but they're not required. The Act explicitly preserves freedom to develop and deploy minimal-risk AI.

III.
Layer 03 — High-risk obligations

Articles 8–17 — what high-risk providers must build

Articles 8–17 + Annex IV
Articles 26–27 (deployers)

The ten cardinal obligations for high-risk AI providers

ARTICLES 8–17 · BUILDING THE COMPLIANCE STACK

ART. 8 — General compliance
High-risk AI must comply with Articles 9–15. Compliance is the provider's responsibility throughout the lifecycle.

ART. 9 — Risk Management System
Iterative process across the whole lifecycle: identify foreseeable risks · estimate & evaluate · adopt mitigations.

ART. 10 — Data & data governance
Training, validation, and testing data must meet quality criteria: relevant, representative · free of errors, complete · bias examined and addressed.

ART. 11 — Technical documentation
Annex IV checklist: description of system · design choices · training methodology · performance metrics.

ART. 12 — Record-keeping (logs)
Automatic logging of events throughout the system lifecycle. Enables traceability of decisions and outputs. Retention as appropriate.

ART. 13 — Transparency to deployers
Instructions for use: intended purpose · performance & limitations · risks of foreseeable misuse · specifications for input.

ART. 14 — Human oversight
Designed to be effectively overseen by humans: understand capabilities · detect anomalies · interrupt or override.

ART. 15 — Accuracy, robustness, security
Appropriate level of: accuracy (declared) · robustness to errors/attacks · cybersecurity · resilience to data poisoning.

ART. 16 — Provider obligations
Conformity assessment done · EU declaration drawn up · CE marking applied · EU database registration · tech docs retained 10 years.

ART. 17 — Quality Management System
Documented QMS covering: compliance strategy · design & verification procedures · data management procedures. ISO 42001 / 9001 satisfies.

ART. 26 — Deployer obligations
Use as per instructions · assign human oversight · monitor operation · inform workers if affected · cooperate with authorities.

ART. 27 — FRIA (deployer assessment)
Fundamental Rights Impact Assessment required for: public-sector deployers · private bodies providing public services.

How the obligations actually fit together

The QMS is the spine. Article 17 requires a documented Quality Management System that ties together compliance strategy, design verification, data management, post-market monitoring, incident reporting, and recordkeeping. Crucially, conformity with ISO 9001 or ISO/IEC 42001 covers most of what Article 17 asks for. Organizations already pursuing 42001 certification (Volume IX) get most of the way to Article 17 conformity for free.

The RMS (Article 9) is the governance loop. A continuous, iterative risk management system across the full lifecycle. Maps almost exactly onto NIST AI RMF's MAP/MEASURE/MANAGE functions and ISO 42001 Cl. 6.1.2 / 6.1.4. Practitioners building a single AI risk function across all three frameworks tend to use AI RMF as the methodology, 42001 as the management-system shell, and Article 9 as the legal binding obligation.
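The treatment decision inside that loop can be made concrete. A toy illustration — the likelihood × impact scoring scale and the threshold are assumptions of ours, not from the Act or AI RMF:

```python
def treatment_required(likelihood: int, impact: int, threshold: int = 6) -> bool:
    """Flag a risk for mitigation when its score crosses an (assumed) threshold."""
    return likelihood * impact >= threshold

def rms_pass(register: list[dict]) -> list[dict]:
    """One Article 9-style iteration over a risk register:
    estimate & evaluate (score), then mark risks needing mitigation."""
    for risk in register:
        risk["score"] = risk["likelihood"] * risk["impact"]
        risk["treat"] = treatment_required(risk["likelihood"], risk["impact"])
    return register
```

Article 9's point is that this pass repeats across the lifecycle — after retraining, after drift, after incidents — not once at release.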

Annex IV is the documentation menu. Article 11 references Annex IV, which lists everything technical documentation must contain: system description, design choices, computational/hardware requirements, training methodology, datasets used (sources, scope, characteristics), performance metrics including bias-related metrics, validation/testing procedures, and the post-market monitoring plan. Annex IV itself demands roughly 25–50 pages of disciplined documentation; the full technical-documentation file for a high-risk system typically runs 100–300 pages.
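A documentation pipeline can gate a release on that menu. A sketch — the section keys paraphrase the Annex IV headings listed above; the gate itself is our construction:

```python
# Annex IV headings, paraphrased as machine-checkable section keys.
ANNEX_IV_SECTIONS = (
    "system_description",
    "design_choices",
    "compute_and_hardware_requirements",
    "training_methodology",
    "datasets",
    "performance_metrics",
    "validation_and_testing",
    "post_market_monitoring_plan",
)

def missing_sections(tech_doc: dict[str, str]) -> list[str]:
    """List Annex IV sections that are absent or empty in a doc file."""
    return [s for s in ANNEX_IV_SECTIONS if not tech_doc.get(s, "").strip()]
```

A CI job that blocks release while `missing_sections` is non-empty is a cheap way to keep the Annex IV file from rotting between assessments.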

The Act's high-risk obligations look much like ISO 42001 plus Annex IV documentation plus CE marking — but they're legally binding rather than voluntary.

The deployer obligations are often missed. Article 26 places real duties on deployers — assign human oversight, follow instructions for use, monitor operation, inform workers and other affected persons, cooperate with authorities, retain logs. Article 27 adds FRIA (Fundamental Rights Impact Assessment) for public-sector deployers and certain private deployers providing public services. Many enterprises deploying third-party AI think they're off the hook because someone else is the provider — they're not.
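The FRIA trigger reduces to a small predicate. A simplification covering only the two deployer classes named above (Article 27 has further detail we omit here):

```python
def fria_required(deployer_is_public_body: bool,
                  provides_public_service: bool,
                  system_is_annex_iii_high_risk: bool) -> bool:
    """Article 27 sketch: FRIA for public-sector deployers and private
    bodies providing public services, when the system is high-risk."""
    if not system_is_annex_iii_high_risk:
        return False
    return deployer_is_public_body or provides_public_service
```

Note the asymmetry with provider duties: a deployer with no hand in building the system can still owe a FRIA.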

Conformity assessment is the audit step. Most Annex III high-risk systems use Annex VI internal conformity assessment — the provider self-assesses against the requirements, drafts the EU declaration, and applies CE marking. Only Annex III §1 (biometrics) requires Annex VII third-party assessment by a notified body, and even there Article 43(1) permits internal control where harmonized standards are applied in full. This is lighter than expected for most providers. The catch: the provider remains fully liable; misclassification or substandard self-assessment surfaces during market surveillance investigations or after incidents, with penalties applied retrospectively.
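The path choice itself is a two-way branch. A sketch of the Article 43(1) routing described above — a simplification that ignores the Annex I product routes:

```python
def conformity_path(annex_iii_category: int,
                    harmonized_standards_fully_applied: bool = True) -> str:
    """Route an Annex III system to its conformity assessment procedure.

    Biometrics (Annex III point 1) needs a notified body unless harmonized
    standards are applied in full; all other categories self-assess.
    """
    if annex_iii_category == 1 and not harmonized_standards_fully_applied:
        return "Annex VII — notified body"
    return "Annex VI — internal control"
```

An employment system (category 4) always routes to internal control; a biometric system's route depends on its use of harmonized standards.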

IV.
Layer 04 — Cross-framework

EU AI Act in a multi-framework AI program

Mapping to ISO 42001, NIST AI RMF
+ existing security frameworks
| EU AI Act | ISO 42001:2023 | NIST AI RMF 1.0 | ISO 27001:2022 | SOC 2 (TSC) | NIST CSF 2.0 | HITRUST v11 | Shared evidence |
|---|---|---|---|---|---|---|---|
| Art. 5 — Prohibited | A.5 AIIA | MAP 5.1 | — | — | — | — | Use-case classification register, prohibition screening, AIIA |
| Art. 6 + Annex III — High-risk class. | Cl. 6.1.4 AIIA | MAP 1 · MAP 5 | — | — | — | — | AI system inventory, classification rationale, Annex III mapping |
| Art. 9 — RMS (high-risk) | Cl. 6.1 | MAP · MEASURE · MANAGE | Cl. 6.1 | CC3.1–CC3.4 | GV.RM · ID.RA | 03.b | Risk register, methodology, treatment plan, lifecycle integration |
| Art. 10 — Data governance | A.7 | MAP 4 · MEASURE 4 | A.5.12 | C1.1 | PR.DS-01 | 06.c | Datasheets, lineage docs, bias analysis, data quality reports |
| Art. 11 + Annex IV — Tech docs | A.5.4 · A.6.2.5 | GOVERN 5 · MAP 1.6 | A.5.34 | — | — | 10.b | Tech doc file, model cards, datasheets, performance reports |
| Art. 12 — Logging | A.6.2.6 | MEASURE 3 | A.8.15 | CC7.1 | DE.CM-01 | 09.aa | Event logs, retention policy, traceability mappings |
| Art. 13 — Transparency to deployer | A.8 | GOVERN 5 | — | — | — | — | Instructions for use, model cards, capability/limitation docs |
| Art. 14 — Human oversight | A.4.6 · A.6.2.7 | MANAGE 1 · MANAGE 2 | — | CC4.1 | — | 02.a | Oversight design docs, override procedures, training records |
| Art. 15 — Accuracy, robustness, security | A.6.2.8 | MEASURE 2 · MEASURE 3 | A.8.7 · A.8.8 | CC7.2 | PR.DS-01 | 09.j | Eval reports, robustness tests, security tests, red-team |
| Art. 17 — QMS | Cl. 4–10 (full) | GOVERN | Cl. 4–10 | CC1.1–CC9.2 | GV.OC · GV.PO | 00.a · 04.a | QMS manual, policies, internal audit, mgmt review |
| Art. 26 — Deployer use | A.9 | MANAGE 1 | — | — | — | — | Instructions adherence, deployment controls, ongoing monitoring |
| Art. 27 — FRIA (deployer) | Cl. 6.1.4 AIIA | MAP 5 | — | — | — | — | FRIA per system, affected-rights analysis, mitigation register |
| Art. 50 — Transparency | A.8 | GOVERN 5 | — | — | — | — | User-facing disclosures, AI labels, deepfake markers, chatbot notice |
| Art. 53–55 — GPAI | A.10.3 | GOVERN 6 | — | — | — | — | Tech doc, training summary, copyright policy, eval results |
| Art. 72 — Post-market monitoring | Cl. 9.1 | MEASURE 3 | Cl. 9.1 | CC4.1 | DE.CM | 09.aa | Monitoring plan, drift metrics, fairness metrics, dashboards |
| Art. 73 — Serious incidents | Cl. 10.2 | MANAGE 4 | A.5.24–A.5.27 | CC7.3 | RS.MA | 11 | Incident log, RCA, corrective actions, MSA reports |

How the EU AI Act sits relative to everything else

The Act is the regulatory layer, sitting on top of existing standards rather than replacing them. ISO 42001 is the management-system layer that satisfies Article 17 (QMS) and provides the structural shell. NIST AI RMF is the methodology layer that satisfies Article 9 (RMS) and informs MAP/MEASURE/MANAGE practices. ISO 27001 is the security layer that satisfies Article 15's cybersecurity requirement. SOC 2 / HITRUST add commercial assurance.

The dual-cert pattern emerging in 2025–2026 is ISO 42001 + EU AI Act conformity. Organizations pursuing both run unified evidence collection — every artifact serving 42001 also serves the Act, with the FRIA being the only Act-specific net-new artifact. Add ISO 27001 for the security layer (Article 15), and the three frameworks together cover almost everything.
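Unified evidence collection is just a many-to-many map. A hypothetical register sketch — the artifact names and the requirement strings are illustrative examples, not an official mapping:

```python
# Hypothetical unified evidence register: one artifact, many frameworks.
EVIDENCE_MAP = {
    "risk_register":  {"EU AI Act Art. 9", "ISO 42001 Cl. 6.1", "NIST AI RMF MAP"},
    "qms_manual":     {"EU AI Act Art. 17", "ISO 42001 Cl. 4-10"},
    "pentest_report": {"EU AI Act Art. 15", "ISO 27001 A.8.8"},
    "fria":           {"EU AI Act Art. 27"},  # the Act-specific net-new artifact
}

def coverage(framework_prefix: str) -> set[str]:
    """Artifacts that serve any requirement of the given framework."""
    return {artifact for artifact, reqs in EVIDENCE_MAP.items()
            if any(r.startswith(framework_prefix) for r in reqs)}
```

Querying by framework prefix shows the overlap directly: most artifacts serve 42001 and the Act at once, while the FRIA serves only the Act.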

The crosswalk gaps tell the story. Many EU AI Act articles have no equivalent in older frameworks (SOC 2, ISO 27001, NIST CSF, PCI, HIPAA, HITRUST) — particularly the AI-specific ones (Art. 10 data, Art. 14 oversight, Art. 27 FRIA, Art. 50 transparency, Art. 73 incident reporting). This isn't a flaw in the older frameworks; it's a sign that AI governance is genuinely new content that the older frameworks couldn't have anticipated. Programs running only those older frameworks will discover the gaps the moment they engage with the Act.

The Act is binding. ISO 42001 is voluntary. NIST AI RMF is advisory. They complement one another rather than compete.

For non-EU organizations the Act still applies. Like GDPR, its extraterritorial scope catches anyone whose AI is placed on the EU market or whose output is used in the EU. A U.S.-based SaaS company whose AI hiring tool is used by an EU customer's HR team is subject to the Act as the provider of that system. The Authorized Representative requirement (Art. 22) means non-EU providers must designate an in-EU contact. Compliance planning starts with classification, not geography.
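The scope test is deliberately indifferent to establishment. A minimal sketch of that point — a simplification of the Article 2 scope rules, not a complete restatement:

```python
def act_applies(placed_on_eu_market: bool,
                output_used_in_eu: bool,
                provider_established_in_eu: bool) -> bool:
    """Scope turns on the market and on where output is used —
    where the provider sits is deliberately irrelevant to the result."""
    _ = provider_established_in_eu  # intentionally unused
    return placed_on_eu_market or output_used_in_eu
```

A provider with no EU establishment but EU-used output is in scope; an EU-established provider whose system never touches the EU market is not caught by this branch.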

The convergence trajectory is clear. EU AI Act → ISO 42001 + NIST AI RMF mappings are being formalized through 2026. Standardization bodies are publishing harmonized standards (CEN/CENELEC) under Article 40 to provide presumption of conformity. The Code of Practice for GPAI is operational. Member states are designating notifying and market-surveillance authorities. By late 2026, the early scramble will be replaced by routine compliance procedures — though Annex III will continue to expand and harmonized standards will continue to evolve.