Volume IX · ISO/IEC 42001:2023 · Edition 2026.1

The Compliance Atlas

Authoritative refs
ISO/IEC 42001:2023
ISO/IEC 23894:2023 (AI risk)
Verified May 12, 2026

ISO/IEC 42001 is the first management-system standard for AI — the AI Management System (AIMS) cousin of ISO 27001's ISMS. Published December 2023. The first AIMS certifications were issued in 2024. The audit profession is figuring out the evidence playbook in real time, with certification bodies and accreditation bodies racing to publish guidance.

Reading the Atlas

Internal — AIMS owners
External — certification body
Bridge / hand-off

Lane structure parallels ISO 27001 — Stage 1 / Stage 2 / surveillance / recertification — but with AI-specific evidence: model documentation, AI impact assessments, AI lifecycle controls, third-party AI component management.

I.
Layer 01 — Lifecycle

The same 3-year cycle as 27001 — different evidence universe

ISO/IEC 17021-1
+ AIMS-specific protocols

AIMS × Certification body — the audit shape inherits 27001, but the evidence is novel

3-YEAR CERTIFICATION CYCLE: Pre-cert → Stage 1 → Stage 2 → Surveillance Y1 → Surveillance Y2 → Recertification Y3 → cycle restarts

INTERNAL — AIMS OWNERS
Define AIMS scope (Cl. 4.3): AI systems · roles · data
AI risk + impact assessment (Cl. 6.1.2 · 6.1.4): AIIA mandatory
Statement of Applicability: 38 Annex A controls, justify each in/out
Implement controls (Cl. 8, operation): AI lifecycle & data
Internal audit (Cl. 9.2): competent + independent
Management review (Cl. 9.3): incl. AI ethics & impact
AIMS operation continues: model versions · drift · retraining · improvement (Cl. 10)
CB selection: UKAS- or ANAB-accredited, 42001 scope additional
Org AI objectives (Annex C reference): ethics · fairness · safety
AI system inventory: in-scope models, typically incomplete
Third-party AI management (A.10.2 · A.10.3): vendor model evals
Model documentation: datasheets · model cards, 23053 / 24028 references
Surveillance prep: ~⅓ of AIMS scope tested, CB rotates emphasis
Recert prep (year 3): full scope re-examined, like Stage 2 scaled

EXTERNAL — CERTIFICATION BODY
Application review: contract · audit days, 17021-1 mandays
Stage 1 audit: documentation review, readiness assessment
Stage 1 report: readiness opinion, go/no-go
Stage 2 audit: implementation review, examine real models
Certification decision: CB technical-review panel, independent of audit
Certificate issued: 3-year validity, AIMS scope on certificate
Surveillance #1: ~12 months from Stage 2, partial AIMS scope
Surveillance #2: ~24 months from Stage 2, different rotation
Recertification audit: ~33 months, scope re-issued on a new 3-year clock
Auditor competence: AI domain + ISMS, scarce skill 2024-26
AI-specific findings: model docs · AIIAs, novel territory
Mandatory clauses always tested: Cl. 9 + 10 + changes
New AI systems added since Stage 2: special focus
Special audits on AI incident or scope change: extraordinary review

FIRST CERTS ISSUED 2024 — Anthropic, Microsoft, AWS, others publicly certified through 2025–2026
SoA: primary document audited at Stage 2
Internal audit reports: CB reviews effectiveness
AIIA outputs: CB scrutinizes
Model registry → surveillance scope

Why 42001 reads like 27001 with new evidence requirements

The shape is borrowed. ISO/IEC 42001 follows the Harmonized Structure that all ISO management system standards share — Clauses 1-3 (intro/normative refs/terms), Clauses 4-10 (the management system), and an Annex of controls. If you've audited or implemented 27001, the architecture of 42001 will feel immediately familiar: scope, leadership, planning, support, operation, performance evaluation, improvement, plus an Annex A control set with a Statement of Applicability.

The evidence is novel. What's different is what counts as evidence. Information security risk assessment becomes AI risk assessment (informed by ISO/IEC 23894:2023). The Statement of Applicability now covers AI-specific controls. The control evidence includes model cards, datasheets for datasets, AI Impact Assessments, red-team results, fairness evaluations, third-party model evaluations, and incident logs for AI-specific failures. Most of these don't exist in mature form at most organizations even today.

Auditor competence is the bottleneck. ISO/IEC 17021-1 requires lead auditors competent in the management system and the technical domain. For 42001, that means competent in AI — model architectures, training data governance, evaluation methodologies, AI risk concepts, AI ethics frameworks. This skill set is rare. UKAS, ANAB, and other accreditation bodies are still developing competence requirements; CBs are still hiring and training. The likely outcome is that early 42001 certifications will see audit teams with strong ISO 27001 backgrounds and contracted AI subject matter experts.

ISO/IEC 42001 is the audit profession announcing it intends to audit AI. The playbook will be written by the first hundred certifications.

The scope question is uniquely hard. An ISMS scope is a familiar concept — boundary by org unit, geographic location, and information assets. An AIMS scope must define which AI systems are in scope, which lifecycle stages are in scope (development, deployment, monitoring, retirement), and which roles the organization plays for each system (provider, deployer, both, supplier of components). Most organizations don't yet have an inventory of their AI systems sufficient to answer these questions. The AIMS scoping exercise itself often reveals more AI than expected.

The third-party question is uniquely thorny. When you deploy GPT-4 / Claude / Gemini / Llama through an API, you're a deployer of someone else's AI. ISO/IEC 42001's A.10 controls require you to manage risks from those third-party AI components — but the providers don't necessarily share enough information for thorough evaluation. As AI suppliers themselves become 42001-certified (Anthropic, Microsoft, AWS, others publicly certified through 2024-2026), that information sharing becomes easier.

II.
Layer 02 — Control universe

9 control objectives, 38 controls — Annex A applied

ISO/IEC 42001:2023 Annex A
+ Annex B implementation guide

The Annex A architecture — 9 objectives (A.2 through A.10), 38 controls

ANNEX A · 9 CONTROL OBJECTIVES (A.2–A.10) · 38 CONTROLS
A.2 Policies for AI · 2 controls: A.2.2 AI policy · A.2.3 alignment
A.3 Internal organization · 2 controls: A.3.2 roles · A.3.3 reporting concerns
A.4 Resources for AI · 6 controls: data · tooling · system · human · compute
A.5 Assessing impacts · 5 controls: AIIA process · documentation · review
A.6 AI system life cycle · 7 controls, largest objective: design · dev · verify · deploy · op · monitor · retire
A.7 Data for AI systems · 5 controls: acquisition · quality · provenance · prep
A.8 Information for users · 4 controls: capabilities · risks · external reporting
A.9 Use of AI systems · 4 controls: intended use · objectives · ongoing
A.10 Third-party / customer · 3 controls: supplier · customer · third-party components

ANNEX C — ORG OBJECTIVES THAT DRIVE CONTROL SELECTION
Accountability: clear lines of responsibility for AI outcomes
AI expertise: workforce competence in AI throughout lifecycle
Availability: AI system available when expected
Fairness: avoiding unfair discrimination
Maintainability: AI system can be updated and improved
Privacy: protection of personal information
Robustness: graceful degradation; resilience to inputs
Safety: avoidance of harm to people, environment, property
Security: CIA properties of AI systems
Transparency: explainability of AI decisions and outputs

→ DRIVES SoA SELECTIONS: the org's prioritized objectives shape which Annex A controls are required. Annex C objectives are not optional priorities. Each must be considered, ranked by relevance, and tied to controls that address it.

How the Annex A controls fit together

A.6 (AI System Life Cycle) is the largest and most-novel. Seven controls covering the full AI system lifecycle — design, development, verification, deployment, operation, monitoring, retirement. Most organizations don't yet have lifecycle processes mature enough to evidence each stage. Models are deployed without formal verification gates; monitoring is ad hoc; retirement plans don't exist. A.6 is where most early 42001 implementations spend disproportionate time building from scratch.
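The verification-gate gap can be made concrete. A minimal sketch of a lifecycle gate, assuming illustrative stage names and evidence-artifact identifiers (none of these come from the standard's text):

```python
# Hypothetical A.6-style deployment gate: promotion past a lifecycle stage
# is blocked unless the evidence artifacts for that stage exist. Stage names
# and artifact lists are illustrative, not normative.
REQUIRED_EVIDENCE = {
    "design":       ["intended_use_spec", "aiia"],
    "development":  ["model_card", "datasheet"],
    "verification": ["eval_report", "fairness_report"],
    "deployment":   ["deployment_approval", "rollback_plan"],
    "operation":    ["monitoring_config", "incident_runbook"],
}

def gate_check(stage: str, artifacts: set[str]) -> list[str]:
    """Return the evidence artifacts still missing for this stage."""
    return [a for a in REQUIRED_EVIDENCE[stage] if a not in artifacts]

missing = gate_check("verification", {"eval_report"})
# A non-empty result means promotion is blocked until the evidence exists.
```

Even a gate this simple produces the per-stage evidence trail an auditor asks for at Stage 2.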

A.7 (Data for AI Systems) is where data governance becomes AI governance. Five controls — data acquisition, quality, provenance, preparation, plus general data governance. The data lineage requirements are stricter than under traditional information governance. You must be able to demonstrate where training data came from, how it was processed, what biases it carries, and how it's been validated for the AI system's intended use. ISO/IEC 5259 series is the companion standard for data quality requirements.

A.8 (Information for Interested Parties) is the transparency category. Four controls covering what users, customers, affected parties, and external stakeholders are told about AI systems — capabilities, limitations, risks, and how to report issues. Maps closely to EU AI Act Article 50 transparency obligations and to common AI ethics framework principles. The Information for Users requirement is where most organizations realize they don't have user-facing model cards or risk communications.

A.10 (Third-Party and Customer Relationships) handles the supply chain. Three controls covering supplier management, customer relationships, and third-party AI components. This is uniquely difficult when the third party is OpenAI, Anthropic, or Google — providers whose terms and documentation continue to evolve. As more model providers achieve their own 42001 certification, due diligence on them becomes more standardized.

The 38 controls are not optional. Each is included or marked Not Applicable in the SoA, with documented justification. "Not relevant to our use case" is rarely a defensible reason.
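The SoA rule above lends itself to a mechanical completeness check. A sketch, assuming a simple control-id-to-record shape for the SoA (field names are illustrative):

```python
# Illustrative SoA completeness lint. It encodes the rule from the text:
# every control is "included" or "not_applicable", and either way needs a
# documented justification. The dict shape is an assumption.
def soa_findings(soa: dict[str, dict]) -> list[str]:
    findings = []
    for control_id, entry in soa.items():
        if entry.get("status") not in ("included", "not_applicable"):
            findings.append(f"{control_id}: status must be included/not_applicable")
        if not entry.get("justification", "").strip():
            findings.append(f"{control_id}: missing justification")
    return findings

soa = {
    "A.2.2": {"status": "included", "justification": "AI policy required"},
    "A.9.3": {"status": "not_applicable", "justification": ""},
}
# soa_findings(soa) flags A.9.3 for its empty justification.
```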

The Annex C objectives drive the SoA. Unlike ISO 27001 where organizations select Annex A controls based on risk assessment alone, ISO 42001 expects organizations to also consider Annex C's organizational AI objectives — accountability, expertise, availability, fairness, maintainability, privacy, robustness, safety, security, transparency. A health-tech company prioritizes safety and fairness highly; a financial services firm prioritizes accountability and transparency. The SoA must reflect those prioritizations, with Annex A controls justified against them.
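That prioritization can be made explicit. A sketch assuming a hypothetical objective-to-control mapping; the control ids chosen per objective are illustrative, not normative:

```python
# Hypothetical mapping from Annex C objectives to candidate Annex A
# controls. Which controls address which objective is an assumption here;
# a real SoA documents this mapping per organization.
OBJECTIVE_CONTROLS = {
    "safety":         ["A.5.2", "A.6.2.4", "A.8.3"],
    "fairness":       ["A.5.4", "A.7.4", "A.6.2.6"],
    "transparency":   ["A.8.2", "A.8.3"],
    "accountability": ["A.2.2", "A.3.2"],
}

def controls_for(priorities: list[str]) -> set[str]:
    """Union of candidate Annex A controls for the org's prioritized objectives."""
    return {c for obj in priorities for c in OBJECTIVE_CONTROLS[obj]}

# A health-tech firm prioritizing safety and fairness gets a different
# candidate set than a bank prioritizing accountability and transparency.
health_tech = controls_for(["safety", "fairness"])
```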

III.
Layer 03 — Evidence

The AI Impact Assessment is the new SoA

ISO/IEC 42001 Cl. 6.1.4
+ ISO/IEC 42005:2025 (AIIA)

AI Impact Assessment — six elements per AI system

AIIA · A.5.2 – A.5.5 · MUST BE DOCUMENTED PER AI SYSTEM

Element 1 · System & intended use: what the AI system does · who the users are · operating environment · decisions it informs / automates · foreseeable misuse
Element 2 · Affected parties: direct users (operators) · indirect users (subjects of decisions) · third parties (society, environment) · vulnerable populations · stakeholders' values & expectations
Element 3 · Potential impacts: beneficial (efficiency · access · safety) · harmful (bias · privacy · autonomy loss) · direct vs systemic effects · reversible vs irreversible harms · probability & severity estimates
Element 4 · Mitigations & controls: technical controls (e.g., guardrails) · process controls (e.g., human review) · governance controls (e.g., approval gates) · communication (e.g., model cards) · monitoring (e.g., fairness metrics)
Element 5 · Residual risk: what risk remains after mitigations · acceptable to whom (which stakeholders) · documented acceptance / treatment · tied to risk tolerance · sign-off authority defined
Element 6 · Update triggers: significant model changes · new use cases / contexts · new affected populations · new regulatory requirements · periodic review (typically annual)

CB tests AIIA quality during Stage 2 — generic AIIAs that could apply to any system fail. Real AIIAs reference specific data, specific affected groups, specific evaluation results.
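The six elements translate naturally into a per-system record. A minimal sketch; the field names are assumptions, since the standard prescribes content, not a schema:

```python
# Hypothetical per-system AIIA record mirroring the six elements. The field
# names and the default annual-review trigger are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIIA:
    system_and_intended_use: str       # Element 1
    affected_parties: list[str]        # Element 2
    potential_impacts: list[str]       # Element 3
    mitigations: list[str]             # Element 4
    residual_risk: str                 # Element 5, with documented acceptance
    update_triggers: list[str] = field(  # Element 6
        default_factory=lambda: ["annual review"])

aiia = AIIA(
    system_and_intended_use="ticket routing for support queue",
    affected_parties=["support agents", "customers whose tickets are routed"],
    potential_impacts=["misrouting delays for non-English tickets"],
    mitigations=["human review of low-confidence routes"],
    residual_risk="low; accepted by support VP 2026-03",
)
```

One record per in-scope AI system, populated with the specifics above, is the artifact a CB samples at Stage 2.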

Model documentation — what the auditor expects to find

PER-MODEL EVIDENCE — MODEL CARDS · DATASHEETS · EVAL RESULTS

Model cards · what the model is: model architecture · training data summary · intended use cases · known limitations · performance metrics · evaluation methodology · version & lineage. Pattern: Mitchell et al. (2018), "Model Cards"; adopted as industry norm 2020+.
Datasheets for datasets · what the data is: motivation for dataset · composition (instances, labels) · collection process · labeling/annotation methodology · pre-processing & cleaning · recommended uses & misuses · distribution & maintenance. Pattern: Gebru et al. (2018), "Datasheets for Datasets".
Evaluation results · how the model performs: accuracy / performance benchmarks · fairness metrics across subgroups · robustness tests (adversarial) · red-team / safety results · drift monitoring data · incident logs & investigations · re-evaluation cadence.

CB will request representative samples across in-scope models.

Where 42001 evidence work goes wrong

Generic AIIAs that could apply to any system. The most common Stage 2 finding will be AIIAs that are too abstract — describing risks like "potential bias" or "potential security issues" without specifics. A real AIIA references the actual training data, the actual affected populations, the actual evaluation results, the actual mitigations. Auditors test specificity by asking "show me the data behind this fairness claim" or "show me the red-team result that closed this risk."

AI system inventory missing or incomplete. Most organizations don't have a single source of truth for their AI systems. Models live in product team repos, data science notebooks, third-party APIs, embedded in vendor products, in shadow IT. AIMS scope cannot be defined without inventory. Stage 1 audits will repeatedly surface "we found three more systems after we drafted the SoA." Build the inventory first; everything else follows.
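An inventory entry only needs a handful of fields to answer the scope questions above. A sketch with assumed field names:

```python
# Illustrative AI system inventory record capturing the scope dimensions the
# text names: lifecycle stage, the org's role per system, and third-party
# dependencies. Field names are assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str
    lifecycle_stage: str          # design / development / deployment / monitoring / retirement
    role: str                     # provider / deployer / both
    third_party_components: list[str]
    in_aims_scope: bool

inventory = [
    AISystemRecord("support-triage", "cx-team", "deployment",
                   "deployer", ["third-party LLM API"], True),
    AISystemRecord("churn-model", "data-science", "monitoring",
                   "provider", [], False),
]

# The scope question becomes a query, not a guess:
in_scope = [r.name for r in inventory if r.in_aims_scope]
```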

Third-party AI evaluation that's just terms-of-service review. A.10 controls require risk evaluation of third-party AI components — and "we read OpenAI's terms" is not evaluation. Real evaluation includes the model's documentation (cards, evals), the provider's own security/privacy posture (their SOC 2 if available, their 42001 if applicable), and your own testing of the model against your use case. As more model providers achieve 42001 cert, this evaluation step gets easier — but it doesn't go away.
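The three evidence classes named above can be encoded as a completeness check. A sketch with illustrative keys:

```python
# Sketch of an A.10-style third-party evaluation checklist, encoding the
# three evidence classes from the text: provider documentation, provider
# assurance posture, and your own testing. Keys are illustrative.
THIRD_PARTY_EVIDENCE = [
    "model_card_reviewed",         # provider's model documentation / evals
    "provider_assurance_checked",  # their SOC 2 or 42001 cert, if available
    "use_case_testing_done",       # your own testing against your use case
]

def evaluation_complete(evidence: dict[str, bool]) -> bool:
    """Terms-of-service review alone never satisfies this check."""
    return all(evidence.get(k, False) for k in THIRD_PARTY_EVIDENCE)

evaluation_complete({"model_card_reviewed": True})  # incomplete: two classes missing
```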

The first wave of 42001 certifications is creating the evidence playbook. By 2027 there will be conventions; right now, well-prepared organizations are inventing them.

Model documentation as afterthought. Model cards and datasheets created retroactively, six months after a model went into production, often miss the original training context, don't reflect current performance, and read as PR rather than documentation. CBs will compare model documentation against actual model behavior — discrepancies surface as findings. The discipline is to write model cards during model development, treat them as living documents, version-control them alongside the model.
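Version-controlling the card alongside the model makes staleness testable. A minimal sketch, with an assumed card structure:

```python
# Minimal living-model-card sketch: the card carries the model version it
# describes, so drift between card and deployed model is detectable. The
# structure and field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    model_version: str
    intended_use: str
    known_limitations: list[str]
    eval_summary: dict[str, float]

def card_matches_deployment(card: ModelCard, deployed_version: str) -> bool:
    """A CB comparing documentation against actual behavior starts here."""
    return card.model_version == deployed_version

card = ModelCard("triage-v2", "2.3.1", "ticket routing",
                 ["English-only training data"], {"accuracy": 0.91})
card_matches_deployment(card, "2.4.0")  # stale card: a likely audit finding
```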

Internal audit of AI is hard because AI auditors are scarce. Cl. 9.2 requires competent, objective auditors. For AIMS, that competence includes AI domain knowledge — model evaluation, fairness analysis, AI risk concepts. Most organizations don't have this in-house. External AIMS internal-audit services are emerging through 2025-2026, but supply is limited. Pre-Stage 2, organizations should plan on a hybrid approach: an information-security-trained internal auditor partnered with an AI domain expert.

IV.
Layer 04 — Cross-framework

42001 in a multi-framework AI program

Mappings to NIST AI RMF · EU AI Act · 27001
+ data governance frameworks
42001 element NIST AI RMF 1.0 EU AI Act ISO 27001:2022 SOC 2 (TSC) NIST CSF 2.0 HITRUST v11 Shared evidence
Cl. 4 — Context GOVERN 1 Art. 9 RMS scope Cl. 4.1 · 4.3 CC1.1 GV.OC 00.a AIMS scope, AI system inventory, interested parties register
Cl. 5 — Leadership GOVERN 2 Art. 26 deployer obligations Cl. 5.1 · 5.3 CC1.2 · CC1.3 GV.RR 02.a AI policy, governance charter, RACI for AI
Cl. 6.1.2 — AI risk assess MAP 5 · MEASURE 2 Art. 9 RMS Cl. 6.1.2 CC3.2 ID.RA 03.b Risk register w/ AI risks, methodology doc, ISO 23894 reference
Cl. 6.1.4 — AI Impact Assessment MAP 1 · MAP 5 Art. 27 FRIA (deployer) AIIA per AI system, affected-parties analysis, mitigation register
A.2 — AI Policy GOVERN 1.1 Art. 17 QMS A.5.1 CC5.3 GV.PO 04.a AI policy, supporting standards, communication records
A.4 — Resources GOVERN 4 Art. 14 human oversight Cl. 7 CC1.4 PR.AT-02 02.e Org chart with AI roles, training records, compute capacity
A.5 — AI Impact Assessment MAP 5 Art. 27 · Annex IV docs AIIA process documentation, AIIA per system, review log
A.6 — AI lifecycle MAP · MEASURE · MANAGE Art. 9–15 high-risk reqs A.8.25–A.8.32 (SDLC analog) CC8.1 PR.PS-06 10.a · 10.b Lifecycle policy, dev/eval gates, deployment criteria, monitoring
A.7 — Data for AI MAP 4 · MEASURE 4 Art. 10 data governance A.5.12 CC1.4 · C1.1 PR.DS-01 06.c Datasheets, lineage, quality reports, bias/representativeness analysis
A.8 — Information for users GOVERN 5 Art. 13 · Art. 50 A.5.34 (privacy notices) P1 13 Model cards, user-facing AI disclosures, capability/limitation docs
A.9 — Use of AI systems MANAGE 1 Art. 26 deployer use Intended-use specs, deployment criteria, ongoing monitoring SLAs
A.10 — Third-party AI GOVERN 6 Art. 25 value-chain A.5.19–A.5.23 CC9.2 GV.SC 05.k Vendor AI inventory, due-diligence packages, contracts, model evals
Cl. 9.1 — Monitoring MEASURE 3 · 4 Art. 72 post-market Cl. 9.1 CC4.1 DE.CM 09.aa Drift monitoring, fairness monitoring, performance dashboards
Cl. 9.2 — Internal audit GOVERN 4.3 Art. 17 QMS Cl. 9.2 CC4.1 ID.IM 06.h Internal audit programme, AI competence records, findings log
Cl. 9.3 — Mgmt review GOVERN 1.5 Art. 17 QMS Cl. 9.3 CC4.2 GV.OV 06.h Mgmt review minutes incl AI-specific items, decisions, action items
Cl. 10 — Improvement / IR MANAGE 4 Art. 73 incident reporting A.5.24–A.5.27 CC7.3 RS.MA 11 AI incident log, root cause, corrective actions, lessons learned

How 42001 fits with everything else

42001 is the management system layer; AI RMF is the methodology layer; EU AI Act is the regulatory layer. They're not competitors. A mature AI governance program implements 42001's management system, references AI RMF for methodology (especially MEASURE and MANAGE practices), and uses both as the foundation for EU AI Act compliance for high-risk systems. The crosswalk above shows the natural ordering: 42001's clauses and Annex A controls absorb AI RMF's GOVERN/MAP/MEASURE/MANAGE practices and satisfy EU AI Act Article 9 (risk management system) and Article 17 (quality management system).

27001 + 42001 is the dual-cert pattern emerging. Most organizations pursuing 42001 already have ISO 27001. The standards share Cl. 4–10 structure (the Harmonized Structure means policies, leadership, planning, support, operation, performance evaluation, improvement clauses are essentially identical). Internal audit and management review are unified. Risk methodology cascades from 27001's risk approach to 42001's AI-specific risk extensions. The marginal cost of adding 42001 to an existing 27001 program is significantly less than starting 42001 alone.

The dual-certification pattern is emerging: 27001 for the information assets, 42001 for the AI systems built from them. One auditor team, two certificates.

EU AI Act Article 27 (FRIA) maps closely to 42001's AIIA. Both require structured assessment of fundamental rights / human rights impacts before deploying high-risk AI. Organizations that build mature AIIA processes for 42001 satisfy most of the FRIA documentation expectation. Conversely, FRIA-driven assessments work as 42001 AIIAs with minor formatting changes. As EU AI Act enforcement begins August 2, 2026, expect convergence between the two frameworks' assessment artifacts.

The first 42001 certifications publicized through 2024–2026 — Anthropic, Microsoft, AWS, others — represent the industry's first attempt at formalizing AI governance at scale. These early certifications are doing the work of establishing what evidence looks like for AI lifecycle management, third-party model evaluation, AIIA documentation, and AI-specific incident response. Each early cert published makes the next one easier. Practitioners moving into AI governance have a narrow window to be early-mover credentialed in this space.