ISO/IEC 42001 is the first management-system standard for AI — the AI Management System (AIMS) cousin of ISO 27001's ISMS. Published December 2023. The first AIMS certifications were issued in 2024. The audit profession is figuring out the evidence playbook in real time, with certification bodies and accreditation bodies racing to publish guidance.
The audit structure parallels ISO 27001 — Stage 1 / Stage 2 / surveillance / recertification — but the evidence is AI-specific: model documentation, AI impact assessments, AI lifecycle controls, third-party AI component management.
The shape is borrowed. ISO/IEC 42001 follows the Harmonized Structure that all ISO management system standards share — Clauses 1-3 (intro/normative refs/terms), Clauses 4-10 (the management system), and an Annex of controls. If you've audited or implemented 27001, the architecture of 42001 will feel immediately familiar: scope, leadership, planning, support, operation, performance evaluation, improvement, plus an Annex A control set with a Statement of Applicability.
The evidence is novel. What's different is what counts as evidence artifacts. Risk Analysis becomes AI Risk Assessment (informed by ISO/IEC 23894:2023). The Statement of Applicability now covers AI-specific controls. The control evidence includes model cards, datasheets for datasets, AI Impact Assessments, red-team results, fairness evaluations, third-party model evaluations, and incident logs for AI-specific failures. Most of these don't exist in mature form at most organizations even today.
Auditor competence is the bottleneck. ISO/IEC 17021-1 requires lead auditors competent in the management system and the technical domain. For 42001, that means competent in AI — model architectures, training data governance, evaluation methodologies, AI risk concepts, AI ethics frameworks. This skill set is rare. UKAS, ANAB, and other accreditation bodies are still developing competence requirements; CBs are still hiring and training. The likely outcome is that early 42001 certifications will see audit teams with strong ISO 27001 backgrounds and contracted AI subject matter experts.
The scope question is uniquely hard. An ISMS scope is a familiar concept — boundary by org unit, geographic location, and information assets. An AIMS scope must define which AI systems are in scope, which lifecycle stages are in scope (development, deployment, monitoring, retirement), and which roles the organization plays for each system (provider, deployer, both, supplier of components). Most organizations don't yet have an inventory of their AI systems sufficient to answer these questions. The AIMS scoping exercise itself often reveals more AI than expected.
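The scoping exercise described above can be made concrete as a simple inventory record. This is a minimal sketch, not anything mandated by the standard: the field names, enum values, and example systems are all illustrative, but the three scoping dimensions (systems, lifecycle stages, organizational roles) come straight from the text.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"                 # we build/train the system
    DEPLOYER = "deployer"                 # we operate someone else's system
    COMPONENT_SUPPLIER = "component supplier"

class Stage(Enum):
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    RETIREMENT = "retirement"

@dataclass
class AISystem:
    name: str
    owner: str                # accountable team or individual
    roles: set               # which Role(s) the org plays for this system
    stages: set              # which lifecycle Stage(s) the org controls
    in_aims_scope: bool = True

def scope_register(inventory):
    """Return the systems the AIMS scope statement must enumerate."""
    return [s for s in inventory if s.in_aims_scope]

# Illustrative inventory entries — names are hypothetical.
inventory = [
    AISystem("support-chatbot", "CX Eng", {Role.DEPLOYER},
             {Stage.DEPLOYMENT, Stage.MONITORING}),
    AISystem("fraud-model-v3", "Risk DS", {Role.PROVIDER, Role.DEPLOYER},
             {Stage.DEVELOPMENT, Stage.DEPLOYMENT, Stage.MONITORING, Stage.RETIREMENT}),
]
print([s.name for s in scope_register(inventory)])
# -> ['support-chatbot', 'fraud-model-v3']
```

Even a structure this small forces the questions the Stage 1 audit will ask: who owns each system, which lifecycle stages the organization actually controls, and whether it is acting as provider, deployer, or both.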
The third-party question is uniquely thorny. When you deploy GPT-4 / Claude / Gemini / Llama through an API, you're a deployer of someone else's AI. ISO/IEC 42001's A.10 controls require you to manage risks from those third-party AI components — but the providers don't necessarily share enough information for thorough evaluation. As AI suppliers themselves become 42001-certified (Anthropic, Microsoft, AWS, others publicly certified through 2024-2026), that information sharing becomes easier.
A.6 (AI System Life Cycle) is the largest and most novel control category. Its controls cover the full AI system lifecycle — design, development, verification, deployment, operation, monitoring, retirement. Most organizations don't yet have lifecycle processes mature enough to evidence each stage. Models are deployed without formal verification gates; monitoring is ad hoc; retirement plans don't exist. A.6 is where most early 42001 implementations spend disproportionate time building from scratch.
A.7 (Data for AI Systems) is where data governance becomes AI governance. Five controls — data acquisition, quality, provenance, preparation, plus general data governance. The data lineage requirements are stricter than under traditional information governance. You must be able to demonstrate where training data came from, how it was processed, what biases it carries, and how it's been validated for the AI system's intended use. The ISO/IEC 5259 series is the companion standard for data quality requirements.
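The four lineage questions above (source, processing, biases, validation) can be treated as a record that must be complete before an auditor asks. A minimal sketch — the field names are illustrative, not drawn from A.7 or ISO/IEC 5259:

```python
from dataclasses import dataclass, fields

@dataclass
class DatasetProvenance:
    # Each field mirrors one question A.7 evidence must answer.
    source: str            # where the training data came from
    collection_date: str
    preprocessing: str     # how it was processed
    known_biases: str      # what biases it carries
    validation: str        # how it was validated for intended use

def lineage_gaps(record):
    """Flag unanswered provenance questions before an audit does."""
    return [f.name for f in fields(record) if not getattr(record, f.name).strip()]

# Hypothetical record with one gap left open.
rec = DatasetProvenance(
    source="internal CRM export", collection_date="2025-03",
    preprocessing="dedup + PII scrub", known_biases="",
    validation="holdout eval against intended use",
)
print(lineage_gaps(rec))  # -> ['known_biases']
```

The point of the exercise is that "we don't know" becomes a visible, listable gap rather than something discovered mid-audit.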
A.8 (Information for Interested Parties) is the transparency category. Four controls covering what users, customers, affected parties, and external stakeholders are told about AI systems — capabilities, limitations, risks, and how to report issues. Maps closely to EU AI Act Article 50 transparency obligations and to common AI ethics framework principles. The Information for Users requirement is where most organizations realize they don't have user-facing model cards or risk communications.
A.10 (Third-Party and Customer Relationships) handles the supply chain. Three controls covering supplier management, customer relationships, and third-party AI components. This is uniquely difficult when the third party is OpenAI, Anthropic, or Google — providers whose terms and documentation continue to evolve. As more model providers achieve their own 42001 certification, due diligence on them becomes more standardized.
The Annex C objectives drive the SoA. Unlike ISO 27001, where organizations select Annex A controls based on risk assessment alone, ISO 42001 expects organizations to also consider Annex C's organizational AI objectives — accountability, expertise, availability, fairness, maintainability, privacy, robustness, safety, security, transparency. A health-tech company prioritizes safety and fairness highly; a financial services firm prioritizes accountability and transparency. The SoA must reflect those prioritizations, with Annex A controls justified against them.
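The objective-to-control linkage can be sketched as data: each SoA row names the Annex C objectives it serves, and any applicable control with no linked objective is a justification gap. The control names, justifications, and priority weights below are hypothetical examples, not quotations from the standard.

```python
# Annex C objectives ranked by the org — a health-tech weighting, per the example above.
objectives_priority = {
    "safety": 1, "fairness": 1, "accountability": 2,
    "transparency": 2, "robustness": 3, "privacy": 3,
}

# Illustrative SoA rows: each Annex A control justified against the objectives it serves.
soa = [
    {"control": "A.5 AI impact assessment", "applicable": True,
     "objectives": ["safety", "fairness"],
     "justification": "clinical triage model directly affects patients"},
    {"control": "A.6 lifecycle verification gates", "applicable": True,
     "objectives": ["safety", "robustness"],
     "justification": "pre-deployment evaluation gate for model releases"},
]

def unjustified(soa_rows):
    """Applicable controls with no linked Annex C objective — an audit finding in waiting."""
    return [r["control"] for r in soa_rows if r["applicable"] and not r["objectives"]]

print(unjustified(soa))  # -> []
```

Keeping the linkage machine-readable also makes it trivial to answer the auditor's inverse question: for a given objective, which controls implement it.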
Generic AIIAs that could apply to any system. The most common Stage 2 finding will be AIIAs that are too abstract — describing risks like "potential bias" or "potential security issues" without specifics. A real AIIA references the actual training data, the actual affected populations, the actual evaluation results, the actual mitigations. Auditors test specificity by asking "show me the data behind this fairness claim" or "show me the red-team result that closed this risk."
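The specificity test described above can be automated in miniature: require that every AIIA carries concrete references for the four evidence classes named in the text. The field names are an assumption for illustration, not an AIIA schema from the standard.

```python
# The four specifics a real AIIA must reference, per the paragraph above.
REQUIRED_SPECIFICS = (
    "training_data_ref",     # the actual training data
    "affected_populations",  # the actual affected populations
    "evaluation_results",    # the actual evaluation results
    "mitigations",           # the actual mitigations
)

def aiia_specificity_gaps(aiia):
    """Return the concrete-evidence fields a generic AIIA leaves empty."""
    return [k for k in REQUIRED_SPECIFICS if not aiia.get(k)]

# The Stage 2 finding in miniature: a risk statement with no specifics behind it.
generic = {"risk": "potential bias"}
print(aiia_specificity_gaps(generic))
# -> ['training_data_ref', 'affected_populations', 'evaluation_results', 'mitigations']
```

A check like this doesn't prove the AIIA is good — it only catches the AIIA that could apply to any system, which is exactly the failure mode auditors probe for.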
AI system inventory missing or incomplete. Most organizations don't have a single source of truth for their AI systems. Models live in product team repos, data science notebooks, third-party APIs, embedded in vendor products, in shadow IT. AIMS scope cannot be defined without inventory. Stage 1 audits will repeatedly surface "we found three more systems after we drafted the SoA." Build the inventory first; everything else follows.
Third-party AI evaluation that's just terms-of-service review. A.10 controls require risk evaluation of third-party AI components — and "we read OpenAI's terms" is not evaluation. Real evaluation includes the model's documentation (cards, evals), the provider's own security/privacy posture (their SOC 2 if available, their 42001 if applicable), and your own testing of the model against your use case. As more model providers achieve 42001 cert, this evaluation step gets easier — but it doesn't go away.
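The three evidence classes above (model documentation, provider posture, own testing) are the difference between evaluation and a terms-of-service read. A minimal sketch of the gate — the category keys are illustrative, taken from the paragraph rather than from A.10's wording:

```python
def evaluate_third_party(evidence):
    """Pass only if all three evidence classes are present; return the gaps otherwise."""
    required = {
        "model_documentation",   # provider's model card, published evals
        "provider_posture",      # provider's SOC 2 / 42001 cert, where available
        "own_use_case_testing",  # your own testing against your use case
    }
    present = {k for k, v in evidence.items() if v}
    missing = sorted(required - present)
    return (not missing, missing)

# A vendor file with documentation but no posture evidence and no in-house testing.
ok, gaps = evaluate_third_party({
    "model_documentation": "model card v2",
    "provider_posture": None,
})
print(ok, gaps)  # -> False ['own_use_case_testing', 'provider_posture']
```

Note that a provider's own 42001 certificate only fills the middle bucket; the testing-against-your-use-case bucket stays yours regardless.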
Model documentation as afterthought. Model cards and datasheets created retroactively, six months after a model went into production, often miss the original training context, don't reflect current performance, and read as PR rather than documentation. CBs will compare model documentation against actual model behavior — discrepancies surface as findings. The discipline is to write model cards during model development, treat them as living documents, version-control them alongside the model.
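The "living document, version-controlled alongside the model" discipline can be enforced mechanically: keep the card as structured data next to the model artifact and hash it, so CI can detect a model version shipped without a matching card update. The card fields and values below are a hypothetical example, not a required model-card schema.

```python
import hashlib
import json

# Minimal model-card record kept in the same repo as the model artifact.
model_card = {
    "model": "fraud-model",
    "version": "3.2.0",   # bumped with every retrain, in lockstep with the artifact
    "training_data": "transactions 2023-01..2024-12 (see datasheet ds-017)",
    "intended_use": "flag transactions for human review; not automated blocking",
    "known_limitations": "degrades on transaction types absent from training window",
    "eval_summary": {"auroc": 0.91, "fairness_gap": 0.03},
}

# Deterministic digest of the card; a CI gate can refuse a model release
# whose recorded card digest hasn't changed since the previous version.
card_digest = hashlib.sha256(
    json.dumps(model_card, sort_keys=True).encode()
).hexdigest()
print(card_digest[:12])
```

Writing the card at development time and versioning it this way is also what lets a CB compare documented behavior against actual behavior without the card having drifted into PR.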
Internal audit of AI is hard because AI auditors are scarce. Cl. 9.2 requires competent, objective auditors. For AIMS, that competence includes AI domain knowledge — model evaluation, fairness analysis, AI risk concepts. Most organizations don't have this competence in-house. External AIMS internal audit is emerging as a service offering through 2025-2026, but supply is limited. Ahead of Stage 2, organizations should plan on a hybrid team (an information-security-trained internal auditor partnered with an AI domain expert).
| 42001 element | NIST AI RMF 1.0 | EU AI Act | ISO 27001:2022 | SOC 2 (TSC) | NIST CSF 2.0 | HITRUST v11 | Shared evidence |
|---|---|---|---|---|---|---|---|
| Cl. 4 — Context | GOVERN 1 | Art. 9 RMS scope | Cl. 4.1 · 4.3 | CC1.1 | GV.OC | 00.a | AIMS scope, AI system inventory, interested parties register |
| Cl. 5 — Leadership | GOVERN 2 | Art. 26 deployer obligations | Cl. 5.1 · 5.3 | CC1.2 · CC1.3 | GV.RR | 02.a | AI policy, governance charter, RACI for AI |
| Cl. 6.1.2 — AI risk assess | MAP 5 · MEASURE 2 | Art. 9 RMS | Cl. 6.1.2 | CC3.2 | ID.RA | 03.b | Risk register w/ AI risks, methodology doc, ISO 23894 reference |
| Cl. 6.1.4 — AI Impact Assessment | MAP 1 · MAP 5 | Art. 27 FRIA (deployer) | — | — | — | — | AIIA per AI system, affected-parties analysis, mitigation register |
| A.2 — AI Policy | GOVERN 1.1 | Art. 17 QMS | A.5.1 | CC5.3 | GV.PO | 04.a | AI policy, supporting standards, communication records |
| A.4 — Resources | GOVERN 4 | Art. 14 human oversight | Cl. 7 | CC1.4 | PR.AT-02 | 02.e | Org chart with AI roles, training records, compute capacity |
| A.5 — AI Impact Assessment | MAP 5 | Art. 27 · Annex IV docs | — | — | — | — | AIIA process documentation, AIIA per system, review log |
| A.6 — AI lifecycle | MAP · MEASURE · MANAGE | Art. 9–15 high-risk reqs | A.8.25 – A.8.32 (SDLC analog) | CC8.1 | PR.PS-06 | 10.a · 10.b | Lifecycle policy, dev/eval gates, deployment criteria, monitoring |
| A.7 — Data for AI | MAP 4 · MEASURE 4 | Art. 10 data governance | A.5.12 | CC1.4 · C1.1 | PR.DS-01 | 06.c | Datasheets, lineage, quality reports, bias/representativeness analysis |
| A.8 — Information for users | GOVERN 5 | Art. 13 · Art. 50 | A.5.34 (privacy notices) | P1 | — | 13 | Model cards, user-facing AI disclosures, capability/limitation docs |
| A.9 — Use of AI systems | MANAGE 1 | Art. 26 deployer use | — | — | — | — | Intended-use specs, deployment criteria, ongoing monitoring SLAs |
| A.10 — Third-party AI | GOVERN 6 | Art. 25 value-chain | A.5.19 – A.5.23 | CC9.2 | GV.SC | 05.k | Vendor AI inventory, due-diligence packages, contracts, model evals |
| Cl. 9.1 — Monitoring | MEASURE 3 · 4 | Art. 72 post-market | Cl. 9.1 | CC4.1 | DE.CM | 09.aa | Drift monitoring, fairness monitoring, performance dashboards |
| Cl. 9.2 — Internal audit | GOVERN 4.3 | Art. 17 QMS | Cl. 9.2 | CC4.1 | ID.IM | 06.h | Internal audit programme, AI competence records, findings log |
| Cl. 9.3 — Mgmt review | GOVERN 1.5 | Art. 17 QMS | Cl. 9.3 | CC4.2 | GV.OV | 06.h | Mgmt review minutes incl AI-specific items, decisions, action items |
| Cl. 10 — Improvement / IR | MANAGE 4 | Art. 73 incident reporting | A.5.24 – A.5.27 | CC7.3 | RS.MA | 11 | AI incident log, root cause, corrective actions, lessons learned |
42001 is the management system layer; AI RMF is the methodology layer; EU AI Act is the regulatory layer. They're not competitors. A mature AI governance program implements 42001's management system, references AI RMF for methodology (especially MEASURE and MANAGE practices), and uses both as the foundation for EU AI Act compliance for high-risk systems. The crosswalk above shows the natural ordering: 42001's clauses and Annex A controls absorb AI RMF's GOVERN/MAP/MEASURE/MANAGE practices and satisfy EU AI Act Article 9 (risk management system) and Article 17 (quality management system).
27001 + 42001 is the dual-cert pattern emerging. Most organizations pursuing 42001 already have ISO 27001. The standards share Cl. 4–10 structure (the Harmonized Structure means policies, leadership, planning, support, operation, performance evaluation, improvement clauses are essentially identical). Internal audit and management review are unified. Risk methodology cascades from 27001's risk approach to 42001's AI-specific risk extensions. The marginal cost of adding 42001 to an existing 27001 program is significantly less than starting 42001 alone.
EU AI Act Article 27 (FRIA) maps closely to 42001's AIIA. Both require structured assessment of fundamental rights / human rights impacts before deploying high-risk AI. Organizations that build mature AIIA processes for 42001 satisfy most of the FRIA documentation expectation. Conversely, FRIA-driven assessments work as 42001 AIIAs with minor formatting changes. As EU AI Act enforcement begins August 2, 2026, expect convergence between the two frameworks' assessment artifacts.
The first 42001 certifications publicized through 2024–2026 — Anthropic, Microsoft, AWS, others — represent the industry's first attempt at formalizing AI governance at scale. These early certifications are doing the work of establishing what evidence looks like for AI lifecycle management, third-party model evaluation, AIIA documentation, and AI-specific incident response. Each early cert published makes the next one easier. Practitioners moving into AI governance have a narrow window to build early-mover credentials in this space.