Volume XI · NIST AI RMF · Edition 2026.1

The Compliance Atlas

Authoritative refs
NIST AI 100-1 (Jan 2023)
NIST AI 600-1 (GenAI · Jul 2024)
Verified May 12, 2026

The capstone volume. NIST's voluntary framework for AI risk management — no certification body, no audit ritual, no enforcement. And yet AI RMF has become the foundational vocabulary that ISO 42001, the EU AI Act, federal procurement, and customer due-diligence questionnaires all use to describe AI governance work. Like CSF for cybersecurity: read by everyone, audited by no one — until you need a frame for what you're already doing.

Reading the Atlas

Legend: Internal (AI risk owners) · External (assurance pathways) · Bridge / hand-off

Like NIST CSF (Volume IV), AI RMF has no native cert. External validation comes by folding the framework into ISO 42001 audits, EU AI Act conformity assessments, customer due diligence, federal procurement reviews under OMB M-24-10, and SOC 2+ examinations.

I.
Layer 01 — Lifecycle

The AI risk loop with no native validator

NIST AI 100-1 · Part 2
AI RMF Playbook

Internal AI risk lifecycle × external assurance pathways

CONTINUOUS AI RISK CYCLE: GOVERN → MAP → MEASURE → MANAGE → iterate · profile evolves · maturity grows

INTERNAL — AI RISK OWNERS
- GOVERN (setup, cross-cutting): policies · roles · culture
- MAP (context): categorize · purpose · capabilities · impacts
- MEASURE (risks): analyze · benchmark · test · evaluate
- MANAGE (response): prioritize · respond · communicate · monitor
- Profile maintained: current vs. target · org-specific tailoring
- Continuous improvement: drift · re-evaluation · closed-loop iteration
- Org maturity progresses: trustworthiness embedded across the AI portfolio
- AI RMF Playbook: tasks · suggestions · informative companion
- AI system inventory: in-scope models, often discovered late
- Trustworthy AI characteristics: 7 properties (valid · safe · secure · accountable · …)
- Risk treatment options: mitigate · transfer · accept · avoid (4 options total)
- Profiles (use-case + cross-sectoral): domain-specific tailoring (GenAI · HR · finance · etc.)
- Internal audit / review: no required cadence, organization-defined
- Versioning: v1.0 (Jan 2023) baseline · profile updates ongoing

EXTERNAL — ASSURANCE PATHWAYS
- ISO 42001 audit: CB tests RMS effectiveness · AI RMF satisfies the methodology (primary external pathway)
- EU AI Act conformity: AI RMF informs Art. 9 RMS + Annex IV documentation (methodological backbone)
- SOC 2+ AI examination: CPA folds AI controls into the report (emerging path, 2025-26)
- Federal procurement: OMB M-24-10 review · AI RMF as baseline for vendor sales to USG
- Customer due diligence: enterprise buyers asking "how do you do AI RMF?" (most-cited path, 2024-25)
- Internal validation: internal audit / review · no formal output · leads to the other paths

OUTCOMES
- 3-year cert validity: AI RMF baked into 42001 documentation (no AI RMF cert itself)
- CE marking: conformity declared via an AI RMF-shaped RMS (indirect validation)
- SOC 2 report: CPA opinion with AI mappings, marketed as "SOC 2 + AI"
- ATO-equivalent (if FedRAMP): 800-53 + AI RMF control overlay · M-24-10 for AI uses
- Vendor questionnaires: "are you AI RMF-aligned?" requires evidence (de facto requirement)
- Self-attestation: orgs publish profiles · no third-party validation (low-cost option)

NO AI RMF CERTIFICATE. Like CSF: read by everyone, audited via other frameworks.

BRIDGES
- GOVERN policies → ISO 42001 management-system shell
- MEASURE evals → EU AI Act Art. 9 RMS documentation
- MANAGE controls → OMB M-24-10 federal review
- Profile maintenance → customer questionnaires

Why AI RMF feels like CSF reinvented for a new domain

Voluntary, vocabulary-providing, methodology-shaping. Like NIST CSF (Volume IV), AI RMF was deliberately designed as a framework rather than a standard or regulation. NIST has no enforcement authority over private industry; AI RMF is an offering — adopt it because the structure is good, not because anyone requires you to. That voluntariness is also what made AI RMF so widely adopted: downstream frameworks from ISO 42001 to the EU AI Act treat it as foundational vocabulary, and its trustworthiness language aligns with the OECD AI Principles.

Released Jan 2023 — early enough to shape everything else. NIST AI 100-1 published January 26, 2023, eleven months before ISO/IEC 42001 (December 2023) and eighteen months before the EU AI Act entered into force (August 2024). When ISO 42001's authors wrote the Annex A controls, AI RMF's MAP/MEASURE/MANAGE practices were already establishing the field's vocabulary. When EU AI Act drafters specified Article 9's RMS requirements, AI RMF was the implicit reference. The result: AI RMF's practices were baked into both downstream frameworks before either was finalized, so organizations implementing AI RMF satisfy much of the methodology layer for each.

The Generative AI Profile (NIST AI 600-1, July 2024) made AI RMF immediately relevant to the foundation-model era. Just as ISO 42001 was being adopted, NIST released a GenAI-specific profile identifying 12 risks unique to or exacerbated by GenAI (CBRN proliferation, confabulation, dangerous content, data privacy, human-AI configuration, information integrity, information security, intellectual property, obscene/abusive/harmful content, value chain & component integration, harmful bias, environmental impact). The profile maps each risk to suggested actions across all four functions. For organizations using or building on foundation models, NIST AI 600-1 is now as essential as the core AI RMF.

AI RMF gave the field its grammar. ISO 42001 gave it a management system. The EU AI Act gave it a regulatory regime. They cooperate; they don't compete.

The seven trustworthy characteristics anchor everything. AI RMF specifies that trustworthy AI systems should be valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair (with harmful bias managed). These seven characteristics map to almost every AI governance framework's principles. ISO 42001 Annex C lists similar organizational AI objectives. EU AI Act Articles 13-15 enumerate similar requirements. The trustworthy-AI vocabulary is now lingua franca.

The external lane has no judge. Like NIST CSF, AI RMF's external column doesn't include a certification body or a regulator — it includes the assurance pathways that incorporate AI RMF: ISO 42001 audits use it as methodology; EU AI Act conformity assessments use it as the RMS framework; federal procurement under OMB M-24-10 uses it as a baseline; customer due-diligence questionnaires ask "are you AI RMF-aligned?" The framework gets validated indirectly, by being folded into other validation regimes.

II.
Layer 02 — Functions

GOVERN at the center, MAP/MEASURE/MANAGE around it

NIST AI 100-1 Part 2
+ AI RMF Playbook

The four functions and their categories

4 FUNCTIONS · 19 CATEGORIES · 72 SUBCATEGORIES

GOVERN — CROSS-CUTTING FOUNDATION · 6 categories · 19 subcategories
Policies · accountability · workforce culture · transparency · supply chain
- GOVERN 1: Policies, processes, procedures & practices for AI risk management exist and are operational
- GOVERN 2: Accountability structures established · roles & responsibilities allocated
- GOVERN 3: Workforce diversity, equity, inclusion & accessibility prioritized
- GOVERN 4: Culture of risk management cultivated · leadership commitment
- GOVERN 5: Engagement & communication processes for AI risk information
- GOVERN 6: Supply-chain risks (third-party AI components)
GOVERN is cross-cutting: its categories interact with all of MAP, MEASURE, MANAGE.

MAP — UNDERSTAND CONTEXT · 5 categories · 18 subcategories
- MAP 1: Context, purpose & system understood
- MAP 2: Categorization & classification
- MAP 3: Capabilities, benefits & costs analyzed
- MAP 4: Risks & benefits to all impacted parties
- MAP 5: Impacts characterized (the broad impact landscape)
Maps to: ISO 42001 Cl. 4 + 6.1.4 (AIIA) · EU AI Act Art. 6 + Annex III + Art. 27

MEASURE — ANALYZE · 4 categories · 18 subcategories
- MEASURE 1: Methods identified & applied
- MEASURE 2: Trustworthy characteristics evaluated & benchmarked
- MEASURE 3: Mechanisms for tracking risks (drift · ongoing performance · incidents)
- MEASURE 4: Feedback gathered & evaluated (impact-assessment refresh)
Maps to: ISO 42001 Cl. 9.1 + A.6.2.6 · EU AI Act Art. 12 + Art. 15 + Art. 72

MANAGE — RESPOND · 4 categories · 17 subcategories
- MANAGE 1: Risk responses prioritized
- MANAGE 2: Strategies for risk treatment (mitigate · transfer · accept · avoid)
- MANAGE 3: AI risks documented & communicated
- MANAGE 4: Improvement & response to incidents · monitored over time
Maps to: ISO 42001 Cl. 8 + 10 · EU AI Act Art. 16, 17, 26, 73

How the four functions actually interact

GOVERN is not a phase — it's the foundation. Unlike a maturity model where you complete one phase and move to the next, GOVERN runs continuously underneath everything else. Its six categories (policies, accountability, workforce, culture, engagement, supply chain) are the conditions that make MAP, MEASURE, and MANAGE possible. Organizations that treat GOVERN as "we'll do that later" find their MAP and MEASURE work has no decision-making structure to feed into.

MAP is bigger than risk identification. Five categories cover context establishment (MAP 1), categorization (MAP 2), capabilities/benefits/costs analysis (MAP 3), risks/benefits to all parties (MAP 4), and impact characterization (MAP 5). MAP 5 in particular is where most organizations underspecify — characterizing impacts on direct users, indirect users (people affected by AI decisions), third parties, and society broadly. The EU AI Act's Article 27 FRIA is essentially a regulatory codification of MAP 5.

MEASURE is where most programs stall. The four categories — methods, trustworthiness evaluation, ongoing tracking, feedback gathering — require evaluation infrastructure most organizations don't have. Measuring fairness across protected attributes, robustness against adversarial inputs, drift over time, and stakeholder feedback requires dedicated tooling and discipline. MEASURE is where AI red teams, evaluation engineers, and ML observability platforms earn their keep; it is the differentiator between mature AI governance and AI governance theater.
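Drift tracking under MEASURE 3 becomes concrete with an actual metric. A minimal sketch, assuming a categorical input feature and using the population stability index — a common industry drift score, not one NIST prescribes:

```python
from collections import Counter
import math

def psi(baseline, live, eps=1e-6):
    """Population stability index between a training baseline and live traffic.

    Common industry thresholds (convention, not from AI RMF):
    < 0.1 stable · 0.1-0.25 moderate drift · > 0.25 significant drift.
    """
    cats = set(baseline) | set(live)
    b_counts, l_counts = Counter(baseline), Counter(live)
    score = 0.0
    for c in cats:
        # Clamp zero proportions so the log term stays defined.
        p_base = b_counts[c] / len(baseline) or eps
        p_live = l_counts[c] / len(live) or eps
        score += (p_live - p_base) * math.log(p_live / p_base)
    return score
```

A score crossing the drift threshold is exactly the kind of MEASURE 3 signal that should trigger re-evaluation rather than sit on a dashboard.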

GOVERN sets the rules. MAP says what we have. MEASURE says how it's behaving. MANAGE says what we're going to do about it.

MANAGE is where decisions get made. Risk treatment options (mitigate, transfer, accept, avoid), prioritization, documentation, and incident response. MANAGE 4 specifically — improvement and response to incidents over time — is the closed loop that distinguishes AI risk management from AI risk theater. Organizations that don't track AI incidents, perform root-cause analysis, and feed lessons back into MAP/MEASURE never improve.

The four functions are not sequential in practice. A new AI system goes through MAP-MEASURE-MANAGE roughly in order during initial deployment, but in steady-state operation all four run continuously. Drift detected in MEASURE feeds back to MAP (re-characterize the impact landscape). New regulatory requirements feed GOVERN updates. Customer feedback feeds MEASURE 4. The framework's loop is iterative, not linear.
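The steady-state feedback routes described above can be sketched as a small routing table; the event names and the table itself are illustrative assumptions, not AI RMF terminology:

```python
# Hypothetical routing of steady-state findings back into the four functions.
# Event names and routes are assumptions for illustration only.
ROUTES = {
    "drift_detected": "MAP",      # re-characterize the impact landscape
    "new_regulation": "GOVERN",   # update policies and accountability
    "user_feedback":  "MEASURE",  # feeds MEASURE 4 (feedback gathered & evaluated)
    "incident":       "MANAGE",   # MANAGE 4: response and improvement
}

def route(finding: str) -> str:
    """Return which function a steady-state finding re-activates."""
    # Unrecognized findings default to MAP: re-establish context first.
    return ROUTES.get(finding, "MAP")
```

The point of the sketch is the shape, not the code: in steady state there is no "last" function, only events that re-activate one of the four.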

III.
Layer 03 — GenAI Profile

NIST AI 600-1 — twelve risks unique to generative AI

NIST AI 600-1 (July 2024)
Generative AI Profile

The 12 GenAI risks identified in NIST AI 600-1

12 RISKS UNIQUE TO OR EXACERBATED BY GENERATIVE AI
- Risk 1 · CBRN proliferation: easier access to chemical, biological, radiological & nuclear information or capabilities via GenAI
- Risk 2 · Confabulation: confidently stated incorrect or fabricated outputs ("hallucinations") presented as fact
- Risk 3 · Dangerous, violent or hateful content: generated content that incites, facilitates, or threatens harm to people or groups
- Risk 4 · Data privacy: memorization & leakage of training data · PII exposure · unauthorized inference of private attributes
- Risk 5 · Environmental impact: energy & water consumption of training and inference · e-waste from compute
- Risk 6 · Harmful bias and homogenization: reinforcement & amplification of bias · convergence of cultural representations
- Risk 7 · Human-AI configuration: over-reliance on AI · miscalibrated trust · emotional attachment · anthropomorphization
- Risk 8 · Information integrity: disinformation · deepfakes · scaled deceptive content · erosion of trust in information
- Risk 9 · Information security: lower cost of cyber attacks · code generation for malware · phishing at scale · prompt injection
- Risk 10 · Intellectual property: reproduction of copyrighted material · attribution problems · licensing of training data
- Risk 11 · Obscene, degrading, abusive content: including non-consensual intimate imagery · CSAM · harassment
- Risk 12 · Value chain & component integration: cascading risks in chains of third-party models, data & tools · opacity of upstream choices

EACH RISK MAPS TO ALL FOUR FUNCTIONS. For each of the 12 risks, NIST AI 600-1 lists ~10-25 suggested actions distributed across GOVERN (policy & accountability), MAP (context & classification), MEASURE (evaluation & monitoring), and MANAGE (response & treatment). Total: ~200 specific GenAI-related actions, all optional; programs select based on which risks apply.

Why the GenAI Profile is the document everyone reads

Released July 2024 — directly responsive to the post-ChatGPT environment. When AI RMF 1.0 was published in January 2023, ChatGPT was barely two months old; foundation models were a research area, not a deployment category. By mid-2024 GenAI was the dominant AI use case in the enterprise. NIST AI 600-1 closed the gap — providing GenAI-specific risk identification and mitigation suggestions on top of the core AI RMF.

The 12 risks reflect a maturing field. Some risks (confabulation, harmful bias, IP) were familiar from pre-GenAI work. Others (CBRN proliferation, value-chain integration, environmental impact) became salient specifically because foundation models scale capability and dependency in unprecedented ways. The "human-AI configuration" risk acknowledges that the human side of human-AI systems can fail too — over-reliance, anthropomorphization, miscalibrated trust.

Confabulation is the most relevant risk for most organizations. If you deploy a GenAI system that can present incorrect information confidently, you have confabulation risk. Mitigations include retrieval-augmented generation, explicit uncertainty signaling, human-in-the-loop review, output validation, and user education. NIST AI 600-1 lists ~20 specific actions across all four functions for confabulation alone.
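One way those mitigations combine in practice is an output-release gate: grounded, confident drafts flow through; ungrounded ones route to a human. A minimal sketch, where the `Draft` fields, threshold, and decision labels are all assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    confidence: float                    # model- or verifier-reported, 0..1 (assumed field)
    citations: list = field(default_factory=list)  # retrieved sources backing the claims

def gate(draft: Draft, threshold: float = 0.8) -> str:
    """Decide how a GenAI draft is released. Threshold is illustrative."""
    if draft.confidence >= threshold and draft.citations:
        return "auto_release"                       # grounded and confident
    if draft.citations:
        return "release_with_uncertainty_banner"    # grounded but low confidence
    return "human_review"                           # ungrounded: route to a person
```

The design choice worth noting: grounding (citations) outranks confidence, since a confidently ungrounded output is precisely the confabulation failure mode.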

NIST AI 600-1 is now the de facto reference for GenAI risk programs, even in organizations that haven't formally adopted AI RMF 1.0.

Human-AI configuration is the most overlooked. Risk 7 covers over-reliance on AI outputs, miscalibrated trust, emotional attachment, and anthropomorphization. These risks are easy to dismiss as "user education" problems but are fundamental to whether AI systems produce good outcomes in practice. UI design choices (does the system signal uncertainty?), workflow integration (do humans review or rubber-stamp?), and organizational culture (is "the AI said so" treated as authoritative?) all sit inside this risk category.

Value-chain & component integration is structural. Risk 12 acknowledges that most enterprise GenAI deployments are not built from scratch — they're stacks of foundation models from one provider, embedding models from another, vector databases from a third, retrieval components from a fourth, evaluation tools from a fifth. Each layer adds risk; each layer is opaque to the layers above. Programs that don't map and manage their AI value chain will discover the failure modes through incidents.

IV.
Layer 04 — Cross-framework

AI RMF as the methodology layer everything else inherits

Mappings to ISO 42001, EU AI Act
NIST CSF, federal procurement
| AI RMF function | ISO 42001:2023 | EU AI Act | NIST CSF 2.0 | ISO 27001:2022 | SOC 2 (TSC) | HITRUST v11 | Shared evidence |
|---|---|---|---|---|---|---|---|
| GOVERN 1 — Policies | A.2 (AI policies) | Art. 17 QMS | GV.PO | A.5.1 | CC5.3 | 04.a | AI policy, supporting standards, communication records |
| GOVERN 2 — Accountability | A.3 internal org | Art. 22 auth rep | GV.RR | Cl. 5.3 | CC1.3 | 02.a | RACI for AI, role descriptions, accountability matrix |
| GOVERN 3 — DEI & access | A.3.2 roles | | | | | | Workforce composition, DEI training, accessibility audit results |
| GOVERN 4 — Culture | Cl. 5.1 leadership | Art. 4 AI literacy | GV.OC | Cl. 5.1 | CC1.1 | 02.e | Leadership comms, training records, engagement surveys |
| GOVERN 5 — Engagement | A.8 info for users | Art. 13 · Art. 50 | GV.SC | A.5.34 | P1 | 13 | Stakeholder maps, model cards, public disclosures |
| GOVERN 6 — Supply chain | A.10 third-party | Art. 25 value chain | GV.SC | A.5.19 · A.5.23 | CC9.2 | 05.k | Vendor AI inventory, due-diligence packages, contracts |
| MAP 1 — Context | Cl. 4 | Art. 6 + Annex III | GV.OC | Cl. 4 | CC1.1 | 00.a | AI system inventory, scope statement, classification rationale |
| MAP 2 — Categorization | Cl. 6.1.4 AIIA | Art. 6(3) escape | | | | | Risk-tier classification, Annex III mapping, exception docs |
| MAP 3 — Capabilities · costs | A.5.3 impact | Art. 9 RMS | ID.RA | Cl. 6.1.2 | CC3.2 | 03.b | Cost-benefit memos, performance benchmarks, business cases |
| MAP 4 — Risks & benefits | Cl. 6.1.4 | Art. 27 FRIA | ID.RA | Cl. 6.1.2 | CC3.2 | 03.a | Stakeholder analysis, risk register, AIIA / FRIA outputs |
| MAP 5 — Impacts | A.5.4 AIIA | Art. 27 · Art. 9 | ID.RA | | | | AIIA per system, affected-parties analysis, mitigation register |
| MEASURE 1 — Methods | A.6.2.6 verify | Art. 11 tech docs | ID.IM | A.8.8 | CC4.1 | 10.k | Evaluation methodology, benchmark choice, eval framework |
| MEASURE 2 — Trustworthy chars | A.6.2.4 · A.6.2.7 | Art. 15 robustness | PR.DS-01 | A.8.7 | CC7.2 | 09.j | Eval reports, fairness metrics, robustness tests, red-team |
| MEASURE 3 — Tracking | Cl. 9.1 | Art. 12 logs · Art. 72 | DE.CM | A.8.15 | CC7.1 | 09.aa | Drift monitoring, performance dashboards, log retention |
| MEASURE 4 — Feedback | A.5.5 review | Art. 72 post-market | ID.IM | Cl. 9.1 | CC4.2 | 06.h | User feedback, issue trackers, AIIA refresh records |
| MANAGE 1 — Prioritize | A.9 use | Art. 26 deployer | GV.RM | Cl. 6.1.3 | CC3.4 | 03.b | Risk treatment plan, prioritization rationale, decision logs |
| MANAGE 2 — Treatment | Cl. 6.1.3 | Art. 9 mitigation | ID.RA | Cl. 6.1.3 | CC3.4 | 03.b | Treatment options, residual risk, treatment evidence |
| MANAGE 3 — Communicate | A.8 info for users | Art. 13 · Art. 50 | GV.SC | A.5.34 | P1 | 13 | Risk communications, model cards, user disclosures |
| MANAGE 4 — Improve | Cl. 10 | Art. 73 incidents | RS.MA | A.5.24 · A.5.27 | CC7.3 | 11 | Incident log, RCA, corrective actions, lessons learned |

Where AI RMF sits in the AI governance stack

The crosswalk above tells the architectural story. AI RMF is the methodology layer that almost everyone else inherits. ISO 42001 takes AI RMF's GOVERN/MAP/MEASURE/MANAGE practices and structures them as a management system. The EU AI Act's Article 9 (RMS) and Article 17 (QMS) require what AI RMF describes. NIST CSF 2.0 acknowledges AI RMF as the AI-specific extension of the GOVERN function. Most enterprise AI governance programs use AI RMF as the methodology because it doesn't require certification, doesn't have legal stakes, and provides a vocabulary everyone in the field recognizes.

The cleanest implementation pattern. Adopt AI RMF as your methodology. Use ISO 42001 as your management-system shell (which gives you a certification path). Address EU AI Act obligations as a regulatory overlay where applicable. The three frameworks cooperate cleanly: every artifact you create satisfies multiple frameworks. The marginal cost of adding ISO 42001 to an existing AI RMF program is low, and the marginal cost of EU AI Act conformity drops substantially if you're already 42001-certified.
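That shared-artifact property can be made operational by indexing evidence against the crosswalk. A minimal sketch: the mappings echo rows of the Layer 04 table, but the data structure and artifact names are assumptions, not part of any framework:

```python
# Sketch of an evidence index keyed by artifact. One artifact, many frameworks.
EVIDENCE = {
    "AIIA (per-system impact assessment)": {
        "AI RMF":    ["MAP 4", "MAP 5"],
        "ISO 42001": ["Cl. 6.1.4", "A.5.4"],
        "EU AI Act": ["Art. 27", "Art. 9"],
    },
    "Incident log + RCA": {
        "AI RMF":    ["MANAGE 4"],
        "ISO 42001": ["Cl. 10"],
        "EU AI Act": ["Art. 73"],
    },
}

def frameworks_satisfied(artifact: str) -> list:
    """List the frameworks a single artifact provides evidence for."""
    return sorted(EVIDENCE.get(artifact, {}))
```

An index like this is what makes the "low marginal cost" claim auditable: you can see, per artifact, which obligations it already covers.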

AI RMF gives you the verbs. ISO 42001 gives you the management system. EU AI Act gives you the legal requirements. They're a stack, not alternatives.

The federal procurement angle is real and growing. OMB Memo M-24-10 (March 2024) directed federal agencies to use AI RMF for AI risk management. Federal contractors selling AI capabilities to government must demonstrate AI RMF alignment as part of procurement reviews. For CSPs already in FedRAMP (Volume VIII), this layers an AI RMF overlay onto existing 800-53 control implementations. Expect this requirement to expand through 2026-2027 as agencies mature their AI procurement processes.

The customer-driven path is currently the strongest signal. Enterprise buyers in 2025-2026 are routinely asking AI vendors "how do you implement NIST AI RMF?" — even when they don't audit the answer. Having a documented AI RMF profile (Use-Case Profile or Cross-Sectoral Profile) becomes a sales asset. Profiles can be summarized in vendor security questionnaires, customer-facing trust portals, or AI-specific addenda to MSAs. The bar is shifting from "do you do AI safely?" to "show me your AI RMF profile."
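A questionnaire-ready profile summary can be as simple as a structured document. A sketch of what that might look like; every field name here is invented for illustration, not a NIST-defined schema:

```python
import json

# Hypothetical self-attested AI RMF profile summary. Field names and
# statuses are assumptions; NIST defines no schema for profiles.
profile = {
    "framework": "NIST AI RMF 1.0 + AI 600-1 (GenAI Profile)",
    "profile_type": "use-case",
    "functions": {
        "GOVERN":  {"status": "implemented", "evidence": "AI policy, accountability matrix"},
        "MAP":     {"status": "implemented", "evidence": "AI system inventory, AIIAs"},
        "MEASURE": {"status": "partial",     "evidence": "eval reports, drift dashboards"},
        "MANAGE":  {"status": "implemented", "evidence": "incident log, RCA records"},
    },
    "last_reviewed": "2026-05-12",
}

# Serialized form suitable for a trust portal or questionnaire attachment.
summary = json.dumps(profile, indent=2)
```

A summary like this answers "show me your AI RMF profile" with something concrete, even when no third party ever audits it.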

This volume closes the Atlas because it returns to first principles. Eleven volumes in, the pattern is consistent — every framework prescribes practices that map to a small set of governance verbs (govern, map, measure, manage; or in CSF's terms, govern, identify, protect, detect, respond, recover). AI RMF chose its four verbs intentionally to mirror that earlier work. The Atlas as a whole demonstrates that compliance frameworks are not separate kingdoms but variations on shared structural commitments — different vocabulary, different audit mechanics, common architectural moves.