Volume IV · NIST CSF 2.0 · Edition 2026.1

The Compliance Atlas

Authoritative refs
NIST CSWP 29 (Feb 2024)
ISACA CSF 2.0 Audit Program
Verified May 12, 2026

CSF is read by everyone and certified by no one. The 2024 update added the GOVERN function — the single largest implementation lift in the framework's history. Most adopters experience CSF as a self-assessment exercise. Where it gains audit traction, it does so through other frameworks that absorb its subcategories.

Reading the Atlas

Internal — Profile cycle · External — assurance options · Bridge / hand-off

CSF has no PCAOB, no AICPA opinion, no certification body. The right frame for the external lane is "what assurance pathways exist if you want third-party validation" — and the answer is several, none of them native.

I.
Layer 01 — Lifecycle

Profile cycle in, assurance pathway out

NIST CSWP 29 §3
Profiles & Tiers

Internal — the Profile cycle. External — paths to third-party validation.

PROGRAM TIMELINE
Step 1 orient → Step 2 current → Step 3 target → Step 4 gap analysis → Step 5 implement → Step 6 reassess → cycle continues

INTERNAL — PROFILE-DRIVEN ASSESSMENT
- Mission & risk priorities → Organizational Profile orient · CSF §3.2
- Current Profile — subcategory-level state · honest baseline
- Target Profile — desired state · risk-driven, not aspirational · not all subcategories
- Gap analysis — Δ Current vs Target · prioritized by impact
- Action plan & budget — owner · timeline · cost · budget tied to risk
- Implement & track — controls + measurement · CSWP 29 implementation examples
- Reassess Profile — annual or event-driven · cycle restarts
- GOVERN scope — strategy · roles · policy · the 2024 addition
- Tier assessment — Tier 1 → Tier 4 · implementation maturity
- Community Profiles — sector starting points · utilities · finance · health
- Informative References — 800-53 · ISO · CIS Controls · cross-mappings provided
- Internal validation — CISO · risk committee · no external validator native
- Board reporting — trend Tier movement · Profile heat-maps · ROI
- Continuous improvement — tied to incident learnings · no certificate to maintain

EXTERNAL — VALIDATION PATHWAYS (NONE NATIVE)
- SOC 2+ examination — CPA folds CSF subcategories into the SOC 2 report · most common path
- FedRAMP authorization — via NIST 800-53 baselines · CSF maps, 800-53 controls · federal cloud only
- ISACA Audit Program — CSF 2.0 audit/assurance program · internal audit deploys · 2024 release
- Regulatory adoption — NYDFS · TX RAMP · CMMC · examiners use CSF as a lens · sector-specific
- Cyber-insurance review — underwriting questionnaires map to CSF subcategories · premium/coverage impact
- Customer due diligence — enterprise procurement RFPs reference CSF tiers · de facto requirement
- CPA opinion type — attestation under SSAE 18 · opinion on CSF mapping · marketed as "SOC 2 + NIST"
- 3PAO assessment — FedRAMP-accredited · authorizing officials issue ATO · 800-53 evidence collected
- Sample work programs — test steps per subcategory · tier-aligned procedures · use as IA template
- Examiner expectations — map controls to CSF · demonstrate trend progress over annual exams
- Cyber-coverage outcomes — renewal · sublimit · retention tied to Tier movement · documentation matters
- Vendor scorecards — SecurityScorecard, BitSight · map external posture to CSF · no opinion, just signal

CSF HAS NO CERTIFICATE — validation comes from frameworks that absorb CSF, not CSF itself.
Bridges: Informative References enable framework crosswalk · Profile heat-map → scorecard signal · Tier movement → SOC 2+ language · implementation evidence → examiner artifacts
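The gap-analysis step (Δ Current vs Target) is mechanical enough to sketch. A minimal illustration, assuming hypothetical profile data: the subcategory IDs are real CSF 2.0 identifiers, but the 0–4 implementation scores and risk weights are invented for the example.

```python
# Sketch of Step 4 (gap analysis): diff a Current Profile against a
# Target Profile and rank the gaps by risk-weighted size.
# Scores and weights are hypothetical illustration data, not CSF content.
current = {"PR.AA-01": 1, "DE.CM-01": 2, "RC.RP-01": 1, "GV.SC-01": 0}
target = {"PR.AA-01": 3, "DE.CM-01": 3, "RC.RP-01": 3}  # GV.SC-01 omitted = "not now"
risk_weight = {"PR.AA-01": 3, "DE.CM-01": 2, "RC.RP-01": 3}

def gap_analysis(current, target, risk_weight):
    """Return (subcategory, gap, priority) rows, highest priority first."""
    rows = []
    for sub, want in target.items():  # only Target subcategories matter
        gap = want - current.get(sub, 0)
        if gap > 0:
            rows.append((sub, gap, gap * risk_weight.get(sub, 1)))
    return sorted(rows, key=lambda r: r[2], reverse=True)
```

Note that `GV.SC-01` never appears in the output: a subcategory outside the Target Profile is a deliberate "not now", not a gap.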

Why CSF reads differently from every other framework in the Atlas

CSF was designed as a translator. It sits above other frameworks rather than alongside them. Its Informative References explicitly map subcategories to NIST 800-53, ISO 27001 Annex A, CIS Controls, and a dozen others. The framework's design assumes you'll implement using something else and document using CSF. This is why CSF feels like meta-architecture rather than a control library.

The 2024 GOVERN function is the inflection point. Until 2024, CSF had five functions (Identify, Protect, Detect, Respond, Recover). The 2.0 update added GOVERN — strategy, roles & responsibilities, policy, oversight, and supply chain risk management. This wasn't decoration. It elevated cybersecurity from a controls discipline to an enterprise risk discipline, putting CSF in conversation with COSO ERM and ISO 31000. Most legacy CSF implementations underinvest in GOVERN.

Profiles are the unit of work, not categories. Beginners read CSF top-down (Functions → Categories → Subcategories) and try to "implement" all 106 subcategories. Mature programs work bottom-up: start with risk priorities, define a Target Profile that addresses them, identify the subcategories that matter, ignore the rest. A pharmaceutical company's Target Profile prioritizes ID.AM (asset management) and PR.DS (data security). A SaaS startup's prioritizes PR.AA (access) and DE.CM (continuous monitoring). Both are correct.
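The bottom-up workflow can be sketched as data: pick risk priorities, union the subcategories they imply, and that union is the Target Profile. The priority-to-subcategory links below are illustrative assumptions, not NIST mappings.

```python
# Sketch: building a Target Profile bottom-up from risk priorities.
# The mapping is hypothetical; only the subcategory IDs are real CSF 2.0.
risk_to_subcats = {
    "protect regulated data": ["ID.AM-01", "PR.DS-01"],
    "tenant access abuse": ["PR.AA-05", "DE.CM-01"],
}

def target_profile(priorities, mapping):
    """Union of subcategories implied by the chosen risk priorities."""
    subcats = set()
    for p in priorities:
        subcats.update(mapping.get(p, []))
    return sorted(subcats)

pharma = target_profile(["protect regulated data"], risk_to_subcats)
saas = target_profile(["tenant access abuse"], risk_to_subcats)
```

Two different priority sets yield two different, equally valid Target Profiles, which is the point of the pharma-vs-SaaS example above.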

CSF doesn't tell you what to do. It tells you where the conversation about what to do should happen.

Tiers vs. Profiles is the most-misunderstood distinction. Tiers (1–4: Partial, Risk Informed, Repeatable, Adaptive) describe the rigor of the cybersecurity risk management process. Profiles describe the state of subcategory implementation. A Tier 4 organization may have a small Target Profile (because their risks don't require breadth). A Tier 2 organization may have an ambitious Target Profile (because they're early in the journey). Both are valid. Tiers are not maturity grades; they're process attributes.

II.
Layer 02 — Control universe

Six functions, twenty-two categories, one hundred six subcategories

NIST CSWP 29 §2
The Core

The six functions — GOVERN at the center, ID/PR/DE/RS/RC around the perimeter

GV — GOVERN · 2024 addition, central · 6 categories · 31 subcategories
  GV.OC Org Context · GV.RM Risk Mgmt · GV.RR Roles & Resp · GV.PO Policy · GV.OV Oversight · GV.SC Supply Chain
ID — IDENTIFY · 3 categories · 21 subcategories
  Asset Management (ID.AM) · Risk Assessment (ID.RA) · Improvement (ID.IM)
  "You can't protect what you don't know exists." Start here.
PR — PROTECT · 5 categories · 22 subcategories
  PR.AA · PR.AT · PR.DS · PR.PS · PR.IR
DE — DETECT · 2 categories · 11 subcategories
  Continuous Monitoring (DE.CM) · Adverse Event Analysis (DE.AE) · SIEM · EDR · NDR · UEBA
  "What you don't see, you can't stop."
RS — RESPOND · 4 categories · 13 subcategories
  Incident Mgmt (RS.MA) · Analysis (RS.AN) · Reporting/Comms (RS.CO) · Mitigation (RS.MI)
  Activated by detection. Tabletop matters.
RC — RECOVER · 2 categories · 8 subcategories
  Incident Recovery Plan (RC.RP) · Communications (RC.CO)
  Smallest function, biggest miss in real incidents. Most teams have backups; few have tested restores.
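The per-function counts above sum exactly to the headline numbers (six functions, twenty-two categories, one hundred six subcategories); a quick arithmetic check, using only figures copied from this section:

```python
# The Core's arithmetic: counts per function as listed in this section.
core = {
    "GV": {"categories": 6, "subcategories": 31},
    "ID": {"categories": 3, "subcategories": 21},
    "PR": {"categories": 5, "subcategories": 22},
    "DE": {"categories": 2, "subcategories": 11},
    "RS": {"categories": 4, "subcategories": 13},
    "RC": {"categories": 2, "subcategories": 8},
}
n_functions = len(core)
n_categories = sum(f["categories"] for f in core.values())
n_subcategories = sum(f["subcategories"] for f in core.values())
```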

Implementation Tiers — process maturity, not control completeness

Tier 1 · Partial — ad hoc, reactive
  Risk mgmt informal, case-by-case · limited awareness of cyber risk · no sharing of cyber info · supply chain risk not considered · ad hoc incident response
  "We deal with it when it happens."
Tier 2 · Risk Informed — awareness without enforcement
  Mgmt approves risk mgmt practices · not org-wide policy · cyber prioritized informally · some sharing externally · awareness of supply-chain risk
  "We know what we should do."
Tier 3 · Repeatable — documented, consistent
  Org-wide cyber risk policy · approved & enforced · reviewed regularly · cyber adapts to threat landscape · supply chain risk integrated
  "We do it the same way every time."
Tier 4 · Adaptive — continuous improvement
  Cyber adapted from prior events · lessons learned drive change · real-time threat intelligence · sharing as part of culture · supply chain risk in every decision
  "We learn faster than threats evolve."
Tiers describe risk-management process maturity. They do NOT correspond to subcategory implementation completeness.

The shape of the framework

GOVERN is the gravity well. The 2024 update added GOVERN at the center because the prior five-function model treated cybersecurity as something that happens "to" an organization rather than something the organization actively manages. GV.SC (Supply Chain) alone has six subcategories that did not exist in CSF 1.1. GV.OV (Oversight) is where Boards now ask CISOs to demonstrate program performance. If a 2026 CSF program has weak GOVERN, it has a weak program.

The asymmetry is intentional. PROTECT is large because protective controls are diverse. DETECT is medium because detection is concentrated in a few capability areas. RECOVER is small because recovery is procedural, not technological. Don't assume function size = importance — RC has 8 subcategories but loses the company if it fails. ID.AM has 4 subcategories and is the foundation everything else rests on.

A complete subcategory inventory is not a complete program. A complete program is one where the right subcategories operate at the right Tier for the right risks.

Subcategories are outcome statements, not control requirements. "ID.AM-01: Inventories of hardware managed by the organization are maintained" doesn't tell you to use ServiceNow CMDB. It tells you what good looks like. The Informative References point to NIST 800-53 CM-8, ISO 27001 A.5.9, CIS Control 1, and others — any of which can satisfy the outcome. This is why CSF is portable across regulated and unregulated industries: the outcomes are framework-agnostic.
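The outcome-vs-control split is easy to represent: one subcategory, one outcome statement, many satisfying controls. A minimal sketch; the ID.AM-01 references are the ones named in the paragraph above, but the data structure itself is an illustrative assumption, not NIST's publication format.

```python
# Sketch: a subcategory is an outcome; Informative References list the
# framework controls that can each satisfy it.
informative_refs = {
    "ID.AM-01": {
        "outcome": "Inventories of hardware managed by the organization are maintained",
        "refs": {
            "NIST 800-53": ["CM-8"],
            "ISO 27001": ["A.5.9"],
            "CIS": ["Control 1"],
        },
    },
}

def satisfying_controls(subcat):
    """Any one listed reference control can satisfy the outcome."""
    entry = informative_refs[subcat]
    return [f"{fw} {c}" for fw, ctrls in entry["refs"].items() for c in ctrls]
```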

III.
Layer 03 — Evidence & maturity

What "evidence" means when there's no auditor

CSWP 29 §3 + ISACA
CSF 2.0 audit/assurance program

Evidence types per Tier — what to actually collect

ID.AM-01 — HW asset inventory
  Tier 1: Spreadsheet · last updated whenever someone remembers
  Tier 2: CMDB exists, owner named, refresh on incident
  Tier 3: CMDB integrated to procurement + MDM · monthly reconciliation
  Tier 4: Real-time discovery · auto-classification + tier-aware risk scoring
PR.AA-05 — Access permissions managed
  Tier 1: Manual ticket request · approver = whoever's free
  Tier 2: Workflow tool · standard roles · quarterly UAR (sometimes)
  Tier 3: JML automated · UAR enforced · SoD ruleset reviewed
  Tier 4: Continuous access analytics · drift detection · ML risk scoring
DE.CM-01 — Network monitored
  Tier 1: Logs collected somewhere · no review cadence
  Tier 2: SIEM ingests · alerts to inbox · someone usually checks
  Tier 3: SOC reviews tier-1 alerts 24/7 · runbooks for top use cases
  Tier 4: SOAR auto-triages · ML detection · threat intel feeds enrich alerts
RS.MA-01 — Incident response plan
  Tier 1: Some Confluence page · last updated 2022
  Tier 2: IR plan exists · roles named · never tabletoped
  Tier 3: Annual tabletop · runbooks · post-incident reviews tracked
  Tier 4: Quarterly red-team exercises · findings drive control updates
RC.RP-01 — Recovery plan executed
  Tier 1: Backups run somewhere · restores never tested
  Tier 2: Annual restore test (1 system) · DR doc exists
  Tier 3: Quarterly restore tests, all tier-1 · RTO/RPO measured · gap-tracked
GV.SC-01 — Supply chain risk mgmt
  Tier 1: Vendor list exists · no risk classification
  Tier 2: Tiered vendors · DDQ for tier-1 · contracts have security clauses
  Tier 3: Annual reassessment · SOC 2 review · offboarding controls verified
  Tier 4: Continuous monitoring (BitSight, SecurityScorecard) · SBOM tracking
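The evidence matrix works as a lookup: given a subcategory and a target Tier, it answers "what should the artifact look like?" A sketch assuming one abridged row from the matrix above; the dictionary shape is an illustrative choice, not a standard format.

```python
# Sketch: the evidence-per-Tier matrix as a lookup table (one row shown,
# abridged from RC.RP-01 above). Tiers run 1-4.
evidence = {
    "RC.RP-01": {
        1: "Backups run somewhere; restores never tested",
        2: "Annual restore test, one system; DR doc exists",
        3: "Quarterly restore tests, all tier-1; RTO/RPO measured",
        4: "Continuous resilience drills; chaos engineering; blameless RCAs",
    },
}

def expected_evidence(subcat, tier):
    """What an assessor should expect to see at a given Tier."""
    if tier not in (1, 2, 3, 4):
        raise ValueError("CSF tiers run 1-4")
    return evidence[subcat][tier]
```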

Current → Target → Action — what a Profile actually contains

CURRENT PROFILE · WHERE YOU ARE — today's posture
  Subcategory-level inventory · honest assessment of state · evidence references · gaps acknowledged, not hidden
  Example line — PR.AA-01 (identities and credentials are issued): "Manual provisioning via ticket. No SSO. No MFA on admin." Status: partially implemented. Tier alignment: 1.5.
TARGET PROFILE · WHERE YOU GO — desired posture
  Risk-driven, not aspirational · a subset of subcategories matters · time horizon 12–18 months · explicitly omitted = "not now"
  Example line — PR.AA-01: "SSO via Okta, MFA enforced on all admin + customer access." Status: target, fully implemented. Tier alignment: 3.
ACTION PLAN · HOW YOU CLOSE — the work
  Owner per subcategory · deadline, cost, dependencies · risk if not closed · measurement of progress
  Example task — PR.AA-01: Q1 Okta procurement → Q2 pilot on internal apps → Q3 customer SSO migration → Q4 MFA enforcement org-wide. Owner: VP Eng · Budget: $180k · Risk: high, audit pressure rising.
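An action plan with owners, deadlines, and costs rolls up to a budget the Board can approve. A minimal sketch; the PR.AA-01 row mirrors the example above, the GV.SC-01 row and its figures are invented for illustration.

```python
# Sketch: action-plan records with owner, deadline, cost, and open risk,
# rolled up to a total budget. Fields are illustrative.
plan = [
    {"subcat": "PR.AA-01", "owner": "VP Eng", "deadline": "Q4",
     "cost_usd": 180_000, "risk_if_open": "high"},
    {"subcat": "GV.SC-01", "owner": "CISO", "deadline": "Q3",
     "cost_usd": 60_000, "risk_if_open": "medium"},
]
total_budget = sum(item["cost_usd"] for item in plan)
high_risk_open = [i["subcat"] for i in plan if i["risk_if_open"] == "high"]
```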

Where CSF programs lose credibility

Self-attestation that nobody validates. Without an external auditor, Profiles drift toward optimism. A subcategory marked "implemented" because someone wrote a policy six months ago — but the policy isn't followed and there's no monitoring — looks identical on the heat-map to a subcategory that's actually operating. Mature programs solve this by adding a one-line evidence reference per subcategory: where is the artifact that proves it? If the answer is "we'd have to dig," the subcategory isn't really implemented.
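The "one-line evidence reference" discipline is essentially a lint pass over the Profile: any subcategory marked implemented without an artifact pointer gets flagged. A sketch with hypothetical Profile rows:

```python
# Sketch: flag "implemented" subcategories that carry no evidence
# reference. Rows are hypothetical illustration data.
profile = [
    {"subcat": "GV.PO-01", "status": "implemented",
     "evidence": "policy repo v3 + review minutes"},
    {"subcat": "DE.CM-01", "status": "implemented",
     "evidence": ""},  # optimism: no artifact behind the claim
    {"subcat": "RC.RP-01", "status": "partial",
     "evidence": ""},
]

def unverified(profile):
    """Implemented with no artifact reference: not really implemented."""
    return [r["subcat"] for r in profile
            if r["status"] == "implemented" and not r["evidence"].strip()]
```

On the heat-map, `GV.PO-01` and `DE.CM-01` look identical; the lint pass is what tells them apart.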

The Tier-vs-Subcategory confusion. Boards ask "what tier are we?" Programs answer with a single number. Wrong answer. You can be Tier 4 in DETECT and Tier 1 in RESPOND. Tier is best assessed function-by-function, then aggregated narratively. A heat-map that shows tier per function tells the truth; a single-number summary hides it.
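Function-by-function Tier reporting is a grouping exercise: bucket subcategory assessments by their function prefix and aggregate within each bucket. A sketch with hypothetical assessments; taking the minimum per function is one defensible aggregation choice (a function is as mature as its weakest process), not a NIST rule.

```python
# Sketch: Tier per function, never one org-wide number.
# Subcategory Tier assessments are hypothetical.
assessed = {"DE.CM-01": 4, "DE.AE-02": 4, "RS.MA-01": 1, "RS.CO-02": 2}

def tier_by_function(assessed):
    """Min Tier per function prefix (e.g. 'DE' from 'DE.CM-01')."""
    out = {}
    for subcat, tier in assessed.items():
        fn = subcat.split(".")[0]
        out[fn] = min(out.get(fn, 4), tier)
    return out
```

The output for this data is the truthful heat-map the paragraph describes: Tier 4 in DETECT, Tier 1 in RESPOND, and no single number to hide behind.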

Action plans without budgets are wishes. A Target Profile with 30 gap items and no budget is a backlog, not a plan. The discipline of CSF — and the part that creates audit-credible evidence — is tying each gap to an owner, a deadline, a cost, and a residual risk. The Board approves the budget; the budget approves the plan; the plan creates evidence.

The Profile is the artifact. The implementation evidence is the proof. Without both, CSF is theater.

The Informative References are the bridge to evidence. Each subcategory points to NIST 800-53, ISO 27001, and CIS Controls. If you can't produce evidence to a CSF subcategory, work backward through its references — implement the 800-53 control, document the evidence, and you've satisfied the CSF outcome. This is how mature programs get to "evidence-backed Profile" without inventing CSF-specific artifacts.

IV.
Layer 04 — Cross-framework

NIST CSF — the universal translator

Informative References
maintained by NIST
CSF function/subcat | SOX | SOC 2 (TSC) | ISO 27001:2022 | PCI DSS v4.0.1 | HIPAA | HITRUST v11 | Shared evidence
GV.OC — Org Context | ELC · scoping | CC1.1 | Cl. 4.1 · 4.2 | Req 12.5 | §164.306 | 00.a | Org context, mission statement, interested parties
GV.RM — Risk Mgmt Strategy | ELC · risk memo | CC3.1–CC3.4 | Cl. 6.1 | Req 12.3 | §164.308(a)(1) | 03.a | Risk methodology, risk appetite, treatment plan
GV.RR — Roles & Resp | ELC · governance | CC1.3 | Cl. 5.3 | Req 12.4 | §164.308(a)(2) | 02.a | Org chart, RACI, info security policy
GV.PO — Policy | ELC · policy mgmt | CC5.3 | A.5.1 | Req 12.1 | §164.316 | 04.a | Policy library, approvals, review evidence
GV.OV — Oversight | AC oversight | CC4.1 · CC4.2 | Cl. 9.3 | Req 12.4.1 | §164.308(a)(8) | 06.h | Mgmt review minutes, board cyber reports
GV.SC — Supply Chain | ITGC + BPC · TPRM | CC9.2 | A.5.19–A.5.23 | Req 12.8 · 12.9 | §164.308(b) | 05.k | Vendor inventory, due diligence, contracts, SOC 2s
ID.AM — Asset Mgmt | ITGC · scoping | CC6.1 | A.5.9–A.5.11 | Req 9.5 · 12.5 | §164.310(d)(2) | 07.a | CMDB exports, asset registers, classification labels
ID.RA — Risk Assessment | ELC · risk memo | CC3.2 | Cl. 6.1.2 · A.5.7 | Req 12.3 | §164.308(a)(1)(ii)(A) | 03.b | Threat model, risk register, scoring methodology
PR.AA — Identity & Access | ITGC — Access | CC6.1–CC6.3 | A.5.15–A.5.18 · A.8.2 | Req 7 · Req 8 | §164.308(a)(3) · (a)(4) | 01.b · 01.c · 01.v | JML tickets, UAR exports, IAM config, MFA enforcement
PR.AT — Awareness & Training | ELC · COSO Comp.4 | CC1.4 | A.6.3 | Req 12.6 | §164.308(a)(5) | 02.e · 02.f | Completion reports, phishing test results, attestations
PR.DS — Data Security | ITGC + BPC | CC6.1 · C1.1 | A.8.10–A.8.12 · A.8.24 | Req 3 · Req 4 | §164.312(a)(2)(iv) | 06.c · 10.f | Encryption inventory, KMS config, classification policy
PR.PS — Platform Security | ITGC — Operations | CC6.6 · CC6.8 | A.8.7 · A.8.9 · A.8.32 | Req 2 · 5 · 6.5 | §164.308(a)(5)(ii)(B) | 09.h · 09.j · 10.h | CIS benchmarks, hardening guides, change tickets
PR.IR — Infra Resilience | ITGC — Operations | CC6.6 · A1.2 | A.8.20 · A.8.22 | Req 1 | §164.312(e)(1) | 09.m | Network diagram, segmentation tests, firewall rules
DE.CM — Continuous Monitoring | ITGC — Operations | CC7.1 · CC7.2 | A.8.15 · A.8.16 | Req 10 | §164.312(b) | 09.aa | SIEM rules, log retention, alert tuning, anomaly detection
DE.AE — Adverse Event Analysis | BPC · ITGC ops | CC7.3 | A.5.25 | Req 10.4 | §164.308(a)(1)(ii)(D) | 11.a | Triage logs, alert categorization, false-positive rates
RS.MA — Incident Mgmt | BPC | CC7.3–CC7.5 | A.5.24–A.5.27 | Req 12.10 | §164.308(a)(6) | 11.a–11.c | Incident tickets, IR plan, tabletop reports
RS.CO — Comms during incident | ELC · disclosure | CC2.3 | A.5.26 | Req 12.10.2 | §164.404 (breach) | 11.b | Stakeholder list, comms templates, regulatory notice records
RC.RP — Recovery Plan | BPC · resilience | A1.3 | A.5.29 · A.5.30 | Req 12.10 | §164.308(a)(7) | 12.b · 12.c | DR plan, last-test report, RTO/RPO documentation

Why CSF is the lingua franca

Every framework in the Atlas can be expressed in CSF terms. NIST publishes Informative References mapping CSF subcategories to NIST 800-53 (the most granular control catalog), ISO 27001 Annex A, CIS Critical Controls, COBIT, and others. Vendors of GRC platforms use CSF as the master taxonomy because it sits one level up from the prescriptive frameworks. This is also why CSF gets read by examiners who don't have authority to require it: it gives them a vocabulary.

Translating CSF into other frameworks is straightforward; translating other frameworks into CSF is messier. A SOC 2 CC6.1 control may map to PR.AA-01, PR.AA-05, and PR.PS-01 — three subcategories from one criterion. A NIST 800-53 AC-2 control maps cleanly to PR.AA-01 because both were designed in the same NIST family. If you start from CSF and work outward, the crosswalk works. If you start from SOC 2 and try to project into CSF, the mapping is many-to-many and ambiguous.
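The asymmetry shows up directly in the data shape: one SOC 2 criterion fans out to several CSF subcategories, so the forward lookup is one-to-many and the reverse lookup must search every fan-out. A sketch using the CC6.1 example from the paragraph above:

```python
# Sketch: SOC 2 -> CSF is one-to-many (fan-out); CSF -> SOC 2 requires
# scanning every criterion's fan-out. Mapping follows the CC6.1 example
# in the text.
soc2_to_csf = {
    "CC6.1": ["PR.AA-01", "PR.AA-05", "PR.PS-01"],
}

def csf_to_soc2(subcat, mapping):
    """Reverse lookup: every criterion whose fan-out includes subcat."""
    return [crit for crit, subs in mapping.items() if subcat in subs]
```

With a full mapping loaded, the reverse lookup routinely returns multiple criteria per subcategory, which is exactly the many-to-many ambiguity described above.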

CSF is best used as the meta-framework. Treat it as the shared dictionary, not the implementation library.

The shared-evidence model is identical to the other frameworks. JML tickets, UAR exports, change tickets, vendor SOC 2s, IAM configs, encryption inventories — these artifacts satisfy CSF subcategories and the matching SOC 2/ISO/PCI/HIPAA controls simultaneously. CSF doesn't change the evidence; it changes how you organize it for governance reporting.