Maturity Model Spread — Why PwC, Microsoft RAI, Anthropic RSP/ASL, and OWASP Don’t Share Axes
What this page is
A scoping analysis for the comparison candidate “Maturity model spread (PwC, Microsoft, Anthropic, OWASP) — same axes?” parked in Comparisons Index §More candidates. The headline finding: the four artifacts measure fundamentally different things, so a head-to-head comparison would be misleading without an explicit category-aware framing. The page documents the mismatch so a future attempt at the comparison starts from the right scoping rather than re-deriving it.
The candidate stays parked. This page is the “we looked, here’s what’s not commensurable” record.
The four artifacts and what each measures
| Artifact | Type | Unit of measurement | Coverage in this wiki |
|---|---|---|---|
| PwC AI Maturity / Responsible AI Maturity | Vendor consultancy maturity ladder | Organizational AI program maturity (governance + ops + tech) | None — passing mention only |
| Microsoft Responsible AI Standard (RAI) | Principle-adoption framework with internal Crawl / Walk / Run maturity | Adoption maturity for fairness, accountability, transparency, privacy, inclusiveness, reliability | Microsoft Responsible AI Standard (RAI) exists |
| Anthropic Responsible Scaling Policy (RSP) / AI Safety Levels (ASL) | AI safety policy with ASL-1 → ASL-5 tiers | The model’s dangerous-capability level — not the org’s maturity | Not yet a wiki page (gap) |
| OWASP (multiple artifacts; no single AI CMM) | Threat / control catalogs | Threat coverage, not maturity. SAMM is a general software-security maturity model; Agentic AI Top 10 / AIVSS / LLM Top 10 are threat catalogs | OWASP ASI Top 10, OWASP AIVSS, OWASP LLM Top 10 exist; no SAMM page |
The headline finding — three different kinds of artifact
The four don’t share an axis. They split into three categorical groups:
| Group | What it measures | Members |
|---|---|---|
| Organizational program maturity | What the org does — its governance, ops, controls, lifecycle | PwC, Microsoft RAI |
| Model-capability risk tier | What the model is — dangerous-capability level of a specific frontier system | Anthropic RSP / ASL |
| Threat / control coverage | What’s covered — threat enumeration and control existence, but not maturity per se | OWASP (Top 10s, AIVSS, SAMM) |
Forcing them onto a single axis would be misleading. PwC and Microsoft RAI compete on the org-maturity axis. Anthropic ASL is a model-capability metric — it does not measure organizations. OWASP is a control-catalog companion to all of the above; it does not measure maturity.
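The axis mismatch can be made concrete with a small illustrative sketch (the type names and comparison rule here are hypothetical, invented for this page, and not drawn from any of the four frameworks): each artifact carries its measurement axis as part of its type, and a comparison is only admitted between artifacts on the same axis.

```python
from dataclasses import dataclass
from enum import Enum

class Axis(Enum):
    ORG_MATURITY = "organizational program maturity"
    MODEL_RISK_TIER = "model-capability risk tier"
    THREAT_COVERAGE = "threat / control coverage"

@dataclass(frozen=True)
class Artifact:
    name: str
    axis: Axis

PWC = Artifact("PwC AI Maturity", Axis.ORG_MATURITY)
MS_RAI = Artifact("Microsoft RAI", Axis.ORG_MATURITY)
ASL = Artifact("Anthropic RSP / ASL", Axis.MODEL_RISK_TIER)
OWASP = Artifact("OWASP catalogs", Axis.THREAT_COVERAGE)

def comparable(a: Artifact, b: Artifact) -> bool:
    """Head-to-head comparison is meaningful only on a shared axis."""
    return a.axis is b.axis

assert comparable(PWC, MS_RAI)        # genuine overlap: org-maturity axis
assert not comparable(MS_RAI, ASL)    # the category mistake this page flags
assert not comparable(PWC, OWASP)     # catalog is a companion, not a rival
```

The point of the sketch is that "which is more mature?" type-checks only inside the first group; across groups the question is ill-posed, not merely hard to answer.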
What a category-aware comparison would look like (if revived)
A useful comparison page would do five things rather than fake commensurability:
- Establish the axis-mismatch upfront. The table above (or its successor) is the headline, not buried.
- Compare PwC vs Microsoft RAI head-to-head on the organizational-program-maturity axis where they genuinely overlap.
- Position Anthropic RSP / ASL as adjacent-not-overlapping. ASL informs an org’s program but is not itself an org-program maturity model.
- Position OWASP as a control-catalog companion to all three. Use OWASP to populate the threat-coverage column inside whichever maturity model the org adopts.
- Close with how the wiki’s CMM relates to each. Org-maturity-axis overlap with PwC and Microsoft RAI; reuses OWASP threat IDs at L3+; does not try to be ASL.
Why this is parked, not pursued
Three reasons the comparison was not pursued in the session that produced this analysis:
- Authoritative versions drift. PwC and Microsoft RAI both publish frequently and revise their doctrine. Any comparison is research-grade only as a quarter-dated snapshot, and the wiki avoids "pretend authoritative across versions" framing.
- Collateral pages required. The comparison would need at least an Anthropic RSP / ASL page, a thin PwC page, and ideally an OWASP SAMM page to be properly cross-linked — roughly three stub pages of preparatory work before the comparison itself can be written.
- The category-mismatch finding is the actual interesting content. Once that finding is established, the head-to-head detail (PwC vs Microsoft RAI on org maturity) is comparatively routine consultancy review work — useful but not high-leverage for the wiki’s current focus.
The candidate is worth reviving when one or more of the following becomes true: (a) the wiki adds an Anthropic RSP / ASL page for an unrelated reason; (b) Microsoft RAI publishes a major revision worth tracking; (c) a reader specifically needs the PwC vs Microsoft head-to-head for a procurement or assessment decision.
What does not belong on the comparison
- A single-number “best maturity model”. The four don’t compete because they measure different things. Picking “the best” is incoherent. An organization adopts the right tool for its question — org-maturity self-assessment (PwC, Microsoft RAI), model-capability-tier discipline (ASL), or threat coverage (OWASP).
- A direct cross-mapping into the wiki’s CMM. The CMM overlaps with PwC / Microsoft RAI on the org-maturity axis but is not a competitor to ASL or to OWASP catalogs. Mapping CMM-vs-ASL would repeat the category mistake. Use the CMM’s crosswalk matrix for control mapping; ASL is a separate discipline.
Relations
- Parent index entry: Comparisons Index §More candidates (the candidate stays parked there)
- Adjacent comparison: Cybersecurity Capability Maturity Models — Exemplars and Design Lessons (CMMI / BSIMM / SAMM / CMMC / NIST CSF 2.0 — all org-maturity, share an axis)
- Wiki’s own CMM: Agentic AI Security CMM 2026 — sits in the org-maturity group; relates to PwC and Microsoft RAI by axis, not to ASL or OWASP catalogs
- Methodology relevance: Standards Validation Methodology §10 explicitly excludes maturity-model peers (PwC, Microsoft RAI Maturity, Anthropic RSP) from the standards-validation scope on the same categorical grounds — they are peers of the wiki’s CMM, not authorities