NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF is the de facto voluntary U.S. standard for AI risk management, structured around four core functions: Govern, Map, Measure, and Manage. Published in January 2023, it serves as a governance touchstone cited by 78% of large enterprises implementing AI solutions, and state-level safe harbor provisions in Colorado, Texas, and Virginia reference AI RMF compliance.

Structure

  • Govern — Establish organizational accountability, culture, and processes for AI risk
  • Map — Identify and categorize AI risks in context
  • Measure — Analyze and assess identified risks
  • Manage — Prioritize and respond to identified AI risks
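The four functions can be read as stages in a risk lifecycle. The sketch below models that reading in Python; the function names come from the framework, but every field, status value, and the example risk item are illustrative assumptions, not anything NIST specifies:

```python
from dataclasses import dataclass

# The four AI RMF core functions, treated here as lifecycle stages.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class AIRisk:
    """Hypothetical risk record; all fields are illustrative."""
    description: str
    context: str                    # Map: where/how the AI system is used
    severity: str = "unassessed"    # Measure: analysis result
    response: str = "none"          # Manage: prioritized response
    owner: str = "unassigned"       # Govern: accountable party

    def lifecycle_status(self) -> str:
        """Report which RMF functions have been applied to this risk."""
        done = []
        if self.owner != "unassigned":
            done.append("Govern")
        if self.context:
            done.append("Map")
        if self.severity != "unassessed":
            done.append("Measure")
        if self.response != "none":
            done.append("Manage")
        return ", ".join(done) if done else "not started"

risk = AIRisk(
    description="LLM summarizer leaks PII in outputs",
    context="customer-support ticket triage",
    owner="AI governance board",
)
risk.severity = "high"
print(risk.lifecycle_status())  # Govern, Map, Measure
```

The point of the model: Govern assigns accountability before Map/Measure/Manage act on any individual risk, which mirrors how the framework positions Govern as a cross-cutting function.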

The Generative AI Profile (NIST AI 600-1), published July 2024, extended coverage to prompt injection, data poisoning, and model extraction for GenAI systems.

Q1 2026 Developments

Status: AI RMF 1.0 remains the published version. Confirmed “in revision” per the AI Action Plan, but no RMF 1.1 or 2.0 draft has materialized as of April 2026.

CAISI AI Agent Standards Initiative (February 17, 2026) is the most consequential development — the first U.S. government program explicitly targeting agentic AI interoperability and security standards. Three pillars:

  1. Facilitating industry-led agent standards
  2. Conducting research and developing guidelines
  3. Stakeholder engagement

An RFI on AI agent security (January 8, 2026) received responses from the OpenID Foundation, Perplexity, and others. A companion ITL AI Agent Identity and Authorization Concept Paper (comments due April 2, 2026) signals that NIST views agent identity as a critical near-term gap.
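The problem space the concept paper flags can be illustrated with a hypothetical delegation-token claim set for an agent. NIST has published no schema; the field names below are assumptions (loosely modeled on JWT/OAuth actor claims) showing the kind of identity and authorization metadata at stake:

```python
import time

# Hypothetical agent-identity claim set. Every field name and value here
# is an illustrative assumption, not a NIST or standards-body schema.
agent_token_claims = {
    "sub": "agent:invoice-processor-01",    # the acting agent
    "act": {"sub": "user:alice"},           # the human delegator
    "aud": "https://erp.example.internal",  # target system (hypothetical)
    "scope": ["invoices:read", "invoices:approve"],
    "exp": int(time.time()) + 900,          # short-lived: 15 minutes
    "chain": ["user:alice", "agent:orchestrator",
              "agent:invoice-processor-01"],  # delegation chain
}

def scope_allows(claims: dict, action: str) -> bool:
    """Check a requested action against the token's delegated scope."""
    return action in claims.get("scope", [])

print(scope_allows(agent_token_claims, "invoices:approve"))  # True
print(scope_allows(agent_token_claims, "invoices:delete"))   # False
```

The open questions the concept paper raises map onto this sketch: who mints such tokens, how delegation chains are verified, and how scope is enforced when one agent invokes another.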

Additional Q1 2026 publications:

  • IR 8605A (COSAiS annotated outline for predictive AI control overlays, January 8) — adapts SP 800-53 controls for AI; future overlays for generative and agentic AI planned but unscheduled
  • Cyber AI Profile (NISTIR 8596) — completed public comment period January 30; applies CSF 2.0 across three focus areas; will progress to initial public draft in 2026
  • NIST AI 800-4 (March 6) — first federal-level report mapping gaps in post-deployment AI monitoring; identifies human factors monitoring as the biggest blind spot across six monitoring categories

Strengths

  • CAISI initiative directly addresses the agentic identity gap
  • COSAiS project is the first attempt to provide specific, implementable SP 800-53 control overlays for AI
  • Cyber AI Profile bridges AI RMF and CSF 2.0, enabling organizations already using CSF to extend coverage to AI
  • Broad adoption and state-level regulatory reference create accountability mechanisms

Gaps and Shortcomings

  • Describes “what” rather than “how” — no testable control requirements with evidence criteria
  • Does not distinguish model development from runtime security
  • Agentic AI-specific controls acknowledged as a gap but no published guidance exists yet
  • MCP/A2A protocol security, plugin/skill supply chains, agent identity management, and cognitive file integrity are completely unaddressed
  • No AI incident response specificity (IoCs, playbooks, forensic guidance)
  • ML-BOM/AI-BOM requirements absent
  • Platform-level vs. prompt-level enforcement distinction not articulated
  • COSAiS overlays are annotated outlines, not implementation guides
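On the ML-BOM/AI-BOM gap: the CycloneDX specification (version 1.5 and later) already defines a machine-learning-model component type that such a requirement could build on. The fragment below is a minimal sketch in that style; the model name, version, and hash are invented for illustration:

```python
import json

# Minimal CycloneDX-style ML-BOM fragment. CycloneDX 1.5+ defines the
# "machine-learning-model" component type; the model name, version, and
# hash below are invented placeholders.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "support-summarizer",
            "version": "2.3.0",
            "hashes": [{"alg": "SHA-256", "content": "0" * 64}],
        }
    ],
}

print(ml_bom["components"][0]["type"])  # machine-learning-model
```

An AI RMF revision could close this gap simply by pointing at an existing BOM format rather than defining a new one.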

Coverage Against OWASP ASI Top 10

  ASI Category                   Coverage
  ASI01: Agent Goal Hijack       ○ None
  ASI02: Tool Misuse             ○ None
  ASI03: Identity & Privilege    ○ None
  ASI04: Supply Chain            ◐ Partial
  ASI05: Data Disclosure         ◐ Partial
  ASI06: Memory Poisoning        ○ None
  ASI07: Insecure Inter-Agent    ○ None
  ASI08: Cascading Failures      ○ None
  ASI09: Missing Guardrails      ◐ Partial
  ASI10: Rogue Agents            ○ None
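For gap tracking, the table above can be restated as a machine-readable map and tallied (the category names are from the OWASP ASI Top 10 list; the coverage ratings are this document's assessment):

```python
from collections import Counter

# Coverage of the OWASP ASI Top 10 categories, as tabulated above.
coverage = {
    "ASI01: Agent Goal Hijack": "None",
    "ASI02: Tool Misuse": "None",
    "ASI03: Identity & Privilege": "None",
    "ASI04: Supply Chain": "Partial",
    "ASI05: Data Disclosure": "Partial",
    "ASI06: Memory Poisoning": "None",
    "ASI07: Insecure Inter-Agent": "None",
    "ASI08: Cascading Failures": "None",
    "ASI09: Missing Guardrails": "Partial",
    "ASI10: Rogue Agents": "None",
}

tally = Counter(coverage.values())
print(dict(tally))  # {'None': 7, 'Partial': 3}
```

The tally makes the headline concrete: seven of ten agentic-security categories have no RMF coverage at all, and none is fully covered.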

Watch Items (2026)

  • NIST IR 8605B/C — COSAiS overlays for generative and agentic AI (SP 800-53 control adaptations); unscheduled but expected H2 2026
  • NISTIR 8596 initial public draft — Cyber AI Profile bridging CSF 2.0 and AI RMF
  • RMF revision (1.1 or 2.0) — no draft timeline announced

See Also