AI-BOM: AI Bill of Materials

What It Is

An AI Bill of Materials (AI-BOM) is a structured inventory of all components that compose an agentic AI system — analogous to a Software Bill of Materials (SBOM) for traditional software — extended to capture the artifact categories unique to AI deployments: model weights, training data attestations, skills/plugins, MCP servers, and cognitive identity files.

The term is used in two related but distinct senses:

  1. Static AI-BOM: a manifest produced at build/deploy time listing all AI system components and their provenance. Enables supply chain auditing.
  2. Runtime AI-BOM (Miggo Security’s usage): continuous discovery and tracking of what AI components are actually running in production — analogous to a CMDB but for AI artifacts. Enables behavioral drift detection.
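The static/runtime distinction can be made concrete with a minimal record sketch. All class and field names here are illustrative, not drawn from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class BomEntry:
    """One component in a static AI-BOM manifest (illustrative fields)."""
    name: str
    version: str
    category: str   # e.g. "model", "skill", "mcp-server"
    source: str     # where the component was installed from
    sha256: str     # content hash recorded at build/deploy time

@dataclass
class RuntimeObservation:
    """What a runtime AI-BOM additionally accumulates for the same component."""
    name: str
    tools_invoked: set = field(default_factory=set)
    network_destinations: set = field(default_factory=set)

# A static entry is fixed at deploy; a runtime observation grows as the agent runs.
entry = BomEntry("summarizer-skill", "1.2.0", "skill", "marketplace", "ab12cd34")
obs = RuntimeObservation("summarizer-skill")
obs.tools_invoked.add("http_get")
```

The static entry supports audit questions ("what was deployed?"); the runtime observation supports drift questions ("what is it actually doing?").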

Why It Matters

As of June 2025, ML-BOM adoption lags 48% behind SBOM requirements (Lineaje survey). JFrog reported a 6.5-fold increase in malicious models on Hugging Face in 2024–2025. Meanwhile, three Q1 2026 supply chain incidents (ClawHavoc, the agentic skill marketplace supply chain attack; SANDWORM_MODE, the npm worm that poisoned AI toolchains; and the LiteLLM supply chain compromise via a Google ADK dependency) demonstrate that attackers are exploiting AI supply chains without needing to compromise a model at all: they target the plugins, skills, and framework dependencies instead.

Without an AI-BOM, security teams cannot answer: “What model version is running in production? What skills are installed on this agent? Where did that MCP server come from? Has this training data been attested?”

Components to Track

An AI-BOM for an agentic deployment should cover:

| Component category | What to track | Why it matters |
|---|---|---|
| Model weights | Name, version, provider, SHA-256, training data attestation | Model substitution, backdoored weights |
| Skills / plugins | Name, version, publisher, install source, SHA-256, behavioral scope | ClawHavoc-class supply chain attacks |
| MCP servers | Name, version, origin, transport security, allowed tools | Tool poisoning, unauthorized tool exposure |
| Cognitive identity files | SOUL.md, IDENTITY.md: hash, change history | Behavioral hijacking without code changes |
| Framework dependencies | LangChain, CrewAI, AutoGen, etc.: version, license | Dependency confusion, LiteLLM-class compromises |
| RAG data sources | Corpus version, last scan date, access controls | RAG poisoning, indirect prompt injection |
| Orchestration code | Version, signing, SLSA provenance level | Code-level tampering |
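Content hashing underpins most of the rows above. A minimal sketch of building hash-bearing inventory entries; the entry fields and example paths are illustrative:

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large model weights need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def inventory(components: dict[str, Path]) -> list[dict]:
    """Build AI-BOM entries for a map of component name -> artifact path."""
    return [
        {"name": name, "path": str(path), "sha256": sha256_file(path)}
        for name, path in components.items()
    ]

# Illustrative usage: hash a cognitive identity file and a skill bundle.
# entries = inventory({"SOUL.md": Path("agent/SOUL.md"),
#                      "summarizer": Path("skills/summarizer.zip")})
# print(json.dumps(entries, indent=2))
```

Streaming in chunks matters for model weights, which can run to many gigabytes.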

Format and Standards

  • CycloneDX ML extension: the most AI-specific format; supports model metadata, dataset references, algorithm documentation. Recommended for static AI-BOMs.
  • SPDX: more mature tooling ecosystem; less AI-specific but acceptable for framework-level dependencies.
  • SLSA (Supply-chain Levels for Software Artifacts): the provenance framework; aim for SLSA Level 2 or higher for models deployed in production.
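For the static case, a minimal CycloneDX-style document can be assembled directly. Field names below follow my reading of the CycloneDX JSON schema (version 1.5 added the "machine-learning-model" component type); the component itself is invented for illustration, and the hash shown is the SHA-256 of the empty string as a placeholder. Verify field names against the official schema before relying on this sketch:

```python
import json

bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-summarizer",  # illustrative component, not a real model
            "version": "2.1.0",
            "hashes": [
                # Placeholder digest (SHA-256 of the empty string).
                {"alg": "SHA-256",
                 "content": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
            ],
        }
    ],
}
print(json.dumps(bom, indent=2))
```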

Agentic-specific fields not yet covered by existing standards:

  • Cognitive identity file hashes
  • MCP server behavioral scope declarations
  • Agent-to-agent communication topology
  • Skill permission scopes (what tools/APIs can this skill invoke?)

Runtime AI-BOM (Miggo Pattern)

Miggo Security’s Runtime Defense Platform uses an AI-BOM discovery approach:

  1. At deploy time: inventory all AI components (model, framework, skills, MCP servers).
  2. At runtime: use DeepTracing to observe actual component behavior — what tools each component invokes, what data it accesses, what network destinations it calls.
  3. Continuously: compare runtime behavior against the baseline inventoried at deploy time; any deviation raises an alert.
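The steps above reduce to a baseline-vs-observed diff per component. A minimal sketch, where the behavior-string encoding and example values are assumptions:

```python
def detect_drift(baseline: dict[str, set[str]],
                 observed: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per component, behaviors seen at runtime but absent from the
    deploy-time baseline. An empty result means no drift."""
    drift = {}
    for component, behaviors in observed.items():
        allowed = baseline.get(component, set())
        unexpected = behaviors - allowed
        if unexpected:
            drift[component] = unexpected
    return drift

baseline = {"summarizer-skill": {"tool:http_get"}}
observed = {"summarizer-skill": {"tool:http_get", "net:198.51.100.7:443"}}
print(detect_drift(baseline, observed))  # the new network destination surfaces as drift
```

A component absent from the baseline entirely (e.g. a skill installed after deploy) also surfaces, since its allowed set defaults to empty.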

This extends the static AI-BOM into a live behavioral inventory, much as EDR extends a static CMDB: the CMDB records which assets exist, while EDR observes what the processes on them actually do.

Maturity Progression

Align AI-BOM maturity with the D8 Supply Chain & AI-BOM domain of the Agentic AI Security Capability Maturity Model (A 2026 Practical Proposal):

| CMM level | AI-BOM capability |
|---|---|
| Level 1 | No AI component inventory |
| Level 2 | Manual model inventory; ad hoc tracking |
| Level 3 | Automated AI-BOM generation at build time; SHA-256 for all components |
| Level 4 | Signed AI-BOMs; ML-BOM for all production models; runtime discovery |
| Level 5 | Full provenance verification (SLSA); continuous runtime BOM diffing; threat intel integration |

Level 4 corresponds to “ML-BOM for all production models” in the CMM’s supply chain domain criteria.

Implementation Priorities

  1. Start with model inventory: know what model version is running in each agent.
  2. Add skills/plugins: every installed skill should be tracked with source, hash, install date.
  3. Layer in MCP servers: as MCP adoption grows, MCP server provenance becomes critical.
  4. Automate generation: build AI-BOM generation into the CI/CD pipeline, not as a manual step.
  5. Feed to SIEM: AI-BOM data enables correlation — when an incident occurs, the BOM tells you what was running.
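Priority 5 implies keeping time-stamped BOM snapshots so an incident timestamp can be mapped back to what was running at that moment. A minimal sketch; the snapshot storage format and example versions are illustrative:

```python
from bisect import bisect_right

# Illustrative time-stamped snapshots: (unix_time, {component: version}),
# sorted by time, one entry per deploy.
snapshots = [
    (1_700_000_000, {"model": "2.0.0", "summarizer-skill": "1.1.0"}),
    (1_700_500_000, {"model": "2.1.0", "summarizer-skill": "1.2.0"}),
]

def bom_at(ts: int) -> dict[str, str]:
    """Return the BOM snapshot in effect at a given incident timestamp."""
    times = [t for t, _ in snapshots]
    idx = bisect_right(times, ts) - 1
    if idx < 0:
        return {}  # incident predates the first recorded deploy
    return snapshots[idx][1]

# An incident between the two deploys resolves to the first snapshot,
# telling the responder which model and skill versions were live.
```

In practice the lookup would query a BOM store keyed by deploy event rather than an in-memory list, but the correlation logic is the same.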

Known Gaps

  • No universal standard for agentic-specific AI-BOM fields (cognitive files, MCP scope, skill permissions).
  • Runtime AI-BOM tooling is nascent — Miggo is the most specific implementation evidence available as of Q1 2026.
  • No enforcement mechanism equivalent to SBOM mandates (e.g., Executive Order 14028 for software) specifically for AI components.

See Also