# Emerging Best Practices Index
Controls, playbooks, and patterns that are converging across vendors but not yet codified into a published framework. When something here matures into a named framework, promote it to wiki/frameworks/ and leave a stub redirect.
## Pages
- Agent Observability — Improving agent observability requires moving from a “black-box” model, where only final outputs are seen, to a “glass-box” security paradigm…
- Agent Sandboxing
- Agent Token Chargeback — The agent token chargeback practice attributes agentic-AI token spend to the consuming business unit and use case via a variable chargeback…
- AI-BOM: AI Bill of Materials
- AI Security Posture Management (AI-SPM) — AI-SPM is the AI analog to CSPM (Cloud Security Posture Management): a continuous discipline of inventorying AI assets, detecting misconfigurations…
- Anti-Patterns and Failure Modes — How the RA + CMM Go Wrong — Closes peer-review-readiness §4: *“No anti-patterns / failure-modes catalog.”*
- Credential Proxy Pattern for AI Agents
- Distributed Kill Switch — The distributed kill switch practice puts the authority to halt an agent or agentic workflow into the hands of every team member in the l…
- Data Security Posture Management (DSPM) for AI — DSPM maps where sensitive data lives across cloud repositories and SaaS, classifies it, and ties that map to AI usage.
- Guardian Agent Metagovernance (Guards for the Guardians) — When you deploy a guardian agent to oversee other AI agents, you create a new privileged identity in the system — one that can block, red…
- Multi-Agent Runtime Security — Cascade Detection, Behavioral Baselines, Inter-Agent IR — The depth-companion to the wiki’s single-agent observability page, focused on what’s specific to multi-agent meshes: cascade-failure detection…
- NHI Governance for AI Agents
- Oversharing Controls for AI Search — AI oversharing is the failure mode where an AI search tool retrieves and combines content that is technically RBAC-permitted but contextually…
- Plan-Validate-Execute Pattern — A structural pattern for handling high-stakes irreversible actions in agentic systems: rather than letting the agent execute autonomously…
- Prompt Injection Containment for Agentic Systems
- RAG Hardening
- Securing AI — Talking Points — A four-point briefing outline for an AI security pitch / customer conversation.
- Supply Chain Security for Agentic AI
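To give a flavor of the patterns indexed above, the Plan-Validate-Execute bullet can be sketched as follows. This is a minimal, hypothetical illustration (the tool names, `PlannedAction` type, and allowlist are invented for this sketch, not drawn from any of the linked pages): the agent proposes a full plan, an independent validator checks every step against policy, and execution happens only if the whole plan passes.

```python
# Minimal sketch of the Plan-Validate-Execute pattern. All names here
# (PlannedAction, APPROVED_TOOLS, the tool strings) are illustrative
# assumptions, not from a specific framework.
from dataclasses import dataclass

@dataclass
class PlannedAction:
    tool: str
    args: dict
    irreversible: bool

# Allowlist of reversible, low-risk tools the agent may run unattended.
APPROVED_TOOLS = {"search", "read_file"}

def validate(plan: list[PlannedAction]) -> tuple[bool, str]:
    """Reject any plan containing an irreversible or non-allowlisted step."""
    for step in plan:
        if step.irreversible:
            return False, f"irreversible step needs human approval: {step.tool}"
        if step.tool not in APPROVED_TOOLS:
            return False, f"tool not on allowlist: {step.tool}"
    return True, "ok"

def execute(plan: list[PlannedAction]) -> list[str]:
    """Validate the whole plan first; run nothing if any step fails."""
    ok, reason = validate(plan)
    if not ok:
        raise PermissionError(reason)
    return [f"ran {s.tool}({s.args})" for s in plan]

safe = [PlannedAction("search", {"q": "CVE-2024"}, irreversible=False)]
risky = safe + [PlannedAction("delete_bucket", {"name": "prod"}, irreversible=True)]

print(execute(safe))           # every step passed validation, so all run
try:
    execute(risky)
except PermissionError as e:
    print("blocked:", e)       # the whole plan is rejected before any step runs
```

The key design choice the pattern encodes: validation happens on the *plan*, before any side effects, rather than on individual tool calls mid-flight, so a single risky step blocks the entire workflow instead of being discovered after earlier steps have already executed.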
## Still needed
Egress control patterns (dedicated page), tool annotation systems, evaluation loops (offline + online), agent quality measurement, structured I/O for agents, RAG poisoning defenses.