Mythos-ready Security Program — CISO Playbook

Operational instrument from The “AI Vulnerability Storm”: Building a “Mythos-ready” Security Program (CSA + SANS + [un]prompted + OWASP Gen AI Security Project, 2026-04-12, v1.0). Designed for “the CISO who needs to walk into a room Monday morning with a plan.”

The playbook comprises three load-bearing artifacts: a 10-question triage instrument, a 13-row draft Risk Register cross-walked to OWASP / MITRE ATLAS / NIST CSF 2.0 / CSA AICM, and an 11-row Priority Actions table with explicit start times and time horizons. A 90-day Executive/Board Briefing template follows.

The playbook is the operational answer to the Zero Day Clock’s time-to-exploitation (TTE) collapse (2.3 years in 2018 → 9 hours in 2026), to the Mythos / Glasswing capability step-change, and to the parallel pressure of Citizen Coders proliferating coding agents into non-developer functions.

Minimum viable resilience is the entry tier

Before pursuing maturity, achieve minimum viable resilience: realign what the program measures — cost of exploitation, early detection of compromise, blast-radius containment — to the new threat tempo. Many pre-AI program assumptions are now broken: TTE measured in hours, incident frequency rising, the CVE system at risk of failing at scale, shadow IT fragmenting central control as Citizen Coders proliferate, and threat intelligence lagging behind discovery and exploitation.

Section 1 — 10 Questions to Understand Your Security Program State and Influence

A short triage instrument to reach ground truth on program state and gauge influence on business functions. Use real examples, not policy statements.

1. What is our actual stance on AI today?
   Context: Allowed, tolerated, restricted, or unknown.
2. Can employees use agentic coding tools in the enterprise today?
   Context: Looping LLM tool use including coding agents (regardless of writing code); do you have security guardrails in place?
3. Can employees contribute to open source without legal ambiguity?
   Context: A legal / IP question, not a tech philosophy question.
4. Do we have disciplined control of repos, artifacts, and software, including for the agentic supply chain (MCP servers, plugins, skills)?
   Context: Source control, package paths, artifact provenance, what is actually allowed in CI/CD and through coding agents.
5. Is there a real cooling-off / security gate between code change and production?
   Context: Demonstrates enforcement of security in release cycles and control of the software supply chain.
6. Is security operational, or primarily advisory?
   Context: Extent to which the security function can directly affect outcomes vs. review-and-escalate.
7. What is the fastest this company has made a security-driven production change in the last year?
   Context: Use a real example, not a policy statement.
8. Are our critical “crown jewels” explicitly tracked and current?
   Context: Not theoretically important systems — the actual few that matter most and their dependencies.
9. Do we know how to get urgent work prioritized by our key third parties?
   Context: Feature requests, bug reports, security escalations, relationship ownership, leverage.
10. Does executive leadership have a working definition of urgency?
   Context: “If everything is a crisis, nothing is urgent.”

Section 2 — Mythos-ready Risk Register (DRAFT)

13 risks across three severity tiers. Each risk maps to framework references and to one or more Priority Actions in Section 3.

Framework prefix legend (per Appendix B of the source):

  • LLMxx — OWASP Top 10 for LLM Applications 2025
  • ASIxx — OWASP Top 10 for Agentic Applications 2026
  • AML.Txxxx — MITRE ATLAS adversarial-ML techniques
  • GV/ID/PR/DE/RS — NIST CSF 2.0 functions (Govern / Identify / Protect / Detect / Respond)
  • AICM: xxx — CSA AI Control Matrix V1.0.3 controls

Severity legend: Critical = immediate exposure if unaddressed; High = significant exposure within 45 days; Medium = organizational risk requiring structured attention — no directly exploitable exposure, but it weakens higher-priority controls.

Risk type legend: Threat = external actor capability (raise cost, can’t eliminate); Vulnerability = internal exploitable condition (addressable via remediation); Capability gap = defensive function missing or below required level; Governance = organizational/structural failure that amplifies every other risk.

Critical (5)

Risk 1 — Accelerated Threat Exploitation: AI-autonomous exploit generation at machine speed.
AI models have been discovering vulnerabilities and creating exploits for over a year. Mythos accelerates this; non-frontier open-weight models can already achieve much of this at accessible cost. Each patch also becomes an exploit blueprint, as AI accelerates patch-diffing and reverse engineering of fixes.
Type: Threat. Framework refs: AML.T0040, AML.T0043, PR.PS, PR.IR, AICM: TVM, MDS, AIS. Maps to PA 4, 5.

Risk 2 — Insufficient AI Automation Capabilities: defenders operating at human speed while attackers operate with AI augmentation.
Asymmetry is not just technological but cultural: teams that do not adopt AI coding agents cannot match the speed or scale of AI-augmented threats, regardless of their technical skill.
Type: Capability gap. Framework refs: GV.OC, GV.RM, DE.CM, RS.MA, AICM: GRC, HRS, MDS. Maps to PA 1, 2.

Risk 3 — Unmanaged AI Agent Attack Surface: privileged AI agents outside existing control frameworks.
Agents (often coding agents) are necessary to counter AI-speed threats, but they’re privileged, insecure by default, much of attackers’ current focus, and not covered by existing security controls. Introduces defensive risks (insecure privileged agents inside your environment) and supply-chain risks (MCP servers, VS Code extensions, agentic skills, rules).
Type: Vulnerability. Framework refs: LLM06, ASI02, ASI03, AML.T0047, PR.AA, GV.SC, AICM: MDS, IAM, STA, AIS, CCC. Maps to PA 3.

Risk 4 — Inadequate Incident Detection and Response Velocity: detection and response at human speed against machine-speed attacks.
AI has reduced the sophistication and time needed to construct complex attacks. Alert triage volumes, SIEM correlation speed, and containment authorization latency were designed for human-paced threats.
Type: Capability gap. Framework refs: ASI08, AML.T0047, DE.CM, DE.AE, RS.MA, AICM: SEF, LOG. Maps to PA 9, 10.

Risk 5 — Cybersecurity Risk Model Outdated: stakeholder decisions based on pre-AI risk models.
Reporting metrics built on pre-AI assumptions about exploit timelines and attack complexity may no longer reflect actual exposure. Outdated models could lead to underfunding of the controls that prevent incidents.
Type: Governance. Framework refs: GV.OC, GV.RM, RS.CO, AICM: GRC, A&A. Maps to PA 6.

High (7)

Risk 6 — Incomplete Asset and Exposure Inventory.
AI-accelerated attacker capabilities change which assets are at highest risk. Attackers can scan an entire OS codebase at accessible cost and enumerate exposure faster than the org can inventory. Proliferation of coding agents to non-developers further fragments central IT visibility.
Type: Vulnerability. Framework refs: ASI04, AML.T0000, ID.AM, GV.SC, AICM: UEM, DCS, MDS, STA. Maps to PA 7.

Risk 7 — Unsecured Software Delivery Pipeline.
Code from humans and AI agents ships without consistent security review. AI-generated code introduces vulnerabilities at higher volume than manual development; same defect rate, more code, more capable adversary. Without LLM-driven review integrated into the pipeline, exploitable flaws reach production before defenders can find them.
Type: Vulnerability. Framework refs: LLM01, LLM05, LLM08, ASI01, AML.T0018, AML.T0051.001, PR.PS, ID.IM, AICM: AIS, CCC, TVM, STA. Maps to PA 1.

Risk 8 — Network Architecture Insufficient for Lateral Movement Containment.
A flat or insufficiently segmented network gives every successful exploit leverage. AI-driven attacks worsen this via automated multi-hop lateral movement. With AI-accelerated discovery, architectural segmentation becomes the primary control limiting blast radius.
Type: Vulnerability. Framework refs: PR.IR, PR.PS, AICM: DCS, IAM. Maps to PA 8.

Risk 9 — Continuous Vulnerability Management Maturity Gap: reactive posture, no VulnOps function.
Quarterly pen tests and reactive patching cycles cannot keep pace with continuous AI-driven discovery. Existing CVE/NVD infrastructure and patch prioritization workflows were built for dozens of critical CVEs per month, not hundreds.
Type: Capability gap. Framework refs: ASI10, ASI06, AML.T0018, ID.RA, ID.AM, DE.CM, AICM: TVM, AIS, STA, GRC. Maps to PA 11.

Risk 10 — Threat Detection Dependent on Lagging Intelligence.
CVE- and KEV-based intelligence is structurally outpaced by AI discovery rates. Novel vulnerabilities have no KEV listing by definition.
Type: Capability gap. Framework refs: AML.T0000, DE.CM, ID.RA, GV.OV, AICM: TVM, LOG. Maps to PA 9, 10.

Risk 11 — Innovation Governance and Oversight Deficit.
Without cross-functional governance, onboarding and deployment of any new control runs into approval friction that slows adoption. AI-accelerated attacker timelines mean this friction now has a harder deadline.
Type: Governance. Framework refs: GV.OC, GV.RM, GV.RR, GV.OV, AICM: GRC, A&A. Maps to PA 2, 4.

Risk 12 — Regulatory and Liability Exposure from AI-Discovered Vulnerabilities.
The EU AI Act (August 2026) introduces automated audit, incident reporting, and cybersecurity requirements around AI. Existing regulations use reasonableness as a test; when AI can find significantly more vulnerabilities at accessible cost, the standard of what constitutes reasonable defensive effort shifts. Boards face direct-financial-exposure questions about whether they used available AI defensive tools.
Type: Governance. Framework refs: GV.OC, GV.RM, GV.RR, AICM: GRC, A&A. Maps to PA 1, 4.

Medium (1)

Risk 13 — AI Hype and Confusion Causing Systematic Inaction.
Signal-to-noise collapse in threat and technology guidance. The volume of AI-related security guidance, commentary, and vendor claims exceeds anything the industry has experienced. Teams that dismiss the shift as hype, or exhaust attention on low-signal content, will miss critical threat-landscape changes they need to react to.
Type: Governance. Framework refs: GV.OC, GV.RM, AICM: GRC, HRS. Maps to PA 1.

Section 3 — Priority Actions (DRAFT, Aggressive Timetable)

11 actions with explicit start times and time horizons. “For the CISO who needs to walk into a room Monday morning with a plan.”

Each action lists its category, risk tier, start window, time horizon, and what it means in practice.

PA 1 — Point Agents at Your Code and Pipelines
Category: Risk Control. Risk tier: Critical. Start: This week. Horizon: Ongoing.
Turn agents and LLM capabilities inward on your own code and dependencies. Start by asking an agent for a security review of any code; build toward a full audit within CI/CD; shift left by adding capabilities directly into developers’ coding agents. All code (human or AI-generated) should pass LLM-driven security review before merge. Commercial: Claude Code Security (Anthropic), Codex Security (OpenAI). Open source: OpenAnt (Knostic), raptor (Claude Code framework), exploitation-validator agentic skill, agentic skills from Trail of Bits.

PA 2 — Require AI Agent Adoption
Category: Operational Enabler. Risk tier: Critical. Start: This week. Horizon: Ongoing.
Formalize AI agent usage (mostly coding agents) as part of all security functions, with mandatory security controls and oversight in place. While defensive AI tech has not yet caught up, these agents empower staff to be effective in the new threat landscape, allowing acceleration beyond “human speed.” Optional adoption programs have not been shown to overcome cultural barriers; adoption is a limiting factor for the rest of these actions.

PA 3 — Defend Your Agents
Category: Risk Control. Risk tier: Critical. Start: This month. Horizon: 45 days.
Without agents, most tasks on this list will be untenable, but agents must be defended. Agents are not covered by existing controls and introduce both cyber defense and agentic supply-chain risks. The agent harness — prompts, tool definitions, retrieval pipelines, and escalation logic — is where the most consequential failures occur; audit it with the same rigor as the agent’s permissions. Before deploying agents in or adjacent to production environments, define scope boundaries, blast-radius limits, escalation logic, and human override mechanisms. Do not wait for industry governance frameworks. Define your own now.

PA 4 — Establish Innovation Acceleration Governance
Category: Governance. Risk tier: Critical. Start: This week. Horizon: 6 months.
Cross-functional mechanism (Security, Legal, Engineering) to evaluate new offensive threats and accelerate onboarding of defensive technologies. Without this in place, every other action runs into approval friction that slows deployment to the attacker’s advantage.

PA 5 — Prepare for Continuous Patching
Category: Risk Control. Risk tier: Critical. Start: This week. Horizon: 45 days.
With increased vulnerability discovery and reporting — and Glasswing making Mythos available to significant software vendors — prepare triage and deployment capacity to handle a potential flood of patches as new critical vulnerabilities are disclosed.

PA 6 — Update Risk Models and Reporting
Category: Governance. Risk tier: Critical. Start: This week. Horizon: 45 days.
Review and update security risk metrics, reporting, and business risk calculations to reflect AI-accelerated exploit timelines and attack complexity. Pre-AI assumptions about patch windows, exploit scarcity, and incident frequency may no longer hold. Outdated models could underfund controls. Communicate and collaborate with stakeholders; map and prioritize potential effects on business, reporting, and projections.

PA 7 — Inventory and Reduce Attack Surface
Category: Risk Control. Risk tier: High. Start: This month. Horizon: 90 days.
Use agents to accelerate inventory; build toward full-coverage inventory over 45 days. Generate real SBOMs. Aggressively shut down unneeded or unmaintained functionality, phase out suppliers that no longer comply with updated vulnerability-management requirements, isolate or air-gap at-risk systems. You cannot patch, segment, or defend what you don’t know exists.

PA 8 — Harden Your Environment
Category: Risk Control. Risk tier: High. Start: This month. Horizon: 6 months.
Basics remain valid. Implement egress filtering (it blocked every public log4j exploit). Enforce deep segmentation and zero trust where possible. Lock down the dependency chain. Mandate phishing-resistant MFA for all privileged accounts. Every boundary increases attacker cost. Aspects that can be accelerated with AI: software minimization (reduces operational overhead of second-order functions such as patching) — e.g., minimizing base OS images, replacing third-party libraries with framework primitives.

PA 9 — Build a Deception Capability
Category: Risk Control. Risk tier: High. Start: Next 90 days. Horizon: 6 months.
Attack-tool- and vulnerability-independent: identifies attacks and attackers based on TTPs. Deploy canaries and honey tokens, layer behavioral monitoring, pre-authorize containment actions, build response playbooks that execute at machine speed.

PA 10 — Build an Automated Response Capability
Category: Risk Control. Risk tier: High. Start: Next 90 days. Horizon: 12 months.
Improve detection engineering and incident response to be systemic and, to the degree possible, autonomous. Examples: asset and user behavioral analysis, pre-authorized containment actions, response playbooks that execute at machine speed.

PA 11 — Stand Up VulnOps
Category: Risk Control. Risk tier: Critical. Start: Next 6 months. Horizon: 12 months.
Long-term, there is no alternative to building a permanent Vulnerability Operations (VulnOps) function — staffed and automated like DevOps, but for autonomous vulnerability research and remediation. Owns continuous discovery of zero-day vulnerabilities across the entire software estate (own code through third-party); establishes automated remediation pipelines. Design VulnOps around triage discipline from the start.
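PA 1's "LLM-driven security review before merge" reduces to a merge gate that blocks on findings above a severity threshold. A minimal sketch of that gate logic, with the reviewer left as a pluggable stand-in — none of the names below come from the tools listed in the table:

```python
# Merge-gate sketch: block a merge when a security review (pluggable;
# the reviewer callable here is a stand-in, not any real agent API)
# reports findings at or above a blocking severity.
from dataclasses import dataclass
from typing import Callable, List

SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

@dataclass
class Finding:
    severity: str   # one of SEVERITY_RANK's keys
    summary: str

def merge_allowed(diff: str,
                  reviewer: Callable[[str], List[Finding]],
                  block_at: str = "high") -> bool:
    """Return False if any finding meets or exceeds the blocking severity."""
    threshold = SEVERITY_RANK[block_at]
    return all(SEVERITY_RANK[f.severity] < threshold
               for f in reviewer(diff))

# Toy reviewer for illustration: flags an obvious hard-coded credential.
def toy_reviewer(diff: str) -> List[Finding]:
    if "password =" in diff:
        return [Finding("critical", "hard-coded credential in diff")]
    return []
```

In CI this would run against the pull-request diff, with the toy reviewer replaced by a call to whichever review agent the organization adopts.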
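PA 3 asks for scope boundaries, blast-radius limits, escalation logic, and human override before agents go near production. A sketch of harness-side enforcement, assuming a hypothetical guard object rather than any real agent framework:

```python
# Harness-side guard for agent tool calls (all names illustrative):
# enforce a path-scope allowlist, cap mutating operations per session,
# and queue anything out of bounds for human review instead of running it.
class AgentGuard:
    def __init__(self, allowed_paths, max_writes=10):
        self.allowed_paths = tuple(allowed_paths)
        self.max_writes = max_writes
        self.writes = 0
        self.escalations = []   # human-override queue

    def authorize(self, tool: str, target: str) -> bool:
        # Scope boundary: only act inside explicitly allowed paths.
        if not target.startswith(self.allowed_paths):
            self.escalations.append((tool, target))
            return False
        # Blast-radius limit: cap mutating operations per session.
        if tool == "write":
            if self.writes >= self.max_writes:
                self.escalations.append((tool, target))
                return False
            self.writes += 1
        return True
```

The point is that the harness, not the model, holds the policy: the agent can request anything, but only in-scope, in-budget calls execute.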
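PA 7's "generate real SBOMs" should use dedicated tooling in practice; purely to illustrate the shape of the artifact, here is a minimal CycloneDX-style fragment built from a flat dependency map (field selection is simplified relative to the full specification):

```python
# Minimal CycloneDX-style SBOM fragment from a name -> version mapping.
# Illustrative only: real generation belongs to a dedicated SBOM tool.
import json

def make_sbom(deps: dict) -> str:
    """deps: mapping of package name -> version."""
    doc = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in sorted(deps.items())
        ],
    }
    return json.dumps(doc, indent=2)
```

Even this toy form makes the PA's point concrete: an inventory you can diff, query, and hand to triage when the next dependency advisory lands.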
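PA 9's canaries and honey tokens rest on a simple invariant: a planted credential has no legitimate use, so any use of it is a high-confidence intrusion signal regardless of attack tooling. A toy sketch of that invariant:

```python
# Canary-token sketch: plant unique credentials nobody should ever use;
# any authentication attempt with one is an alert. Names illustrative.
import secrets

class CanaryMonitor:
    def __init__(self):
        self.planted = {}   # token -> label of where it was planted
        self.alerts = []

    def plant(self, label: str) -> str:
        # Unique, unguessable token tied to its planting location.
        token = f"canary-{label}-{secrets.token_hex(16)}"
        self.planted[token] = label
        return token

    def observe_auth_attempt(self, credential: str) -> None:
        # Hooked into the auth path: use of a planted token fires an alert.
        if credential in self.planted:
            self.alerts.append(self.planted[credential])
```

Behavioral monitoring and pre-authorized containment then layer on top of this signal, since a canary hit carries almost no false-positive cost.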
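PA 10's pre-authorized containment can be modeled as a lookup from detection type to action, bounded by a blast-radius cap above which a human decides. Detection and action names below are illustrative, not from any product:

```python
# Machine-speed response sketch: containment actions pre-authorized
# below a blast-radius cap; anything unknown or larger escalates.
PREAUTHORIZED = {
    "canary_credential_used": "isolate_host",
    "impossible_travel_login": "revoke_sessions",
}

def respond(detection: str, affected_hosts: int,
            max_auto_hosts: int = 5) -> str:
    action = PREAUTHORIZED.get(detection)
    if action is None:
        return "escalate_to_human"   # unknown pattern: a human decides
    if affected_hosts > max_auto_hosts:
        return "escalate_to_human"   # above the pre-authorized blast radius
    return action                    # execute immediately, no approval wait
```

The design choice is that authorization latency is paid once, up front, when the playbook is approved — not per incident at 3 a.m.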

Section 4 — Executive and Board Briefing Template

Two talking points and an aggressive 90-day plan: five workstreams plus a progress-tracking cadence.

Talking Point: AI Accelerates Both Sides. AI is making us faster and more competitive; the same capabilities make attackers faster and more dangerous. Time-to-disruption compressed from weeks to hours; permanent acceleration, not a temporary spike. Turned inward, these tools let us find and fix our own weaknesses before adversaries do.

Talking Point: An Aggressive Plan Is Needed. An appropriately funded foundation lets programs adapt rather than merely react in a crisis. The speed and volume of what we must handle has changed. This is not an open-ended AI initiative.

90-day aggressive plan (clear owners and outcomes):

  • Increase People and Capacity. Repurpose existing staff and onboard headcount / contractor capacity to handle increases in triage, remediation, and incidents — protect experienced staff from burnout, especially as the first wave of Glasswing patches hits.
  • Deploy AI Tooling. Formalize AI agent usage across all security functions as standard practice: scanning own code, AI-driven review before code ships, augmenting teams with purpose-built agents.
  • Harden Infrastructure. Update asset inventories; reduce unnecessary exposure; enforce segmentation, Zero Trust, egress filtering, phishing-resistant authentication. Validate across internal systems and key third-party providers (MSPs, SOCs).
  • Accelerate Procurement and Governance. Align Security + Legal + Engineering on threat evaluation and fast-track priority defensive-technology onboarding. Current approval cycles are too slow for the coming environment.
  • Update Playbooks. Update technical + communications response plans to execute at required speed and scale — including pre-authorized containment and coordination for simultaneous incidents.
  • Track Progress. Regular check-ins throughout the 90-day period to capture results and identify roadblocks.

Section 5 — How to Adapt

The full Risk Register and Priority Actions assume an aggressive timetable that may not be realistic for every organization. Adapt along these dimensions:

  • Organization size, complexity, and budget. Complicated environments and entirely-SaaS environments adapt differently; some have agility, others have budget.
  • Mutual constraints. Some recommendations contradict each other if followed as written — e.g., delaying patches behind a supply-chain cooling-off period directly competes with the mandate to patch faster. Resolve the tension through nuanced decision-making, policy, mitigating controls, or per-incident handling.
  • Below the Cyber Poverty Line. Engage ISACs, CERTs, and sector coordinating groups now; defenders must leverage these coordinating bodies, especially for organizations that fall below the Cyber Poverty Line, a concept introduced by Wendy Nather.

Adjacent Wiki Instruments

  • Canadian-Bank Assessor Scorecard — first playbook on the wiki; sector-specific (Canadian FRFI) ~65-question scorecard with L1-L5 section maturity. Use for organization-specific maturity scoring (compares an organization to a regulatory floor); use the Mythos-ready playbook for industry-wide near-term operational response.
  • Agentic AI Security CMM 2026 — measures agent-security maturity across nine domains; the Mythos-ready Risk Register is complementary (it catalogs Mythos-era enterprise risk across the broader cyber program). A CMM cross-walk to the Mythos-ready PA-table is a deferred wiki task.
  • Frontier AI for Vulnerability Discovery — production-paths thesis on the offensive/defensive AI-vuln-discovery axis. This playbook’s PA 1 names the specific tools to deploy.