SDLC in the AI-Attacker Era

Question

How do SDLC, supply chain, identity, and attack-surface assumptions need to evolve when adversaries have frontier AI capability, and which existing controls remain load-bearing vs. which need rework? Specifically: which assumptions in SLSA, SSDF, CSAF, and ISO 27001 were calibrated against a human-paced attacker and now require explicit recalibration? Where does the existing Agentic AI Security CMM (which addresses securing AI systems) need extension to address securing non-AI systems against AI-augmented attackers?

Current Position

The wiki’s existing coverage of supply chain, governance, and SDLC controls — supply chain security for agents, AI-BOM, coding-agent governance, least-agency, plan-validate-execute — was framed for securing AI systems. The inverse framing — securing classical SDLC against AI-augmented attackers — is structurally similar in tooling but materially different in threat model:

  1. Reconnaissance asymmetry. When attackers can run frontier models continuously against public-facing surface, the cost of reconnaissance collapses. Attack-surface reduction becomes load-bearing in ways that “minimize exposed services” did not previously capture — every public endpoint is now a sustained, AI-paced target.
  2. Time-to-exploit collapse. Coordinated disclosure timelines (typically 90 days) were calibrated against human-paced exploit development. Frontier-AI-assisted exploit synthesis may compress that timeline; defenders need to assume a vulnerability disclosed Monday is exploitable by Tuesday.
  3. Coding-agent governance applies symmetrically. Knostic’s coding-agent governance surface (rules-file integrity, IDE extension provenance, typosquat defense, destructive-action classification) describes how to secure one’s own AI-augmented developers. The same surface, viewed inversely, describes the attacker’s productivity stack. Defenders need to understand what they are defending against.

This thesis is currently a re-framing exercise more than a new-content exercise. The existing pages above are the load-bearing material; the work is annotating which of them carry over into the sec-against-ai framing and which are scope-specific to sec-of-ai.

Supporting Evidence

  • Supply chain security for agents introduces AI-BOM, skill registry scanning, pre-install vetting — these primitives apply unchanged when the attacker’s tooling is the agentic stack.
  • Plan-Validate-Execute is a HITL pattern for high-stakes irreversible actions on the defender side; an analogous pattern (mandatory human review for AI-assisted code merges) is the SDLC translation.
  • IEC 42001 and NIST AI RMF anchor the AI-management-system side; Microsoft ZT4AI supplies the zero-trust framing. The translation question is: what does ZT4AI look like when applied to non-AI systems facing AI-augmented attackers?
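The Plan-Validate-Execute translation in the second bullet can be sketched as a merge gate. This is a toy illustration under stated assumptions: the action labels, the `Plan` fields, and the destructive-action set are invented for the example, not drawn from the wiki pages cited.

```python
from dataclasses import dataclass, field

# Hypothetical destructive-action classification; real policies are richer.
DESTRUCTIVE = {"force_push", "delete_branch", "rotate_secret"}

@dataclass
class Plan:
    """A proposed change set, emitted before anything executes."""
    actions: list[str] = field(default_factory=list)
    ai_authored: bool = False
    human_approved: bool = False

def validate(plan: Plan) -> bool:
    """The gate between plan and execute: AI-authored plans, and plans
    containing destructive actions, require explicit human approval."""
    needs_review = plan.ai_authored or any(a in DESTRUCTIVE for a in plan.actions)
    return plan.human_approved or not needs_review

def execute(plan: Plan) -> str:
    """Refuse to run any plan that has not passed validation."""
    if not validate(plan):
        raise PermissionError("plan requires human review before execution")
    return f"executed {len(plan.actions)} action(s)"
```

The design choice the pattern encodes is that approval is attached to a concrete plan, not granted as a standing permission, which is what makes it a HITL control rather than a one-time configuration.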

Counter-Evidence

Calibrated incident data

Public incident reports do not yet systematically attribute attacker capability to frontier-AI assistance. Whether an exploit was AI-assisted is rarely a published field. This makes “the threat model is changing” hard to source rigorously.

SLSA / SSDF / CSAF updates for AI-augmented attackers

NIST SSDF v1.1 (Feb 2022) addresses the secure-development side but not the AI-augmented-adversary side; its threat assumptions remain human-paced. SP 800-218A (July 2024) extends SSDF for AI model development but explicitly does not address deployment, operation, or the inverse problem — defending non-AI systems against AI-augmented attackers. No SLSA, CSAF, or comparable revision yet explicitly addresses AI-augmented adversaries; the frameworks remain calibrated against human-paced threats. Whether they should be updated, or whether the existing rules carry over unchanged with tighter tolerances, is unresolved. The Glasswing announcement commits to “collaborate with leading security organizations” on this exact gap — explicitly named areas include vulnerability-disclosure processes, SDLC and secure-by-design, supply-chain security, and standards for regulated industries — but no concrete deliverable has landed yet.

New Evidence — Glasswing-Partner Citations (2026-05-13)

Anthropic’s Project Glasswing announcement (May 12, 2026) surfaces two executive citations that directly support this thesis’s core argument:

  • CrowdStrike — Elia Zaitsev (CTO): “The window between a vulnerability being discovered and being exploited by an adversary has collapsed — what once took months now happens in minutes with AI.” This is the canonical wiki citation on the time-to-exploit-collapse argument. The CrowdStrike framing also explicitly notes: “adversaries will inevitably look to exploit the same capabilities.”
  • Palo Alto Networks — Lee Klarich (CPTO): “There will be more attacks, faster attacks, and more sophisticated attacks. Now is the time to modernize cybersecurity stacks everywhere.”

Both quotes are from launch partners of the coalition initiative whose stated purpose is to apply Mythos to defensive work before “such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.” The asymmetry that motivates Glasswing is the same asymmetry this thesis tracks: defenders need to recalibrate against AI-augmented attackers, not against human-paced ones.

Anthropic’s 2026 Agentic Coding Trends Report makes the dual-use case explicit at vendor-strategic level. Trend 8 (“Agentic coding improves security defenses — but also offensive uses”) establishes three predictions:

  • Security knowledge becomes democratized — “any engineer can become a security engineer capable of delivering in-depth security reviews, hardening, and monitoring.”
  • Threat actors scale attacks — “While agents will benefit defensive uses, they will also benefit offensive uses too.”
  • Agentic cyber defense systems rise — “Automated agentic systems enable security responses at machine speed, automating detection and response to match the pace of autonomous threats.”

The report’s closing position is the cleanest articulation of the thesis’s core asymmetry: “The balance favors prepared organizations. Teams that use agentic tools to bake security in from the start will be better positioned to defend against adversaries using the same technology.”

Anthropic’s named Priority 4 for the year ahead — “Embedding security architecture as a part of agentic system design from the earliest stages” — explicitly positions secure-by-design as a strategic priority for organizations adopting agentic coding, alongside multi-agent coordination, oversight scaling, and democratization. This is a vendor-strategic recommendation (not a technical-capability claim) and is therefore an unusually clean anchor for the thesis.

New Evidence — Microsoft SDL for AI (2026-02-03, ingested 2026-05-14)

Microsoft’s 2026-02-03 SDL-for-AI announcement (Yonatan Zunger, Microsoft Security Blog) supplies the cleanest vendor-stated articulation of the thesis’s speed-and-sociotechnical-risk argument. From the post: “AI accelerates development cycles beyond SDL norms. Model updates, new tools, and evolving agent behaviors outpace traditional review processes, leaving less time for testing and observing long-term effects. Usage norms lag tool evolution, amplifying misuse risks.”

Microsoft’s prescribed mitigation pattern — “iterative security controls, faster feedback loops, telemetry-driven detection, and continuous learning” — is the SDL-framework translation of the time-to-exploit-collapse argument already anchored on this thesis via CrowdStrike’s Glasswing citation. Where CrowdStrike framed the asymmetry as vulnerability discovery vs. exploitation time, Microsoft frames it as tool evolution vs. usage-norm formation time. The two framings cover different segments of the same speed gap; both reinforce that classical secure-SDLC tempo assumptions need recalibration.
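One way to read both framings is as a single parameter change: the remediation deadline a defender can afford is a function of observed discovery-to-exploit latency. A toy sketch of that recalibration — the function name, safety factor, and floor are illustrative assumptions, not anything Microsoft or CrowdStrike prescribe:

```python
def patch_sla_hours(exploit_latency_hours: float,
                    safety_factor: float = 0.5,
                    floor_hours: float = 1.0) -> float:
    """Set the remediation deadline to a fraction of the observed
    discovery-to-exploit latency, never below an operational floor.
    Under human-paced assumptions (~90 days) this yields weeks;
    under minutes-scale exploitation it bottoms out at the floor."""
    return max(floor_hours, exploit_latency_hours * safety_factor)
```

The point of the sketch is that the control itself is unchanged (patch within an SLA); only its calibration input moves, which is the recalibration-not-replacement argument this thesis makes about the standards.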

The announcement also makes Microsoft SDL the first major-vendor classical secure-SDLC framework to publish an explicit AI extension scope, partially closing the SLSA / SSDF / CSAF revision gap noted above (though SDL is a vendor framework rather than a NIST standard, so the standards-side gap remains open).

How This Has Evolved

  • 2026-05-13 (morning) — Seeded as part of the scope expansion. Position: synthesis-heavy, deferred pending ingestion of an SLSA/SSDF/CSAF revision.
  • 2026-05-13 (evening) — Glasswing announcement ingested. Two Glasswing-partner executive citations (CrowdStrike, Palo Alto Networks) now directly support the thesis. The position is now partially anchored without waiting for SLSA/SSDF/CSAF revisions; the next promotion trigger is when Anthropic’s 90-day public report or a Glasswing standards-collaboration deliverable lands.
  • 2026-05-13 (night) — Anthropic 2026 Agentic Coding Trends Report ingested. Trend 8 (“Agentic coding improves security defenses — but also offensive uses”) and Priority 4 (“Embedding security architecture as part of agentic system design from the earliest stages”) provide vendor-strategic-level corroboration of the thesis. The asymmetric-defender-advantage framing is now anchored at three levels: practitioner research (XBOW/MDASH/Big Sleep), coalition initiative (Glasswing), and vendor strategic forecast (this report).
  • 2026-05-13 (late night) — PwC Middle East 2026 Agentic SDLC report ingested. Independent regional (GCC + Jordan + Egypt) survey-based corroboration: security ranks as the #1 barrier (37.7%) to GenAI-in-SDLC adoption, and PwC’s #1 enabler is “early compliance guardrails” — exactly the secure-by-design framing this thesis tracks. METR 2025 RCT counter-evidence (16 experienced devs were 19% slower with AI) is now anchored on the wiki via its concept page and bounds the productivity / threat-velocity claims symmetrically — for both defenders and attackers, real-world productivity gains are smaller than capability gains suggest. This is the strongest non-vendor source on the wiki for the thesis’s core argument.
  • 2026-05-14 — Microsoft SDL-for-AI announcement (2026-02-03) ingested. Adds the first major-vendor classical secure-SDLC framework with an explicit AI extension scope; Microsoft’s speed-and-sociotechnical-risk framing is the SDL translation of the time-to-exploit-collapse argument already anchored on this thesis. New evidence section added above; the SLSA/SSDF/CSAF standards-side revision gap remains open (Microsoft SDL is a vendor framework, not a NIST/SLSA/OASIS standard).
  • 2026-05-14 — NIST SP 800-218 (SSDF v1.1) and SP 800-218A (AI Profile) ingested directly. The federal-anchor citation surface for secure SDLC is now substantively documented on the wiki. Partial progress on the SSDF revision gap: 218A is the federal AI extension, but it addresses AI model development (training-data integrity, model-weight protection, AI-specific threat modeling), not AI-augmented-adversary recalibration of classical SDLC. The federal-side instrument for securing classical SDLC against AI-augmented attackers (the inverse framing this thesis tracks) remains unpublished; the federal-side instrument for secure development of AI models, by contrast, is now substantively in place via 218A. Worth noting that 218A includes PW.1.1.C2 — “During risk modeling, consider checking that the AI model is not in a critical path to make significant security decisions without a human in the loop” — which is a federal-anchor citation for the human-parity-line principle and the Plan-Validate-Execute HITL pattern.

Open Sub-Questions

  • Does the Agentic AI Security CMM need an extension (new domain D10 “AI-Threat-Calibrated SDLC”) or a parallel companion CMM (“Enterprise SDLC vs AI-Augmented Adversaries”)? Current judgment: too early — defer the artifact decision until evidence accrues.
  • How does the agent availability threats surface translate to defending against availability attacks by AI-augmented adversaries (e.g., autonomous DDoS with adaptive evasion)?
  • See Gaps Index for related open questions.