Nicolas Lidzborski
Principal Software Engineer at Google working on Google Workspace security. 25 years in security overall, ~3 years focused on Generative AI security. Goes by “Nico” in person.
Talks ingested in this wiki
- Securing Workspace GenAI at Google — [un]prompted Conference, March 4, 2026 (Stage 1, Lecture 07). Three-year retrospective on GenAI security lessons; introduces the prompt-as-code framing, the four-layer “Architecting the Fortress” structural blueprint, and the Plan-Validate-Execute pattern for high-stakes actions.
Distinctive contributions to this wiki’s vocabulary
- “Prompt as code” — structural framing of why GenAI security cannot rely on syntactic filtering: every input token is a potential instruction, and the natural-language grammar is fuzzy
- Agency gap — the non-deterministic disconnect between user intent and autonomous AI execution; named contribution
- Orchestration hijacking — compromised orchestration layer where the LLM-as-planner is manipulated; named contribution
- Plan-Validate-Execute — Google’s structural pattern for high-stakes irreversible actions
Position in the field
Lidzborski’s perspective is the Google Workspace counterpart to Andrew Bullen’s Stripe containment work: both are practitioner deep-dives into platform-layer containment from a major production deployment. Bullen emphasizes the egress + tool-policy enforcement surface; Lidzborski emphasizes the input + orchestration + output surface. The complementarity reflects each org’s deployment shape (Stripe: programmatic agents + payment infrastructure; Google: knowledge-worker productivity at scale).
Gap
Background details (academic affiliation, prior roles, public writing outside the [un]prompted talk) are not present in the source material. To be filled in when additional sources surface.