Vibe Coding
Vibe coding is an informal term for generating or modifying code by describing the "vibe," or high-level intent, in natural language and relying on LLM inference rather than exact specifications. The term was coined by Andrej Karpathy (computer scientist, OpenAI co-founder, former Tesla AI director) in a February 2025 post on X describing "fully giving in to the vibes" when using AI to generate and run code for quick, throw-away projects. It has since been widely adopted across the AI development community and is formally cataloged in advisory thought-leadership, including PwC's 2026 Agentic SDLC report.
Andrej Karpathy entity page
Karpathy is the originator of this term and the broader “Software 2.0” framing. He has multiple wiki-relevant publications and roles (OpenAI co-founder, Tesla AI, “From Models to Agents” talks). A dedicated wiki entity page is a gap; ingest candidate.
Definition
Per PwC’s 2026 Agentic SDLC report (which formalizes the term in advisory context):
“Informal term for generating or modifying code by describing the ‘vibe’ or high-level intent in natural language, relying on LLM inference rather than exact specifications, this term is widely adopted in the AI space and supported with many community members worldwide.”
Karpathy’s original framing emphasized throw-away projects — quick prototypes where the developer doesn’t need to fully understand or maintain the generated code. The term has since broadened to cover any natural-language-driven code generation where the operator iterates on intent rather than implementation.
Distinguishing Features
What separates “vibe coding” from generic AI-assisted coding:
- Intent over specification. The operator describes what the code should do in natural language, not how it should be structured.
- LLM inference fills gaps. Ambiguity in the prompt is resolved by the model based on training-data norms, not by explicit operator decisions.
- Iterative refinement via vibes. The operator runs the code, sees the result, and prompts adjustments based on observed behavior rather than reviewing the implementation.
- Often disposable. The original framing assumes the artifact is throw-away; sustained-maintenance use is a separate (and more contested) mode.
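The loop described above (intent in, artifact out, judge the behavior, re-prompt) can be sketched in a few lines. `generate` is a hypothetical stand-in for any LLM call; the canned response and the example intent are illustrative, not from any real tool.

```python
# Minimal sketch of the vibe-coding loop: the operator iterates on intent,
# not on the implementation.

def generate(intent: str) -> str:
    """Hypothetical LLM stand-in: returns code for a natural-language intent."""
    # A real call would hit a code model; this stub canned-answers one intent.
    if "sum of squares" in intent:
        return "result = sum(n * n for n in range(1, 11))"
    return "result = None"

def vibe_code(intent: str):
    """Generate code from intent, run it, and return the observed result."""
    code = generate(intent)
    scope: dict = {}
    exec(code, scope)           # the operator runs the artifact unreviewed...
    return scope.get("result")  # ...and judges the behavior, not the code

# The operator inspects observed behavior and re-prompts if the vibe is off.
print(vibe_code("give me the sum of squares of 1..10"))  # 385
```

The point of the sketch is what is absent: no review of `code` happens anywhere, which is exactly the property the security sections below address.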
PwC names “Vibe-coder” as one of the new / growth roles in the agentic SDLC era (defined: “Define requirements, plan, build, test, and review results through natural language either via keyboard or voice commands”) — sitting alongside Prompt & LLM Engineer, Context Engineer, AI test-orchestration lead, xOps AI Analyst, AIOps Engineer, and AI governance & risk lead.
Tension With the Collaboration Paradox
Vibe coding’s claim of “fully giving in to the vibes” sits in tension with the collaboration paradox (60% of developer work uses AI; only 0-20% is “fully delegated”). Two interpretations:
- Vibe coding is bounded to the 0-20% band. Karpathy’s original framing applies to throw-away projects where verification cost is low; the practitioner data shows that production work falls in the active-collaboration band even for AI-fluent users.
- Vibe coding is the operating mode for the 0-20% that gets fully delegated. Under this reading, vibe coding is the productized form of full delegation, but it remains a minority of work.
Both interpretations are consistent with the data. The wiki's position: vibe coding is a real and productively used mode for some classes of work (prototyping, exploratory analysis, scripting), but it cannot be the default mode for high-stakes production work where the Plan-Validate-Execute pattern applies. See also Anthropic's Trends Report Trend 4 ("Human oversight scales through intelligent collaboration") for the convergent vendor-strategic framing.
Security Implications
Vibe-coded artifacts inherit specific risk patterns:
- Vulnerability density: LLM-generated code from underspecified prompts tends to fall back on training-data norms, including common-but-insecure patterns. METR's 2025 RCT, which found experienced developers 19% slower with AI tools, suggests the verification cost of vibe-coded artifacts is non-trivial; see METR 2025 RCT.
- Cognitive file integrity exposure: vibe-coded changes to identity files (system prompts, SOUL.md, IDENTITY.md) can introduce subtle behavioral shifts that the operator doesn't notice. See Cognitive File Integrity for the defensive control.
- Coding-agent governance: applying vibe coding through coding agents (Cursor, Claude Code, Copilot) bypasses traditional code-review chokepoints. See Knostic's AI Coding Agent Governance framework for the operational response.
- Supply-chain exposure: vibe-coded artifacts often pull in dependencies the operator hasn’t vetted. See Supply Chain Security for Agents for the AI-BOM perspective.
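A concrete instance of the "common-but-insecure pattern" risk: an underspecified prompt for a database lookup often yields string interpolation into SQL, because that pattern is abundant in training data. The sketch below contrasts it with the parameterized form a review chokepoint would demand (table and data are illustrative).

```python
import sqlite3

# Illustrative in-memory database for the comparison.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_vibe(name: str):
    # Typical vibe-coded fallback: string interpolation into SQL (injectable).
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_reviewed(name: str):
    # Parameterized query: the input cannot alter the statement.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vibe(payload))      # [('alice',)] -- injection dumps the table
print(find_user_reviewed(payload))  # []           -- payload treated as data
```

Both functions "work" on benign input, which is why behavior-only iteration (run it, see the result, re-prompt) does not surface the difference.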
CMM / RA Maps-to
- CMM D3 (Supply Chain) L3+ — vibe-coded artifacts inherit dependency-graph risk; D3 controls (AI-BOM, dependency scanning) gate this.
- CMM D9 (Operations & Human Factors) — vibe coding sits on a spectrum from disposable prototypes to production deployment; D9 controls (code review, change management) govern the transition.
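The D3-style gate above can be sketched as a static check that extracts the third-party imports a vibe-coded artifact pulls in and blocks anything outside a vetted set. The allowlist contents and the `leftpadx` module name are hypothetical; a production AI-BOM control would also cover transitive dependencies and versions.

```python
import ast

ALLOWLIST = {"json", "math", "datetime"}  # hypothetical vetted set

def imported_modules(source: str) -> set[str]:
    """Top-level module names imported by the given Python source."""
    mods: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def gate(source: str) -> set[str]:
    """Return the unvetted dependencies; empty set means the artifact passes."""
    return imported_modules(source) - ALLOWLIST

artifact = "import json\nimport leftpadx\nfrom math import sqrt\n"
print(gate(artifact))  # {'leftpadx'} -- unvetted dependency, fails the gate
```

Running the gate before the artifact ever executes restores a review chokepoint without requiring the operator to read the generated code line by line.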
Provenance Note
The Karpathy X post (February 2025) is the canonical origin citation. PwC’s 2026 report is the canonical advisory citation. As of mid-2026, “vibe coding” appears in:
- Multiple Anthropic, OpenAI, and Google product materials (as a usage mode for code-assistant tools).
- Job postings for “Vibe-coder” or “Vibe-aware engineer” roles (per PwC’s external-signals analysis).
- The Forbes “fastest tech-history adoption” framing referenced in PwC’s adoption-curve section.
See Also
- Andrej Karpathy — originator (entity stub, gap).
- PwC Agentic SDLC paper — formal-advisory citation.
- Collaboration Paradox — tension with full-delegation framing.
- Plan-Validate-Execute — canonical wiki HITL pattern that bounds vibe-coding’s applicability.
- AI Coding Agent Governance — operational response framework.
- Anthropic 2026 Trends Report — adjacent vendor-strategic framing.