General Analysis

Sources: Funding announcement (BusinessWire) · Axios Pro coverage · Tech Startups coverage

Homepage URL above is unverified — primary domain not stated in launch coverage; re-validate.

What

Agentic AI security research-and-product company founded in 2025 by Rez Havaei (CEO; previously Cohere and NVIDIA), Maximilian Li (Harvard), and Rex Liu (Caltech). Treats AI security as an empirical measurement problem rather than a rules-and-guardrails problem — runs adversarial simulations against live agent systems and quantifies how often, and how badly, agents fail (Source: Tech Startups).

The same Havaei / Li / Liu team produced the Claude → Stripe coupons via iMessage exploit research already filed in this wiki (multi-MCP context-pollution exploit) — General Analysis is the productization of that research line.

Funding

$10M seed round, April 29, 2026 — led by Altos Ventures with 645 Ventures, Menlo Ventures, Y Combinator, and angels. Fourth-largest agentic-AI-security seed in the 12-month window.

Relevance

Cross-cutting in the RA (testing surface, not a runtime plane). In the CMM, maps to D7 (Observability & Detection) at the continuous red-team / CART evidence slot — same row as Mindgard CART, SplxAI (now part of Zscaler), Promptfoo, Garak, PyRIT, and AgentDojo.

Differentiator from those peers: explicitly live-system, agent-level rather than model-level. The research line that produced the coupon exploit suggests their probe library targets multi-MCP / multi-agent attack chains, not single-prompt-injection harnesses.

Product

Methodology disclosed in launch coverage:

  1. Live-system adversarial simulations against production agent stacks
  2. Failure-frequency and severity measurement — rejects “prove safe” framing in favor of “drive numbers down”
  3. Risk quantification to help defenders prioritize controls that actually reduce risk without crippling agent utility

Product specifics (probe library, harness API, integration shape) not disclosed in launch material.

Notable Statements

  • Co-founder: “You cannot prove an agent is safe. You can only measure how often it fails, and how badly, and drive both numbers down.” — matches the spirit of Evidence-Centered Benchmark Design and the CMM’s D7 L4 measurement-based evidence stance.

See Also