Apostol Vassilev
Apostol Vassilev is a NIST researcher in the Computer Security Division (Information Technology Laboratory); ORCID 0000-0002-9081-3042. He is one of NIST's lead voices on adversarial machine learning and secure AI development: co-author of SP 800-218A (SSDF Community Profile for Generative AI and Dual-Use Foundation Models) and lead author of NIST AI 100-2e2023, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, the federal-anchor reference for adversarial-ML threat classification cited informatively throughout SP 800-218A and across multiple wiki pages.
Stub
Biographical detail beyond NIST publication bylines is not transcribed here. Confirm his role title, team affiliation within the Computer Security Division, and prior publication history before adding further detail. The AI 100-2e2023 Adversarial ML Taxonomy deserves its own paper page on the wiki; it is filed below as an adjacent gap.
Surfaced contributions on this wiki
- 2024 (July) — Co-author of NIST SP 800-218A (SSDF Community Profile for Generative AI and Dual-Use Foundation Models). Part of the six-author NIST/CISA team that operationalized EO 14110 § 4.1.a.
- 2024 — Lead author of NIST AI 100-2e2023 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (with Oprea, Fordyce, Anderson). Wiki cross-references include the threat-class taxonomy used by MAAIS, the CMM D4/D6 anchor citations, and model-layer attacks.
Why this entity appears
Vassilev’s publication trajectory makes him a load-bearing voice on the federal-anchor side of adversarial ML and secure AI development. The wiki cites his work via institutional names (NIST AI 100-2, SP 800-218A) on multiple pages; this entity provides the authored-anchor surface for those citations and consolidates the wiki’s federal-research bench alongside Taesoo Kim (Microsoft) and the Arora/Hastings arXiv pairing.
Adjacent gaps
- NIST AI 100-2e2023 Adversarial Machine Learning Taxonomy needs its own framework or paper page on the wiki. Currently cited inline but lacks an anchor.
- Co-authors of AI 100-2e2023 (Alina Oprea, Northeastern University; Alie Fordyce and Hyrum Anderson, Robust Intelligence) are potential future stubs.
See also
- NIST — affiliation
- NIST SP 800-218A — the publication this entity is surfaced from
- NIST AI RMF 1.0 — adjacent NIST AI publication ecosystem
- Model-layer attacks — concept that cites the AI 100-2e2023 taxonomy