Andrej Karpathy is an AI researcher, educator, and builder. Co-founder of OpenAI, former Director of AI at Tesla (where he led Autopilot), and creator of widely influential educational resources, including the Neural Networks: Zero to Hero course and the micrograd and nanoGPT repos. Coined "vibe coding" (Feb 2025) and later "agentic engineering" as its more disciplined successor. Runs an independent AI research and education practice.
Background
- Co-founded OpenAI (2015), part of the founding research team
- Tesla AI Director (2017–2022) — led Autopilot and the Dojo supercomputer program
- Stanford PhD in computer vision / deep learning under Fei-Fei Li
- Independent researcher and educator since 2023
Key Contributions
Vibe Coding (Feb 2025)
Coined in a tweet; the idea became one of the most-referenced in modern software development: directing AI agents to write code while the human evaluates the results rather than reading every line. The term is now broadly used, and often misused, to mean any AI-assisted development.
Agentic Engineering (Feb 2026)
Karpathy's preferred successor to vibe coding, introduced in a one-year retrospective. Agentic: the default is no longer writing code directly; you orchestrate agents while providing oversight. Engineering: it is a learnable discipline with real depth, not a vibe.
"Never Felt So Behind" (Dec 2025)
A landmark tweet expressing the sense that programming is being fundamentally refactored: the bits a programmer contributes directly are increasingly sparse. It identified the compounding leverage available from properly stringing together agent tooling: agents, subagents, prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and mental models.
LLM Knowledge Bases (Apr 3, 2026)
Publicly described a shift in his own workflow from manipulating code to manipulating knowledge — building LLM-maintained markdown wikis as personal knowledge bases. A major signal for the direction of AI-native knowledge infrastructure. See 📝Karpathy's Flag in the Ground — LLM Knowledge Bases.
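The workflow he describes, an agent reading and rewriting markdown wiki pages as a single source of truth, can be sketched roughly as below. This is a minimal illustrative sketch, not Karpathy's actual tooling: the file layout, the `update_wiki` helper, and the stubbed `revise_page` function (which stands in for an LLM call) are all assumptions.

```python
from datetime import date
from pathlib import Path


def revise_page(page_text: str, new_fact: str) -> str:
    """Stand-in for an LLM call that merges a new fact into a wiki page.

    A real system would prompt a model with the current page plus the new
    fact and return a rewritten page; here we just append a dated bullet.
    """
    stamp = date.today().isoformat()
    return page_text.rstrip() + f"\n- ({stamp}) {new_fact}\n"


def update_wiki(root: Path, slug: str, new_fact: str) -> Path:
    """Update (or create) one markdown page in an agent-maintained wiki."""
    page = root / f"{slug}.md"
    text = page.read_text() if page.exists() else f"# {slug}\n"
    page.write_text(revise_page(text, new_fact))
    return page


wiki = Path("wiki")
wiki.mkdir(exist_ok=True)
updated = update_wiki(wiki, "karpathy", "Coined 'vibe coding' in Feb 2025.")
print(updated.read_text())
```

The key design point is that the markdown files themselves are the database: both humans and agents read and write the same plain-text pages, so no separate store needs to stay in sync.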
Autoresearch (2026)
A self-contained autonomous research system (autoresearch repo, ~630 lines, single-GPU): the human iterates on a prompt .md, the AI agent iterates on training code. Every output dot is a complete 5-minute LLM training run. Designed to engineer agents that make research progress indefinitely without human involvement.
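The loop described above, where each evaluation of the agent's latest code revision is a complete miniature training run producing one score "dot", can be sketched as follows. This is a hedged toy sketch, not the autoresearch code: `run_training` fakes a 5-minute single-GPU run with noise, and the agent's code revision is reduced to a version counter.

```python
import random


def run_training(code_version: int, seed: int) -> float:
    """Stand-in for one complete short LLM training run; returns a score.

    In a system like autoresearch each such run is a real ~5-minute
    single-GPU training job; here we fake it as noise plus a slow
    improvement with each code revision.
    """
    rng = random.Random(seed)
    return 1.0 + 0.05 * code_version + rng.gauss(0, 0.02)


def research_loop(n_iters: int = 5) -> list[float]:
    """Outer loop: each iteration, the agent revises the training code,
    then the revision is scored by running full (miniature) training.
    Every score is one dot on the progress plot."""
    scores = []
    for it in range(n_iters):
        # A real agent would rewrite the training script here, guided by
        # a human-maintained prompt .md; we just bump a version counter.
        scores.append(run_training(code_version=it, seed=it))
    return scores


dots = research_loop()
print(dots)
```

Because every dot is a self-contained end-to-end run, the loop can keep producing comparable data points indefinitely without a human in the loop, which is the property the system is designed around.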
Links
- X/Twitter: @karpathy
- GitHub: github.com/karpathy
- LLM Knowledge Bases post: x.com/karpathy/status/2039805659525644595
- Autoresearch repo: github.com/karpathy/autoresearch
Karpathy's public arc validates the MythOS thesis directly. His LLM Knowledge Bases post describes the same core architecture MythOS has been building since 2017 — an AI-maintained, agent-accessible single source of truth. His observation that there is room for "an incredible new product instead of a hacky collection of scripts" names the gap MythOS fills. His shift from code manipulation to knowledge manipulation reflects the token throughput reallocation MythOS is designed to maximize. His AGENTS.md schema pattern mirrors MythOS's memo-as-instruction architecture.
Contexts
🏷️#ai 🏷️#agentic 🏷️#knowledge-management-platform 🏷️#machine-learning 🏷️#open-source 🏷️#openai 🏷️#research 🏷️#vibe-coding
