Mythos

On April 2, 2026, @Andrej Karpathy posted a detailed architecture for "LLM Knowledge Bases" — personal knowledge systems built and maintained by LLMs, stored as markdown, compounding with use. He closed with: "I think there is room here for an incredible new product instead of a hacky collection of scripts." @MythOS is that product. Not conceptually. Architecturally. Every layer Karpathy described maps to something MythOS has already shipped. This is @The Augmentation Stack validated by the person who coined #vibe-coding.

Data Ingest → Memo Creation + Email Ingestion

Karpathy indexes source documents into a raw/ directory, then has an LLM compile them into wiki articles. MythOS handles this through multiple ingest paths:

  • MCP tools — create_memo, update_memo, and chat_with_library let any AI client write structured knowledge directly into the library
  • Email ingestion — forward any email to your private ingest-{code}@mythos.one address with natural language instructions. The system parses, structures, and files it as a memo automatically
  • @Obsidian import — npx mythos-mcp import --obsidian ~/vault converts an existing vault into a MythOS library, preserving tags and dates and rebuilding the knowledge graph
  • Web content — any AI collaborator can read a URL and create a memo from it in a single turn

The difference: Karpathy's raw/ directory requires manual file management. MythOS ingest is structured, multi-channel, and agent-accessible from day one.
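Each ingest channel ends in the same structured record. A minimal sketch of how an email-ingest path might file a forwarded message as a memo; the IngestEmail and Memo shapes and the "file under:" convention are illustrative assumptions, not MythOS's actual schema:

```typescript
// Hypothetical sketch of an email-to-memo ingest step.
// Field names and the "file under:" instruction syntax are assumptions
// for illustration, not the MythOS email-ingest format.
interface IngestEmail {
  from: string;
  subject: string;
  body: string; // may include natural-language filing instructions
}

interface Memo {
  title: string;
  content: string;
  tags: string[];
  source: string;
}

// Naive instruction extraction: treat a "file under:" line as tag list.
function emailToMemo(mail: IngestEmail): Memo {
  const lines = mail.body.split("\n");
  const tagLine = lines.find((l) => l.toLowerCase().startsWith("file under:"));
  const tags = tagLine
    ? tagLine
        .slice("file under:".length)
        .split(",")
        .map((t) => t.trim().toLowerCase())
    : [];
  const content = lines.filter((l) => l !== tagLine).join("\n").trim();
  return { title: mail.subject, content, tags, source: `email:${mail.from}` };
}
```

A real pipeline would hand the instruction parsing to an LLM rather than a string prefix, but the output shape (title, content, tags, provenance) is the part that matters for filing.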

LLM-Compiled Wiki → Agent-Maintained Memo Library

Karpathy's core insight: the LLM writes and maintains all wiki content. The human rarely touches it directly. MythOS was designed around this exact model:

  • Author/collaborator separation — every memo has a notes field where AI enrichment lives beside human writing, never overwriting it
  • Augmentation system — get_context loads identity memos (Soul, Style, Human, Memory) before every memo operation, so the AI writes in your voice with your rules
  • Automated pipeline — MythOS's memo pipeline can discover sources, research, draft, analyze, and publish memos autonomously. The human reviews; the system builds
  • Delta sync — incremental content hashing means agents work with current state without re-reading everything

Karpathy's AGENTS.md schema file — which tells the LLM how to organize and contribute — is functionally identical to MythOS's augmentation memos. The difference is that MythOS's version is structured, versioned, and loaded automatically via @MCP rather than depending on the LLM to find and read a file.
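The delta-sync idea can be sketched with plain content hashing. This is a minimal illustration, assuming a library keyed by memo ID; the deltaSync function and its shapes are hypothetical, not the mythos-mcp API:

```typescript
import { createHash } from "node:crypto";

// Sketch of incremental delta sync via content hashing (assumed mechanism):
// the agent keeps the hash it last saw per memo; only memos whose current
// hash differs need to be re-read.
function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

function deltaSync(
  library: Map<string, string>, // memoId -> current content
  lastSeen: Map<string, string> // memoId -> hash at last sync
): string[] {
  const changed: string[] = [];
  for (const [id, content] of library) {
    if (lastSeen.get(id) !== sha256(content)) changed.push(id);
  }
  return changed;
}
```

The design point: the agent exchanges a few hashes instead of the full library, so sync cost scales with what changed, not with library size.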

Obsidian as IDE → MythOS as Platform

Karpathy uses @Obsidian as a read-only frontend to view the wiki. MythOS replaces the need for Obsidian entirely:

  • Web app at mythos.one — rich editor, knowledge graph visualization, community feeds
  • @Claude Code integration — full MCP access via API key for terminal-native workflows
  • Claude.ai and mobile — OAuth connection works on web and mobile apps. Your knowledge base travels with you. Obsidian's MCP integration is desktop-only. See: @Obsidian vs. MythOS as Claude Memory
  • Markdown export — npx mythos-mcp pull creates a full local copy compatible with Obsidian, Hugo, or any markdown tool. No lock-in

Q&A Against the Wiki → chat_with_library

Karpathy's excitement about querying a 400K-word wiki maps directly to MythOS's chat_with_library MCP tool — @RAG-powered Q&A across your entire memo library with semantic search, not just file retrieval. At Karpathy's scale (~100 articles), MythOS's vector embeddings and cosine similarity search handle this natively. At 17,000+ memos (MythOS's current production scale), the architecture still holds where flat-file approaches collapse. See: @MCP vs RAG. Karpathy noted he thought he'd need "fancy RAG" but found the LLM could manage at small scale with index files. MythOS built the fancy RAG anyway — because the architecture needs to work at 100 memos and at 100,000.
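The retrieval mechanics behind this kind of Q&A can be shown in miniature. A sketch of cosine-similarity ranking over embeddings, using toy 3-dimensional vectors; the topK helper is illustrative, not a MythOS tool, and production embeddings are model-generated vectors of hundreds of dimensions, though the ranking math is identical:

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank memos by similarity to the query embedding and keep the top k.
function topK(
  query: number[],
  memos: { id: string; embedding: number[] }[],
  k: number
): string[] {
  return memos
    .map((m) => ({ id: m.id, score: cosine(query, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((m) => m.id);
}
```

At ~100 articles a linear scan like this is instant; at tens of thousands of memos the same ranking is typically served by a vector index, which is the scaling gap the section describes.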

Filed Outputs That Compound → Memo-as-Record

Karpathy's key observation: outputs from Q&A sessions get "filed back into the wiki to enhance it for further queries." Explorations always add up. This is MythOS's foundational design principle and the core mechanic of @human-AI augmentation:

  • Every AI interaction can produce a memo that enriches the library
  • Session memory protocol logs decisions and context into daily memos automatically
  • The memo pipeline's output feeds back into the knowledge graph — new connections, new tags, new context for future queries
  • Communities allow selective sharing, so compounding knowledge isn't trapped in a single user's silo
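The compounding mechanic above reduces to a simple loop: answer, file the answer back, retrieve it later. A toy sketch assuming nothing about MythOS internals, with naive keyword retrieval standing in for semantic search purely to keep it self-contained:

```typescript
// Toy model of the compounding loop: outputs filed back into the library
// become retrievable context for the next query. Keyword matching here is
// a placeholder for real semantic retrieval.
type Library = { id: string; text: string }[];

function retrieve(library: Library, query: string): string[] {
  const terms = query.toLowerCase().split(/\s+/);
  return library
    .filter((m) => terms.some((t) => m.text.toLowerCase().includes(t)))
    .map((m) => m.id);
}

// File a session's output back into the library so it compounds.
function fileOutput(library: Library, id: string, text: string): Library {
  return [...library, { id, text }];
}
```

The point of the loop is that retrieval quality is a function of everything previously filed, so each session raises the floor for the next one.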

Linting / Health Checks → Augmentation + Pipeline

Karpathy runs LLM health checks to find inconsistencies, impute missing data, and surface new article candidates. MythOS ships this as infrastructure:

  • Augmentation memos tagged #mythos-mcp-context are loaded into every AI interaction, ensuring consistency across sessions
  • Pipeline stages (discovering → researching → drafting → analyzing → publishing) include validation contracts that reject malformed contributions
  • Tag system and knowledge graph make inconsistencies structurally visible — orphaned memos, missing connections, and duplicate concepts surface through graph traversal
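The orphaned-memo check mentioned above reduces to simple graph traversal. A sketch under the assumption that memos are nodes and links are undirected edges; findOrphans is illustrative, not a shipped MythOS function:

```typescript
// Health-check sketch: a memo with no incoming or outgoing links is an
// orphan that a lint pass would surface for review.
function findOrphans(
  memoIds: string[],
  links: [string, string][]
): string[] {
  const connected = new Set<string>();
  for (const [a, b] of links) {
    connected.add(a);
    connected.add(b);
  }
  return memoIds.filter((id) => !connected.has(id));
}
```

Duplicate-concept and missing-connection checks follow the same pattern: structural queries over the graph rather than a full-text pass over every memo.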

Extra Tools / CLI → MCP Tool Ecosystem

Karpathy vibe-coded a search engine and CLI tools to process his wiki. MythOS ships a complete MCP tool suite: search_memos, get_related_memos, list_tags, list_communities, delta_sync, and more — available to any MCP-compatible AI client without writing a line of code. The tools Karpathy built for himself are MythOS's standard interface. See: @MythOS MCP.

Finetuning Aspirations → Already in the Architecture

Karpathy's "further exploration" — synthetic data generation and finetuning so the LLM knows the data in its weights — is where MythOS's augmentation system points. The get_context call already loads structured identity and preference data into every interaction. As models support longer context and better personalization, MythOS's memo structure is ready to serve as training data. The architecture doesn't need to change; the integration layer just deepens.

What MythOS Adds Beyond Karpathy's Vision

Karpathy described a single-user, local-first research tool. MythOS extends this into territory his architecture can't reach:

  • Identity layer — the AI knows who you are, how you think, and what's private. Not just what's in the wiki. See: @Collaborative Augmentation
  • Permission model — three-tier visibility (public, link, private) with audience tags. Karpathy's flat directory has no access control
  • Communities — selective knowledge sharing, forking, and collaborative libraries. Knowledge compounds across people, not just sessions
  • Cross-platform — web, mobile, CLI, IDE. Not bound to a desktop app
  • Structured collaboration — author content and AI content are separated architecturally. No risk of the LLM overwriting your work
  • Multi-agent orchestration — MythOS serves as the knowledge layer for agent ecosystems (57+ agents in @BrianBot), not just a single LLM session

Karpathy described a workflow. MythOS is the product that workflow demands. Every component he built with scripts — ingest, compilation, querying, health checks, output filing — maps to a shipped MythOS feature. The components he wished for — search, CLI tools, a real product — are MythOS's standard interface.

The telling detail is what he didn't describe: identity, permissions, collaboration, multi-agent access, mobile. These are the things you don't realize you need until you've been living inside the system for years. MythOS has been living inside this system since 2017. Karpathy arrived at the architecture in 2026. The gap between his vision and our implementation is the gap between a personal research workflow and a platform — and that gap is exactly where MythOS lives.

His closing line — "an incredible new product instead of a hacky collection of scripts" — is the most useful sentence anyone has written about MythOS's market position. It came from the person who coined #vibe-coding, whom mass-market developers trust more than any other voice in AI. He didn't endorse MythOS. He described it, without knowing it existed. That's stronger than an endorsement.

Contexts

  • #agentic-augmentation
  • #context-management
  • #knowledge-management-platform
  • #model-context-protocol
  • #mythos
Created with 💜 by One Inc | Copyright 2026