The Cognitive Memory Framework
How Human Memory Maps to AI Agent Systems
| # | Cognitive Function | Human Equivalent | Plugin / System | What It Does |
|---|---|---|---|---|
| 1 | Working Memory | What you're actively thinking about | Context Window | Holds active context — limited size is the root constraint |
| 2 | Context Management | What gets into working memory | LCM | Compresses and reconstructs context — prevents loss from compaction |
| 3 | Episodic Memory | Experiences / diary | Core Logs | Stores raw conversations — pure storage, no understanding |
| 4 | Episodic Retrieval | Remembering past events | QMD | Searches and retrieves from logs — diary-based, not knowledge |
| 5 | Semantic Memory | Facts, knowledge, understanding | TrueMem (Graph) | Stores structured facts + relationships — this is the gap most tools miss |
| 6 | Consolidation | Experience → knowledge | TrueMem Librarian | Extracts facts from conversations into the graph |
| 7 | Retrieval (Semantic) | Recall of relevant knowledge | TrueMem Retrieval | Injects relevant facts into context alongside QMD |
| 8 | Forgetting | Removing outdated knowledge | Temporal Logic | Expires old facts, updates graph — time-aware memory |
| 9 | Procedural | Skills / how to do things | OpenClaw Skills | Executes workflows and actions — independent from knowledge |
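The episodic/semantic split in rows 3–8 can be made concrete with a minimal Python sketch. This is illustrative only, not the actual plugin code: `EpisodicLog`, `SemanticGraph`, `consolidate`, and `naive_extractor` are hypothetical stand-ins for QMD's diary retrieval, TrueMem's fact graph, the Librarian, and the temporal-expiry logic.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Fact:
    """A single semantic-memory entry: subject/predicate/object triple."""
    subject: str
    predicate: str
    obj: str
    learned_at: float = field(default_factory=time.time)

class EpisodicLog:
    """Row 3: raw conversation storage — pure diary, no understanding."""
    def __init__(self):
        self.turns = []

    def append(self, role, text):
        self.turns.append((role, text))

    def search(self, keyword):
        """Row 4: diary-based retrieval (QMD analogue) — substring match here."""
        return [t for t in self.turns if keyword.lower() in t[1].lower()]

class SemanticGraph:
    """Rows 5 and 8: structured facts with time-aware expiry."""
    def __init__(self, ttl_seconds=3600):
        self.facts = []
        self.ttl = ttl_seconds

    def add(self, fact):
        # Newer facts about the same (subject, predicate) supersede older ones.
        self.facts = [f for f in self.facts
                      if (f.subject, f.predicate) != (fact.subject, fact.predicate)]
        self.facts.append(fact)

    def expire(self, now=None):
        """Row 8: forgetting — drop facts older than the TTL."""
        now = now if now is not None else time.time()
        self.facts = [f for f in self.facts if now - f.learned_at < self.ttl]

    def query(self, subject):
        """Row 7: semantic retrieval — facts about a subject."""
        return [f for f in self.facts if f.subject == subject]

def naive_extractor(text):
    """Toy fact extractor: 'X is Y' → Fact(X, 'is', Y)."""
    words = text.split()
    if "is" in words:
        i = words.index("is")
        return [Fact(" ".join(words[:i]), "is", " ".join(words[i + 1:]))]
    return []

def consolidate(log, graph, extractor):
    """Row 6: the Librarian — turn raw conversation turns into graph facts."""
    for _role, text in log.turns:
        for fact in extractor(text):
            graph.add(fact)
```

The point of the sketch is the division of labor: the log only stores and string-matches, while understanding lives in the graph that consolidation fills — which is exactly the gap row 5 says most tools miss.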
Sources & Research
This framework maps established cognitive science to real AI engineering. The memory types come from peer-reviewed research. The implementations come from leading AI organizations.
Cognitive Memory Framework — Scientific Basis
The framework mapping used in this talk is grounded in established cognitive science: Tulving (1972) — episodic vs. semantic memory; Atkinson & Shiffrin (1968) — multi-store memory model; Baddeley (2000) — the episodic buffer extension of the working memory model. The mapping of cognitive functions to AI systems is original.
Tulving — Episodic and Semantic Memory (1972)
The foundational distinction between episodic memory (personal experiences) and semantic memory (general knowledge/facts). Directly maps to QMD (episodic) and TrueMem (semantic) in our framework.
PMC: Understanding Memory Dysfunction →
Atkinson & Shiffrin — Multi-Store Model (1968)
The classic three-stage model: sensory → short-term (working) → long-term memory. Maps directly to context window → LCM → TrueMem in our framework.
Frontiers: Memory Models and Their Origins →
MNESIS Model — Working Memory + Consolidation
Modern model integrating working memory, episodic buffer, and procedural memory. Shows how consolidation transforms episodic experience into semantic knowledge — exactly what the Librarian does.
PMC: Mathematical Modeling of Human Memory →
Cursor Engineering
Wilson Lynn's post on scaling long-running autonomous coding: a Planner → Worker → Judge architecture that built a browser from scratch in Rust — 1M lines in a week.
cursor.com/blog →
Cursor Math Breakthrough
Michael Trule: Cursor's coding harness solved Problem 6 of an unpublished Stanford/MIT/Berkeley math proof — without being designed for math. The harness generalized.
@michael_trule →
Anthropic — Agentic Coding Trends 2026
Engineers delegating tasks where they can "sniff check" correctness. The meta-skill: knowing whether work is correct matters more than doing the work.
anthropic.com/research →
Google DeepMind — AlphaProof
Separates generation, verification, and revision into distinct roles. Same principle as scientific peer review and legal adversarial proceedings.
deepmind.google →
OpenAI Codex
Parallel sandbox environments for isolated task execution. Each agent works in its own container with fresh context — no cross-contamination.
openai.com/codex →
The Convergence
Four independent organizations built the same structure: decompose → parallelize → verify → iterate. None coordinated. All converged on the same architecture.
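The shared structure the cards above describe — decompose → parallelize → verify → iterate — can be sketched in a few lines of Python. The `decompose`, `worker`, and `judge` functions here are hypothetical stand-ins, not any organization's actual API; the loop shape is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(task):
    """Split a task into independent subtasks (hypothetical splitter)."""
    return [f"{task}::part{i}" for i in range(3)]

def worker(subtask):
    """Attempt a subtask in isolation — each call gets fresh context."""
    return f"solution({subtask})"

def judge(subtask, solution):
    """Verify a result independently of how it was generated."""
    return subtask in solution  # toy check: the solution references its subtask

def run(task, max_rounds=3):
    """Decompose → parallelize → verify → iterate until all parts pass."""
    pending = decompose(task)
    accepted = {}
    for _ in range(max_rounds):
        if not pending:
            break
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(worker, pending))
        failed = []
        for sub, sol in zip(pending, results):
            if judge(sub, sol):
                accepted[sub] = sol
            else:
                failed.append(sub)  # send rejects back for another round
        pending = failed
    return accepted
```

Note that generation (`worker`) and verification (`judge`) never share state — the same separation of roles the AlphaProof and Codex cards describe.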