Persistent Memory Architecture

An AI without memory is a stranger every time you meet it.

Memory is the foundation of convergence. Without it, every session starts from zero. With it, your AI accumulates wisdom across thousands of interactions — becoming more useful with every conversation.

What you'll learn

  • Why chat history is not real memory
  • The three layers of AI memory: working, episodic, and semantic
  • How to design a brain architecture using key-value stores and embeddings
  • Strategies for memory retrieval that actually work at scale

Chat History Is Not Memory

Most AI systems store your previous messages and call it "memory." But scrolling through 10,000 messages to find a decision you made last month is not memory — it's a filing cabinet with no labels.

Real memory is structured, searchable, and contextual. It knows not just what was said, but what it meant, when it mattered, and how it connects to everything else you've built together.

The Three Layers of AI Memory

Working Memory. The current conversation context. What's happening right now. This is what every AI already has — the context window. It's fast but it evaporates when the session ends.

Episodic Memory. Records of specific events, decisions, and interactions stored permanently. "On March 15th, we decided to use Stripe for payments because of their webhook reliability." This is your project history — timestamped, retrievable, accumulating.

Semantic Memory. Distilled knowledge — facts, preferences, rules, identity. Not tied to a specific moment but always true. "Faye prefers concise responses." "The deploy pipeline uses Vercel." This is the brain's permanent knowledge base.

A Practical Brain Schema

A convergence-ready memory system needs at minimum:

Key-value store — for semantic memory. Keys like identity.user, system.infrastructure, directive.rules. Fast reads, human-readable, easy to update.

Vector embeddings — for episodic memory. Every important interaction gets embedded and stored. When the AI needs context, it searches semantically — not by keyword, but by meaning.

Session state — for working memory continuity. session.active_work, session.next_steps. So the next session picks up exactly where this one left off.
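The three components above can be sketched in a few lines of Python. This is a minimal in-memory illustration, not a library API — every name here (`Brain`, the example keys and values) is invented for the sketch:

```python
# Minimal sketch of the three stores backing a convergence-ready brain.
# All names and values here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Brain:
    semantic: dict = field(default_factory=dict)   # key-value store: facts, rules, identity
    episodic: list = field(default_factory=list)   # (embedding, text) pairs of past events
    session: dict = field(default_factory=dict)    # working-memory continuity between sessions

brain = Brain()
brain.semantic["identity.user"] = {"name": "Faye", "style": "concise"}
brain.episodic.append(([0.1, 0.9], "2024-03-15: chose Stripe for webhook reliability"))
brain.session["session.active_work"] = "payment integration"
brain.session["session.next_steps"] = ["wire up webhook handler"]
```

In production each store would live in a database rather than process memory, but the shape — namespaced keys, embedded events, a small session record — stays the same.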

[Figure: The three layers of AI memory.]

Building a Brain in Practice

Here is a concrete implementation using Supabase (PostgreSQL). This is the actual pattern used by production convergence systems — not a toy example:

Step 1: Create the table. A single brain_context table with columns: key (text, primary key), value (jsonb for flexibility), category (text — identity, directive, session, system), and updated_at (timestamp, auto-updated). This is your semantic memory store.
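Step 1 can be sketched as runnable code. This uses SQLite as a stand-in for Postgres so the example is self-contained; the column layout matches the description above (SQLite stores the JSON value as text rather than `jsonb`):

```python
# Step 1 sketch: the brain_context table, using SQLite in place of Postgres.
import sqlite3
import json

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE brain_context (
        key        TEXT PRIMARY KEY,
        value      TEXT NOT NULL,            -- jsonb in Postgres
        category   TEXT NOT NULL,            -- identity, directive, session, system
        updated_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Write one semantic-memory entry and read it back.
conn.execute(
    "INSERT INTO brain_context (key, value, category) VALUES (?, ?, ?)",
    ("identity.user", json.dumps({"name": "Faye"}), "identity"),
)
row = conn.execute(
    "SELECT value FROM brain_context WHERE key = ?", ("identity.user",)
).fetchone()
```

In Supabase you would run the equivalent `CREATE TABLE` in the SQL editor and add a trigger to keep `updated_at` current.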

Step 2: Populate the foundation. Start with 10-15 keys that define your AI's world: identity.user (who you are), directive.rules (what the AI must always do), directive.guardrails (what the AI must never do), system.infrastructure (what tools are available), session.active_work (what is being worked on right now).
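A sketch of Step 2, seeding the foundation keys. The values are examples of the kind of content each key holds, not prescriptions:

```python
# Step 2 sketch: seed the foundation keys. Values are illustrative.
foundation = {
    "identity.user":         {"name": "Faye", "prefers": "concise responses"},
    "directive.rules":       ["always checkpoint before risky operations"],
    "directive.guardrails":  ["never delete data without confirmation"],
    "system.infrastructure": {"db": "Supabase", "deploy": "Vercel"},
    "session.active_work":   "building the brain schema",
}

brain = {}  # stands in for the brain_context table
for key, value in foundation.items():
    # The category is simply the first segment of the key name.
    brain[key] = {"value": value, "category": key.split(".")[0]}
```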

Step 3: Add vector support. Enable pgvector and add an embedding column. Every brain entry gets embedded when created or updated. This enables semantic search — finding memories by meaning, not just by key name.

Step 4: Build the boot sequence. On every session start, the AI reads its critical keys in parallel: identity, directives, active work, and next steps. This takes under a second and gives the AI full context immediately.
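The boot read of Step 4 can be sketched as a whitelist of critical keys. Only these are fetched at startup; everything else waits for on-demand loading (the key names follow the examples above):

```python
# Step 4 sketch: boot reads only the critical keys.
BOOT_KEYS = [
    "identity.user", "directive.rules",
    "session.active_work", "session.next_steps",
]

def boot(brain: dict) -> dict:
    """Return the context a session starts with; missing keys are skipped."""
    return {k: brain[k] for k in BOOT_KEYS if k in brain}

brain = {
    "identity.user": {"name": "Faye"},
    "directive.rules": ["checkpoint often"],
    "session.active_work": "payments",
    "project.alpha.notes": "large; loaded later, on demand",
}
context = boot(brain)
```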

Memory Architecture Mistakes

The everything-in-one-key mistake. Storing all your rules in a single giant JSON blob under system.config. The AI has to read the entire blob even when it only needs one rule. Break it up: directive.name_safety, directive.autonomy, directive.privacy. Each key is independently readable and updatable.

The no-category mistake. Every key lives in a flat namespace with no grouping. After 500 keys, the AI cannot efficiently find what it needs. Use hierarchical categories: identity.*, directive.*, session.*, project.*. The AI reads an entire category at once instead of guessing individual keys.

The write-never-read mistake. Diligently storing every decision and interaction, but never building retrieval into the agent loop. The brain fills up but the AI never consults it. Memory that is not read is not memory — it is a log file. Build reads into the boot sequence and decision-making loop.

When Memory Gets Large

A brain with 100 keys is easy. A brain with 10,000 keys requires strategy. The AI cannot read everything on boot — that would consume the entire context window. Three solutions:

Tiered loading. Boot reads only critical keys (identity, active work, directives). Other keys are loaded on-demand when the AI encounters a relevant task. This keeps boot fast while preserving access to everything.
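Tiered loading reduces to a prefix filter over key names. A sketch, assuming the hierarchical categories described earlier:

```python
# Tiered loading sketch: boot reads critical tiers; the rest loads on demand.
CRITICAL_PREFIXES = ("identity.", "directive.", "session.")

def boot_tier(brain: dict) -> dict:
    return {k: v for k, v in brain.items() if k.startswith(CRITICAL_PREFIXES)}

def load_on_demand(brain: dict, prefix: str) -> dict:
    return {k: v for k, v in brain.items() if k.startswith(prefix)}

brain = {
    "identity.user": "Faye",
    "directive.rules": "checkpoint often",
    "session.active_work": "payments",
    "project.alpha.stack": "Vercel",
    "project.alpha.status": "live",
}
boot_ctx = boot_tier(brain)                            # small and fast on boot
project_ctx = load_on_demand(brain, "project.alpha.")  # fetched when the task needs it
```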

Semantic retrieval. Instead of loading keys by name, embed the current task description and retrieve the most semantically relevant memories. The AI gets context it did not know it needed — surfacing connections that key-name-based retrieval would miss.

Memory consolidation. Periodically merge old entries into summaries. 200 daily session logs become one quarterly summary. This keeps the brain's active size manageable while preserving historical knowledge in compressed form.
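Consolidation can be sketched as folding dated session logs into quarterly summary keys. The key format (`session.log.YYYY-MM-DD`) and the join-with-separators summary are assumptions for the sketch; a real system would summarize with the model itself:

```python
# Consolidation sketch: merge dated session logs into quarterly summaries.
from collections import defaultdict

def consolidate(logs: dict) -> dict:
    """Merge keys like 'session.log.2024-03-15' into 'summary.2024-Q1' entries."""
    buckets = defaultdict(list)
    for key, text in sorted(logs.items()):
        date = key.rsplit(".", 1)[-1]            # '2024-03-15'
        year, month = date.split("-")[:2]
        quarter = (int(month) - 1) // 3 + 1
        buckets[f"summary.{year}-Q{quarter}"].append(text)
    return {k: " | ".join(v) for k, v in buckets.items()}

logs = {
    "session.log.2024-01-10": "set up Supabase",
    "session.log.2024-02-02": "added pgvector",
    "session.log.2024-05-20": "launched twin",
}
summaries = consolidate(logs)  # three logs become two quarterly entries
```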

Memory Retrieval That Scales

Storing everything is easy. Retrieving the right thing at the right time is the hard problem. A brain with 10,000 entries is useless if the AI can't find the one entry it needs in the moment it needs it.

Hierarchical keys solve this for semantic memory. Instead of one giant document, organize knowledge into namespaced keys: directive.* for rules, identity.* for who you are, infrastructure.* for technical systems. The AI reads what it needs, not everything.

Semantic search solves this for episodic memory. Embed the query, find the nearest vectors, retrieve the context. Tools like pgvector make this possible inside a standard Postgres database — no exotic infrastructure required.
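The retrieval step can be sketched with toy three-dimensional embeddings and cosine similarity. A real system would embed text with a model and run the nearest-neighbor query inside pgvector; this only shows the ranking logic:

```python
# Semantic-search sketch: rank episodic memories by cosine similarity.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Toy episodic store: (embedding, text) pairs. Real embeddings have
# hundreds of dimensions; three is enough to show the mechanics.
episodic = [
    ([0.9, 0.1, 0.0], "chose Stripe for payments"),
    ([0.1, 0.9, 0.0], "deploy pipeline moved to Vercel"),
    ([0.0, 0.1, 0.9], "Faye prefers concise responses"),
]

def retrieve(query_vec, k=1):
    ranked = sorted(episodic, key=lambda e: cosine(query_vec, e[0]), reverse=True)
    return [text for _, text in ranked[:k]]
```

With pgvector the same ranking is a SQL `ORDER BY embedding <=> query` clause; the application code only supplies the query embedding.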

The Boot Sequence Pattern

The most important moment in a convergence session is the first 500 milliseconds — the boot sequence. This is when the AI reads its critical memories and establishes context. A well-designed boot sequence reads in parallel:

Identity keys: Who the user is, their preferences, their voice. This shapes every subsequent response.

Directive keys: The rules the AI must follow — autonomy level, privacy boundaries, operational constraints. These are the guardrails that ensure safe behavior.

Session state: What was happening in the last session — active work, next steps, any blockers. This enables seamless continuity.

System state: What infrastructure is available — which services are running, which tools are deployed, what the current system health looks like.

Reading these in parallel (not sequentially) cuts boot time dramatically. The AI does not need to read everything in the brain — just the keys that establish context. Everything else can be loaded on-demand.
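The parallel boot can be sketched with `asyncio.gather`. The 20 ms sleep stands in for a real database round-trip; with four sequential reads the boot would take ~80 ms, gathered it takes ~20 ms:

```python
# Parallel-boot sketch: fetch the four key groups concurrently.
import asyncio
import time

async def read_category(brain: dict, prefix: str) -> dict:
    await asyncio.sleep(0.02)  # simulated per-read network latency
    return {k: v for k, v in brain.items() if k.startswith(prefix)}

async def boot(brain: dict) -> dict:
    groups = await asyncio.gather(
        read_category(brain, "identity."),
        read_category(brain, "directive."),
        read_category(brain, "session."),
        read_category(brain, "system."),
    )
    context = {}
    for g in groups:
        context.update(g)
    return context

brain = {"identity.user": "Faye", "directive.rules": "checkpoint",
         "session.active_work": "payments", "system.infrastructure": "Supabase"}
start = time.perf_counter()
context = asyncio.run(boot(brain))
elapsed = time.perf_counter() - start  # close to one read's latency, not four
```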

Memory Technologies Compared

Several technologies can serve as your AI's brain. Each has tradeoffs:

Supabase (PostgreSQL): Full SQL database with built-in auth, RLS, and pgvector for embeddings. Free tier available. Best for production convergence systems. Like One uses this architecture.

SQLite: Local file-based database. No server needed. Zero-latency reads. Best for local-only systems or prototyping. Limitation: cannot be accessed by multiple machines simultaneously.

JSON files: The simplest option — just read and write a JSON file. Good for getting started in an afternoon. Limitation: no querying, no concurrency, no vector search. You will outgrow this in a week.

Redis: In-memory key-value store. Extremely fast reads. Best for short-term and shared memory where speed matters. Limitation: data can be lost on restart unless persistence is configured.

Start with whatever gets you building fastest. Migrate to Supabase when you need persistence, vector search, or multi-device access. The brain schema (key, value, category, updated_at) is the same regardless of technology.

Try It Yourself

Design a memory schema for your own AI brain. Start with these categories:

  • identity.* — Who you are, your preferences, your voice
  • directive.* — Rules the AI must always follow
  • system.* — Technical infrastructure and tools
  • session.* — Current work state and next steps
  • project.* — Active project details and history

Write 3-5 keys for each category. This becomes your AI's permanent knowledge base — the foundation of convergence.

Memory and the Context Window

The context window is the AI's working memory — the total text it can process at once. Claude's window is up to 200K tokens (roughly 150,000 words). This sounds large, but a busy agent session can fill it in under an hour of active tool use.

When the context window fills, old information gets pushed out. The AI literally forgets what happened at the beginning of the session. This is the "goldfish problem" — and persistent memory is the solution.

The fix: Checkpoint critical state to the brain every 15-20 minutes of active work. When the context window gets heavy, start a fresh session. The AI reads the brain on boot and has full context in seconds — no information lost, no re-explanation needed.
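The checkpoint-and-resume cycle can be sketched as two small functions against the brain's session keys (the key names follow the schema described earlier):

```python
# Checkpoint sketch: persist session state so a fresh session can resume.
import json
import time

def checkpoint(brain: dict, active_work: str, next_steps: list) -> None:
    """Write the current working state to the brain's session keys."""
    brain["session.active_work"] = active_work
    brain["session.next_steps"] = json.dumps(next_steps)
    brain["session.checkpointed_at"] = time.time()

def resume(brain: dict) -> dict:
    """What a fresh session reads on boot to pick up where the last one stopped."""
    return {
        "active_work": brain.get("session.active_work"),
        "next_steps": json.loads(brain.get("session.next_steps", "[]")),
    }

brain = {}
checkpoint(brain, "payment integration", ["test webhooks", "deploy"])
fresh_session = resume(brain)  # the new session starts with full context
```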

This pattern — working memory in the context window, permanent memory in the brain — mirrors how human memory works. You hold the current task in your head. You store important facts in notebooks and systems. The combination is more powerful than either alone.

Securing Your Brain

Your brain database contains your most sensitive information. Securing it is not optional — it is the first priority:

Row-Level Security (RLS): Enable RLS on every table. This ensures that even if someone gets your database URL, they cannot read your data without the correct authentication. Supabase makes this a checkbox.

Service role vs. anon key: Your database has two keys — anon (limited) and service role (full access). Use the anon key for public-facing functions. Use the service role key only in secure, server-side contexts. If an anon key leaks, damage is limited. If a service key leaks, everything is exposed.

Environment variables: Never store database credentials in code. Use .env files locally and environment variable configuration in deployment platforms. Add .env to .gitignore immediately. One accidental commit of a service key to GitHub can expose your entire brain.
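A sketch of the environment-variable pattern, failing fast when a credential is missing. The variable names (`SUPABASE_URL`) are illustrative, and the `os.environ` assignment exists only so the example is self-contained — in practice the shell, a `.env` loader, or the deployment platform sets it:

```python
# Env-var sketch: read credentials from the environment, never from code.
import os

# For demonstration only; normally set by the shell or deployment platform.
os.environ["SUPABASE_URL"] = "https://example.supabase.co"

def get_credential(name: str) -> str:
    value = os.environ.get(name)
    if value is None:
        # Fail fast at startup instead of failing mid-session with a cryptic error.
        raise RuntimeError(f"{name} is not set; refusing to start")
    return value

url = get_credential("SUPABASE_URL")
```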

Regular key rotation: Rotate your database keys every 90 days. If a key was ever exposed or you suspect a breach, rotate immediately. Update all agents and services that use the key.

Memory Is the Foundation

Every other capability in this course — autonomous action, values alignment, digital twins, life operating systems — depends on persistent memory. Without it, none of them work. An autonomous agent without memory repeats mistakes. A twin without memory has amnesia. A life OS without memory starts from scratch every day.

This is why persistent memory architecture is Lesson 2 — right after the concept of convergence itself. It is the foundation everything else is built on. Get this right and every subsequent lesson becomes dramatically easier. Get it wrong and nothing else matters.

Build your brain first. Populate it with your identity, your rules, and your current state. Then layer on autonomy, values, and the twin. The brain is the bridge that makes convergence possible.
