Your Convergence Project
You've learned the theory. Now build the thing.
This is the capstone. You'll design and build your own human-AI convergence system — a persistent, autonomous, values-aligned AI that works as an extension of you. Not hypothetically. Actually.
What you'll build
- A persistent memory brain with structured key-value storage
- An autonomous agent loop with defined autonomy levels
- A values alignment layer with encoded directives
- A working digital twin that can continue your work across sessions
Build the Brain
Set up a persistent memory store. You can use Supabase (free tier works), a local SQLite database, or even a structured JSON file to start. The point is: your AI's knowledge survives beyond a single conversation.
Create your schema. At minimum: a key-value table with key, value, and updated_at columns. Populate it with your identity, your values, your operational rules, and your current project state. This is the brain your AI will boot from every session.
Define the Directives
Write the rules your AI must always follow. Not vague guidelines — concrete directives stored in the brain. Cover at minimum: autonomy level (when to act, when to ask), communication style (how verbose, what tone), privacy boundaries (what's sacred, what's public), and operational rules (how to handle errors, when to checkpoint).
These directives are your AI's constitution. Every session begins by reading them. Every decision is made within their framework. Update them as you learn what works and what doesn't — the constitution is a living document.
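One way to encode the constitution is as namespaced keys in the brain, so the session-start read is a single prefix query. The `directive:` prefix and every directive value below are examples, not a standard:

```python
# Hypothetical directive entries covering the four minimum areas.
DIRECTIVES = {
    "directive:autonomy_level": "Act without asking for routine tasks; ask before anything irreversible.",
    "directive:communication":  "Terse and direct, no filler; match my written voice.",
    "directive:privacy":        "Never include health or finance details in public-facing output.",
    "directive:operations":     "On error, retry once, then checkpoint state and surface the failure.",
}

def load_directives(store: dict) -> list[str]:
    """Return every directive the agent must read at session start."""
    return [v for k, v in store.items() if k.startswith("directive:")]

for rule in load_directives(DIRECTIVES):
    print("-", rule)
```

Because directives live in the brain rather than in code, amending the constitution is a database write, not a redeploy.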
Build the Loop
Design your agent's operating cycle. A simple but effective loop: Read brain state. Plan the work. Execute tasks. Verify results. Write progress back to brain. Repeat. The loop should run without human input for routine operations.
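The cycle above can be sketched as a single function. `Brain` here is an in-memory stand-in for the persistent store (swap in the SQLite or Supabase brain in practice), and the `plan`/`execute`/`verify` callables are placeholders for the LLM-backed steps:

```python
from dataclasses import dataclass, field

@dataclass
class Brain:
    """In-memory stand-in for the persistent store."""
    state: dict = field(default_factory=dict)

    def read(self) -> dict:
        return dict(self.state)

    def write(self, updates: dict) -> None:
        self.state.update(updates)

def run_cycle(brain: Brain, plan, execute, verify) -> list:
    """One pass of the loop: read, plan, execute, verify, write back."""
    state = brain.read()                            # 1. read brain state
    tasks = plan(state)                             # 2. plan the work
    results = [execute(t) for t in tasks]           # 3. execute tasks
    verified = [r for r in results if verify(r)]    # 4. keep only verified results
    brain.write({"last_results": verified})         # 5. write progress back
    return verified

# Toy run: double each pending number, accept only even outputs.
brain = Brain({"pending": [1, 2, 3]})
out = run_cycle(
    brain,
    plan=lambda s: s.get("pending", []),
    execute=lambda t: t * 2,
    verify=lambda r: r % 2 == 0,
)
print(out)  # → [2, 4, 6]
```

Wrapping `run_cycle` in a `while` loop with a stop condition gives you the unattended operation the lesson describes; the write-back step is what makes each cycle resumable.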
Implement the three-strike rule for autonomy decisions. Before asking the human anything: check the brain, use judgment, try and course-correct. Only surface to the human when all three fail. This trains both you and the AI to trust the system.
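A minimal sketch of the three-strike rule, assuming each strategy is a callable that returns an answer or `None`; the function name, the strategy names, and the `"ASK_HUMAN"` sentinel are all illustrative:

```python
def decide(question: str, check_brain, use_judgment, try_and_correct):
    """Three-strike escalation: surface to the human only after
    checking the brain, using judgment, and trying with course-correction
    have all failed to produce an answer."""
    for strategy in (check_brain, use_judgment, try_and_correct):
        answer = strategy(question)
        if answer is not None:
            return answer
    return "ASK_HUMAN"

# Toy strategies: the brain and judgment come up empty,
# but trying and course-correcting produces an answer.
result = decide(
    "Which branch do I deploy?",
    check_brain=lambda q: None,
    use_judgment=lambda q: None,
    try_and_correct=lambda q: "main",
)
print(result)  # → main
```

The ordering matters: cheap lookups first, riskier trial-and-error last, the human only as the final fallback.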
The Capstone Checklist
Your convergence system is complete when:
- Memory persists. Start a new session: the AI knows what happened last time without being told.
- Values hold. Give the AI a task that conflicts with a directive. It should push back or find an aligned alternative.
- Autonomy works. The AI completes a multi-step task without asking for permission at every step.
- The twin feels like you. Read its output. Does it sound like your voice? Does it reflect your priorities? Would you recognize its work as your own?
The Complete System Diagram
Your convergence system has four layers, each building on the one below:
Layer 1: Storage. The brain database: SQLite to start, or PostgreSQL (what Supabase provides) with key-value storage and optional vector embeddings. This is where all persistent state lives: identity, directives, memory, session state. Everything above depends on this layer being reliable.
Layer 2: Agent Engine. The perceive-plan-execute-verify loop. This is the runtime that reads the brain, makes decisions, takes actions, and writes results back. It can be Claude Code, a custom Python script, or any LLM-powered agent framework. The engine is replaceable — the brain persists.
Layer 3: Interface. How you interact with the system — terminal (Claude Code), web app, Electron desktop app, mobile, voice. The interface connects the human to the agent engine. Multiple interfaces can connect to the same brain simultaneously.
Layer 4: Integrations. External services the agent connects to — email, calendar, social media, payment processors, monitoring tools. Each integration gives the agent new capabilities. Start with 1-2 integrations and add more as you prove reliability.
Pre-Launch Quality Gate
Before declaring your convergence system "live," verify each of these independently:
Memory persistence test: Write a value to the brain. End the session. Start a new session. Can the AI read the value without being told about it? If yes, persistence works.
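The persistence test can be automated with the SQLite brain: write a fact, close the connection to simulate ending the session, then open a fresh connection and read it back. The file path, table name, and sample value are illustrative:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.gettempdir(), "brain_test.db")

# Session 1: write a fact, then close the connection entirely.
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE IF NOT EXISTS brain (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT OR REPLACE INTO brain VALUES ('session_note', 'finished step 3')")
conn.commit()
conn.close()

# Session 2: a fresh connection must find the fact without being told.
conn = sqlite3.connect(path)
row = conn.execute("SELECT value FROM brain WHERE key = 'session_note'").fetchone()
print(row[0])  # → finished step 3
conn.close()
```

If the second read succeeds, persistence works at the storage layer; the remaining question is whether your agent actually reads the brain at session start.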
Values alignment test: Ask the AI to do something that violates one of its directives. Does it refuse or find an aligned alternative? If yes, alignment works. If it blindly complies, your values are not properly encoded.
Autonomy test: Give the AI a multi-step task and do not intervene. Does it complete each step without asking permission? Does it checkpoint progress? Does it handle errors gracefully? If yes, autonomy works.
Handoff test: Run a session, let it checkpoint, start a new session. Does the new session resume exactly where the old one left off, without any "catching up" or re-explanation? If yes, handoff works.
Privacy test: Ask the AI to include sacred-layer information in public-facing output. Does it refuse? If yes, privacy boundaries hold. If it complies, your trust layers need work.