Stack Anatomy
The layers of a modern AI-powered web app — and the battle-tested stack that runs Like One Academy.
Why Your Stack Choice Matters
Building an AI-powered web app is not about picking the trendiest tools. It is about choosing layers that work together seamlessly, scale without surprises, and do not drain your budget before you have your first customer. The wrong stack creates integration hell: you spend weeks wiring services together instead of building features.
The right stack gives you fewer services, less glue code, and more time building the product that matters. This lesson reveals the exact architecture that powers Like One Academy in production — 52 courses, 520+ lessons, AI memory, Stripe payments, global CDN, all for about $29/month.
The Like One Stack (Battle-Tested)
Supabase + Edge Functions + Claude + Next.js + Vercel
~$29/mo total. This is what Like One Academy runs on in production — 52 courses, 520+ lessons, AI memory, Stripe payments, and global CDN. Every layer below is explained with the real config we use.
The Five Layers of an AI-Powered App
Every AI web app has the same fundamental architecture — five layers that work together. Understanding what each layer does and why you need it prevents the most common mistake: over-engineering early and under-engineering late.
**Layer 1: Frontend and Hosting.** Our choice: Next.js 14 on Vercel. Server-side rendering for SEO, client-side interactivity where needed, auto-deploys from GitHub. App Router + Server Components mean your pages load fast and search engines can index everything. Vercel built Next.js — the deployment experience is zero-config.
**Layer 2: Backend.** Our choice: Supabase Edge Functions (Deno). TypeScript functions that run globally, close to users. No servers to manage. They handle email capture, payment processing, AI queries — anything that needs server-side secrets or database access. Think of them as the nervous system connecting your frontend to everything else.
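To make the shape concrete, here is a minimal sketch of what an email-capture edge function could look like. The names (`handleSubscribe`, `isValidEmail`) are illustrative, not the production code; it uses only the web-standard `Request`/`Response` types that Deno edge runtimes expose, and the database write is left as a comment.

```typescript
// Basic shape check — not full RFC 5322 validation.
export function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email.trim());
}

// Web-standard Request/Response, as used by Deno-based edge runtimes.
export async function handleSubscribe(req: Request): Promise<Response> {
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 });
  }
  const { email } = await req.json();
  if (!isValidEmail(email ?? "")) {
    return new Response(JSON.stringify({ error: "invalid email" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }
  // A real function would write to Postgres here via the Supabase client,
  // reading its service key from an environment variable — e.g.:
  // await supabase.from("subscribers").insert({ email });
  return new Response(JSON.stringify({ ok: true }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
```

The point is the division of labor: secrets and database access live in the function, so the browser never sees them.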
**Layer 3: Database and Auth.** Our choice: Supabase (Postgres + pgvector). Not just a database — it bundles auth, real-time subscriptions, file storage, and Row Level Security (RLS) in one platform. pgvector enables semantic search with AI embeddings, so your app can find things by meaning, not just keywords. One service replaces five.
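"Search by meaning" boils down to ranking stored embeddings by similarity to a query embedding. In production this runs as SQL inside Postgres (pgvector's `<=>` cosine-distance operator), but the idea can be sketched in plain TypeScript:

```typescript
// Conceptual illustration of what pgvector does for you in SQL —
// not how you would implement semantic search in application code.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank documents by similarity to the query vector, keep the top k.
export function topMatches(
  query: number[],
  docs: { id: string; embedding: number[] }[],
  k = 3,
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

Keeping this ranking inside Postgres is the whole appeal: the vectors live next to the rest of your data, covered by the same RLS policies.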
**Layer 4: AI Models.** Our choice: Claude (Anthropic) + BGE-small embeddings (HuggingFace). Claude handles reasoning and generation — the brain of your app. BGE-small creates embeddings for free via HuggingFace Inference API, which get stored in pgvector for semantic search. The two-tier approach means you only call the expensive model when you actually need reasoning.
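The two-tier logic can be sketched as a small routing function. `retrieve` and `generate` stand in for the HuggingFace and Anthropic API calls; the threshold value is an assumption for illustration, not a number from the lesson.

```typescript
type Retriever = (query: string) => Promise<{ text: string; score: number }[]>;
type Generator = (prompt: string) => Promise<string>;

// Tier 1: cheap embedding retrieval. Tier 2: the expensive model,
// called only when retrieval alone cannot answer confidently.
export async function answer(
  query: string,
  retrieve: Retriever,
  generate: Generator,
  directAnswerThreshold = 0.92, // illustrative cutoff
): Promise<string> {
  const matches = await retrieve(query);
  // A near-exact match is returned as-is — zero LLM cost.
  if (matches.length > 0 && matches[0].score >= directAnswerThreshold) {
    return matches[0].text;
  }
  // Otherwise, ground the generation in the retrieved context.
  const context = matches.map((m) => m.text).join("\n");
  return generate(`Context:\n${context}\n\nQuestion: ${query}`);
}
```

Because embeddings are effectively free and Claude is pay-per-token, every query that stops at tier 1 is a query you did not pay for.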
**Layer 5: Automation and Integrations.** Our choice: Make.com + Stripe webhooks. Make.com connects services that do not have direct integrations — Slack alerts when someone subscribes, spreadsheet logging for analytics, scheduled content publishing. Stripe webhooks handle payment events. Together, they make data flow between services without writing custom code for every connection.
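Webhook handling usually reduces to routing on the event type. This is an illustrative sketch, not the production handler: the event names are real Stripe event types, but the handler wiring is hypothetical, and a real function must first verify the `Stripe-Signature` header against the webhook signing secret before trusting the payload.

```typescript
type StripeEvent = { type: string; data: { object: unknown } };

// Dispatch a (signature-verified) Stripe event to a handler.
// Returns false for event types we do not handle — those should
// still be acknowledged with a 200, not treated as errors.
export function routeStripeEvent(
  event: StripeEvent,
  handlers: Record<string, (obj: unknown) => void>,
): boolean {
  const handler = handlers[event.type];
  if (!handler) return false;
  handler(event.data.object);
  return true;
}

// Example wiring with real Stripe event names:
// routeStripeEvent(event, {
//   "checkout.session.completed": grantAccess,   // hypothetical helper
//   "customer.subscription.deleted": revokeAccess,
// });
```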
The Full Architecture Config
Here is the actual architecture in config format — this is the real stack running Like One Academy:
```yaml
# stack-config.yaml — The anatomy of an AI-powered web app
deploy:
  platform: Vercel                    # Auto-deploys from GitHub
  domain: your-app.com
  regions: [iad1, sfo1]               # Edge-close to users

frontend:
  framework: Next.js 14               # App Router + Server Components
  styling: Tailwind CSS
  auth_ui: "@supabase/auth-ui-react"

backend:
  runtime: Supabase Edge Functions    # Deno, globally distributed
  language: TypeScript
  functions:
    - subscribe                       # Email capture
    - create-checkout                 # Stripe payment
    - brain-query                     # AI memory retrieval

database:
  provider: Supabase (Postgres)
  extensions: [pgvector, pg_trgm]
  auth: Supabase Auth + RLS           # JWT-based row security
  realtime: true                      # Live subscriptions

ai:
  model: Claude (Anthropic)           # Reasoning + generation
  embeddings: BGE-small (HuggingFace) # Free vector embeddings
  vector_db: pgvector                 # Semantic search in Postgres

cost:
  total: ~$29/month
  breakdown:
    supabase: $25                     # Pro plan (DB + Auth + Functions)
    vercel: $0                        # Hobby tier for most projects
    claude: ~$4                       # Pay-per-token, varies with usage
```
Why This Stack (And Not Firebase, AWS, or Django)
There are many ways to build a web app. Here is why we chose this specific combination — and when you might choose differently:
| Alternative | When It Wins | Why We Did Not Choose It |
|---|---|---|
| Firebase | Mobile-first apps, real-time features, Google ecosystem integration | NoSQL (Firestore) is painful for relational data. No pgvector equivalent. Vendor lock-in with Google. |
| AWS (Lambda + RDS) | Enterprise scale, existing AWS infrastructure, complex microservices | Requires 10+ services to replicate what Supabase does in one. IAM complexity. Cold starts. Overkill for most indie/startup projects. |
| Django + Railway | Python-heavy teams, data science pipelines, admin-heavy apps | Monolithic architecture. Harder to scale serverlessly. Slower iteration cycle than edge functions. |
| FastAPI + PlanetScale | Python API backends, MySQL-preferred teams | PlanetScale lacks pgvector, auth, and edge functions. You end up stitching 5 services together yourself. |
How the Layers Talk to Each Other
Understanding data flow is more important than understanding any individual service. Here is how a request moves through the stack when a user subscribes:
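One plausible reconstruction of that subscribe flow, written as an annotated sequence — the exact hops and the rough latencies are assumptions for illustration, not the lesson's measured numbers:

```typescript
// A hypothetical hop-by-hop trace of a subscribe request.
// Latencies are illustrative order-of-magnitude guesses only.
export const subscribeFlow = [
  { service: "Next.js form (browser)", action: "POST /subscribe", ms: 0 },
  { service: "Vercel edge network", action: "routes the request", ms: 30 },
  { service: "Supabase Edge Function", action: "validates the email", ms: 50 },
  { service: "Postgres (Supabase)", action: "inserts the subscriber row", ms: 40 },
  { service: "Make.com webhook", action: "fires the automation scenario", ms: 150 },
  { service: "Slack", action: "posts a new-subscriber alert", ms: 100 },
];

// Summed, the illustrative hops stay well under the 500ms budget.
export const totalMs = subscribeFlow.reduce((sum, hop) => sum + hop.ms, 0);
```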
Total time: under 500ms. Six services coordinated in half a second. That is the power of choosing services that are designed to work together at the edge.
Cost Breakdown: $29/Month for a Production App
One of the most common reasons AI projects die: the infrastructure bill exceeds the revenue. This stack is deliberately optimized for indie developers and small teams who need to ship a real product without burning through savings.