
Validation and Prototyping

Build the demo before you build the product.

The fastest way to waste six months is to build something nobody wants. The fastest way to avoid that is to fake it first.

What you'll learn

  • How to validate an AI product idea in under a week
  • The "Wizard of Oz" method for AI prototyping
  • When to use no-code tools vs. writing code
  • What signals tell you to proceed — or pivot

The Wizard of Oz Prototype

Before you integrate a single API, simulate the AI experience manually. Create a form where users submit inputs. You (a human) process them using ChatGPT or Claude behind the scenes. Deliver the output as if the product did it automatically.

This tests the only question that matters: does the output actually solve the user's problem? If people don't care about the result even when a human curates it, no amount of automation will save the idea.

The 48-Hour Validation Sprint

Hour 0-4: Write the magic trick sentence. Build a landing page describing the outcome. Include a waitlist signup or a "try the beta" button.

Hour 4-12: Share the landing page in 5 communities where your target users hang out. Reddit, Discord, LinkedIn, niche forums. Don't pitch — describe the problem and ask if others face it.

Hour 12-36: For anyone who signs up, manually deliver the AI result using existing tools. ChatGPT, Claude, a Python script — whatever gets the output into their hands.
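The "Python script" option can be as small as one function. Below is a sketch, not prescribed tooling: it assumes the Anthropic Messages HTTP API, an `ANTHROPIC_API_KEY` environment variable, and a placeholder model name. Swap in whichever provider you already use; the point is that manual delivery is a five-minute loop, not an engineering project.

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"  # Anthropic Messages API endpoint

def build_prompt(user_input: str) -> str:
    # The instruction you would otherwise paste into a chat window by hand.
    return (
        "Turn the following raw input from a beta user into the finished "
        "deliverable, exactly as the 'product' would:\n\n" + user_input
    )

def deliver(user_input: str, model: str = "claude-sonnet-4-5") -> str:
    """Produce one Wizard of Oz output. Requires ANTHROPIC_API_KEY to be set.
    The model name is a placeholder; check your provider's current model list."""
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": build_prompt(user_input)}],
    }).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["content"][0]["text"]
```

You run `deliver()` once per signup and paste the result into an email. No queue, no dashboard, no product.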

Hour 36-48: Collect feedback. Did they use it? Did they come back? Did they share it? These behavioral signals are worth more than any survey response.

Validation Signals That Matter

Go signal: Users come back unprompted and ask "when is the full version ready?"

Go signal: Users share it with colleagues without being asked

Caution: Users say "this is cool" but don't actually use it again

Stop signal: Users try it once and ghost. No follow-up. No questions.

Prototyping Without Code

You don't need to code a prototype. Use Typeform for input collection. Use Make or Zapier to connect it to an AI API. Use Notion or Airtable to store results. Use email to deliver outputs. The entire flow can be built in an afternoon.

The goal isn't a beautiful product. The goal is to learn whether the output is valuable. Ugly prototypes that deliver real value always beat polished products that solve imaginary problems.

Proceed, Pivot, or Kill

Proceed when users demonstrate behavior, not just words. They return, they pay, they share, they ask for more. Behavior is truth.

Pivot when the problem is real but your solution misses the mark. Users engage but the output isn't quite right. Adjust the output format, the input method, or the scope.

Kill when the problem itself isn't painful enough. If users shrug at a hand-curated result, automation won't help. Move on. The graveyard of startups is full of solutions to problems nobody has.

The Five Validation Levels

Not all validation is equal. Each level provides increasingly strong evidence that your idea is worth building. Don't skip levels — each one filters out a different kind of bad idea.

Level 1 — Problem validation: Does the problem actually exist? Talk to 10 people in your target audience. Don't pitch your solution — describe the problem and ask if they experience it. If fewer than 7 out of 10 recognize the pain, the problem isn't universal enough.

Level 2 — Solution validation: Does AI solve it better than alternatives? Show people the output — not the product, just the output. A summary. A categorized list. A generated draft. Ask: "Is this useful? Would you use this regularly?" If they say "I can do this myself in 10 minutes," your AI advantage isn't strong enough.

Level 3 — Willingness-to-pay validation: Will people actually pay? The simplest test: create a Stripe payment link for your product before it exists. Put it on a landing page. Track clicks. Even if you refund everyone, you now know who was willing to enter credit card details.

Level 4 — Usage validation: Do people use it more than once? This is where the Wizard of Oz prototype earns its keep. Deliver results manually to 20 people. Track how many come back for a second request without prompting. Repeat usage is the strongest pre-product signal.
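Level 4 comes down to one number: the share of users whose second request arrived within the window. A minimal sketch for computing it from a hand-kept request log (all names and dates below are made up for illustration):

```python
from datetime import date

def repeat_usage_rate(requests_by_user: dict[str, list[date]],
                      window_days: int = 7) -> float:
    """Share of users whose second request came within window_days of their first."""
    returned = 0
    for dates in requests_by_user.values():
        ordered = sorted(dates)
        if len(ordered) >= 2 and (ordered[1] - ordered[0]).days <= window_days:
            returned += 1
    return returned / len(requests_by_user)

log = {
    "ana":  [date(2024, 5, 1), date(2024, 5, 4)],    # came back on day 3
    "ben":  [date(2024, 5, 2)],                      # one request, then silence
    "cora": [date(2024, 5, 1), date(2024, 5, 12)],   # returned, but outside the window
    "dan":  [date(2024, 5, 3), date(2024, 5, 6), date(2024, 5, 9)],
}
print(repeat_usage_rate(log))  # 2 of 4 users returned within 7 days -> 0.5
```

A spreadsheet does the same job; the point is to measure returns, not to ask about them.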

Level 5 — Referral validation: Do people share it unprompted? If users send it to colleagues without being asked, you have something real. Referral behavior is the highest-fidelity signal because it costs the user social capital — they won't recommend garbage.

The Concierge MVP for AI Products

A concierge MVP goes beyond Wizard of Oz. Instead of just simulating the AI output, you simulate the entire product experience — onboarding, delivery, follow-up — by doing everything manually.

The process: Recruit 5-10 users through direct outreach. Set up a simple intake form (Typeform, Google Forms). When they submit a request, you manually produce the result using AI tools, then deliver it via email or a shared document within a defined SLA (e.g., 2 hours).
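Even a manual concierge pipeline benefits from tracking whether you hit your promised turnaround. A sketch of an SLA check over a hand-kept request log (the 2-hour SLA matches the example above; names and timestamps are hypothetical):

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=2)  # the turnaround promised on the intake form

def sla_report(requests: list[tuple[str, datetime, datetime]]) -> dict:
    """Each entry: (user, submitted_at, delivered_at). Returns hit rate and who was served on time."""
    hits = [user for user, submitted, delivered in requests
            if delivered - submitted <= SLA]
    return {"hit_rate": len(hits) / len(requests), "within_sla": hits}

requests = [
    ("ana", datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 10, 15)),  # 1h15 -> on time
    ("ben", datetime(2024, 5, 1, 14, 0), datetime(2024, 5, 1, 17, 30)),  # 3h30 -> missed
]
print(sla_report(requests))  # {'hit_rate': 0.5, 'within_sla': ['ana']}
```

Consistently missing your own SLA is itself a signal: demand is outgrowing manual capacity.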

Why it works: You learn things a prototype never teaches you. What questions do users ask during onboarding? What format do they actually want the output in? How often do they come back? What do they complain about? These insights shape your real product in ways that no amount of theorizing can match.

When to graduate: When fulfilling requests takes more time than you have available, demand has exceeded your manual capacity. That's the green light to automate.

Building a Validation Landing Page

Your landing page has one job: convert visitors into signal. Not into customers — into data points that tell you whether to build or not.

Headline: State the transformation, not the technology. "Stop spending 3 hours on expense reports" beats "AI-powered expense automation." The headline should make your target user think "that's me."

Before/after: Show the pain (current state) and the relief (future state). A screenshot of a messy spreadsheet next to a clean, categorized report. Visual transformation is more convincing than any paragraph of text.

Social proof: Even pre-launch, you can use proof. "47 accountants are waiting for early access." "Built by a team that processed 10,000 expense reports the hard way." Credibility signals reduce skepticism.

The ask: Email signup is the minimum. A paid pre-order or deposit is a stronger signal. A short survey ("how much time do you spend on this weekly?") gives you segmentation data. The more friction in your ask, the higher-fidelity your signal — but the fewer responses you'll get.

Validation Anti-Patterns

Asking friends and family. They'll say it's great because they love you. Their validation is worthless. Test with strangers who have no social obligation to be kind.

Validating the technology instead of the product. "Can AI summarize documents?" is a technology question — the answer is obviously yes. "Will busy executives pay $39/month to never read a full report again?" is a product question — the answer is unknown. Validate the second one.

Building too much before validating. If you've written more than 200 lines of code before talking to a single potential user, you're building on assumptions. Assumptions are comfortable but often wrong. One afternoon of user interviews saves months of misdirected engineering.

Treating "cool" as validation. "This is really cool!" is the most dangerous feedback you can receive. Cool means interesting. It doesn't mean useful. It doesn't mean valuable. It definitely doesn't mean "I'll pay for this." Dig deeper: "Would you use this tomorrow? What would you use it for? What would you pay?"

Measuring Validation Success

Validation needs numbers, not feelings. Set concrete thresholds before you start so you can't rationalize a weak result into a go decision.

Landing page conversion: If fewer than 5% of visitors sign up for your waitlist, either the value proposition isn't clear or the audience isn't right. Above 10% is a strong signal. Above 20% means you've hit a nerve — move to building immediately.

Manual delivery retention: Of users who receive a Wizard of Oz output, at least 50% should request a second one within 7 days. Below that, the output isn't valuable enough to build habits around.

Willingness to pay: At least 20% of beta users should express willingness to pay when you describe the pricing. If you show them a payment page, at least 5% should actually click "subscribe" or enter payment details (even if you don't charge them yet).

Referral rate: At least 10% of beta users should share the product with someone else unprompted within the first two weeks. Referral is the strongest signal that your product solves a real, shareable problem.

Write these thresholds down before you start validating. After the sprint, compare results to thresholds. If you hit 3 of 4, proceed. If you hit 2 of 4, pivot the approach. If you hit 0-1, kill the idea and move on.
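Writing the thresholds down can literally mean encoding them before the sprint, so the verdict is mechanical rather than negotiable. A sketch using the four thresholds and the 3-of-4 rule above (the metric names are just labels for this example):

```python
def validation_verdict(metrics: dict[str, float]) -> str:
    """Apply the pre-committed thresholds and count how many of the four are hit."""
    thresholds = {
        "landing_conversion": 0.05,   # waitlist signups / visitors
        "delivery_retention": 0.50,   # second request within 7 days
        "willingness_to_pay": 0.20,   # beta users who say they'd pay
        "referral_rate":      0.10,   # unprompted shares in first two weeks
    }
    hits = sum(metrics[name] >= floor for name, floor in thresholds.items())
    if hits >= 3:
        return "proceed"
    if hits == 2:
        return "pivot"
    return "kill"

print(validation_verdict({
    "landing_conversion": 0.08,
    "delivery_retention": 0.60,
    "willingness_to_pay": 0.15,   # missed this one
    "referral_rate":      0.12,
}))  # 3 of 4 thresholds hit -> "proceed"
```

The value is psychological as much as analytical: a function has no sunk-cost bias.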

From Validation to Product Requirements

Validation doesn't just tell you whether to build — it tells you what to build. The insights from your validation sprint should directly shape your MVP specifications.

Input format: During Wizard of Oz testing, what format did users naturally submit their data in? If they all sent PDFs, don't build a text-paste interface. If they all used mobile, don't build desktop-first. Follow the behavior you observed.

Output expectations: What did users do with the AI output? Did they paste it into emails? Drop it into spreadsheets? Share it with colleagues? The downstream use of your output determines the format, length, and style your product should generate.

Frequency pattern: How often did beta users come back? Daily users need a different product than weekly users. Daily use demands speed, keyboard shortcuts, and persistent state. Weekly use demands re-onboarding, context restoration, and email reminders.

Feature requests: What did beta users ask for that you didn't offer? List every request. The requests that appear 3+ times are candidates for your MVP. The requests that appear once are nice-to-haves for version 2. The requests that never appear are features you imagined users would want but they actually don't.
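The 3+ rule is easy to apply mechanically once requests are logged. A sketch (the request strings are invented examples):

```python
from collections import Counter

def triage(requests: list[str]) -> dict[str, list[str]]:
    """Split raw feature requests into MVP candidates (3+ mentions) and v2 ideas."""
    counts = Counter(requests)
    return {
        "mvp": [feature for feature, n in counts.items() if n >= 3],
        "v2":  [feature for feature, n in counts.items() if n < 3],
    }

requests = ["export to pdf", "export to pdf", "slack alerts",
            "export to pdf", "dark mode", "slack alerts"]
print(triage(requests))
# {'mvp': ['export to pdf'], 'v2': ['slack alerts', 'dark mode']}
```

In practice you'll normalize phrasing first ("PDF export" and "export to pdf" are the same request), but the counting logic stays this simple.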

Your validation sprint produces a document: the product specification, written by user behavior rather than assumptions. This document is the foundation of your MVP.

Speed as a Validation Weapon

The faster you validate, the more ideas you can test. The more ideas you test, the higher your chances of finding one that works. Speed in validation isn't about cutting corners — it's about eliminating waste.

Time-box everything: Give yourself 48 hours for a landing page test, one week for a Wizard of Oz trial, two weeks for a concierge MVP. If the idea can't produce signal within these windows, either the idea lacks urgency or you're testing the wrong signal.

Kill fast, learn faster: The most successful AI product builders aren't the ones who get the first idea right — they're the ones who test and kill bad ideas fastest. Testing 5 ideas in 3 months beats perfecting 1 idea for 3 months. Each failed validation teaches you something about the market that makes the next idea stronger.

Reuse your infrastructure: Build your landing page template once. Reuse it for every idea. Build your feedback collection system once. Reuse it. Build your Wizard of Oz delivery pipeline once. Reuse it. The cost of testing idea #5 should be a fraction of the cost of testing idea #1 because your validation infrastructure is already built.

Treat validation as a skill, not a chore. Like any skill, it improves with practice. Your tenth validation sprint will be dramatically faster and more insightful than your first. Embrace the process and trust that volume leads to quality.

The Validation Mindset

The entire purpose of validation is to reduce risk before you invest significant time and money. A week of validation can save six months of building the wrong thing. That's a return on investment no other activity in product development can match.

The hardest part of validation isn't the process — it's the emotional discipline to accept negative results. When you've fallen in love with an idea, a failed validation sprint feels like a personal rejection. It's not. It's the market saving you from a mistake. Be grateful for fast failures — they're cheap lessons.

The best founders treat validation as a scientific process: form a hypothesis, design an experiment, collect data, and let the data decide. Remove ego from the equation. The market doesn't care about your vision. It cares about its own problems. Your job is to find the intersection between what you can build and what the market desperately needs.

The frameworks in this lesson — Wizard of Oz, 48-hour sprint, concierge MVP, five validation levels — are all different tools for the same job: converting uncertainty into evidence. Pick the one that matches your stage, run it with discipline, and let the results guide your next move. Evidence-driven builders outlast intuition-driven builders every time.

With your idea validated, you're ready to make the architecture decisions that will determine your product's foundation. Lesson 4 covers how to choose the right technology stack — models, databases, and infrastructure — for an AI product that scales.

Take your validation results with you. The behavioral data you collected — what users did, how often they returned, what they paid, what they complained about — should directly inform every architecture and product decision you make next. Validation isn't just a gate to pass through. It's the foundation your entire product is built on.

The discipline of validation is the discipline of humility. It's admitting that you don't know whether your idea will work — and having the courage to find out. That courage is what separates product builders from dreamers.

Validation in Practice: A Real Example

Imagine you want to build an AI tool that converts podcast episodes into newsletter content. Here's how the validation process looks in practice.

Day 1 — Problem validation: Post in three podcasting communities: "How do you currently repurpose your episodes into written content?" Responses: 40% say they don't because it takes too long. 30% say they pay a freelancer $100-200 per episode. 20% say they do it manually and hate it. 10% use existing tools and find them mediocre. Strong problem signal.

Day 2 — Solution validation: Take 5 public podcast episodes. Use Claude to convert each into a newsletter draft. Send the drafts to 10 podcasters. "If a tool produced this automatically from your episode, would you use it?" 8 out of 10 say yes. 3 ask "when can I get this?" Very strong solution signal.

Day 3 — Willingness-to-pay: Create a Stripe payment page: "$29/month — AI turns your podcast episodes into ready-to-send newsletters." Share in the same communities. 6 people click the payment button. Even without completing payment, the click-through rate tells you the price point is in the right range.

Day 4-7 — Usage validation: Manually process episodes for 5 beta users. 4 come back with a second episode within the week. 2 share it with fellow podcasters. One asks about annual pricing. Verdict: proceed to building the MVP.

The Validation Toolkit

You don't need expensive tools to validate an AI product. Here's the stack that covers every validation need for under $50 total.

Landing page: Carrd ($19/year) or Framer (free tier). Build a one-page site in under an hour. Include headline, before/after, and a signup form. Don't spend more than 2 hours on design — ugly pages that convert prove more than beautiful pages that don't.

Form collection: Typeform (free tier — 10 responses/month) or Google Forms (completely free). Collect user inputs for your Wizard of Oz prototype. Keep forms short — 3-5 fields maximum.

Payment testing: Stripe payment links (free to create, 2.9% per transaction). Create a payment page to test willingness to pay. Refund anyone who actually pays — you're testing intent, not collecting revenue.

Communication: Email for delivery, Discord or Slack for community. Create a small beta group where users can give feedback in real time. The conversations in this group are more valuable than any analytics dashboard.

AI for manual delivery: Claude or ChatGPT for producing outputs during Wizard of Oz testing. Your cost is essentially $20/month for the AI subscription — compare that to months of wasted engineering on an unvalidated idea.

Try It Yourself

Pick your strongest idea from Lesson 2 and run the 48-hour sprint:

1. Write the magic trick sentence
2. Build a one-page site (Carrd, Framer, or even a Google Form)
3. Manually deliver AI results to 5 real users
4. Track: Did they use it? Did they come back? Did they tell anyone?
Built with soul — likeone.ai